Senior Data Engineer - Tempe, United States - Virtual

    Description
    Vaco is on the move to hire a Senior Data Engineer for a global client.

    In this role, you will lead the design, development, and implementation of data solutions that enable the organization to derive actionable insights from complex datasets.

    You will guide a team of data engineers, collaborate with cross-functional teams, and drive initiatives to strengthen our data infrastructure, CI/CD pipelines, and analytics capabilities.

    We are looking for data-focused engineers with strong experience in Python, SQL, and Spark and a background in Azure environments (Azure Data Factory, Databricks).

    Additionally, experience with enterprise data lake and data warehouse solutions is essential for this group. If this aligns with your experience and you'd like to learn more, please reach out to the Vaco Arizona team today.


    Core responsibilities:


    Apply advanced knowledge of Data Engineering principles, methodologies and techniques to design and implement data loading and aggregation frameworks across broad areas of the organization.

    Gather and process raw, structured, semi-structured and unstructured data using batch and real-time data processing frameworks.


    Implement and optimize data solutions in enterprise data warehouses and big data repositories, focusing primarily on movement to the cloud.


    Deliver new and enhanced capabilities to Enterprise Data Platform partners to meet the needs of product, engineering, and business teams.

    Build enterprise systems using Databricks, Snowflake, and cloud platforms such as Azure, AWS, and GCP.

    Leverage strong Python, Spark, and SQL programming skills to construct robust pipelines for efficient data processing and analysis.

    Implement CI/CD pipelines for automating build, test, and deployment processes to accelerate the delivery of data solutions.

    Implement data modeling techniques to design and optimize data schemas, ensuring data integrity and performance.

    Drive continuous improvement initiatives to enhance performance, reliability, and scalability of our data infrastructure.

    Collaborate with data scientists, analysts, and other stakeholders to understand business requirements and translate them into technical solutions.

    Implement best practices for data governance, security, and compliance to ensure the integrity and confidentiality of our data assets.


    Qualifications:


    Proven experience (8+ years) in a data engineering role, with expertise in designing and building data pipelines, ETL processes, and data warehouses.

    Strong programming proficiency in SQL, Python, and Spark.

    Strong experience with cloud platforms such as AWS, Azure, or GCP is a must.

    Hands-on experience with big data technologies such as Hadoop, Spark, Kafka, and distributed computing frameworks.


    Knowledge of data lake and data warehouse solutions, including Databricks, Snowflake, Amazon Redshift, Google BigQuery, Azure Data Factory, Airflow, etc.

    Experience in implementing CI/CD pipelines for automating build, test, and deployment processes.

    Solid understanding of data modeling concepts, data warehousing architectures, and data management best practices.


    Excellent communication and leadership skills, with the ability to effectively collaborate with cross-functional teams and drive consensus on technical decisions.

    Bachelor's or Master's degree in Computer Science, Engineering, or a related field.

    Relevant certifications (e.g., Azure, Databricks, Snowflake) would be a plus.
