Data Engineer - San Diego, United States - Itility US

    WHO WE ARE.

    We believe in merging technology and data to drive our customers one step beyond. Itility digital consultants are experts in data, cloud, software, and IT infrastructure. Acting as the 'digital twin' of our customers, we work shoulder to shoulder with them to exceed business goals and push the boundaries of what they thought possible. We combine an agile way of working with proven methods and building blocks, which enables teams to act quickly and to shape, deliver, and run innovative digital solutions, with a continuous focus on implementing the customer's strategy and generating results.

    ABOUT THE ROLE.

    Are you adept at writing code to capture data? Are you thrilled when you dig into various data sources and transform them into valuable, user-friendly information? Are your skills at the intersection of data and software? Is your day made when data flows flawlessly through the carefully constructed code you've written? Then we have the perfect opportunity for you. For numerous enterprise clients, we develop data connectors that streamline data flow from multiple sources to analytics platforms such as Splunk, Databricks, and Hadoop. What these systems have in common is that the data is used in a production environment, which means it must flow consistently and be monitored for interruptions. Ensuring data validity and quality is also crucial.
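
    To give a flavor of this work, the sketch below shows, in plain Python using only the standard library, what a minimal connector with a simple data-quality check might look like. It is purely illustrative: the field names, source, and sink are invented for the example and do not represent any specific Itility or client system.

        import json
        import logging
        from typing import Callable, Iterable

        logging.basicConfig(level=logging.INFO)
        log = logging.getLogger("connector")

        # Assumed schema for this example only.
        REQUIRED_FIELDS = {"id", "timestamp", "value"}

        def is_valid(record: dict) -> bool:
            """Basic data-quality check: required fields present and non-null."""
            return REQUIRED_FIELDS <= record.keys() and all(
                record[field] is not None for field in REQUIRED_FIELDS
            )

        def run(source: Iterable[dict], sink: Callable[[dict], None]) -> None:
            """Stream records from source to sink; log and drop invalid ones."""
            valid = invalid = 0
            for record in source:
                if is_valid(record):
                    sink(record)
                    valid += 1
                else:
                    invalid += 1
                    log.warning("Dropped invalid record: %s", json.dumps(record))
            log.info("Run finished: %d valid, %d invalid records", valid, invalid)

        if __name__ == "__main__":
            sample = [
                {"id": 1, "timestamp": "2024-01-01T00:00:00Z", "value": 42},
                {"id": 2, "timestamp": None, "value": 7},  # fails validation
            ]
            run(sample, sink=lambda record: print("ingest:", record))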

    WHAT YOU'LL DO.

    • Developing data connectors with Python or other programming languages.
    • Setting up data validation tests within the data pipeline.
    • Establishing monitoring and alert systems for data flow interruptions or corruption.
    • Leading incident resolution efforts to minimize impact on end users.
    • Teaming up to build, deploy, maintain, and optimize data ingestion pipelines and connectors that feed the data storage system.
    • Creating and managing distributed systems for data extraction, ingestion, and processing of large, diverse data sets.
    • Developing data products in stages, integrating and managing data from various sources.
    • Collaborating with software engineers and data scientists to design data sets for diverse applications, from proof-of-concept to production.
    • Partnering with Business Analysts for requirement collection, pipeline implementation decisions, data identification, and tooling selection.
    • Working with ETL/data services and application teams to support data solution development.

    QUALIFICATIONS.

    • Possess a bachelor's or master's degree.
    • Experience in creating data ingestion scripts.
    • Exhibit team spirit and good communication skills.
    • Good understanding of SQL and Python.
    • Experience with data platforms and data lakes in an enterprise setting is a plus.
    • Demonstrate a hands-on approach, strong customer focus, problem-solving skills, and a drive to deliver results quickly.
    • Prior experience with Linux is necessary.
    • Familiarity with version control, issue tracking, and continuous integration & delivery tools, e.g. Git, Jira, Jenkins, Bamboo.
    • 3+ years' experience with analytics platforms and tools, such as Databricks, Snowflake, Teradata, Spark, Kafka.
    • 3+ years' cloud experience and demonstrated proficiency working with AWS, GCP, and/or Azure (including their DevOps services) is preferred.
    • Belief in scrum/agile ways of working and professional software practices for building reliable data flows.

    BENEFITS & TOTAL REWARDS.

    • 100% employer-paid medical, dental, and vision insurance
    • 401k with up to 4% employer match
    • Paid vacation and sick time
    • Paid company holidays
    • Health Savings Accounts (HSA)
    • Flexible Spending Accounts
    • Life Insurance
    • Professional Training and Development Programs