Data Engineer - Scottsdale, United States - Ignitec Inc

    Technology / Internet
    Description

    Ignitec combines industry standards with leading technology capabilities to solve complex problems and deliver value with higher quality and lower performance risk. Our solutions bring together top technology personnel, cutting-edge technology, and Agile approaches to bring innovative ideas to life. We do not seek merely to meet expectations; we continuously strive to exceed them.

    We have received MBE certification from the NMSDC as a certified Minority Business Enterprise. We take pride in this certification and partner with organizations to meet their minority (D&I) small business goals. We are also a certified Minority Business Enterprise with the USPAACC, which awarded Ignitec the FAST 50 Asian American Business Award in 2022, and we are DBE certified by the Virginia Department of SBSD.

    Title: ETL/Data Engineer

    Salary: $130,000 - $140,000/yr on W2

    Location: Scottsdale, Arizona

    Job Requirements

    • 6-8 years of IT experience focusing on enterprise data architecture and management.
    • Experience in conceptual, logical, and physical data modeling, and expertise in relational and dimensional data modeling
    • Must-have skillset: Java, Scala, and AWS (S3, Glue, Redshift)
    • Experience with Databricks (cloud and on-prem), Structured Streaming, Delta Lake concepts, and Delta Live Tables is required
    • Experience with Spark programming in Scala and Java
    • Data lake concepts such as time travel, schema evolution, and optimization (see the Delta Lake sketch after this list)
    • Experience leading and architecting enterprise-wide initiatives, specifically system integration, data migration, transformation, data warehouse builds, data mart builds, and data lake implementation/support
    • Advanced-level understanding of streaming data pipelines and how they differ from batch systems
    • Ability to formalize the handling of late-arriving data, window definitions, and data freshness (illustrated in the Structured Streaming sketch after this list)
    • Advanced understanding of ETL and ELT, and of ETL/ELT tools such as AWS Database Migration Service
    • Understanding of concepts and implementation strategies for incremental data loads, such as tumbling windows, sliding windows, and high watermarks (see the high-watermark sketch after this list)
    • Familiarity and/or expertise with Great Expectations or another data quality/data validation framework is a bonus (a hand-rolled equivalent appears after this list)
    • Advanced-level SQL experience with joins, aggregation, window functions, common table expressions, RDBMS schema design, and performance optimization (see the SQL example after this list)
    • Indexing and partitioning strategy experience
    • Ability to debug, troubleshoot, design, and implement solutions to complex technical issues
    • Experience with large-scale, high-performance enterprise big data application deployment and solution delivery
    • Architecture experience in AWS environment a bonus
    • Familiarity with AWS Lambda, specifically how to push and pull data and how to use AWS tooling to inspect data when processing massive datasets at scale, is a bonus
    • Experience with GitLab and CloudWatch, and the ability to write and maintain GitLab CI/CD pipeline definitions
    • Experience configuring and optimizing AWS Lambda functions, and experience with S3
    • Familiarity with Schema Registry and message formats such as Avro and ORC (see the Avro sketch after this list)
    • Ability to thrive in a team-based environment
    • Experience briefing the benefits and constraints of technology solutions to technology partners, stakeholders, team members, and senior management
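
    Illustrative sketches for several of the skills above follow. All paths, table names, column names, and connection details in them are assumptions made for illustration, not details from this posting. First, a minimal Delta Lake sketch in Spark Scala showing time travel and schema evolution on write:

        import org.apache.spark.sql.SparkSession

        val spark = SparkSession.builder().appName("delta-sketch").getOrCreate()
        val tablePath = "s3://example-bucket/tables/orders"  // placeholder path

        // Time travel: read the table as of an earlier version or timestamp.
        val asOfVersion = spark.read.format("delta")
          .option("versionAsOf", 12)
          .load(tablePath)
        val asOfTime = spark.read.format("delta")
          .option("timestampAsOf", "2024-01-01")
          .load(tablePath)

        // Schema evolution: append data containing a new column and let Delta
        // merge that column into the table schema.
        spark.read.parquet("s3://example-bucket/staging/orders_new")
          .write.format("delta")
          .mode("append")
          .option("mergeSchema", "true")
          .save(tablePath)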
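
    Next, a Structured Streaming sketch showing a tumbling event-time window with a watermark, which bounds how long the query waits for late data before finalizing a window (the Kafka broker and topic are placeholders):

        import org.apache.spark.sql.SparkSession
        import org.apache.spark.sql.functions._

        val spark = SparkSession.builder().appName("streaming-window-sketch").getOrCreate()
        import spark.implicits._

        // Hypothetical Kafka source.
        val events = spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "clickstream")
          .load()
          .select($"timestamp".as("eventTime"), $"value".cast("string").as("payload"))

        // Tumbling 5-minute windows; events arriving more than 10 minutes
        // behind the max event time seen so far are dropped as too late.
        val counts = events
          .withWatermark("eventTime", "10 minutes")
          .groupBy(window($"eventTime", "5 minutes"))
          .count()

        counts.writeStream
          .outputMode("update")   // emit window counts as they are updated
          .format("console")
          .start()
          .awaitTermination()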
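
    A hand-rolled high-watermark incremental load: the job reads the newest timestamp already present in the target, then pulls and appends only source rows modified after it. The updated_at column and the JDBC connection are assumptions for the sketch:

        import org.apache.spark.sql.SparkSession
        import org.apache.spark.sql.functions._

        val spark = SparkSession.builder().appName("high-watermark-sketch").getOrCreate()
        val targetPath = "s3://example-bucket/tables/customers"

        // The high watermark is the newest timestamp already in the target.
        // (First-run handling for an empty target is omitted for brevity.)
        val watermark = spark.read.format("delta").load(targetPath)
          .agg(max(col("updated_at")))
          .head()
          .getTimestamp(0)

        // Pull only rows modified after the watermark and append them.
        spark.read.format("jdbc")
          .option("url", "jdbc:postgresql://db-host:5432/app")  // placeholder
          .option("dbtable", "public.customers")
          .load()
          .filter(col("updated_at") > lit(watermark))
          .write.format("delta").mode("append").save(targetPath)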
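
    Great Expectations itself is a Python framework; the same idea in this posting's Scala/Spark stack (or via a JVM library such as Deequ) is a set of assertions run against a batch before it is published. A hand-rolled sketch, with assumed column names:

        import org.apache.spark.sql.DataFrame
        import org.apache.spark.sql.functions._

        // Validate a batch before publishing: null, uniqueness, and range checks.
        def validate(df: DataFrame): Seq[String] = {
          val errors = scala.collection.mutable.ListBuffer[String]()

          val nullIds = df.filter(col("customer_id").isNull).count()
          if (nullIds > 0) errors += s"$nullIds rows with null customer_id"

          val dupes = df.count() - df.dropDuplicates("order_id").count()
          if (dupes > 0) errors += s"$dupes duplicate order_id values"

          val negative = df.filter(col("amount") < 0).count()
          if (negative > 0) errors += s"$negative rows with negative amount"

          errors.toSeq
        }

        // Fail the pipeline loudly instead of publishing bad data:
        // val issues = validate(batchDf)
        // require(issues.isEmpty, issues.mkString("; "))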
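
    For the SQL bullet, a query combining a common table expression with a window function, run here through spark.sql against an assumed orders table; it keeps each customer's most recent order:

        val latestOrders = spark.sql("""
          WITH ranked AS (
            SELECT
              customer_id,
              order_id,
              order_ts,
              amount,
              ROW_NUMBER() OVER (
                PARTITION BY customer_id
                ORDER BY order_ts DESC
              ) AS rn
            FROM orders
          )
          SELECT customer_id, order_id, order_ts, amount
          FROM ranked
          WHERE rn = 1
        """)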
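
    Finally, a small Avro sketch using Avro's Java API from Scala. The schema is a made-up example; in practice it would be registered with, and fetched from, a Schema Registry rather than inlined:

        import org.apache.avro.Schema
        import org.apache.avro.generic.{GenericData, GenericRecord}

        // Illustrative schema; real pipelines fetch this from a Schema Registry.
        val schemaJson =
          """{
            |  "type": "record",
            |  "name": "OrderEvent",
            |  "fields": [
            |    {"name": "order_id", "type": "string"},
            |    {"name": "amount",   "type": "double"}
            |  ]
            |}""".stripMargin

        val schema: Schema = new Schema.Parser().parse(schemaJson)

        val record: GenericRecord = new GenericData.Record(schema)
        record.put("order_id", "ord-123")
        record.put("amount", 42.5)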