Principal Data Engineer - Canyon, United States - Verdant Infotech Solutions

    Verdant Infotech Solutions - Canyon, United States

    2 weeks ago

    Description

    Title: Principal Data Engineer - Data Analytics Platform
    Duration: Direct Hire
    Visa: USC and GC only (genuine candidates only)

    Location: Hybrid, available in the following cities: Phoenix, AZ / Chicago, IL / New York, NY / San Francisco, CA (local candidates from these cities only)

    Interview: two video interviews

    Required: updated LinkedIn profile with a profile picture

    Job Description:


    This is a hands-on, individual contributor and technical leadership position responsible for setting standards and ensuring excellence in the quality of outputs across the multiple teams creating an Analytics Data Platform.

    The initiative involves replicating various bespoke operational data stores and customer contribution data into a data platform fit for analytics and data science workloads.

    Scale and velocity should excite you, not scare you. We are looking for someone who has successfully built and delivered an Analytics Data Platform from the ground up.
    This is a principal level, individual contributor role responsible for architecting a Data Analytics Platform based on AWS technologies.

    The principal engineer will also act as a player/coach with our existing analytics technical platform team, which consists of senior and junior data engineers.

    They will have the support of our Sr. Director, Analytics Technical Platform.

    It is critical to have actual, hands-on experience architecting and building a data analytics platform for a medium-to-large enterprise based on AWS technologies.

    Theoretical knowledge is not enough.


    Essential Functions:
    Partners with product management to craft product strategy, create product descriptions, and ensure alignment with technology roadmaps.

    Be a thought leader: a senior point of expertise on Data Engineering, Data Science, Business Intelligence, software engineering issues, industry trends, and developing technologies.

    Be a role model to others on the team; coach and mentor team members.
    Takes ownership of technical product design and architecture, evaluating buy vs. build decisions and assessing purchased products' maturity and fit for purpose.
    Works closely with customers to understand their needs and create a partnership for making products better.

    Documents SDLC artifacts, such as Confluence documentation, Jira artifacts, and associated MS Office documents (Excel, Word, PowerPoint), so that other team members can understand and "follow the leader."

    Designs, implements and operationalizes a data analytics platform using AWS tools such as:

    S3 storage, Glue and related tools, Redshift, DynamoDB, Athena, Crawlers, and observability using CloudWatch and CloudTrail.

    Coaches and develops others to use the above tools by creating small technical demonstrators or POCs.

    Creates and conducts presentations for small-to-medium-sized groups.
    Supports the company's commitment to risk management and to protecting the integrity and confidentiality of systems and data.
    The above job description is not intended to be an all-inclusive list of duties and standards of the position. Incumbents will follow instructions and perform other related duties as assigned by their supervisor.


    Minimum Qualifications:
    Education and/or experience typically obtained through a bachelor's degree in computer science or a related technical field.
    15 or more years of experience designing and developing complex software projects.

    Experience influencing cross-functional teams to create high-throughput, high-velocity data pipelines using both streaming and batch processing paradigms.

    Effective communicator with exceptional public speaking skills. Comfortable presenting to all levels within the company.
    Knowledge of Software Development Lifecycle (SDLC) best practices, software development methodologies (Agile, Scrum, Lean, etc.), and DevOps practices.

    Experience designing, implementing, and operationalizing an analytics data platform in AWS.
    Networking experience relevant to moving data between on-prem and cloud environments.

    Experience with data handling processes such as retention policies, masking and encryption for compliance and visibility purposes.
    Current development skills in Python, Java, or Scala.
    Experience with multiple data stores such as Oracle, MS SQL Server, Postgres, and NoSQL variants.
    Experience running a vendor selection process for relevant analytics platform needs.
    Expertise in Terraform, Kubernetes (k8s), and/or other cloud orchestration technologies.
    Experience creating AWS observability and monitoring systems.


    Experience with several of the following AWS technologies:
    Glue/Glue Catalog/Glue Crawler, Kinesis/Firehose, Redshift, DynamoDB, Athena.
    Experience with ETL tools.
    Experience with data processing orchestration tools.
    Experience with data cataloging tools.
    Must pass a background check and drug screen.


    Preferred Qualifications:

    • Master's degree in computer science or related field.
    5208 Windsor Ln, Copper Canyon, Texas 75077