- Strong knowledge of developing Extract-Transform-Load (ETL) processes, including end-to-end pipelines that load data from multiple sources (15 years)
- Ability to gather and document requirements for data extraction, transformation, and load processes (15 years)
- Understanding of data warehousing, data lake, business intelligence and information management concepts and standards.
- Ability to advise internal and external customers on appropriate tools and systems for complex data processing challenges.
ETL Data Engineer - Washington, United States - The Midtown Group
Description
Our client is hiring an ETL Data Engineer to support data operations for its Cloud Data Exchange, onsite in Washington, DC, for $60/hour.
Responsibilities:
Analyzes, designs, develops, implements, replicates, and supports complex enterprise data projects.
Interfaces with other agencies; consults with and informs user departments on system requirements; advises on environment constraints and operating difficulties in the current state; advises on and resolves problems using cloud solutions; and develops and replicates future enhancements to the District's data systems.
Applies strong knowledge of Extraction, Transformation and Loading (ETL) processes using frameworks such as Azure Data Factory, Synapse, Databricks, and Informatica; gathers requirements from stakeholders or analyzes existing code, then performs enhancements or new development.
Establishes cloud and on-premises connectivity across systems such as ADLS, ADF, Synapse, and Databricks.
Hands-on experience with Azure cloud services such as Azure Data Factory or Azure Synapse, MS SQL, Azure SQL Database, Azure Data Lake Storage Gen2, Blob Storage, Python, etc.
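For illustration, a minimal sketch of verifying ADLS Gen2 connectivity from Python with the Azure SDK; the account name, container name, and credential setup are assumptions for this sketch, not details from the posting:

```python
# Minimal ADLS Gen2 connectivity check (account and container names are hypothetical).
# Requires: pip install azure-identity azure-storage-file-datalake
from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

credential = DefaultAzureCredential()  # picks up managed identity, CLI login, or env vars
service = DataLakeServiceClient(
    account_url="https://exampleaccount.dfs.core.windows.net",  # hypothetical account
    credential=credential,
)

# List top-level paths in a hypothetical "landing" container to confirm access.
fs = service.get_file_system_client("landing")
for path in fs.get_paths(recursive=False):
    print(path.name)
```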
Creates end-to-end pipelines that read data from multiple sources or source systems and load it into a landing layer or SQL tables.
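As a sketch of that pattern, the following PySpark job reads from two example sources and lands both as Parquet; all paths, JDBC settings, and table names are placeholders assumed for illustration:

```python
# End-to-end landing-load sketch in PySpark (paths, credentials, tables hypothetical).
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("landing-load").getOrCreate()

# Source 1: CSV files dropped in a raw zone.
orders = spark.read.option("header", "true").csv(
    "abfss://raw@exampleaccount.dfs.core.windows.net/orders/"
)

# Source 2: a SQL Server table read over JDBC (driver must be on the classpath).
customers = (
    spark.read.format("jdbc")
    .option("url", "jdbc:sqlserver://example-server:1433;databaseName=sales")
    .option("dbtable", "dbo.customers")
    .option("user", "etl_user")
    .option("password", "<secret>")
    .load()
)

# Land both datasets in the landing layer as Parquet.
orders.write.mode("overwrite").parquet(
    "abfss://landing@exampleaccount.dfs.core.windows.net/orders/"
)
customers.write.mode("overwrite").parquet(
    "abfss://landing@exampleaccount.dfs.core.windows.net/customers/"
)
```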
Familiarity/experience with data integration and data pipeline tools (e.g., Informatica, Synapse, Apache NiFi, Apache Airflow).
Familiarity/experience with various data formats, including database-specific (Oracle, SQL Server, DB2, Quickbase), text (CSV, XML), and binary (Parquet, Avro).
Develops, standardizes, and optimizes existing data workflows/pipelines, adhering to best practices.
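To illustrate the format coverage named above, a short Python sketch reading CSV, XML, Parquet, and Avro; the file names are placeholders, and the Avro step assumes the fastavro package:

```python
# Reading the text and binary formats listed above (file names are hypothetical).
# Requires: pip install pandas lxml pyarrow fastavro
import pandas as pd
import pyarrow.parquet as pq
from fastavro import reader

csv_df = pd.read_csv("orders.csv")             # text: CSV
xml_df = pd.read_xml("orders.xml")             # text: XML (pandas 1.3+, uses lxml)
parquet_tbl = pq.read_table("orders.parquet")  # binary: Parquet

with open("orders.avro", "rb") as f:           # binary: Avro
    avro_records = list(reader(f))
```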
Adheres and contributes to enterprise data governance standards by ensuring data accuracy, consistency, security, and reliability.
Automates, monitors, alerts on, and manages data pipelines and workflows.
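As one concrete example of an automated, monitored pipeline, a minimal Apache Airflow DAG with failure e-mail alerting; the DAG id, task bodies, and e-mail address are assumptions for illustration, and alerting presumes Airflow's SMTP settings are configured:

```python
# Minimal Airflow DAG sketch with failure alerting (ids and address are hypothetical).
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("extract step")  # placeholder task body

def load():
    print("load step")     # placeholder task body

with DAG(
    dag_id="example_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    default_args={"email_on_failure": True, "email": ["data-team@example.com"]},
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task  # run extract, then load
```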
Analyzes and evaluates system changes to determine feasibility; provides alternative solutions and backup and rollback procedures.
Works on the development of new systems and on upgrades and enhancements to existing systems, and ensures systems follow approved standards and remain consistent after changes.
Develops complex programs and reports in database query language.
Familiarity/experience with data visualization tools.
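A small sketch of such a report query, run from Python against Azure SQL via pyodbc; the server, database, credentials, and schema are hypothetical:

```python
# Report-query sketch against Azure SQL (connection details and schema hypothetical).
# Requires: pip install pyodbc, plus the Microsoft ODBC Driver for SQL Server.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=example-server.database.windows.net;"
    "DATABASE=dw;UID=report_user;PWD=<secret>"
)
cursor = conn.cursor()
cursor.execute(
    """
    SELECT region, SUM(amount) AS total_sales
    FROM dbo.sales
    GROUP BY region
    ORDER BY total_sales DESC
    """
)
for region, total_sales in cursor.fetchall():
    print(region, total_sales)
conn.close()
```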
Familiarity/experience handling and securing sensitive data based on its level of sensitivity.
Requirements:
15 years of experience in the skill areas listed at the top of this posting.