Data Engineer - Chantilly, United States - Qbase
Description
Clearance: TS/SCI with polygraph REQUIRED
Location: Chantilly, VA (onsite every day)
Data Engineer
Finch AI is a fast-growing, fast-paced software development organization; our mission is to build new ways of interacting with information.
As a Finch AI Data Engineer, you join a dynamic and agile team in the development of new Finch products.
We look for strong Data Engineers who thrive on the challenges of creating new products and developing intellectual property.
Our teams are composed of successful people who enjoy solving problems, engaging in substantive technical discussions, and who are passionate about their work.
We have very high expectations for skill, motivation, self-organization, and productivity. We look for people who excel working in groups, both virtual and co-located, and who are comfortable with fast-paced agile development.
We are seeking multiple Data Engineers to work with a federal client to rapidly develop innovative solutions for the client's immediate mission challenges.
In your role, you will work with a team of developers, data scientists, SMEs, and cyber analysts to design, develop, build, and analyze data management systems.
You will be asked to analyze our client's challenges and provide solutions by identifying and applying new tools and technologies to help design new data repositories.
Working alongside the team and cyber analysts, you will perform data engineering on identified data sets. As a Data Engineer, your responsibilities will include designing, developing, optimizing, and maintaining data architecture and ETL pipelines.
Responsibilities:
- Develop robust data platforms for analytics
- Develop and perform ETL on large datasets
- Develop data visualizations to showcase valuable insights extracted from large volumes of data
- Create and optimize data pipeline architectures
- Support infrastructure and the data querying process
- Work in an Agile environment
Required Skills:
Open to all levels: Junior through Expert
Programming languages: Python, SQL, Java
Database experience with any of the following: PostgreSQL, MySQL, Oracle, MongoDB
Experience working with Docker, Kubernetes
Proficient with git and pull request workflows.
Experience building ETL data pipelines
Ability to perform API service development.
Bachelor's degree in a related field
Preferred:
Hadoop and Spark
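To give a concrete sense of the ETL pipeline work described above, here is a minimal sketch in Python using only the standard library (csv and sqlite3). It is illustrative only: the sample data, table name, and cleaning rule are hypothetical, and the actual role would use the databases and tooling listed in the requirements.

```python
import csv
import io
import sqlite3

# Hypothetical raw input; in practice this would come from a source
# database, file share, or API rather than an inline string.
RAW_CSV = """id,name,score
1,alice,90
2,bob,
3,carol,75
"""

def extract(text):
    """Extract: parse raw CSV text into a list of row dicts."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Transform: drop rows with a missing score and cast types."""
    return [
        (int(r["id"]), r["name"], int(r["score"]))
        for r in rows
        if r["score"]
    ]

def load(records, conn):
    """Load: write the cleaned records into a SQL table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS scores (id INTEGER, name TEXT, score INTEGER)"
    )
    conn.executemany("INSERT INTO scores VALUES (?, ?, ?)", records)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW_CSV)), conn)
print(conn.execute("SELECT COUNT(*) FROM scores").fetchone()[0])  # 2 rows survive cleaning
```

The extract/transform/load split keeps each stage independently testable, which matters once pipelines are scheduled and monitored in production.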