- The Data Architect must be familiar with AWS Glue, Amazon S3, Amazon Redshift, AWS Lambda, Amazon Managed Workflows for Apache Airflow (MWAA), and other AWS services.
- Strong hands-on experience with object-oriented programming in Python for data transformation in data engineering.
- Candidates should have experience with Glue, Redshift, and DynamoDB.
- Must be able to write Python/PySpark code to ingest data from different file types (txt, xls, Parquet, csv, json, etc.).
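As a minimal illustration of the multi-format ingestion this bullet describes, the stdlib-only sketch below dispatches on file extension; the `ingest` helper and file names are hypothetical, and a production pipeline would use PySpark readers instead:

```python
import csv
import io
import json

def ingest(name, text):
    """Return a list of row dicts parsed according to the file extension.

    A hypothetical helper for illustration only; real jobs would use
    spark.read.csv / spark.read.json / spark.read.parquet and so on.
    """
    ext = name.rsplit(".", 1)[-1].lower()
    if ext == "csv":
        return list(csv.DictReader(io.StringIO(text)))
    if ext == "json":
        data = json.loads(text)
        return data if isinstance(data, list) else [data]
    if ext == "txt":
        # Treat each line of a plain-text file as one record.
        return [{"line": ln} for ln in text.splitlines()]
    raise ValueError(f"unsupported file type: {ext}")

rows = ingest("users.csv", "id,name\n1,Ada\n2,Grace\n")
print(rows)  # → [{'id': '1', 'name': 'Ada'}, {'id': '2', 'name': 'Grace'}]
```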
- The architect should have experience consulting on, designing, and implementing serverless distributed solutions.
- Build, enhance, and optimize data pipelines using reusable frameworks in Python/PySpark to support the data needs of the analytics and business teams.
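One common shape for the "reusable framework" pattern this bullet asks about is composing small transformation steps into a single pipeline callable. The sketch below uses plain Python lists; the step names are assumptions for illustration, and the same idea applies to PySpark DataFrame transforms:

```python
from functools import reduce

def pipeline(*steps):
    """Compose transformation steps into one reusable callable.

    A hypothetical sketch of a reusable-framework pattern; each step
    takes a dataset and returns a transformed dataset.
    """
    return lambda data: reduce(lambda acc, step: step(acc), steps, data)

# Illustrative steps (assumed names, not from the posting).
def drop_nulls(rows):
    return [r for r in rows if r.get("amount") is not None]

def to_float(rows):
    return [{**r, "amount": float(r["amount"])} for r in rows]

clean = pipeline(drop_nulls, to_float)
print(clean([{"amount": "3.5"}, {"amount": None}]))  # → [{'amount': 3.5}]
```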
- Data specialist with design and architecture experience.
- Software development experience with an object-oriented language. Several years of experience integrating AWS cloud and on-premises systems.
- Continuous integration and continuous delivery (CI/CD) experience.
- Experience with cloud-based solutions (AWS preferred), systems, networks, and operating systems.
- Several years of experience managing complex, large-scale projects for external or internal customers.