
ARAVIND CHENNURI
Technology / Internet
Services offered
- Strategic and results-driven Data Engineer with over 7 years of hands-on experience in analytics, business intelligence, and data science across diverse industries, including e-commerce, retail, telecom, and information technology.
- Extensive hands-on experience with Google Cloud Platform (GCP) and Amazon Web Services (AWS), including GCP services such as BigQuery, Pub/Sub, Dataform, Data Fusion, Cloud Composer, Dataproc, and Dataflow.
- Strong foundation in BigQuery and Teradata, excelling in data migration, transformation, and integration processes.
- Skilled in implementing DevOps practices, utilizing orchestration tools like Apache Airflow (DAGs), and optimizing data processing and query performance using technologies such as GCS, Dataproc, and BigQuery.
- Proficient in translating complex datasets into actionable insights to drive strategic decision-making.
- Strong understanding of data governance principles and proven experience in designing scalable data systems while enforcing data governance frameworks.
- Adept at driving Automation initiatives, collaborating with cross-functional teams, mentoring junior engineers, and staying updated with the latest industry trends.
- Adept in utilizing SQL (including complex stored procedures, views, and functions), Python, Google BigQuery, Google Analytics, Adobe Analytics, and BI/reporting tools such as Power BI, Looker, and Tableau.
- Hands-on experience with GCP services (GCS) and AWS services (S3, DynamoDB, Kinesis, Redshift, and EMR) to build real-time and batch data pipelines orchestrated with Apache Airflow (DAGs).
- Proven ability to design and implement Advanced Analytics Dashboards, ensuring real-time visibility into critical metrics. Expertise extends to statistical analysis using Python and R, elevating data-driven strategies.
- Experience with Big Data Architecture and Frameworks, such as Hadoop, involving Hadoop Distributed File System and its components such as PySpark, Hive, Sqoop, MapReduce frameworks, and Hue.
- Knowledge of Amazon Web Services (AWS) services such as S3, EC2, Glue, AWS Lambda, Athena, AWS Step Function, and Redshift.
- Working knowledge of NoSQL databases such as HBase and DynamoDB.
- Worked with Airflow scheduling and locking tools to schedule and orchestrate job workflows.
- Knowledge of Microsoft Azure services including Stream Analytics, ADW, HDInsight clusters, and Azure Data Factory.
- Developed ETL processes on Azure utilizing Azure Databricks and Azure Data Factory.
- Previous experience writing Spark applications in Python.
- Previous experience fine-tuning and debugging Spark programs using optimization approaches.
- Used Sqoop to import and export data from RDBMS to HDFS, Hive, and vice versa.
- Knowledge of the Hive query language, as well as performance optimization techniques like static partitioning, dynamic partitioning, bucketing, and parallel execution.
- Created UDF and UDAF functions and used them in Hive queries.
- Knowledge of MapReduce and the Apache Hadoop API for analyzing structured and unstructured data.
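The bullets above mention orchestrating batch and real-time pipelines with Apache Airflow DAGs. A minimal sketch of the dependency-ordering idea behind a DAG, using only the Python standard library rather than Airflow itself (the task names are hypothetical):

```python
# Minimal sketch of DAG-based task ordering, the scheduling model Airflow
# uses for pipelines. Uses only the standard library (graphlib), not the
# Airflow API; the task names are hypothetical placeholders.
from graphlib import TopologicalSorter

# Each task maps to the set of tasks it depends on.
pipeline = {
    "extract_gcs": set(),
    "load_bigquery": {"extract_gcs"},
    "transform": {"load_bigquery"},
    "publish_report": {"transform"},
}

# static_order() yields tasks so that every dependency runs first.
order = list(TopologicalSorter(pipeline).static_order())
print(order)
```

In Airflow proper, the same dependencies would be declared with operators and the `>>` operator, and the scheduler derives this execution order automatically.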
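One of the Hive optimization techniques listed above is bucketing. A small illustration of the underlying idea, that rows with the same key always hash to the same bucket, in plain Python (a stable CRC32 hash is used here for determinism; Hive's actual hash function differs):

```python
# Sketch of hash bucketing, as in Hive's CLUSTERED BY ... INTO N BUCKETS.
# Rows with identical keys always land in the same bucket, which is what
# makes bucketed joins and sampling efficient. Illustrative only.
import zlib

def bucket_for(key: str, num_buckets: int) -> int:
    """Map a key to a bucket index in [0, num_buckets)."""
    return zlib.crc32(key.encode("utf-8")) % num_buckets

rows = ["user_1", "user_2", "user_3", "user_1"]
buckets = [bucket_for(k, 4) for k in rows]
# The two "user_1" rows are guaranteed to share a bucket.
assert buckets[0] == buckets[3]
```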
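The final bullet refers to the MapReduce model for analyzing structured and unstructured data. The classic word-count pattern can be sketched in plain Python: map emits (key, 1) pairs, the shuffle groups by key, and reduce aggregates each group (a real job would of course run on Hadoop or Spark):

```python
# Plain-Python sketch of the MapReduce pattern: map -> shuffle -> reduce.
# Illustrative only; distribution and fault tolerance are what Hadoop adds.
from collections import defaultdict

def map_phase(lines):
    # Emit a (word, 1) pair for every word in every input line.
    for line in lines:
        for word in line.split():
            yield (word.lower(), 1)

def shuffle(pairs):
    # Group all values by key, as the framework's shuffle stage does.
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    # Aggregate each key's values; here, a simple sum for word counts.
    return {key: sum(values) for key, values in grouped.items()}

counts = reduce_phase(shuffle(map_phase(["big data big pipelines", "data"])))
print(counts)  # {'big': 2, 'data': 2, 'pipelines': 1}
```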
Education
Master’s in Data Engineering Analytics.
University of North Texas | Denton, Texas.
Bachelor’s in Computer Science and Engineering.
Jawaharlal Nehru Technological University Hyderabad | College of Engineering Hyderabad (Autonomous)
Kukatpally, Hyderabad, Telangana, India.