Asifkumar Iarthineni

Senior DevOps Engineer
Chandler, Maricopa

About Asifkumar Iarthineni:

Cloud/DevOps Engineer with 11+ years of experience in cloud infrastructure and in architecting a variety of DevOps solutions: cloud-factory IaC on Azure and AWS, SRE practices, web-application CI/CD, and cloud migration. Project-management expertise across a wide range of configuration-management, alerting, and monitoring tools, backed by strong incident-management and escalation processes.

Experience

Professional Summary

 

➢ Working with serverless computing and current container and container-orchestration technologies such as Docker and Kubernetes, building cloud-native, hybrid, and serverless applications integrated with IaC tools such as Ansible and Terraform.
➢ Design, develop, ship, and operate reliable, world-class distributed software services in the cloud, building key partnerships with engineers, program managers, and customers to drive planning and execution.
➢ Bring simplicity to customers while tackling challenging problems in areas such as scalability, high density, multi-tenancy, and high availability.
➢ Effective, polished interaction with customers to gather requirements.
➢ Bring clarity, create energy, and drive results: set a vision, rally the team behind it, and deliver it for the customer.
➢ Improve customer experience by analyzing signals from various sources, driving RCAs and service improvements, including bug fixes.
➢ Analyze RCAs and post-mortem outcomes to identify the technical gaps to address among associates and plan the corresponding learning path for the team.
➢ Collaborate periodically with the IM lead, planning lead, SBU manager, and CITM to improve processes, system performance, and, in turn, customer satisfaction.
➢ Partner with Product Management, the IT & OT community, customers, and other stakeholders to define requirements, scope projects, and ship products in rapid, iterative cycles.
➢ Plan and prioritize work for the team, including collaboration with partner organizations and continuous improvement by incorporating feedback from internal and external stakeholders.
➢ Participate in architecture design for cloud infrastructure services, focusing on strategic customer-support scenarios.
➢ Engage in site-reliability practices and support scalable live-site services, with a specific focus on crisis management and the customer.
➢ Experience working with geo-distributed engineering team partners.
➢ Lead, hire, and grow a diverse, world-class team of cloud engineers, and create an inclusive environment that attracts and retains high-performing associates.
➢ Lead others by exemplifying an innovative mindset, inclusiveness, teamwork, customer obsession, and a passion for shipping high-quality products.
➢ Stay up to date on industry trends in cloud native, open-source development, and DevOps processes, leading efforts on innovation, modern design, and reliability engineering.
➢ Committed to collaboration and teamwork, with the ability to deliver through influence without compromising organizational and time-management skills.
➢ Manage all aspects of the software configuration management process, including code compilation, packaging, deployment/release methodology, and application configuration.

 

 

Technical Hands-On

➢ Key role as SRE, with a primary focus on site reliability in adherence to the service-level agreements (SLAs) promised to customers, by monitoring SRE metrics and service-level indicators (SLIs) and setting SLOs.
➢ Creating a chaos-free environment through risk prediction and error budgeting.
➢ Reduce toil using DevOps and Infrastructure as Code (IaC) tools such as Ansible, Terraform, AWS CloudFormation, and Azure ARM templates.
➢ Gain trust through clear visibility into performance improvements against agreed SLAs.
➢ Construct effective alerting and notification mechanisms to reduce time to recovery and, in turn, customer impact time.
➢ Infrastructure metrics collection and log analysis using Splunk; integrated Datadog with Splunk to provide customer dashboards and automatic identification of trends in applications.
➢ Good experience with the monitoring tools Grafana and Prometheus.
➢ Ability to understand migration requirements and bridge gaps.
➢ Wrote templates for provisioning Azure and AWS services as code using Terraform, maintaining state files through Terraform Cloud and Terraform Business.
➢ Experience setting up and configuring the Ansible framework; good experience preparing Ansible roles, including modules from Ansible Galaxy, to provision instances on Azure and AWS and to host applications.
➢ Utilized Kubernetes to orchestrate the deployment, scaling, and management of Docker containers.
➢ Orchestrated Docker containers using Kubernetes manifest files.
➢ Extensive experience writing Docker images to virtualize servers for dev/test environment needs, plus configuration automation using Docker containers.
➢ Experience creating Docker containers from existing Linux containers and AMIs, as well as building Docker containers from scratch.
➢ Good understanding of setting up CI/CD in both GoCD and GitLab.
➢ Responsible for installing and configuring Jenkins to support various Java builds via Jenkins plugins, achieving continuous integration and publishing builds to the repository.
➢ Structured multi-branch build-and-deploy pipelines in Jenkins for continuous integration and end-to-end automation of all builds and deployments.
➢ Integrated SonarQube with Jenkins for continuous inspection of code quality, using the SonarQube scanner for Maven.
➢ Good knowledge of managing a Nexus repository for Maven artifacts and dependencies.
➢ Well versed in Azure services: App Configuration, Azure Functions, API Management, Traffic Manager, ARM templates, Key Vault, VMs, Active Directory, Cloud Services, Block Blob, Azure Files, Content Delivery Network, Container Services, Kubernetes Service, and autoscaling.
➢ Expert in deploying code through web application servers such as Apache Tomcat and JBoss.
➢ Experienced in creating multiple VPCs with public and private subnets as required, distributing them across the VPC's availability zones.
➢ Hands-on experience with EC2, ECS, ELB, S3, VPC, IAM, Lambda, CloudWatch, CloudFormation, and Auto Scaling.
➢ Integrated NAT gateways to allow private instances to communicate with the internet.
➢ Created snapshots to back up volumes, and images (AMIs) to capture the launch configurations of EC2 instances.
➢ Own monthly/quarterly business-review meetings with internal and external stakeholders, preparing reports and presentations.
➢ Experience working with the management tools TFS, Jira, Confluence, Miro, Azure Boards, VersionOne, and ServiceNow.
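To illustrate the error budgeting and SLO tracking mentioned above, here is a minimal sketch; the function, field names, and figures are illustrative, not taken from any production tooling:

```python
def error_budget(slo: float, total_minutes: int, downtime_minutes: float) -> dict:
    """Compute the remaining error budget for an availability SLO.

    slo: target availability as a fraction, e.g. 0.999 for "three nines"
    total_minutes: length of the measurement window in minutes
    downtime_minutes: observed downtime inside the window
    """
    allowed = total_minutes * (1 - slo)      # total budget in minutes
    remaining = allowed - downtime_minutes   # budget left to spend
    achieved = 1 - downtime_minutes / total_minutes
    return {
        "budget_minutes": round(allowed, 2),
        "remaining_minutes": round(remaining, 2),
        "achieved_availability": round(achieved, 5),
        "slo_met": achieved >= slo,
    }

# A 99.9% SLO over a 30-day window allows roughly 43.2 minutes of downtime.
print(error_budget(slo=0.999, total_minutes=30 * 24 * 60, downtime_minutes=10))
```

The point of the calculation is that once `remaining_minutes` approaches zero, risky releases are deferred; that is the negotiation lever between feature velocity and reliability.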

 


Technical Knowledge


Cloud Platforms          : AWS, Azure
IaC & CM                 : Terraform, Ansible, CloudFormation, ARM Templates
Orchestration            : Kubernetes (EKS, AKS)
Containers               : Docker, Azure Container Instances
SRE Monitoring & Tracing : Datadog, Splunk, Grafana, Prometheus
Scripting                : Python, Bash, HCL, YAML, JSON
SCM Tools                : GitLab, Azure Repos, Git, Bitbucket, TFS
CI Tools                 : Jenkins, GoCD, GitLab Runners
Code Quality             : SonarQube
Artifact Repositories    : Nexus, Azure Container Registry, ECR
Databases                : PostgreSQL, MSSQL
Programming              : PowerShell, Python
Build Tools              : Gradle, Maven
Application Servers      : Apache Tomcat, JBoss, IIS
Management Boards        : Miro, Azure Boards, Trello
Ticketing & Tracking     : Jira, Confluence, ServiceNow, VersionOne
IM Escalation Matrix     : PagerDuty
Operating Systems        : Linux, Windows


Professional Experience


▪ Lead DevOps Engineer at Medidata, from September 2022 to present.
▪ Principal Infra Developer [internal role: Infra Architect] (grade M & equivalent) at Cognizant, from Nov 2021 to Aug 2022.
▪ Senior DevOps Engineer, at Clarivate, from July 2019 to October 2021.
▪ DevOps Engineer, at IQVIA, from July 2012 to June 2019.
▪ Software Engineer, at Savvysoft, from April 2011 to Nov 2011.

Education


▪ M.Tech from Gudlavalleru Engineering College, JNTU University, Kakinada, India in 2012.
▪ B. Tech from Prakasam Engineering College, JNTU University, Hyderabad, India in 2007.


Most Recent Projects


Client: Medidata – New York, NY
Duration: October 2022 to Present
Role: Lead DevOps Engineer

Description: Medidata Solutions is a technology company that develops and markets software as a service (SaaS) for clinical trials, covering protocol development, clinical site collaboration and management, patient-data capture through web forms, mobile health (mHealth) devices, laboratory reports and imaging systems, and trial monitoring and business analytics.
I work as a Lead DevOps Engineer supporting the ML & AI teams and other module engineering teams, creating pipelines to automate builds and providing solutions for their requirements.
Roles and Responsibilities:
• Design data pipelines and engineering infrastructure to support the ML & AI development teams.
• Work with the development teams to deploy scalable tools and services that handle machine-learning training models and inference.
• Identify and evaluate new technologies to improve the performance, maintainability, and reliability of machine-learning systems.
• Scope management and scope tracking, bringing approved acceptance criteria to user stories.
• Worked on automation scripting in PowerShell and Python to automate all deployment activities.
• Responsible for build generation through GoCD pipelines, facilitating the deployment of mobile applications, web applications, and APIs.
• Working extensively with EC2, Lambda, CloudFront, S3, VPC, IAM, Route 53, Elastic Load Balancers, CloudWatch, PostgreSQL, and Auto Scaling.
• Set up continuous integration and continuous delivery pipelines using GoCD and Medistrano, using continuous delivery to handle releases in all pre-production and production environments.
• Coordinate with different software vendors to build the applications.
• Monitor the build and deployment jobs and fix issues.
• Provision the infrastructure using CloudFormation and deploy using Medistrano.
• Own the code promotion process, including source control management (GitHub), branch management, and build management.
• Manage AWS accounts and generate cost-analysis reports using Cost Explorer.
• Collaborate with and assist engineering teams to enhance existing solutions together.
• Design, develop, and improve operational processes, including automated backup and recovery procedures, security, and patch management.

Environments: AWS, CloudFormation, GoCD, Medistrano, Python, PowerShell, Ansible, Terraform, ECS, Docker, Jira, Confluence, Git, DVC (Data Version Control).
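The cost-analysis reporting mentioned above can be sketched as a small stdlib script that aggregates a Cost Explorer CSV export per service; the column names ("Service", "Cost") and sample figures are assumptions for illustration, so a real export's header row should be checked first:

```python
import csv
import io
from collections import defaultdict

def cost_by_service(report: str) -> dict:
    """Sum spend per service from a CSV export (assumed columns: Service, Cost)."""
    totals = defaultdict(float)
    for row in csv.DictReader(io.StringIO(report)):
        totals[row["Service"]] += float(row["Cost"])
    return dict(totals)

sample = """Service,Cost
Amazon EC2,120.50
Amazon S3,14.25
Amazon EC2,30.00
"""
print(cost_by_service(sample))  # {'Amazon EC2': 150.5, 'Amazon S3': 14.25}
```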

Client: BHP – Washington, DC
Duration: October 2021 to September 2022
Role: Senior DevOps Engineer

Description: BHP, a multinational, multi-listed, world-leading resources company, extracts and processes minerals, oil, and gas; its products are sold worldwide.
Under the core concept of the Cloud Factory project, we automated infrastructure provisioning for different clients and departments within BHP to host their applications through ServiceNow, Ansible, and Terraform integrations. The scripts and pipelines we structured can provision resources in both AWS and Azure from a single ServiceNow ticket.

Responsibilities:
• Refine requirements, with major contributions to Product Increment and PBR (product backlog refinement) sessions.
• Work closely with customers and other stakeholders to shape OKRs, bringing epics into the OKRs and refining them down to user-story level.
• Analyze, fine-tune, and finalize the scope of product enhancements, freezing them into achievable phases (OKRs).
• Scope management and scope tracking, bringing approved acceptance criteria to user stories.
• Produce a customized final architecture for each epic brought into scope, using organizational standard templates for landscapes, blueprints, and Terraform components.
• Analyze technical dependencies and estimate story points for the granular user stories identified.
• Coordinate with the Scrum teams to map associates to user stories according to their interests and technical capabilities, to achieve early delivery.
• Facilitate and participate in technical discussions between cross-functional teams on technical gaps, achievements, and roadblock resolutions.
• Drive the team toward innovative additions to the product and contribute ideas for future enhancements through the innovation-lab program.
• Manage multi-branch pipelines in GitLab and deploy applications to dev, test, and production environments.
• Set up the Terraform Business workspace, implementing pipeline execution in the workspace through automatic and manual mechanisms for plan and apply, respectively.
• Wrote automation scripts in Bash and Python for creating and updating Terraform workspaces, along with plan, apply, and destroy executions in Terraform Business.
• Key member of the management and production deployment process.
• Closely monitor infrastructure logs in Datadog dashboards through Splunk integration.
• Monitor, measure, and act on application performance using AppDynamics.
• Work closely with the MIM team to service customer tickets, provisioning resources within the indicated SLA to gain trust.

Technologies: Azure, AWS, Terraform, Ansible, Python, GitLab & GitLab Runners, ServiceNow.
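The Python scripting around Terraform workspaces mentioned above can be sketched as building the JSON:API request body for creating a workspace. The attribute names follow Terraform Cloud's workspaces API as I recall it, so treat them as an assumption to verify against the API docs:

```python
import json

def workspace_payload(name: str, auto_apply: bool = False) -> str:
    """Build a JSON:API body for creating a Terraform Cloud/Business workspace."""
    body = {
        "data": {
            "type": "workspaces",
            "attributes": {
                "name": name,
                "auto-apply": auto_apply,  # auto-run apply after a clean plan
            },
        }
    }
    return json.dumps(body)

# A script like this would POST the payload to the workspaces endpoint;
# here we only print the body it would send.
print(workspace_payload("cloud-factory-dev", auto_apply=True))
```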

Client: Clarivate – Philadelphia, PA
Duration: Apr 2020 to October 2021
Role: Sr. DevOps Engineer

Roles and Responsibilities:
• Set up development and functional continuous-integration environments for the application teams using Jenkins.
• Involved in infrastructure development and operations on the AWS cloud platform: EC2, IAM, ECS, EBS, S3, VPC, RDS, ELB, Auto Scaling, CloudFront, CloudWatch, SQS, SNS.
• Configured AWS ECS for deploying and orchestrating containers by defining tasks and services.
• Configured Elastic Load Balancers with EC2 Auto Scaling groups, and worked on CloudWatch alerts for instances, using them in Auto Scaling launch configurations.
• Eliminated the state accumulated on the Jenkins server by developing scripted pipelines in Groovy, version-controlling them and making them distributable across the organization.
• Used Jenkins pipelines to drive all microservice builds out to the Docker registry and then deployed them to the ECS cluster.
• Built EAR and WAR files with custom configuration settings using Gradle and stored the artifacts in Artifactory.
• Worked with Docker to containerize the application and all its dependencies by writing a Dockerfile.
• Researched and implemented an Agile workflow for continuous integration and testing of applications using Jenkins.
• Defined branching, labeling, and merge strategies for all applications in Git.
• Experienced in using Terraform to build, change, and manage existing cloud infrastructure as well as custom in-house solutions.
• Used CloudFront to deliver content from AWS edge locations to users, further reducing load on the front-end servers.
• Active participant in Scrum meetings, reporting progress and maintaining good communication with team members and managers.
• Good development experience in the Windows environment and sound knowledge of distributed-systems architecture.
• Focus on continuous integration and deployment, promoting enterprise solutions to target environments.
• Responsible for providing development and operations support for various modules.
• Extensive knowledge of working with the AWS CLI and the AWS console.
• Monitor the build jobs and fix issues.
• Manage the servers from the backend on the Linux platform.
• Attend the daily Scrum calls.
• Developed, maintained, and distributed release notes for each sprint release and uploaded them to ARC.
Environment: AWS (EC2, VPC, ELB, S3, RDS, IAM, CloudTrail, ECS, SQS, SNS, Redis Cache, and Route 53), Docker, Kubernetes, Terraform, Apache Tomcat, Jenkins, Maven, SonarQube, Bash scripts, Gradle, Groovy, Ansible, Packer
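The CloudWatch-alert-driven Auto Scaling mentioned above boils down to "act only when several consecutive datapoints breach a threshold." A minimal sketch of that decision logic, with illustrative thresholds rather than any real alarm configuration:

```python
def scaling_decision(cpu_samples, high=70.0, low=30.0, periods=3):
    """Mimic a CloudWatch-style alarm: scale only when the last `periods`
    CPU datapoints all breach a threshold (thresholds are illustrative)."""
    recent = cpu_samples[-periods:]
    if len(recent) < periods:
        return "no-op"                      # not enough data yet
    if all(s > high for s in recent):
        return "scale-out"                  # sustained high load
    if all(s < low for s in recent):
        return "scale-in"                   # sustained idle capacity
    return "no-op"

print(scaling_decision([40, 75, 82, 91]))  # three high datapoints in a row
```

Requiring several consecutive breaches is what keeps a single CPU spike from flapping the Auto Scaling group.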

Client: Clarivate – Philadelphia, PA
Duration: June 2019 to April 2020
Role: Sr. DevOps Engineer

Description:
I worked as a Sr. DevOps Engineer supporting the QA team and other module engineering teams, creating pipelines to automate builds and providing solutions for their requirements. I created an ECS cluster and the required resources and deployed them into production.
Roles and Responsibilities:
• Deployed QA applications developed on the Serenity framework into Docker containers.
• Set up development and functional continuous-integration environments for the application QA teams using Jenkins.
• Focus on continuous integration and deployment, promoting enterprise solutions to target environments.
• Created containers using Ubuntu Docker images.
• Created Linux and Windows EC2 instances to deploy QA applications.
• Took ownership of the DEV, INT, UAT, and Prod environments.
• Created ECS clusters and deployed tasks into containers.
• Created APIs and accessed them through CloudFront.
• Built and maintained Docker containers using ECS clusters.
• Responsible for backing up instances in AWS in the higher environments.
• Responsible for providing development and operations support for various modules.
• Monitor the build jobs and fix issues.
• Manage the servers from the backend on the Linux platform.
• Worked on exposing the source code in the Git repository for the customer, which helps them customize the application.
• Attend the daily Scrum calls.
• Developed, maintained, and distributed release notes for each sprint release and uploaded them to Confluence.

Environments: Git, Maven, Jenkins, Linux, AWS, SonarQube, Docker, Kubernetes, Airflow, S3, CloudFront, Terraform
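The instance-backup responsibility above usually pairs with a retention policy: snapshots older than the window get pruned. A minimal sketch of that pruning logic; the tuple shape and IDs are illustrative, not an AWS API response:

```python
from datetime import datetime, timedelta

def snapshots_to_delete(snapshots, retention_days=7, now=None):
    """Return the IDs of snapshots older than the retention window.

    `snapshots` is a list of (snapshot_id, created_at) tuples; in a real
    script these would come from the cloud provider's snapshot listing.
    """
    now = now or datetime.now()
    cutoff = now - timedelta(days=retention_days)
    return [sid for sid, created in snapshots if created < cutoff]

snaps = [
    ("snap-old", datetime(2024, 1, 1)),
    ("snap-new", datetime(2024, 1, 14)),
]
print(snapshots_to_delete(snaps, retention_days=7, now=datetime(2024, 1, 15)))
# ['snap-old']
```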

Client: IQVIA – Bangalore, India
Duration: Apr 2012 to June 2019
Role: DevOps/Build & Release Engineer

Description: IQVIA is a multinational company providing advanced analytics, technology solutions, and clinical research services to the life sciences industry.
Under the core concept of the GCB project, we automated infrastructure provisioning for different clients and departments within IQVIA to host their applications through ServiceNow, Ansible, and Terraform integrations. The scripts and pipelines we structured can provision resources in both AWS and Azure from a single ServiceNow ticket.

Responsibilities:
• Analyze, fine-tune, and finalize the scope of product enhancements, freezing them into achievable phases.
• Scope management and scope tracking, bringing approved acceptance criteria to user stories.
• Produce a customized final architecture for each epic brought into scope, using organizational standard templates for landscapes, blueprints, and Terraform components.
• Deployed Azure IaaS virtual machines (VMs) and cloud services (PaaS role instances) into VNet subnets; provisioned VMs and virtual networks; deployed web apps and created web jobs; and modified AWS CloudFormation templates to create custom-sized VPCs, subnets, and NAT gateways to ensure successful deployment of web applications and database templates.
• Analyze technical dependencies and estimate story points for the granular user stories identified.
• Coordinate with the Scrum teams to map associates to user stories according to their interests and technical capabilities, to achieve early delivery.
• Facilitate and participate in technical discussions between cross-functional teams on technical gaps, achievements, and roadblock resolutions.
• Drive the team toward innovative additions to the product and contribute ideas for future enhancements through the innovation-lab program.
• Manage multi-branch pipelines in GitLab and deploy applications to dev, test, and production environments.
• Set up the Terraform Business workspace, implementing pipeline execution in the workspace through automatic and manual mechanisms for plan and apply, respectively.
• Wrote automation scripts in Bash and Python for creating and updating Terraform workspaces, along with plan, apply, and destroy executions in Terraform Business.
• Key member of the management and production deployment process.
• Closely monitor infrastructure logs in Datadog dashboards through Splunk integration.
• Work closely with the MIM team to service customer tickets, provisioning resources within the indicated SLA to gain trust.

Technologies: Azure, AWS, Terraform, Ansible, Python, GitLab & GitLab Runners, ServiceNow.
 

