
    Azure Databricks - Iselin, United States - Diverse Lynx

    Diverse Lynx Iselin, United States

    3 weeks ago

    Description
    Develop a deep understanding of the data sources, implement data standards, and maintain data quality and master data management.

    • Expert in building Databricks notebooks that extract data from source systems such as DB2 and Teradata, perform data cleansing, data wrangling, and ETL processing, and load the results into Azure SQL DB (see the JDBC sketch after this list).
    • Expert in building ephemeral notebooks in Databricks (wrapper, driver, and config) for processing data and back-feeding it to DB2 using a multiprocessing thread pool (a sketch follows this list).
    • Expert in developing JSON definitions for deploying data-processing pipelines in Azure Data Factory (ADF).
    • Expert in using Databricks with Azure Data Factory (ADF) to process large volumes of data.
    • Performed ETL operations in Azure Databricks by connecting to different relational database source systems using JDBC connectors.
    • Developed Python scripts for file validation in Databricks and automated the process using ADF.
    • Analyzed existing SQL scripts and redesigned them in PySpark SQL for faster performance.
    • Worked on reading and writing multiple data formats such as JSON, Parquet, and Delta from various sources using PySpark (see the format I/O sketch after this list).
    • Developed an automated process in Azure that ingests data daily from a web service and loads it into Azure SQL DB.
    • Expert in optimizing PySpark jobs to run on different clusters for faster data processing.
    • Developed Spark applications in Python (PySpark) on a distributed environment to load large numbers of CSV files with differing schemas into PySpark DataFrames, process them, and reload them into Azure SQL DB tables (see the multi-schema CSV sketch after this list).
    • Analyzed data where it lives by mounting Azure Data Lake and Blob Storage to Databricks (see the mount sketch after this list).
    • Used Logic Apps to take decision-based actions in workflows and developed custom alerts using Azure Data Factory, SQL DB, and Logic Apps.
    • Developed Databricks ETL pipelines using notebooks, Spark DataFrames, Spark SQL, and Python scripting.
    • Developed Spark applications using PySpark and Spark SQL for data extraction, transformation, and aggregation from multiple file formats, analyzing and transforming the data to uncover insights into customer usage patterns.
    • Good knowledge of and exposure to the Spark architecture, including Spark Core, Spark SQL, DataFrames, Spark Streaming, driver and worker nodes, stages, executors, and tasks.
    • Involved in performance tuning of Spark applications: setting the right batch interval, the correct level of parallelism, and memory tuning.
    • Expert in understanding the current production state of an application and determining the impact of a new implementation on existing business processes.
    • Involved in migrating data from on-premises servers to cloud databases (Azure Synapse Analytics (DW) and Azure SQL DB).
    • Good hands-on experience setting up Azure infrastructure such as storage accounts, integration runtimes, service principal IDs, and app registrations to support scalable, optimized analytical workloads for business users in Azure.
    • Expert in ingesting streaming data.

    Digital: Databricks 10 & Above
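
    The DB2/Teradata-to-Azure-SQL bullets above describe a standard JDBC extract/cleanse/load pattern in Databricks. Below is a minimal PySpark sketch of that pattern; the hostnames, table names, and credentials are placeholders rather than details from this posting, and the DB2 and SQL Server JDBC drivers are assumed to be installed on the cluster.

        from pyspark.sql import SparkSession

        spark = SparkSession.builder.appName("jdbc-etl").getOrCreate()

        # Extract: read a source table from DB2 over JDBC.
        src = (spark.read.format("jdbc")
               .option("url", "jdbc:db2://db2-host:50000/SRCDB")   # placeholder host/db
               .option("dbtable", "SCHEMA.SOURCE_TABLE")           # placeholder table
               .option("user", "etl_user")
               .option("password", "***")
               .load())

        # Cleanse: drop fully empty rows and exact duplicates.
        clean = src.dropna(how="all").dropDuplicates()

        # Load: append the cleansed rows into an Azure SQL DB table over JDBC.
        (clean.write.format("jdbc")
         .option("url", "jdbc:sqlserver://myserver.database.windows.net:1433;database=TargetDB")
         .option("dbtable", "dbo.TARGET_TABLE")                    # placeholder table
         .option("user", "etl_user")
         .option("password", "***")
         .mode("append")
         .save())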
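
    The "back-feeding ... using a multiprocessing thread pool" bullet most plausibly refers to Python's multiprocessing.pool.ThreadPool driving parallel JDBC writes from a driver ("wrapper") notebook. A hedged sketch under that assumption; the write_batch helper, the region-based split, and the connection details are illustrative, not from this posting.

        from multiprocessing.pool import ThreadPool
        from pyspark.sql import SparkSession

        spark = SparkSession.builder.appName("db2-backfeed").getOrCreate()

        # Illustrative stand-in for processed results that must go back to DB2.
        results = spark.createDataFrame(
            [("NE", 101), ("SE", 102), ("MW", 103)], ["region", "value"])

        def write_batch(batch_df):
            # Append one slice of the results to DB2 over JDBC.
            (batch_df.write.format("jdbc")
             .option("url", "jdbc:db2://db2-host:50000/SRCDB")   # placeholder
             .option("dbtable", "SCHEMA.FEEDBACK_TABLE")         # placeholder
             .option("user", "etl_user")
             .option("password", "***")
             .mode("append")
             .save())

        # Slice the results and write the slices concurrently from the driver.
        slices = [results.filter(results.region == r) for r in ("NE", "SE", "MW")]
        with ThreadPool(3) as pool:
            pool.map(write_batch, slices)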
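
    Reading and writing the formats named above with PySpark looks roughly like the following; the paths are illustrative mount points, and spark is the session a Databricks notebook provides.

        # Read each format from illustrative mount-point paths.
        json_df  = spark.read.json("/mnt/raw/events/")
        parq_df  = spark.read.parquet("/mnt/raw/snapshots/")
        delta_df = spark.read.format("delta").load("/mnt/curated/orders")

        # Write a curated copy back out as a Delta table (Delta Lake ships with Databricks).
        json_df.write.format("delta").mode("overwrite").save("/mnt/curated/events")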
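
    Loading many CSV files whose schemas differ, as the DataFrame bullet describes, generally means aligning columns before a union. A sketch under assumed paths and headered files, again with spark as provided by a Databricks notebook:

        from functools import reduce
        from pyspark.sql import functions as F

        # Illustrative feed directories; each may carry a different column set.
        paths = ["/mnt/raw/csv/feed_a/", "/mnt/raw/csv/feed_b/"]
        frames = [spark.read.option("header", True).csv(p) for p in paths]

        # Align every frame on the union of all column names, padding missing
        # columns with typed nulls so the union succeeds.
        all_cols = sorted({c for df in frames for c in df.columns})
        aligned = [df.select([F.col(c) if c in df.columns
                              else F.lit(None).cast("string").alias(c)
                              for c in all_cols])
                   for df in frames]
        combined = reduce(lambda a, b: a.unionByName(b), aligned)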
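
    Mounting ADLS Gen2 (or Blob Storage) to Databricks is typically done with dbutils.fs.mount and a service principal. In the sketch below the tenant ID, app ID, storage account, and secret scope are placeholders; dbutils exists only inside Databricks notebooks.

        # OAuth configs for a service principal; all identifiers are placeholders.
        configs = {
            "fs.azure.account.auth.type": "OAuth",
            "fs.azure.account.oauth.provider.type":
                "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider",
            "fs.azure.account.oauth2.client.id": "<app-id>",
            "fs.azure.account.oauth2.client.secret":
                dbutils.secrets.get(scope="etl-scope", key="sp-secret"),
            "fs.azure.account.oauth2.client.endpoint":
                "https://login.microsoftonline.com/<tenant-id>/oauth2/token",
        }

        # Mount the container so notebooks can read it via /mnt/raw.
        dbutils.fs.mount(
            source="abfss://raw@mystorageacct.dfs.core.windows.net/",
            mount_point="/mnt/raw",
            extra_configs=configs,
        )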
    Diverse Lynx LLC is an Equal Employment Opportunity employer. All qualified applicants will receive due consideration for employment without any discrimination.

    All applicants will be evaluated solely on the basis of their ability, competence and their proven capability to perform the functions outlined in the corresponding role.

    We promote and support a diverse workforce across all levels in the company.


  • Diverse Lynx Parsippany, United States

    Job Title: Azure Databricks Developer · Location: Parsippany, NJ (Day 1 onsite) · Duration: Fulltime · Job Description: Azure Databricks (Core), Azure Admin and platform services, Python, SQL, SSIS, ETL · Roles and responsibilities: · Propose good and optimized solutions and ar ...


  • Tata Consultancy Services Piscataway, United States

    Azure Databricks (Core), Azure Admin and platform services, Python, SQL, SSIS, ETL · Roles and responsibilities: · Propose good and optimized solutions and architecture for new/existing projects · Solutions should primarily focus on reducing latency and costs in batch and streamin ...


  • Saxon Global Jersey City, United States

    Role Objectives: · Experience migrating workloads to Microsoft Azure · Experience designing, provisioning, and supporting Azure PaaS and IaaS components (Azure Compute, Networks, Storage, PaaS Services, AI & ML services) · Experience with Infrastructure-as-Code tooling, includi ...


  • Siri InfoSolutions Inc Weehawken, United States

    Job Description · Title: Azure Databricks Engineer · Location: Weehawken, NJ (Day 1 Onsite) · Job Description: · Primary: Databricks, Spark batch/streaming solutions (Delta Lake and lakehouse), Python · Good to have: Perl, SSIS, SSAS with Java and Oracle · b ...


  • ProIT Inc. Parsippany, United States

    **Role: Azure Databricks** · **Location: Parsippany, NJ** · **Fulltime basis** · - Azure Databricks (Core), Azure Admin and platform services, Python, SQL, SSIS, ETL · - Roles and responsibilities: · - Propose good and optimized solutions and architecture for new/existing projec ...


  • Jconnect Infotech Inc Weehawken, United States

    Job Description · Hello, · Greetings · I am reaching out to you about an exciting job opportunity with one of our clients. · Job Title: Senior Databricks Engineer · Location: Weehawken, New Jersey - hybrid mode - 3 days/week · Type: Contract · Responsibilities · We are l ...


  • Delta System & Software, Inc. Jersey City, United States

    Position: LEAD Level Azure Databricks Data Engineer with Finance Experience · Duration: 12+ months · Location: Hybrid in Jersey City, NJ (Wednesday & Thursday) from DAY 1 · Rate: $75/hr – W2 · Job Description · AZURE experience: Azure Data Factory (ADF), ADLS Gen 2, Databrick ...


  • Siri InfoSolutions Inc Piscataway, United States Full time

    Job Description · Hi Professional, I hope you are doing well. Please find the requirement below and let me know your interest. · Azure Databricks Developer · Location: Piscataway, NJ (Onsite) · Fulltime/Permanent role · Job Description: Azure Databricks (Core), Azure Admin and platfor ...


  • Synechron Iselin, United States

    About the job · Summary: · We are looking to hire a Sr. Snowflake Data Warehouse Specialist with a financial background who will play a crucial role in leveraging Snowflake, a cloud-based data warehousing platform, to manage and analyze financial data. · Primary Responsibilities: ...

  • Diverse Lynx

    Data Engineer

    6 days ago


    Diverse Lynx Iselin, United States

    JD: · Role name: · Engineer · Role Description: · 1. Develop/design solutions from detailed design specifications. 2. Play an active role in defining standards in coding, system design, and architecture. 3. Revise, refactor, update, and debug code. 4. Customer interaction. 5. Must have s ...

  • Webologix Ltd/ INC

    Senior Data Engineer

    4 weeks ago


    Webologix Ltd/ INC Woodbridge Township, United States

    Job Title: Sr. Data Engineer · Locations: Iselin, NJ / NYC, NY · Type of hire: Fulltime · Experience: 10+ years · Job Description: · Azure, ADF, Databricks, Python, SnowSQL, and Snowflake. · Primary Responsibilities: · Strong understanding of Snowflake on Azure architecture, de ...


  • eTeam South Plainfield, United States

    Role: Azure Data Eng/Dev · Location: Brampton, CA (Remote) · Start date: Immediate availability · Background check MANDATORY · Request ID: · Job Description: · Work closely with client technical heads, business heads, and business analysts to understand and document business an ...


  • Hexaware Technologies Iselin, United States

    Data Warehousing Specialist II (HEX461; multiple positions; full-time). Hexaware seeks Data Warehousing Specialists II to work in Iselin, NJ and various unanticipated locations throughout the US to design and implement scalable data solutions. Research and develop technical pattern ...

  • ValueMomentum

    Snowflake Architect

    3 days ago


    ValueMomentum Piscataway, United States

    Qualifications · 3 years' experience in Snowflake, data modeling, and architecture · Understanding of data sharing in Snowflake · Experience in performance management in Snowflake · Excellent communication skills, both verbal and written · At least 5 years' experience in hands-on ex ...



  • SANS Edison, United States

    Job Title - Azure Data Solutions Architect · Location - Edison, NJ · Job Summary: · We are seeking an experienced Azure Data Solutions Architect with P&C industry experience. · If you are a data freak and are looking for a professionally challenging and financially rewarding career, th ...


  • PVH Piscataway, United States

    Design Your Future at PVH · VP, Enterprise Planning Solutions - PVH Corp. · POSITION SUMMARY: · PVH is one of the world's largest global apparel companies. With a history going back over 135 years, PVH has excelled at growing brands and businesses with rich American heritages, beco ...


  • eSolutionsFirst, LLC Edison, United States

    Azure Data Engineer · Edison, NJ (or) McLean, VA · 12 Months Contract (Possible extension) · Required Skills: · Azure Data Engineer · Azure Data Factory, plus more custom development deployed on Azure Function Apps; Azure Databricks, workflows usin ...

  • The Dignify Solutions LLC

    Data Architect

    2 days ago


    The Dignify Solutions LLC Edison, United States

    Data Architect with 6 to 8 years of relevant experience in Microsoft Azure designing, architecting, and implementing large-scale modern data platforms. · 3-5 years' experience in the BFSI domain (D&A BFSI experience preferred) · Strong understanding of Data & Analytics platfo ...

  • Tata Consultancy Services

    Engineer

    1 week ago


    Tata Consultancy Services Edison, United States

    Skill: Cloud Engineer · Experience building and managing enterprise cloud infrastructure. · Strong hands-on experience with AWS and/or Azure environments. · Strong hands-on experience with Infrastructure as Code (IaC) using Terraform and CloudFormation tools. · Strong hands-on exp ...