Data/Big Data Solution Architect - Aurora, United States - ScienceSoft

    ScienceSoft Aurora, United States


    Description
    The digital platform will host Customer Data Utilization and Customer Business Intelligence.

    Internal and external customers are currently utilizing disparate systems to track information relevant to, but not limited to, key-to-key operations, profits, and inventory.

    The digital platform will include a data lake, a data warehouse, an analytics and data science platform, a customer portal, and interactive analytics and KPI dashboards.

    Information from several siloed systems will be fed via server integration services into the data lake and data warehouse, which will in turn provide accessible data for Customer Data Utilization and Customer Business Intelligence.

    Responsibilities

    Collaborating with customers to gather, assess, and interpret client needs and requirements.
    Analyzing structural requirements for new software and applications and translating them into specifications for solution development.
    Evaluating the pros and cons of the identified options before arriving at a recommended solution optimized for the client's needs.

    Advising on data solution architecture and performance, ETL process changes, SQL transformations, API integration, and the derivation of business and technical KPIs.

    Developing and delivering data solutions.
    Re-engineering business intelligence processes, designing and developing data models, and designing DWH and data lake solutions.
    Sharing your expertise throughout the deployment and RFP process.
    Installing and configuring information systems to ensure functionality.
    Involvement in RFP and pre-sales activities.
    Requirements

    Excellent verbal and written communication skills.
    Experience gathering and analyzing system requirements; strong analytical skills.
    Experience in data solution architecture, design, and deployment.
    Experience with Microsoft Azure ecosystem.
    Experience with the AWS ecosystem (EMR, EC2, S3, RDS, Redshift, Aurora, Athena, PostgreSQL).
    Engineering experience in large data systems on SQL, Hadoop, etc.
    Expertise in Microsoft SQL, Oracle and/or other transactional databases.
    Experience with data warehousing.
    Experience with ETL tools, ETL tech designs, Data Flows.
    Experience with business intelligence, enterprise reporting, and data visualization tools (Power BI, Tableau, Qlik, etc.).
    Experience in data science and machine learning, including data mining and segmentation techniques.
    Experience with NoSQL-based, SQL-like technologies (e.g., Hive, Pig, Spark SQL/Shark, Impala, BigQuery).
    Experience building distributed big data solutions, including ingestion, caching, processing, consumption, logging, and monitoring.
    Development experience with streaming platforms (Kinesis, Kafka, Spark Streaming, Apache Flink).
    Development experience with batch processing platforms.
    Proficiency in one or more programming or scripting languages, such as Python, Java, Scala, or C#.
    7+ years of relevant work experience required.
    Bachelor's degree in computer science, engineering or related field.
    We offer challenging and interesting projects with a modern stack and a great team.
