AI / ML System Software Engineer, Principal - Santa Clara, CA, United States - d-Matrix

    Description
    d-Matrix has fundamentally changed the physics of memory-compute integration with our digital in-memory compute (DIMC) engine. The "holy grail" of AI compute has been to break through the memory wall and minimize data movement.

    Having secured over $154M in funding, including $110M in our Series B offering, d-Matrix is poised to scale generative inference acceleration for Large Language Models with our chiplet-based, in-memory compute approach.

    We are on track to deliver our first commercial product in 2024 and to meet the energy and performance demands of these Large Language Models. This is a hybrid role, working onsite at our Santa Clara, CA headquarters three days per week.

    In this role, you will be part of the team that productizes the software stack for our AI compute engine.

    As part of the Software team, you will be responsible for the development, enhancement, and maintenance of the next-generation AI Deployment software.

    You have experience working across all aspects of the full-stack toolchain and understand the nuances of optimizing and trading off various aspects of hardware-software co-design.

    You are able to build and scale software deliverables in a tight development window.

    You will work with a team of compiler experts to build out the compiler infrastructure, working closely with other software (ML, systems) and hardware (mixed-signal, DSP, CPU) experts in the company.

    Computer Science, Engineering, Math, Physics or related degree.
    Strong grasp of computer architecture, data structures, system software, and machine learning fundamentals.
    Proficient in C/C++/Python development in a Linux environment using standard development tools.
    Experience with distributed, high performance software design and implementation.
    MS or PhD in Computer Science, Electrical Engineering, or related fields.
    Work experience at a cloud provider or AI compute / sub-system company.
    Experience with deep learning frameworks (such as PyTorch, TensorFlow).
    Experience with deep learning runtimes (such as ONNX Runtime, TensorRT, ...).
    Experience with MLOps from definition to deployment including training, quantization, sparsity, model preprocessing, and deployment.
    Experience training, tuning, and deploying ML models for CV (ResNet, ...).

    Equal Opportunity Employment Policy
    We're committed to fostering an inclusive environment where everyone feels welcomed and empowered to do their best work.

    We hire the best talent for our teams, regardless of race, religion, color, age, disability, sex, gender identity, sexual orientation, ancestry, genetic information, marital status, national origin, political affiliation, or veteran status.