SoC Device Driver Engineer, Machine Learning Accelerators - Cupertino, United States - Amazon

    Description

    Custom silicon chips live at the heart of AWS Machine Learning servers, and our team builds the backend software that runs these servers. We're looking for someone to lead our system-on-chip (SoC) driver software team and help us deliver at scale as we build the next generation of driver software.

    As the lead for the SoC driver team, you will:

    Build and manage a small, strong team of developers

    Work with hardware designers to write drivers for newly developed hardware modules

    Refactor and maintain existing codebases throughout the device lifecycle

    Continuously test and deploy your software stack to multiple internal customers

    Innovate on the tooling you provide to customers, making it easier for them to use and debug our SoCs

    Annapurna Labs, our organization within AWS, designs and deploys some of the largest custom silicon in the world, with many subsystems that must all be managed, tested, and monitored. The SoC drivers are a critical piece of the AWS management software stack that ensures the chip is functional, performant, and secure.

    You will thrive in this role if you:

    Enjoy building and managing small teams

    Are familiar with modular driver architectures (such as the Linux or Windows driver stacks)

    Are proficient in C++ and familiar with Python

    Know how to build effective abstractions over low-level SoC details

    Have strong opinions about software architecture, and are able to apply them effectively

    Enjoy learning new technologies, building software at scale, moving fast, and working closely with colleagues as part of a small team within a large organization

    Although we build and deploy machine learning chips, no machine learning background is needed for this role. Your team (and your software) won't be doing machine learning. Our driver stack lives at the lowest level of the backend AWS infrastructure responsible for managing our ML servers. You and your team will develop drivers for components used by machine learning (for example, PCIe and HBM), but won't need to deeply understand ML yourselves.

    This role can be based in either Cupertino, CA or Austin, TX. The team is split between the two sites, with no preference for one over the other.

    This is a fast-paced role where you'll work with thought leaders in multiple technology areas. You'll have high standards for yourself and everyone you work with, and you'll be constantly looking for ways to improve your software, as well as our products' overall performance, quality, and cost.

    We're changing an industry. We're searching for individuals who are ready for this challenge and who want to reach beyond what is possible today. Come join us and build the future of machine learning.

    We are open to hiring candidates to work out of one of the following locations:

    Austin, TX, USA | Cupertino, CA, USA

    Basic Qualifications

    6+ years of programming experience with at least one modern language such as C++, C#, Java, Python, Golang, PowerShell, or Ruby

    6+ years of non-internship professional software development experience

    4+ years of experience designing or architecting (design patterns, reliability, and scaling) new and existing systems

    Experience leading the design, build, and deployment of complex and performant (reliable and scalable) software solutions in production

    C++ development experience

    Experience developing low-level software for hardware (SoC, ASIC, GPU, CPU, etc.)

    Preferred Qualifications

    Knowledge of engineering practices and patterns for the full software/hardware/network development life cycle, including coding standards, code reviews, source control management, build processes, testing, certification, and live-site operations

    Experience taking a leading role in building complex software or computing infrastructure that has been successfully delivered to customers

    Experience managing a small team of developers, including, but not limited to, scheduling, prioritizing, recruiting, and coaching

    Amazon is committed to a diverse and inclusive workplace. Amazon is an equal opportunity employer and does not discriminate on the basis of race, national origin, gender, gender identity, sexual orientation, protected veteran status, disability, age, or other legally protected status. For individuals with disabilities who would like to request an accommodation, please visit

    Our compensation reflects the cost of labor across several US geographic markets. The base pay for this position ranges from $134,500/year in our lowest geographic market up to $261,500/year in our highest geographic market. Pay is based on a number of factors, including market location, and may vary depending on job-related knowledge, skills, and experience. Amazon is a total compensation company. Dependent on the position offered, equity, sign-on payments, and other forms of compensation may be provided as part of a total compensation package, in addition to a full range of medical, financial, and/or other benefits. This position will remain posted until filled. Applicants should apply via our internal or external career site.