Machine Learning - Compiler Engineer, Annapurna Labs
Machine Learning Compiler Engineer II
The product: AWS Machine Learning accelerators are at the forefront of AWS innovation and one of several AWS tools used for building Generative AI on AWS. The Inferentia chip delivers best-in-class ML inference performance at the lowest cost in the cloud. Trainium delivers best-in-class ML training performance with the most teraflops (TFLOPS) of compute power for ML in the cloud. This is all enabled by a cutting-edge software stack, the AWS Neuron Software Development Kit (SDK), which includes an ML compiler and runtime and integrates natively into popular ML frameworks such as PyTorch, TensorFlow, and MXNet. AWS Neuron and Inferentia are used at scale by customers such as Snap, Autodesk, Amazon Alexa, and Amazon Rekognition, as well as customers in various other segments.
The team: As a whole, the Amazon Annapurna Labs team is responsible for silicon development at AWS. The team covers multiple disciplines including silicon engineering, hardware design and verification, software and operations.
The AWS Neuron team works to optimize the performance of complex neural net models on our custom-built AWS hardware. More specifically, the AWS Neuron team is developing a deep learning compiler stack that takes neural network descriptions created in frameworks such as TensorFlow, PyTorch, and MXNet, and converts them into code suitable for execution. As you might expect, the team comprises some of the brightest minds in the engineering, research, and product communities, focused on the ambitious goal of creating a toolchain that will provide a quantum leap in performance.
You: As a Machine Learning Compiler Engineer II on the AWS Neuron team, you will support the ground-up development and scaling of a compiler to handle the world's largest ML workloads. Architecting and implementing business-critical features, publishing cutting-edge research, and contributing to a brilliant team of experienced engineers excites and challenges you. You will leverage your technical communication skills as a hands-on partner to AWS ML services teams, and you will be involved in pre-silicon design, bringing new products and features to market, and many other exciting projects. A background in Machine Learning and AI accelerators is preferred, but not required.
About Us
Inclusive Team Culture: Here at AWS, we embrace our differences. We are committed to furthering our culture of inclusion. We have ten employee-led affinity groups, reaching 40,000 employees in over 190 chapters globally. We have innovative benefit offerings, and host annual and ongoing learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon (gender diversity) conferences. Amazon's culture of inclusion is reinforced within our 16 Leadership Principles, which remind team members to seek diverse perspectives, learn and be curious, and earn trust.
Work/Life Balance: Our team puts a high value on work-life balance. It isn't about how many hours you spend at home or at work; it's about the flow you establish that brings energy to both parts of your life. We believe striking the right balance between your personal and professional life is critical to life-long happiness and fulfillment. We offer flexibility in working hours and encourage you to find your own balance between your work and personal lives.
Mentorship & Career Growth: Our team is dedicated to supporting new members. We have a broad mix of experience levels and tenures, and we're building an environment that celebrates knowledge sharing and mentorship. We care about your career growth and strive to assign projects based on what will help each team member develop into a better-rounded professional and enable them to take on more complex tasks in the future.
Location: Seattle