
Job Information

Meta Research Scientist Intern, Systems ML and HPC - SW/HW Co-Design in Bellevue, Washington

Summary:

The AI System SW/HW Co-design team’s mission is to explore, develop, and help productize high-performance software and hardware technologies for AI at datacenter scale. We achieve this via concurrent design and optimization of many aspects of the system, such as models, algorithms, numerics, performance, and AI hardware including compute, networking, and storage. In essence, we drive the AI HW roadmap at Meta and ensure our existing and future AI workloads and software are well optimized and suited for the hardware infrastructure.

Meta is seeking Research Scientist Interns to join our AI & Systems Co-Design Training team to help us explore and productionize state-of-the-art modeling, algorithmic, and systems technologies to achieve industry-leading results and efficiency in large-scale Generative AI and ranking/recommendation training jobs. In this role, you will work cross-functionally with internal software and platforms engineering teams to understand machine learning workloads, optimize their performance and communications, and iterate with infrastructure teams on hardware systems. You will employ cutting-edge optimization and data parallelization strategies to maximize training throughput for the next generations of LLMs and deep recommendation models. You will also work with external industry partners to influence their roadmaps and build the best products for Meta’s Infrastructure.

Join our team and help shape one of the largest infrastructure footprints, which powers Meta’s applications used by billions of people across the globe. Our team at Meta AI offers internships of twelve (12) to sixteen (16) weeks, with various start dates throughout the year. To learn more about our research, visit https://ai.facebook.com.

Required Skills:

Research Scientist Intern, Systems ML and HPC - SW/HW Co-Design Responsibilities:

  1. Develop tools and methodologies for large-scale workload analysis and extract representative benchmarks (in C++/Python/Hack) to drive early evaluation of upcoming platforms.

  2. Analyze evolving Meta workload trends and business needs to derive requirements for future offerings. Apply in-depth knowledge of how AI/ML systems interact with the surrounding compute and storage systems.

  3. Utilize extensive understanding of CPUs (x86/ARM), Flash/HDD storage systems, networking, and GPUs to identify bottlenecks and enhance product/service efficiency. Collaborate closely with software developers to re-architect services, improve codebase through algorithm redesign, reduce resource consumption, and identify hardware/software co-design opportunities.

  4. Identify industry trends, analyze emerging technologies and disruptive paradigms. Conduct prototyping exercises to quantify the value proposition for Meta and develop adoption plans. Influence vendor hardware roadmap and broader ecosystem to align with Meta's roadmap requirements.

  5. Work with Software Services, Product Engineering, and Infrastructure Engineering teams to find the optimal way to deliver the hardware roadmap into production and drive adoption.

Minimum Qualifications:

  1. Currently in the process of obtaining a PhD degree in Computer Science or a related STEM field.

  2. Experience with hardware architecture, compute technologies, and/or storage systems.

  3. Must obtain work authorization in the country of employment at the time of hire, and maintain ongoing work authorization during employment.

  4. Intent to return to degree program after the completion of the internship/co-op.

Preferred Qualifications:

  1. Track record of achieving results as demonstrated by grants, fellowships, patents, as well as first-authored publications at leading workshops or conferences such as MICRO, ISCA, HPCA, ASPLOS, ATC, SOSP, OSDI, MLSys or similar.

  2. Architectural understanding of CPUs, GPUs, accelerators, networking, and Flash/HDD storage systems.

  3. Experience with distributed AI training and inference with a focus on performance, programmability, and efficiency.

  4. Some experience with large-scale infrastructure, distributed systems, and full-stack analysis of server applications.

  5. Experience or knowledge in developing and debugging in C/C++, Python, and/or PyTorch.

  6. Experience driving original scholarship in collaboration with a team.

  7. Interpersonal experience: cross-group and cross-cultural collaboration.

  8. Experience in theoretical and empirical research and in answering questions with research.

  9. Experience communicating research to public audiences of peers.

Public Compensation:

$7,800/month to $11,293/month + benefits

Industry: Internet

Equal Opportunity:

Meta is proud to be an Equal Employment Opportunity and Affirmative Action employer. We do not discriminate based upon race, religion, color, national origin, sex (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender, gender identity, gender expression, transgender status, sexual stereotypes, age, status as a protected veteran, status as an individual with a disability, or other applicable legally protected characteristics. We also consider qualified applicants with criminal histories, consistent with applicable federal, state and local law. Meta participates in the E-Verify program in certain locations, as required by law. Please note that Meta may leverage artificial intelligence and machine learning technologies in connection with applications for employment.

Meta is committed to providing reasonable accommodations for candidates with disabilities in our recruiting process. If you need any assistance or accommodations due to a disability, please let us know at accommodations-ext@fb.com.
