Micron

Systems Performance Engineer

Austin, TX · Full time

Our vision is to transform how the world uses information to enrich life for all.

Micron Technology is a world leader in innovating memory and storage solutions that accelerate the transformation of information into intelligence, inspiring the world to learn, communicate, and advance faster than ever.

The engineer will work with senior engineers and researchers on AI training and inference systems, with a strong focus on LLM execution engines, data and KV‑cache management, and multi‑tier memory hierarchies across modern data‑center platforms. The role centers on end‑to‑end performance characterization and optimization of large‑scale AI workloads, spanning single‑node GPUs to rack‑scale inference deployments.
 
Responsibilities include systems software development, workload engineering, performance analysis, and memory‑centric optimization for LLM training, serving, and agentic AI frameworks. The work emphasizes real customer inference and training workloads, emerging memory technologies (HBM, LP/DRAM, CXL, NVMe, remote memory fabrics), and the economics and token‑level efficiency of large‑scale inference systems.
 
This role combines hands‑on engineering with applied systems research, directly influencing next‑generation AI platforms and memory‑driven system architectures.

Key Responsibilities

  • Build and improve systems software tools for profiling, tracing, and analyzing LLM training and inference workloads
  • Design and evaluate KV‑cache and state‑management strategies for LLM serving, including reuse, eviction, compression, tiering, and lifecycle management
  • Build and extend benchmarking, simulation, and emulation frameworks for AI inference and training across heterogeneous memory tiers
  • Develop and evaluate data placement, migration, and prefetching algorithms across HBM, LP/DRAM, CXL memory pools, NVMe, and remote memory systems
  • Characterize and optimize LLM execution engines (prefill/decode), including attention behavior, batching strategies, and token‑level performance
  • Analyze rack‑scale and cluster‑scale inference deployments, focusing on throughput, latency, utilization, cost, and token economics
  • Develop workloads that reflect real customer AI systems, including LLM serving, agentic pipelines, retrieval‑augmented generation, multimodal inference, and long‑context workloads
  • Instrument and analyze performance across GPUs, CPUs, memory subsystems, interconnects, and storage, identifying end‑to‑end bottlenecks
  • Evaluate system interactions across OS, runtime layers, containerized deployments, and distributed inference stacks
  • Automate performance measurement, experimentation, and analysis workflows to improve repeatability and scale
  • Summarize findings into clear methodologies, internal reports, and technical presentations for engineering and leadership audiences
  • Collaborate across engineering, architecture, and research teams, and with external academic and industry partners
  • Provide actionable feedback to product, architecture, and platform teams to influence future AI systems and memory designs

Required Qualifications

  • Bachelor’s or Master’s degree, or equivalent experience, in Computer Science, Electrical Engineering, or a related field
  • Strong foundation in operating systems, memory systems, parallel computing, or distributed systems
  • Proficiency in systems programming and analysis using C/C++ and Python
  • Experience working in Linux environments, including debugging, profiling, and automation
  • Solid understanding of modern server architectures, including GPUs, CPUs, cache hierarchies, NUMA, and memory subsystems
  • Experience analyzing performance data and reasoning about system‑level behavior
  • Strong written and verbal communication skills
  • Ability to work independently on scoped problems and collaboratively on larger system efforts

Preferred Qualifications

  • Experience with LLM training and inference systems, including execution runtimes and serving frameworks
  • Hands‑on experience with KV cache management, long‑context execution, or stateful inference workloads
  • Familiarity with GPU architectures and AI accelerators, including memory and interconnect behavior
  • Experience with multi‑tier memory systems, including HBM, LP/DRAM, CXL‑attached memory, NVMe, and remote/disaggregated memory
  • Experience profiling and optimizing AI inference pipelines, including batching, scheduling, and latency‑sensitive workloads
  • Familiarity with agentic AI frameworks, multi‑agent systems, or workflow‑based inference pipelines
  • Experience with distributed AI systems, rack‑scale deployments, or cluster‑level performance analysis
  • Exposure to memory or system simulators (e.g., gem5, Ramulator) or analytical performance modeling
  • Familiarity with containers, orchestration, and AI infrastructure stacks
  • Experience applying machine learning techniques to systems optimization or performance analysis

As a world leader in the semiconductor industry, Micron is dedicated to your personal wellbeing and professional growth. Micron benefits are designed to help you stay well, provide peace of mind and help you prepare for the future. We offer a choice of medical, dental and vision plans in all locations enabling team members to select the plans that best meet their family healthcare needs and budget. Micron also provides benefit programs that help protect your income if you are unable to work due to illness or injury, and paid family leave. Additionally, Micron benefits include a robust paid time-off program and paid holidays. For additional information regarding the Benefit programs available, please see the Benefits Guide posted on micron.com/careers/benefits.

Micron is proud to be an equal opportunity workplace and is an affirmative action employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, age, national origin, citizenship status, disability, protected veteran status, gender identity or any other factor protected by applicable federal, state, or local laws.

To learn about your right to work, click here.

To learn more about Micron, please visit micron.com/careers

For US Sites Only: To request assistance with the application process and/or for reasonable accommodations, please contact Micron’s People Organization at hrsupport_na@micron.com or 1-800-336-8918 (select option #3).

Micron prohibits the use of child labor and complies with all applicable laws, rules, regulations, and other international and industry labor standards.

Micron does not charge candidates any recruitment fees or unlawfully collect any other payment from candidates as consideration for their employment with Micron.

AI alert: Candidates are encouraged to use AI tools to enhance their resume and/or application materials. However, all information provided must be accurate and reflect the candidate's true skills and experiences. Misuse of AI to fabricate or misrepresent qualifications will result in immediate disqualification.   

Fraud alert: Micron advises job seekers to be cautious of unsolicited job offers and to verify the authenticity of any communication claiming to be from Micron by checking the official Micron careers website.