Micron

PRINCIPAL ENGINEER, GPU PERFORMANCE, SMAI

Taichung - AATT, Taiwan (Full time)

Our vision is to transform how the world uses information to enrich life for all.

Micron Technology is a world leader in innovating memory and storage solutions that accelerate the transformation of information into intelligence, inspiring the world to learn, communicate and advance faster than ever.

The Smart Manufacturing and AI team at Micron Technology is looking for a GPU Performance Engineer. Our mission is to deliver industry-winning machine learning, custom GenAI, and Agentic AI solutions to power Micron's dominance in the highly competitive memory solutions market. Qualified applicants will have experience with a variety of data and cloud technologies and extensive experience modeling data, querying it, and deploying scalable data pipelines that execute machine learning models and AI agents. You will collaborate with Data Scientists, Data Engineers, and expert users to build and deploy scalable AI/ML solutions that drive value and insight from Micron's manufacturing processes and systems.

Responsibilities include, but are not limited to:

  • Architect and execute large-scale custom model training and fine-tuning jobs (SFT, RLHF) on multi-node, multi-GPU clusters.
  • Optimize training throughput and memory efficiency using distributed training strategies (FSDP, DeepSpeed, Megatron-LM) and mixed-precision techniques (FP16/BF16).
  • Design and develop autonomous AI Agents capable of multi-step reasoning, planning, and tool execution to automate complex manufacturing workflows.
  • Analyze and profile complex workloads (e.g., LLM training, Rendering pipelines) to identify bottlenecks in compute, memory bandwidth, and latency.
  • Write and optimize high-performance kernels using CUDA, HIP, or custom assembly (PTX/SASS) to unlock hardware capabilities.
  • Collaborate with Hardware Architects to define features for next-generation GPUs based on workload characterization.
  • Design and implement performance regression testing suites to catch degradations in drivers or compilers.
  • Mentor junior engineers on parallel programming paradigms and optimization techniques.
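One motivation behind the mixed-precision work above can be sketched in a few lines: FP16 has a narrow exponent range, so small gradients underflow to zero unless the loss is scaled up before the backward pass (BF16 avoids this because it keeps FP32's exponent range). This is an illustrative NumPy sketch of the loss-scaling idea, not Micron code; the gradient value and scale factor are made up for demonstration.

```python
import numpy as np

# Gradients in large-model training are often tiny. FP16's smallest
# subnormal is about 6e-8, so anything smaller underflows to zero.
grad_fp32 = np.float32(3e-9)
grad_fp16 = np.float16(grad_fp32)
assert grad_fp16 == 0.0  # underflowed: the gradient signal is lost

# Loss scaling: multiply the loss (and hence all gradients) by a large
# constant before the backward pass, then divide it back out in FP32
# before the optimizer step. The scaled gradient stays representable.
scale = np.float32(2**14)
scaled_fp16 = np.float16(grad_fp32 * scale)  # ~4.9e-5, representable in FP16
assert scaled_fp16 != 0.0

recovered = np.float32(scaled_fp16) / scale
print(recovered)  # close to the original 3e-9
```

Frameworks such as PyTorch automate this with `torch.amp.autocast` and `GradScaler`; the sketch only shows why the scaling step exists.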

Education Qualifications:

  • A technical degree is required; a Ph.D. in Computer Science or a background in Statistics is highly desired.

Minimum Qualifications:

  • Deep understanding of GPU architecture (memory hierarchy, tensor cores, interconnects like NVLink) and experience managing GPU resources in both cloud environments and on-prem.
  • Hands-on experience with Distributed Data Parallel (DDP), Fully Sharded Data Parallel (FSDP), and model parallelism techniques.
  • Proficiency in fine-tuning Large Language Models using PEFT techniques (LoRA, QLoRA) and optimizing inference engines (vLLM, TensorRT-LLM).
  • Experience developing GenAI applications and AI Agents using frameworks like LangChain, LangGraph, LlamaIndex, or AutoGen.
  • Proficiency with Large Language Models (LLMs), including prompt engineering, function calling/tool use, and Chain-of-Thought (CoT) reasoning.
  • Experience building and running end-to-end ML systems that automate the training, testing, and deployment of machine learning models.
  • Familiarity with machine learning frameworks (PyTorch is required; TensorFlow, scikit-learn, etc. are a plus).
  • Software development skills and the desire to work on cutting edge development in a Cloud environment.
  • Strong scripting and programming skills in Python or Java (Python preferred).
  • Experience with continuous integration/continuous delivery (CI/CD) tools (Jenkins, Git, Docker, Kubernetes).
  • 5+ years of experience in performance optimization, parallel computing, or low-level systems programming.
  • Deep expertise in C++ and at least one GPGPU framework (CUDA is preferred, but HIP/OpenCL/Metal are acceptable).
  • Outstanding analytical thinking, interpersonal, oral and written communication skills.
  • Ability to prioritize and meet critical project timelines in a fast-paced environment.
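The PEFT requirement above centers on one idea: LoRA freezes the pretrained weight matrix and trains only a low-rank update, W_eff = W + (alpha/r) * B @ A, cutting trainable parameters by orders of magnitude. This is a minimal NumPy sketch of that math under assumed, illustrative dimensions; real fine-tuning would use a library such as Hugging Face PEFT on PyTorch modules.

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 1024, 1024, 8, 16   # illustrative sizes

W = rng.standard_normal((d_out, d_in))       # frozen pretrained weight

# LoRA trains only two small matrices. B starts at zero, so the adapter
# is a no-op at initialization and fine-tuning starts from the base model.
A = rng.standard_normal((r, d_in)) * 0.01
B = np.zeros((d_out, r))

def lora_forward(x):
    # y = W x + (alpha / r) * B (A x)  -- W is never updated
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
assert np.allclose(lora_forward(x), W @ x)   # identical to base model at init

full, adapter = W.size, A.size + B.size
print(f"trainable params: {adapter} vs full fine-tune {full} "
      f"({100 * adapter / full:.2f}%)")     # 16384 vs 1048576 (1.56%)
```

QLoRA applies the same low-rank update on top of a quantized (e.g. 4-bit) frozen base, shrinking memory further.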

Preferred:

  • Experience with HPC job schedulers (e.g., Slurm) or orchestrating GPU workloads on Kubernetes (Ray, KubeFlow).
  • Knowledge of lower-level optimization (CUDA programming, Triton kernels, or custom C++ extensions for PyTorch).
  • Experience with Multi-Agent Systems and orchestrating collaboration between specialized agents.
  • Deep knowledge of math, probability, statistics and algorithms.
  • Demonstrated ability to study and transform data science prototypes into production solutions.
  • Knowledge of computer vision and/or signal processing including techniques for classification and feature extraction.
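The multi-agent and tool-use items above all build on one control structure: a reason/act loop in which a model either requests a tool call or returns a final answer. Below is a framework-free sketch of that loop; the "model" is a hard-coded stub standing in for an LLM, and the tool, lot ID, and yield figure are invented for illustration (they are not Micron data or APIs).

```python
def lookup_yield(lot_id: str) -> float:
    """Hypothetical tool: return a fabricated yield figure for a lot."""
    return {"LOT-42": 0.973}.get(lot_id, 0.0)

TOOLS = {"lookup_yield": lookup_yield}

def stub_model(question: str, observation=None):
    """Stands in for an LLM: first plan a tool call, then answer.
    A real agent would get this decision from the model's function-calling output."""
    if observation is None:
        return {"action": "call_tool", "tool": "lookup_yield",
                "args": {"lot_id": "LOT-42"}}
    return {"action": "final", "answer": f"Yield for LOT-42 is {observation:.1%}"}

def run_agent(question: str) -> str:
    observation = None
    for _ in range(5):                        # cap the reason/act loop
        step = stub_model(question, observation)
        if step["action"] == "final":
            return step["answer"]
        # Execute the requested tool and feed the result back as an observation.
        observation = TOOLS[step["tool"]](**step["args"])
    raise RuntimeError("agent did not terminate")

print(run_agent("What is the yield for lot LOT-42?"))
# → Yield for LOT-42 is 97.3%
```

Frameworks like LangGraph or AutoGen add state graphs, memory, and inter-agent messaging on top of this same loop.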

About Micron Technology, Inc.

We are an industry leader in innovative memory and storage solutions transforming how the world uses information to enrich life for all. With a relentless focus on our customers, technology leadership, and manufacturing and operational excellence, Micron delivers a rich portfolio of high-performance DRAM, NAND, and NOR memory and storage products through our Micron® and Crucial® brands. Every day, the innovations that our people create fuel the data economy, enabling advances in artificial intelligence and 5G applications that unleash opportunities — from the data center to the intelligent edge and across the client and mobile user experience.

To learn more, please visit micron.com/careers

All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran or disability status.

To request assistance with the application process and/or reasonable accommodations, please contact hrsupport_taiwan@micron.com.

Micron prohibits the use of child labor and complies with all applicable laws, rules, regulations, and other international and industry labor standards.

Micron does not charge candidates any recruitment fees or unlawfully collect any other payment from candidates as consideration for their employment with Micron.

AI alert: Candidates are encouraged to use AI tools to enhance their resume and/or application materials. However, all information provided must be accurate and reflect the candidate's true skills and experiences. Misuse of AI to fabricate or misrepresent qualifications will result in immediate disqualification.   

Fraud alert: Micron advises job seekers to be cautious of unsolicited job offers and to verify the authenticity of any communication claiming to be from Micron by checking the official Micron careers website.