Work Schedule: Standard (Mon-Fri)
Environmental Conditions: Office
Job Description
Company Information:
Thermo Fisher Scientific Inc. (NYSE: TMO) is the world leader in serving science, with annual revenue of approximately $40 billion. Our Mission is to enable our customers to make the world healthier, cleaner and safer. Whether our customers are accelerating life sciences research, solving complex analytical challenges, increasing efficiency in laboratories, improving patient health through diagnostics, or developing life-changing therapies, we are here to support them.
Our global team of more than 100,000 colleagues delivers innovative technologies, purchasing convenience, and pharmaceutical services through industry-leading brands including Thermo Scientific, Applied Biosystems, Invitrogen, Fisher Scientific, Unity Lab Services, Patheon, and PPD.
We are committed to Integrity, Intensity, Innovation, and Involvement. We value diverse perspectives and foster an inclusive environment where colleagues can grow and contribute meaningfully.
If you are interested in meaningful work that supports scientific advancement, we encourage you to explore opportunities with us at http://jobs.thermofisher.com.
Position Summary:
This role is part of the Digital Platform Engineering (DPE) group supporting Fisher Scientific and Thermo Fisher Scientific eCommerce platforms. The Engineer III will design and implement scalable Generative AI solutions that enhance customer-facing digital experiences.
The role focuses on building reliable Python-based services that integrate Large Language Models (LLMs), Retrieval-Augmented Generation (RAG), and AI-driven workflows into enterprise commerce systems. You will work closely with product, engineering, and platform teams to translate business requirements into secure, high-performing technical solutions.
This opportunity offers exposure to enterprise systems including Content Management, Product Information Management, Middleware, ERP platforms, and digital commerce services.
Key Responsibilities:
Design, develop, and maintain scalable Python services using FastAPI, Flask, or similar frameworks.
Implement and support LLM-powered capabilities, including prompt orchestration, structured outputs, and tool integration.
Develop Retrieval-Augmented Generation (RAG) pipelines, including embeddings, retrieval strategies, and response generation.
Integrate vector databases to enable semantic search and AI-enhanced discovery experiences.
Design and support agent-based workflows using established orchestration patterns and standardized tool interfaces (e.g., Model Context Protocol or similar frameworks).
Collaborate with cross-functional teams to define requirements and deliver customer-focused AI features.
Ensure solutions meet standards for performance, reliability, scalability, and security.
Contribute to testing, CI/CD processes, documentation, and production monitoring.
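To illustrate the kind of RAG work described above, here is a minimal, hedged sketch of the retrieval step: rank documents by cosine similarity against a query embedding, then assemble an augmented prompt. The toy vectors, product names, and helper functions (`retrieve`, `build_prompt`) are hypothetical; a production pipeline would use a real embedding model and a vector database rather than an in-memory list.

```python
# Minimal sketch of a RAG retrieval step. The tiny in-memory "index" and
# hand-written embeddings are stand-ins for an embedding model plus a
# vector database; only the overall shape of the pipeline is the point.
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

# Toy index: document text paired with a hypothetical embedding.
INDEX = [
    ("Pipette tips, 200 uL, sterile",      [0.9, 0.1, 0.0]),
    ("PCR thermal cycler, 96-well",        [0.1, 0.9, 0.2]),
    ("Nitrile gloves, powder-free, large", [0.8, 0.2, 0.1]),
]

def retrieve(query_embedding, k=2):
    """Return the top-k documents ranked by cosine similarity."""
    ranked = sorted(INDEX, key=lambda d: cosine(query_embedding, d[1]),
                    reverse=True)
    return [text for text, _ in ranked[:k]]

def build_prompt(question, query_embedding):
    """Assemble an augmented prompt: retrieved context plus the question."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query_embedding))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

prompt = build_prompt("Which consumables fit liquid handling?",
                      [0.85, 0.15, 0.05])
print(prompt)
```

In the real system, `build_prompt`'s output would be sent to an LLM for response generation, completing the retrieve-then-generate loop the responsibilities above describe.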
Requirements:
6+ years of relevant professional experience in Python backend development.
Strong proficiency in Python and modern web frameworks such as FastAPI or Flask.
Experience working with Large Language Models (LLMs) and NLP-based applications.
Hands-on experience with Retrieval-Augmented Generation (RAG) and vector databases.
Understanding of AI workflow orchestration frameworks (e.g., LangChain, LangGraph, LlamaIndex, or similar).
Experience building and integrating services within enterprise or microservices architectures.
Familiarity with cloud platforms such as AWS and modern DevOps practices.
Strong collaboration and communication skills.
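As a rough illustration of the agent-based workflows and standardized tool interfaces mentioned above, the sketch below dispatches model-requested tool calls against a registry. The tool names (`lookup_price`, `check_stock`), the SKU, and the call format are all invented for illustration; a real system would have an LLM emit the calls over a protocol such as the Model Context Protocol rather than receiving them as hard-coded dicts.

```python
# Hedged sketch of a tool-dispatch loop in the spirit of agent
# orchestration frameworks. The "model" is absent; tool calls arrive as
# plain dicts, and each registered tool is a hypothetical stub.
TOOLS = {
    "lookup_price": lambda sku: {"sku": sku, "price_usd": 129.0},  # stub tool
    "check_stock":  lambda sku: {"sku": sku, "in_stock": True},    # stub tool
}

def run_agent(tool_calls):
    """Execute each requested tool call and collect results for the model."""
    results = []
    for call in tool_calls:
        fn = TOOLS.get(call["name"])
        if fn is None:
            # Unknown tools are reported back rather than raising, so the
            # model can recover in its next turn.
            results.append({"error": f"unknown tool {call['name']}"})
        else:
            results.append(fn(**call["args"]))
    return results

out = run_agent([
    {"name": "lookup_price", "args": {"sku": "FS-1001"}},
    {"name": "check_stock",  "args": {"sku": "FS-1001"}},
])
```

The design choice of returning errors as data instead of raising mirrors how tool-use loops typically feed failures back to the model as observations.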