As the global leader in high-speed connectivity, Ciena is committed to a people-first approach. Our teams enjoy a culture that prioritizes a flexible work environment and empowers individual growth, well-being, and belonging. We’re a technology company that leads with our humanity—driving our business priorities alongside meaningful social, community, and societal impact.
Blue Planet, a division of Ciena, develops intelligent automation software that enables communications service providers and enterprises to operate more adaptive, autonomous, and resilient networks. This role advances the development of production‑grade, AI‑driven software that underpins scalable, secure, and reliable systems. The position blends deep backend engineering expertise with applied Generative Artificial Intelligence (AI) to transform advanced concepts into hardened microservices that deliver real‑world operational impact.
How you will make an impact:
• Design and build high‑performance backend services using object‑oriented Python, FastAPI, and asynchronous patterns to support large‑scale automation platforms.
• Translate advanced Generative AI concepts, including multi‑agent systems and Retrieval‑Augmented Generation (RAG), into production‑ready microservices.
• Architect, deploy, and operate cloud‑native services using Docker and Kubernetes to ensure high availability, resilience, and scalability.
• Orchestrate complex, stateful AI workflows using LangChain and LangGraph to enable reliable, policy‑driven automation.
• Build and optimize data and context pipelines, including Model Context Protocol (MCP) servers, to securely connect large language models (LLMs) with enterprise data sources.
• Apply prompt and context engineering techniques to improve model accuracy, efficiency, and latency in production environments.
• Contribute to system design decisions across distributed systems, microservices architecture, and AI platform foundations.
The must haves:
• 5+ years of professional software engineering experience with expert‑level proficiency in Python and object‑oriented design.
• Proven experience building and operating production APIs using FastAPI or equivalent frameworks.
• Strong experience with containerization and orchestration using Docker and Kubernetes.
• Solid background in distributed systems, microservices architecture, and database schema design.
• Hands‑on experience with large language models (LLMs), LangChain, LangGraph, and Model Context Protocol (MCP) servers.
• Practical experience designing and deploying Retrieval‑Augmented Generation (RAG) systems and multi‑agent AI workflows.
• Strong understanding of performance, reliability, and cost optimization for AI‑enabled systems.
Nice to haves:
• Experience with AI observability and tooling such as Langfuse, Langflow, or LiteLLM.
• Familiarity with data or model orchestration tools such as Apache Airflow or MLflow.
• Experience testing and evaluating non‑deterministic AI systems, including unit testing and large language model evaluation frameworks.
#LI-FA
At Ciena, we are committed to building and fostering an environment in which our employees feel respected, valued, and heard. Ciena values the diversity of its workforce and respects its employees as individuals. We do not tolerate any form of discrimination.
Ciena is an Equal Opportunity Employer, including disability and protected veteran status.
If contacted in relation to a job opportunity, please advise Ciena of any accommodation measures you may require.