Ionos

DevOps Engineer for Customer Care AI Platform Team (f/m/d)

Hinterm Hauptbahnhof 3-5, 76137 Karlsruhe · Full Time

At IONOS, the leading European provider of cloud infrastructure, cloud services, and hosting, you will work closely with a wide range of teams. We offer you a future in one of the most resilient industries. We are characterized by open working structures, a first-name culture, and flat hierarchies with an unparalleled team spirit. We firmly believe that work and fun are compatible, and we provide the environment to match. As we continue to grow, we are always looking for new colleagues. Become part of IONOS and let's grow together.

About the team:

Our mission is to build a modern ecosystem that covers all IONOS customer support needs. The tools we develop are used in over 20 locations by more than 2,000 users, supporting 8 million customer contracts across 10 markets.

The development team owns the full development lifecycle: we plan, develop, test, and deploy our software without any internal or external dependencies.

Our portfolio revolves around an internally built CRM, which is now being enhanced with AI capabilities.

About the product you will be building:

We are building a next-generation AI platform designed to redefine how our company interacts with customers. This isn't just a chatbot; it's a high-performance, multimodal AI ecosystem powered by state-of-the-art Speech-to-Speech (S2S) models, advanced Large Language Models (LLMs), and intelligent orchestration frameworks. Our platform will understand, reason, and respond across text and voice — while seamlessly executing real-time actions to resolve customer needs.

We are aiming for a hybrid architecture of Open Source LLMs, industry-leading proprietary models, and Model Context Protocol (MCP) to enable contextual reasoning, tool invocation, and seamless orchestration across systems. The goal is not just to talk to the customer, but to act on their needs.

What makes this project unique:

The Voice Frontier: We are building low-latency, emotive speech-to-speech pipelines for a truly natural voice channel experience.

Deep System Integration: Our platform connects directly to the company's core systems via MCPs, allowing the AI to access real-time customer context and execute complex workflows.

Self-Evolving Logic: We are developing an automated QA and evaluation module that continuously analyzes interactions across channels. By programmatically measuring quality, accuracy, latency, and resolution outcomes, we close the feedback loop and adapt system behavior in hours, not weeks.

Hybrid Innovation: You’ll work at the intersection of "build vs. buy," integrating the best of the open-source community with custom-built internal infrastructure.

What's in it for you:

You won't just be shipping code; you’ll be part of making this concept evolve and shift.
You’ll join a friendly, experienced team where your voice matters and your contribution shapes real-world outcomes. You’ll work in a modern environment with technologies and practices that help us ship reliable software efficiently.

Role description:

As a DevOps Engineer in this team, you will build the foundation of our internal AI Customer Care platform. You will be responsible for the "heavy lifting": designing the distributed systems that power real-time speech-to-speech pipelines, orchestrating agentic workflows via MCP, and ensuring our AI scales without breaking a sweat.

Main responsibilities:
  • Design, build, and maintain CI/CD pipelines in collaboration with development teams
  • Improve and gradually redesign our infrastructure toward container orchestration
  • Maintain and optimize Debian-based Linux systems
  • Ensure high availability and monitoring across multiple data centers
  • Contribute to observability, monitoring, logging, and incident response practices
  • Automate infrastructure provisioning and configuration
  • Maintain ISO security standards throughout the infrastructure 
  • Handle vulnerabilities and ensure dependency tracking
  • Work closely with developers to optimize deployment workflows and runtime environments
  • Use AI tooling effectively (Claude, ChatGPT, internal MCP tools) to improve productivity and automation
  • Architect Low-Latency Pipelines: Build and optimize the streaming infrastructure for Speech-to-Speech (S2S), ensuring sub-500ms round-trip latency for natural voice interactions. This includes scaling and deploying across zones and minimizing hops between services; experience with the WSS and SRTP protocols would be a plus.
  • Host Specialized Tooling: Host and maintain the specialized applications needed in our AI pipelines (e.g. MCP servers, vector store databases, caching apps). Monitor and respond to unhealthy patterns (high memory, high CPU, low disk space, high latency).
  • Data & Evaluation Plumbing: Host and maintain our automated QA module. Schedule jobs and design alerts that require a rapid response (e.g. high hallucination rates or low response quality in the latest nightly run).
We are looking for some of the following:
  • Strong Linux administration experience, preferably Debian-based systems
  • Hands-on experience with Kubernetes in production environments
  • Experience with cloud native architectures (design, build, operations)
  • Solid understanding of networking fundamentals:
    • Subnetting
    • Routing
    • BGP concepts and high-availability design
  • Experience with CI/CD systems and infrastructure automation tools
  • Good scripting skills (Bash, Python or similar)
  • Ability to troubleshoot distributed systems
  • Systems expertise: Docker or Docker Swarm, Kubernetes, ArgoCD, JFrog Artifactory, Infrastructure-as-Code, CI/CD, Helm, Prometheus, Terraform, GitLab/GitHub CI, Grafana, Jenkins
  • Familiarity with a monitoring stack: Prometheus/Grafana (metrics), ELK (logs), and Jaeger (traces)
  • Best practices for securing systems and pipelines: OpenID Connect, OAuth 2, HashiCorp Vault, Keycloak, KeePass, Ansible Vault
Would be a plus:
  • Experience with telephony gateways (Twilio, Amazon Connect, or other telephony platforms) and SIP/RTP protocols
  • Experience migrating from VM-based infrastructure to container orchestration
  • Exposure to AI-driven development workflows
What we offer:
  • Access to local and international training, development, and growth opportunities, including e-learning platforms covering both technical and soft skills;
  • Modern technologies, product responsibility;
  • Flexible work schedule;
  • Hybrid work option;
  • Medical services package from one of two private providers;
  • 25 vacation days per year;
  • Substitute days off for public holidays that occur on the weekend;
  • Meal tickets;
  • Internal referral program;
  • Team events, networking events organized to promote a passionate, creative and diverse culture;
  • Summerfest and Winterfest parties;
  • Of course, coffee, soft drinks and fresh fruits are on us in the office.
About IONOS

IONOS is the leading European digitalization partner for small and medium-sized businesses (SMBs). IONOS has more than six million customers and operates with a globally available platform in 18 markets across Europe and North America. With its Web Presence & Productivity offerings, the company acts as a "one-stop shop" for all digitalization needs - from domains and web hosting to classic website builders and do-it-yourself solutions, from e-commerce to online marketing tools. In addition, IONOS offers cloud solutions for companies that want to move to the cloud as their business evolves.

We value diversity and welcome all applications - regardless of, for example, gender, nationality, ethnic and social origin, religion, disability, age, sexual orientation and identity, physical characteristics, marital status, or any other irrelevant criterion under applicable law.