As an AI Platform Engineer (SDE 2), you will be a hands-on developer responsible for building and maintaining the core software components that power our AI and context infrastructure. You will work on the "Context Layer"—the plumbing that connects enterprise data to LLMs—ensuring that our AI agents have the right information at the right time. This role is ideal for a strong software engineer who wants to specialize in the operational side of AI, focusing on high-quality code, automated delivery, and cloud-native systems.
Key Responsibilities
Feature Development: Implement and maintain core services for the AI Data Lakehouse, focusing on efficient data retrieval and storage optimizations for AI workflows.
Pipeline Automation: Build and support CI/CD pipelines to automate the deployment of AI models, prompt templates, and infrastructure updates.
Agentic Support: Develop and test tool-execution environments and APIs that let AI agents interact safely with internal business systems.
Operational Excellence: Participate in on-call rotations and troubleshoot issues to ensure platform reliability. Write unit tests, integration tests, and documentation for new features.
Context Retrieval: Work on the "Context Fabric" to implement search and retrieval patterns (such as retrieval-augmented generation, or RAG) that help agents access secure enterprise data.
Cloud Management: Assist in managing cloud resources across AWS and Azure, ensuring environments are cost-effective and secure.
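To give candidates a flavor of the Context Retrieval work, here is a minimal sketch of the RAG lookup pattern: embed a query, rank stored chunks by similarity, and return the best matches as context for an LLM. All names, vectors, and data below are illustrative, not our actual systems.

```python
# Toy RAG retrieval: rank document chunks by cosine similarity to a
# query vector and return the top-k as prompt context. In production
# the vectors would come from an embedding model and live in a vector
# database; here they are hand-written stand-ins.
import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Hypothetical enterprise chunks with pre-computed embeddings.
chunks = {
    "Q3 revenue grew 12% year over year.": [0.9, 0.1, 0.2],
    "The VPN requires MFA for all employees.": [0.1, 0.8, 0.3],
    "On-call handoff happens every Monday.": [0.2, 0.3, 0.9],
}

def retrieve(query_vec: list[float], k: int = 2) -> list[str]:
    ranked = sorted(chunks, key=lambda text: cosine(query_vec, chunks[text]),
                    reverse=True)
    return ranked[:k]

# A query vector near the "security" chunk retrieves that chunk first.
print(retrieve([0.15, 0.75, 0.25], k=1))
# ['The VPN requires MFA for all employees.']
```

The real Context Fabric adds access control, chunking, and re-ranking on top of this core loop, but the retrieve-then-prompt shape is the same.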
Software Engineering Foundation
Experience: 3+ years of professional software development experience.
Core Skills: Strong proficiency in Python and either Java or Scala. You write clean, maintainable, and well-documented code.
API Development: Experience building and consuming RESTful APIs or gRPC services.
Database Basics: Understanding of relational databases (Postgres/MySQL) and familiarity with how data is stored in a distributed environment.
Cloud & CI/CD Mastery
Cloud Consoles: Hands-on experience navigating the AWS or Azure management console. You should be comfortable managing foundational services such as IAM, S3/Blob Storage, and compute instances.
Infrastructure-as-Code (IaC): Basic experience with Terraform. You can read, modify, and deploy infrastructure modules.
CI/CD Tools: Familiarity with GitHub Actions, GitLab CI, or Jenkins. You understand how to automate the build-test-deploy lifecycle.
Observability: Basic experience with monitoring tools like Prometheus, Grafana, or cloud-native solutions (CloudWatch/Azure Monitor).
AI & Agentic Interests (Specialized Focus)
LLM Awareness: Familiarity with LLM concepts and frameworks like LangChain or LlamaIndex. You’ve experimented with or built basic RAG-based applications.
Emerging Protocols: A desire to learn and implement new standards like the Model Context Protocol (MCP).
Agentic Workflows: Interest in how autonomous agents function, including tool-use (function calling) and state management.
Data Retrieval: Basic understanding of vector databases (e.g., Pinecone, Milvus) and how search impacts AI performance.
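For candidates curious about the tool-use (function calling) pattern mentioned above, the core mechanic fits in a few lines: the model emits a structured tool call, and a dispatcher executes the matching function. The tool, its arguments, and the sample model output below are entirely hypothetical.

```python
# Minimal sketch of agent tool-use: parse a model's JSON tool call and
# run the named tool from an allow-listed registry. Restricting dispatch
# to the registry is what keeps agent execution safe.
import json

def get_ticket_status(ticket_id: str) -> str:
    # Stand-in for a real internal API call.
    return f"Ticket {ticket_id} is OPEN"

TOOLS = {"get_ticket_status": get_ticket_status}

def dispatch(model_output: str) -> str:
    """Execute the tool named in a model's structured output."""
    call = json.loads(model_output)
    tool = TOOLS.get(call["name"])
    if tool is None:
        raise ValueError(f"Unknown tool: {call['name']}")
    return tool(**call["arguments"])

# An LLM shown the tool schema might emit this:
print(dispatch('{"name": "get_ticket_status", "arguments": {"ticket_id": "T-42"}}'))
# Ticket T-42 is OPEN
```

Standards like the Model Context Protocol formalize this same request/response shape so tools can be shared across agents and hosts.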
General Qualifications
Team Player: Ability to work effectively in an agile environment, participating in sprint planning and daily stand-ups.
Continuous Learner: A strong desire to stay current with the rapidly changing AI and cloud landscape.
Education: Bachelor’s degree in Computer Science, Software Engineering, or a related technical field.