The Opportunity
We’re hiring a Sr. AI Systems Engineer to support our emerging product, Night Shift, an AI research assistant that amplifies the impact of investigators by automating the tedious, repetitive steps involved in working a case. This role sits within the Machine Learning team and will work closely with partners in Engineering (Backend, Frontend, and Design) in a fast-paced environment. You will be one of the earliest technical contributors to our system architecture for agentic AI, and will own our AI evaluation framework. The outcome we’re after is clear and ambitious: measurably faster, more accurate leads for every officer and every shift.
The Skillset
Familiarity with Agentic Systems: Hands-on experience with LLM agents, including:
- LLM API use (e.g. LangChain/LangGraph, vLLM, OpenAI/Gemini/Anthropic APIs)
- Agent Design: tool use (e.g. via MCP), retrieval, memory, grounding/attribution for claims, and guardrails
- Architectural patterns: planning and hand-off for multi-agent systems, and context management
- RAG: vector/hybrid search (e.g. pgvector, turbopuffer, rerankers)
ML Platform expertise: 5+ years building and shipping ML systems to production; experience in the following areas:
- Backend Python and JavaScript familiarity required; TypeScript/Golang familiarity welcome
- Web services (e.g. Express/FastAPI, REST, SSE, JWTs)
- Cloud Infrastructure (e.g. AWS, Terraform, VPC, Networking)
- Backend databases/stores (e.g. Postgres, Redis)
- Observability (e.g. Prometheus, Grafana, OpenTelemetry, LangSmith/Langfuse)
- [Preferred] Durable execution (e.g. Temporal, Hatchet)
- [Preferred] OLAP (e.g. ClickHouse, BigQuery)
- [Preferred] ML Inference (e.g. PyTorch, TensorRT, NVIDIA Triton), ideally in multimodal domains (text/image/video)
- [Preferred] Compute orchestration (e.g. Kubernetes, Prefect, Ray)
Experience with LLM evaluations at scale: You’ve built offline/online eval harnesses and are familiar with the methodologies and metrics to measure:
- Search, retrieval, and recommendation performance
- Safety & robustness (security, compliance, red-teaming, regression testing)
- Cost, performance, and latency trade-offs
- [Preferred] Agentic task success, trajectory quality, and preference learning (e.g. SFT, DPO, RLHF, LLM-as-judge)
Feeling uneasy that you haven’t ticked every box? That’s okay; we’ve felt that way too. Studies have shown that women and minorities are less likely to apply unless they meet all qualifications. We encourage you to break the status quo and apply to roles that would make you excited to come to work every day.
90 Days at Flock
We have a results-oriented culture and believe job descriptions are a thing of the past. We prescribe 90-day plans and believe that good days lead to good weeks, which lead to good months. This is a preview of the 90-day plan you’ll receive if you’re hired for this role at Flock Safety.
The First 30 Days
- Immerse yourself in the current system design and agent/tooling landscape. Understand the core customer use cases and data flows.
- Support the team by shipping a few quick wins (e.g., refining tool APIs, prompt engineering, fixing bugs)
- Stand up the foundational eval and observability scaffolding (datasets, metrics, KPIs, reporting)
- Propose a technical architecture and implementation plan for an agent evaluation framework.
The First 60 Days
- Deliver the MVP evaluation harness to produce initial metrics, enable debugging, and perform regression testing.
- Take on a system feature that demonstrates measurable improvement against your MVP evaluation suite
90 Days & Beyond
- Productionize the evaluation and observability platform and make it the source of truth for quality and safety (e.g. online/offline tracing, alerting, dashboards, evaluations, and a PR-gated regression suite)
- Own the roadmap for evolving the agent evaluation platform
- Lead deeper R&D threads (e.g., lightweight fine-tuned projection layers, specialized embeddings, multimodal understanding) that can improve system performance on core metrics.
If you’re excited to build AI that tangibly amplifies real-world public safety outcomes—and you love making complex systems measurable, dependable, and fast—we’d love to talk.
Salary & Equity
In this role, you’ll receive a starting salary between $200,000 and $225,000 as well as Flock Safety Stock Options. Base salary is determined by job-related experience, education/training, and market indicators. Your recruiter will discuss this in depth with you during our first chat.
Location
We’re building the impossible, together. To drive innovation through in-person collaboration, we’re prioritizing candidates in our key hubs: Atlanta, Boston, Chicago, Denver, Los Angeles, New York City, San Francisco, and Austin. While we value the energy of our hub communities, we embrace remote work and welcome applications from exceptional talent across the United States.
🚀 Y Combinator Company Info
Y Combinator Batch: S17
Team Size: 1000 employees
Industry: B2B Software and Services -> Engineering, Product and Design
Company Description: The first public safety operating system that eliminates crime.
💰 Compensation
Salary Range: $200,000 - $240,000
📋 Job Details
Job Type: Full-time
Experience Level: 6+ years
Engineering Type: Machine learning