VP IT Management - IM04AE
We’re determined to make a difference and are proud to be an insurance company that goes well beyond coverages and policies. Working here means having every opportunity to achieve your goals – and to help others accomplish theirs, too. Join our team as we help shape the future.
At The Hartford, we’re building the next generation of AI capabilities that power real business decisions—from predictive models that shape risk and pricing to AI agents that help people work smarter. To lead this effort, we are seeking a Vice President of AI Platform who brings together strong engineering discipline, practical innovation, and sound governance.
This role sits at the center of the company’s AI strategy and is responsible for shaping and operating an enterprise AI platform that supports predictive modeling, generative AI, and agent‑based systems and runs reliably across AWS and Google Cloud. Equally important, the role ensures these capabilities are safe, well governed, and trusted by the business.
The Vice President of AI Platform leads a senior organization of platform engineers, MLOps and reliability specialists, and enablement leaders, partnering closely with Data & Analytics, Security, Risk, Legal, and business leaders to enable rapid delivery aligned with enterprise standards.
This role can have a Hybrid or Remote work arrangement. Candidates who live near one of our office locations (Hartford, CT; Charlotte, NC; Chicago, IL; Columbus, OH) will be expected to work in an office three days per week (Tuesday through Thursday). Candidates who do not live near an office will have a remote work arrangement, with the expectation of coming into an office as business needs arise. Candidates must be authorized to work in the U.S. without company sponsorship.
Key Responsibilities:
- Building and Evolving the AI Platform - Setting the direction for a multi‑cloud AI platform that supports a wide range of workloads—from classical predictive models to modern GenAI and multi‑agent systems. This includes establishing clear architectural standards for security, data access, identity, and deployment, while still giving teams the flexibility they need to deliver.
- Predictive Model Enablement - A core part of the platform is making predictive models easier to build, deploy, and operate at scale. This role will oversee standardized pipelines for features, training, validation, deployment, and monitoring, ensuring models meet expectations around performance, explainability, fairness, and auditability – critical requirements in a regulated environment.
- AI Agents and Multi‑agent Systems - Leading the enablement of AI agents, including more advanced multi‑agent patterns where agents collaborate, review each other’s work, or operate with human oversight. Responsibilities include providing reference architectures, shared services, and guardrails so teams can build agent‑based solutions that are effective, observable, and safe.
- Agentic Analytics and Conversational BI - The platform will support conversational analytics and agent‑driven insights grounded in trusted data. This role will help establish and scale a strong semantic layer using Looker and/or Snowflake so metrics, dimensions, and predictions remain consistent, whether they’re surfaced in dashboards or through natural‑language interactions.
- Developer Experience and Enablement - This role invests heavily in developer experience, creating clear “paved paths” for teams building models and agents. This includes templates, APIs, and tooling, such as Antigravity and the Gemini CLI, that shorten the path from idea to production and reduce one‑off engineering work.
- MLOps, LLMOps, and Reliability - Running AI in production requires discipline. This leader will ensure strong practices around CI/CD, versioning, evaluation, monitoring, and rollback for both models and agents, and will be accountable for platform reliability, with clear SLOs, capacity planning, incident response, and cost visibility.
- Cloud Platform Operations - Supporting and standardizing SageMaker for training, experimentation, and inference on AWS. On Google Cloud, automating Vertex AI environments, pipelines, and deployments, using infrastructure‑as‑code and self‑service patterns wherever possible.
- Gemini Enterprise Enablement - The role will guide how Gemini Enterprise is adopted across the company: defining secure usage patterns, grounding strategies, access controls, and integrations that align with enterprise risk and compliance expectations.
- Governance, Risk, and Responsible AI - Working closely with Risk, Legal, Compliance, and Security to embed governance directly into the platform. This includes model and agent registries, approval workflows, lineage, audit trails, and policy enforcement, designed to protect the company without slowing innovation.
- Training and Adoption - Overseeing training and enablement for predictive modeling, AI agent development, and safe production practices. Applying a strong customer‑success lens on the platform, using feedback and adoption metrics to guide continuous improvement.
- Leadership and Influence - As a VP-level leader, this role will hire and develop strong leaders, set clear expectations, and create an environment where teams do their best work. It also partners closely with senior executives across technology and the business, communicating clearly and backing decisions with sound judgment and data.
Qualifications:
- 15+ years of experience leading AI, ML, or platform engineering organizations at enterprise scale
- Proven ability to scale teams, platforms, and operating models in complex environments
- Bachelor’s or master’s degree in computer science, engineering, or a related field (preferred)
- Deep understanding of how predictive models, LLMs, and AI agents behave in production
- Experience operating AI systems with strong discipline around reliability, performance, and lifecycle management
- Hands‑on experience designing and operating AI/ML platforms on AWS and Google Cloud
- Strong working knowledge of training, experimentation, inference, and pipeline orchestration in cloud environments
- Expertise across feature pipelines, evaluation metrics, CI/CD, monitoring, and rollback strategies
- Ability to make and explain tradeoffs between reliability, scalability, cost, and speed
- Demonstrated experience partnering with Risk, Legal, Compliance, and Security teams
- Strong understanding of governance controls required in regulated enterprise environments
- Ability to communicate complex technical topics clearly to senior executives and business leaders
- Trusted partner to technology and business stakeholders, with sound judgment backed by data
- Experience leading large, distributed teams through clear expectations and strong talent development
- Proven track record of translating strategy into measurable outcomes, with accountability for execution and results
Compensation
The listed annualized base pay range is primarily based on analysis of similar positions in the external market. Actual base pay could vary and may be above or below the listed range based on factors including but not limited to performance, proficiency and demonstration of competencies required for the role. The base pay is just one component of The Hartford’s total compensation package for employees. Other rewards may include short-term or annual bonuses, long-term incentives, and on-the-spot recognition. The annualized base pay range for this role is:
$222,480 - $333,720
Equal Opportunity Employer/Sex/Race/Color/Veterans/Disability/Sexual Orientation/Gender Identity or Expression/Religion/Age