AmeriLife

VP of Enterprise Data Platform

Clearwater, FL | Full Time

Our Company

Explore how you can contribute at AmeriLife.

For over 50 years, AmeriLife has been a leader in the development, marketing and distribution of annuity, life and health insurance solutions for those planning for and living in retirement.

Associates take satisfaction in knowing they provide agents, marketers, and carrier partners with the support needed to succeed in a rapidly evolving industry.

Job Summary

AmeriLife is seeking a strategic and technically adept Vice President of Enterprise Data Platform to lead the design, delivery, and operation of its enterprise data ecosystem. This role owns the platform architecture, data modeling standards, data quality and observability frameworks, and delivery practices that support SOX-grade controls and M&A scalability. The VP will lead a high-performing team of data engineers and architects, fostering a culture of innovation, accountability, and continuous improvement.

Job Description

Key Responsibilities

  • Architect the Enterprise Data Platform: Define and maintain a reference architecture spanning ingestion, storage, compute, modeling, quality, observability, orchestration, and serving layers.
  • Build Scalable Pipelines: Design and govern resilient pipelines from business applications into the enterprise data platform and downstream analytics services, ensuring schema drift tolerance and backward compatibility. Leverage Spark and PySpark for distributed processing, ETL optimization, and scalable ML workflows.
  • Establish Enterprise Data Standards: Publish and maintain a governed enterprise data model and glossary, including SCD2 dimensions, point-in-time facts, conformed dimensions, lineage, SLAs, and usage policies.
  • Implement SOX-Grade Controls: Deliver immutable logging, segregation of duties, maker-checker workflows, and reconciliation processes to ensure compliance and audit readiness. Expand compliance to include discovery and classification of PII and other sensitive data, encryption/masking, access controls, third-party risk, and audit-ready logging.
  • Create a Third-Party Data Hub: Standardize intake patterns (SFTP, APIs, managed portal extracts) and enforce versioned data contracts per source for consistent third-party data onboarding.
  • Partner Across Integration & Analytics: Collaborate with Application and Data Integration teams on API scalability, idempotent event processing, and batch patterns for large carrier files.
  • Enable Secure Access & Hierarchies: Deliver a Hierarchy Service and enforce role-based and attribute-based access across systems and data domains.
  • Power Advanced Analytics & AI: Operationalize workflows and model-serving capabilities to enable anomaly detection, enrichment, and mapping to accelerate AI adoption. Partner directly with Applied AI Engineering to design and operationalize the enterprise feature store for ML feature reuse and governance.
  • Partner on Data Governance: Work closely with the Head of Data Governance to implement data quality frameworks and ensure metadata completeness across domains.
  • Mentoring and Upskilling: Build a learning culture by coaching engineers on Spark and PySpark, cloud-native data engineering, observability, security, and cost-aware design. Provide technical reviews, pairing, and certification pathways to elevate team capabilities.
  • Migrate from On-prem: Execute a phased migration from on-prem ETL to cloud-native pipelines, retiring technical debt while maintaining business continuity and SLAs. Sequence workloads by criticality, implement dual-run cutovers, and decommission legacy systems with clean lineage and documentation.
  • Cost Optimization and Performance Management: Implement FinOps practices for cost baselining, right-sizing, autoscaling, and job-level cost allocation. Govern workloads with cluster policies, quotas, and prioritization. Optimize Spark and PySpark jobs for performance and cost efficiency.

Required Qualifications

  • 10+ years leading data engineering and architecture for complex, multi-system enterprises.
  • Hands-on expertise with Spark and PySpark for distributed compute, ETL optimization, and scalable ML data pipelines.
  • Experience with modern data platforms such as Databricks or Microsoft Fabric for efficient pipelines and analytics enablement.
  • Proven success delivering governed data platforms and semantic layers at scale.
  • Deep expertise in dimensional modeling (SCD2, point-in-time facts, conformed dimensions).
  • Experience with data quality frameworks, observability tooling, schema registry, and data contracts.
  • Strong background implementing SOX-grade controls and sensitive-data protection standards (PII discovery, classification, encryption/masking, access controls, audit logging).
  • Demonstrated leadership managing multi-disciplinary engineering teams and vendor partners.

Preferred Qualifications

  • Experience in insurance distribution or financial services, including producer hierarchies, commissions, and carrier data integration.
  • Familiarity with API integration platforms such as MuleSoft.
  • Exposure to AI/ML enablement within enterprise data platforms, including feature store design and operationalization.
  • Experience with FinOps practices and workload governance at scale.


Equal Employment Opportunity Statement

We are an Equal Opportunity Employer and value diversity at all levels of the organization. All employment decisions are made without regard to race, color, religion, creed, sex (including pregnancy, childbirth, breastfeeding, or related medical conditions), sexual orientation, gender identity or expression, age, national origin, ancestry, disability, genetic information, marital status, veteran or military status, or any other protected characteristic under applicable federal, state, or local law. We are committed to providing an inclusive, equitable, and respectful workplace where all employees can thrive.


Americans with Disabilities Act (ADA) Statement

We are committed to full compliance with the Americans with Disabilities Act (ADA) and all applicable state and local disability laws. Reasonable accommodations are available to qualified applicants and employees with disabilities throughout the application and employment process. Requests for accommodation will be handled confidentially. If you require assistance or accommodation during the application process, please contact us at HR@AmeriLife.com.


Pay Transparency Statement

We are committed to pay transparency and equity, in accordance with applicable federal, state, and local laws. Compensation for this role will be determined based on skills, qualifications, experience, and market factors. Where required by law, the pay range for this position will be disclosed in the job posting or provided upon request. Additional compensation information, such as benefits, bonuses, and commissions, will be provided as required by law. We do not discriminate or retaliate against employees or applicants for inquiring about, discussing, or disclosing their pay or the pay of another employee or applicant, as protected under applicable law. Pay ranges are available upon request.


Background Screening Statement

Employment offers are contingent upon the successful completion of a background screening, which may include employment verification, education verification, criminal history check, and other job-related inquiries, as permitted by law. All screenings are conducted in accordance with applicable federal, state, and local laws, and information collected will be kept confidential. If any adverse decision is made based on the results, applicants will be notified and given an opportunity to respond.