Synchrony Financial

VP, AI Data Architect (L12)

Hyderabad, IN | Full time

Job Description:

Role Title: VP, AI Data Architect (L12)

Company Overview:  

Synchrony (NYSE: SYF) is a premier consumer financial services company delivering one of the industry’s most complete digitally enabled product suites. Our experience, expertise and scale encompass a broad spectrum of industries including digital, health and wellness, retail, telecommunications, home, auto, outdoors, pet and more.

  • We have recently been ranked #2 among India’s Best Companies to Work for by Great Place to Work. We were among the Top 50 of India’s Best Workplaces in Building a Culture of Innovation for All by GPTW, and in the Top 25 of Best Workplaces in BFSI by GPTW. We have also been recognized by the AmbitionBox Employee Choice Awards among the Top 20 Mid-Sized Companies, ranked #3 among Top Rated Companies for Women, and among the Top-Rated Financial Services Companies.

  • We provide best-in-class employee benefits and programs that cater to work-life integration and overall well-being.

  • We provide career advancement and upskilling opportunities, focusing on Advancing Diverse Talent to take up leadership roles.

Organizational Overview: 

Synchrony's Engineering Team is a dynamic and innovative team dedicated to driving technological excellence. As a member of this Team, you'll play a pivotal role in designing and developing a cutting-edge tech stack and solutions that redefine industry standards.

The credit card we use every day to purchase our essentials and later settle the bills: a simple process we are all used to. Now consider the vast complexity hidden behind it, operating tirelessly for millions of cardholders. The sheer volume of data processed is mind-boggling. Fortunately, advanced technology stands ready to automate and manage this constant torrent of information, ensuring smooth transactions around the clock, 365 days a year.

Our collaborative environment encourages creative problem-solving and fosters career growth. If you're passionate about engineering and innovation, join us to work on diverse projects, from fintech to data analytics, and contribute to shaping the future of technology.

Role Summary/Purpose:

The VP, Data & AI Foundations is the strategic leader responsible for building and governing the data and knowledge fabric that powers the enterprise agentic AI platform and agents. This role owns how data is sourced, modeled, governed, and delivered into RAG pipelines, memory stores, and analytics systems so that agents can reliably access high-quality, compliant information.

The VP partners closely with the AI Platform, Agent Building, Governance/Model Risk, and Enterprise Data teams to define data architecture, standards, and operating models that enable scalable, secure, and cost-effective AI workloads. This leader combines deep data architecture skills, strong understanding of AI/LLM/RAG patterns, and excellent stakeholder management.

Key Responsibilities:

  • Define the target-state Data & AI Foundations architecture supporting agentic AI use cases, including RAG pipelines, enterprise knowledge graph or metadata layer, data products, and AI-ready datasets.

  • Own the strategy and roadmap for making key enterprise data sources "AI-ready": curation, quality, metadata, access patterns, latency requirements, and retention.

  • Partner with source system owners (core servicing, CRM, collections, risk, fraud, etc.) to define data contracts, SLAs, and integration patterns that support downstream RAG and analytics.

  • Design and govern canonical data models and semantic layers used by RAG pipelines, memory stores, and analytics to ensure consistency across agents and domains.

  • Lead the design of RAG data infrastructure on cloud (e.g., PostgreSQL, Redshift, vector stores, object storage) and ensure it aligns with performance, cost, and compliance constraints.

  • Define and implement RAG evaluation strategies including retrieval quality metrics, ranking and re-ranking optimization, relevance scoring, and A/B testing frameworks for continuous improvement.

  • Establish data preparation and curation pipelines for model fine-tuning, including dataset selection, labeling strategies, quality validation, versioning, and compliance with model risk policies.

  • Design and optimize retrieval strategies for RAG systems: chunking approaches, embedding models, indexing techniques, ranking algorithms, re-ranking logic, and hybrid search patterns.

  • Build and maintain robust data pipelines (batch and streaming) that ingest, transform, enrich, and deliver data into RAG systems, vector stores, feature stores, and agent contexts with appropriate SLAs.

  • Collaborate with the Enterprise AI Platform team on how data services (RAG APIs, feature stores, metadata services) are exposed as platform primitives for agent builders.

  • Define and enforce data governance policies for AI: data classification, lineage, access controls, PII handling, retention, and usage logging for AI workloads.

  • Partner with AI Governance/Model Risk and InfoSec/AppSec to ensure data usage in prompts, context, and tools adheres to policies, including regulatory, privacy, and model risk requirements.

  • Establish data quality and observability practices for AI data: data SLAs, freshness, completeness, drift detection, and business rule validation tied to AI outcomes.

  • Drive adoption of metadata and catalog tools so platform and agent teams can discover, understand, and safely consume datasets and RAG endpoints.

  • Define and oversee patterns for integrating external data (third-party, public, partner data) into AI workflows, including licensing checks, quality assessment, and monitoring.

  • Perform other duties and/or special projects as assigned.

Qualifications/Requirements:

  • Bachelor's degree in Computer Science, Engineering, Information Systems, or a related field, with 12+ years of experience across data engineering, data architecture, or analytics platforms, including at least 5+ years in cloud data platforms and enterprise data leadership roles. In lieu of a degree, 14+ years of experience across data engineering, data architecture, or analytics platforms, with at least 5+ years in cloud data platforms and enterprise data leadership roles.

  • Strong experience with modern cloud data stacks (e.g., data warehouses like Redshift/Snowflake/BigQuery, relational databases like PostgreSQL, and object storage) and their use in analytics and AI.

  • Hands-on experience with vector databases and search technologies (for example PostgreSQL pgvector, Pinecone, OpenSearch, or similar) to support RAG and semantic search workloads.

  • Demonstrated expertise in designing and governing data models, semantic layers, and data products that serve multiple consuming applications and analytics teams.

  • Hands-on experience designing or supporting RAG architectures including chunking strategies, embedding pipelines, retrieval optimization, ranking/re-ranking, and evaluation frameworks.

  • Solid understanding of LLM and agentic AI patterns (prompts, tools, RAG, memory) and how data quality and structure impact AI behavior and performance.

  • Proven experience building data pipelines for AI/ML use cases including ETL/ELT workflows, streaming data integration, and data preparation for model training and fine-tuning.

  • Strong experience with Lakehouse architecture using S3, Apache Iceberg, Glue Data Catalog, and Redshift.

  • Strong Python skills for building data processing, evaluation, and automation pipelines, plus familiarity with DevOps practices (CI/CD, infrastructure as code, environment management).

  • Good understanding of enterprise data governance and access control tools such as AWS Lake Formation and the Glue Data Catalog, as well as metadata management frameworks.

  • Good understanding of identity and data security architecture: IAM, IAM Identity Center, cross-account data access patterns, and identity propagation for AI agents and services.

  • Good understanding of AWS infrastructure concepts (networking, security, storage, compute) and how they apply to data and AI workloads.

  • Experience working with ETL/ELT pipelines, streaming data, and integration technologies (e.g., CDC, APIs, event buses) for both batch and real-time use cases.

  • Proven ability to lead multi-disciplinary teams and influence across platform, AI, data, and business stakeholders.

  • Excellent communication and storytelling skills, with the ability to explain complex data/AI architecture decisions in business terms and secure buy-in at VP/SVP levels.

Desired Skills:

  • Experience implementing or leveraging knowledge graphs, entity resolution, or semantic search to power AI and RAG use cases.

  • Hands-on exposure to vector databases and LLM-focused data tooling (embedding pipelines, chunking strategies, indexing services, re-ranking models).

  • Background in building data products specifically targeted for AI/ML (feature stores, labeled datasets, evaluation datasets, fine-tuning corpora).

  • Experience with RAG evaluation tools and frameworks for measuring retrieval quality, answer relevance, and grounding accuracy.

  • Familiarity with enterprise architecture frameworks (TOGAF, Zachman) as they apply to data and AI.

  • Prior experience in financial services, credit, payments, or similar domains where data lineage, explainability, and audit trails are critical.

  • Basic AWS solution architecture knowledge, including core services, Amazon Bedrock, and Bedrock AgentCore, to collaborate effectively with platform and agent teams.

Eligibility Criteria:

  • Bachelor's degree in Computer Science, Engineering, Information Systems, or a related field, with 12+ years of experience across data engineering, data architecture, or analytics platforms, including at least 5+ years in cloud data platforms and enterprise data leadership roles. In lieu of a degree, 14+ years of experience across data engineering, data architecture, or analytics platforms, with at least 5+ years in cloud data platforms and enterprise data leadership roles.

Work Timings: 2 PM – 11 PM IST

This role qualifies for Enhanced Flexibility offered in Synchrony India and will require the incumbent to be available between 06:00 AM and 11:30 AM Eastern Time (timings are anchored to US Eastern hours and will adjust twice a year locally). This window is for meetings with India and US teams; the remaining hours are flexible for the employee to choose. Exceptions may apply periodically due to business needs.
We are proud to offer flexibility at Synchrony. Our way of working allows you the option to work from home or workspaces in our Regional Engagement Hubs—Hyderabad, Bengaluru, Pune, Kolkata, or Delhi/NCR.
Occasionally you may be required to commute or travel to Hyderabad or one of the Regional Engagement Hubs for in person engagement activities such as business or team meetings, trainings, and culture events.

For Internal Applicants:

  • Understand the criteria and mandatory skills required for the role before applying

  • Inform your manager and HRM before applying for any role on Workday

  • Ensure that your professional profile is updated (fields such as education, prior experience, and other skills); uploading your updated resume (Word or PDF format) is mandatory

  • Must not be on any corrective action plan (Formal/Final Formal)

  • Only L10+ employees who have completed 18 months in the organization and 12 months in their current role and level are eligible to apply for this opportunity

 

Grade/Level: 12

Job Family Group:

Information Technology