Synchrony Financial

AVP, Applied Model Ops Developer (L11)

Hyderabad, India | Full time

Job Description:

Role Title: AVP, Applied Model Ops Developer (L11)

Company Overview:

Synchrony (NYSE: SYF) is a premier consumer financial services company delivering one of the industry’s most complete digitally enabled product suites. Our experience, expertise and scale encompass a broad spectrum of industries including digital, health and wellness, retail, telecommunications, home, auto, outdoors, pet and more.

  • We have recently been ranked #2 among India’s Best Companies to Work for by Great Place to Work. We were among the Top 50 India’s Best Workplaces in Building a Culture of Innovation for All by GPTW and Top 25 among Best Workplaces in BFSI by GPTW. We have also been recognized by the AmbitionBox Employee Choice Awards among the Top 20 Mid-Sized Companies, ranked #3 among Top Rated Companies for Women, and among Top-Rated Financial Services Companies.

  • We provide best-in-class employee benefits and programs that cater to work-life integration and overall well-being.

  • We provide career advancement and upskilling opportunities, focusing on Advancing Diverse Talent to take up leadership roles.

Organizational Overview: 

Our Analytics organization comprises data analysts focused on enabling strategies that enhance customer and partner experience and optimize business performance. They do this through data management and the development of full-stack analytics solutions, from descriptive to prescriptive, using cutting-edge technologies, thereby enabling business growth.
 

Role Summary:

The AVP, Applied Model Ops Developer within the India Analytics Hub (IAH), operating under the Decision Management, Model Operations & Analytics team, is responsible for designing and building the data infrastructure, pipelines, and tooling required to support robust, scalable, and automated post-deployment monitoring of models. This role bridges the gap between data science and software engineering by building, deploying, and maintaining production-ready AI/ML systems. The engineer collaborates closely with model developers, product managers, risk partners, and compliance teams to operationalize monitoring strategies aligned with model governance policies.

Key Responsibilities:

  • Engage regularly with model developers, validators, and risk stakeholders to understand their evolving data needs for model development, monitoring, and governance. 

  • Partner with credit analytics, risk, fraud, marketing, and operations functions to identify, define, and prioritize use cases requiring model-ready data. 

  • Build scalable data architectures to support real-time and batch monitoring, including data ingestion, enrichment, and retention practices.

  • Support pipeline development by designing and maintaining automated end-to-end ML pipelines for data collection, preprocessing, feature engineering, and model training.

  • Conduct data transformation by converting raw observations into variables (features) that machine learning models can understand, such as turning timestamps into cyclical time features, and transform theoretical data science prototypes into robust, high-performance software systems that can handle large volumes of real-time data.

  • CI/CD Pipeline Development: Build and maintain automated pipelines that handle not just code, but also data validation, model training, and artifact management.

  • Design, develop, and maintain robust pipelines to collect, transform, and store data used in model monitoring workflows (e.g., scoring data, performance metrics, outcomes).

  • Provide thought and technical leadership in generating new signals from raw data by applying techniques such as normalization, scaling, and categorical encoding.

  • Integrate data pipelines with model lifecycle platforms, MLOps tools, and observability solutions to ensure seamless model performance tracking.

  • Partner with model risk and compliance teams to ensure data lineage, audit trails, and documentation are preserved and accessible for regulatory reviews (e.g., SR 11-7 compliance).

  • Liaise with cloud, data lake, data warehouse, and model governance engineering teams on delivery execution and backlog prioritization.

  • Collaborate with data scientists, model validators, and product managers to align monitoring data infrastructure with evolving model monitoring requirements.

  • Optimize data storage and compute performance for large-scale monitoring use cases involving high-frequency scoring or model ensembles.
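The feature-engineering techniques named in the responsibilities above (cyclical time features, scaling, categorical encoding) can be sketched in a few lines. This is a minimal illustration of the concepts only; the function names are hypothetical and do not reflect any Synchrony toolchain:

```python
import math

def cyclical_hour_features(hour: int) -> tuple[float, float]:
    """Encode hour-of-day (0-23) as sin/cos so that 23:00 and 00:00
    end up close together in feature space."""
    angle = 2 * math.pi * hour / 24
    return math.sin(angle), math.cos(angle)

def min_max_scale(values: list[float]) -> list[float]:
    """Rescale a column to [0, 1]; one common normalization step."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def one_hot(category: str, vocabulary: list[str]) -> list[int]:
    """Simple categorical encoding: one indicator per known category."""
    return [1 if category == v else 0 for v in vocabulary]
```

In production, transformations like these would typically live inside the automated pipelines described above (e.g., as Spark or dbt transformations) rather than as standalone functions.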

 

Required Skills/Knowledge

  • Bachelor’s degree in a quantitative, technical, or data-focused field (e.g., Statistics, Mathematics, Computer Science, Data Science, Engineering) with 6+ years’ experience, OR, in lieu of a degree, 8+ years of relevant work experience in monitoring, validation, or credit risk strategy.

  • 6+ years of professional experience in model operations, data engineering, or analytics infrastructure. Strong proficiency with data engineering tools and frameworks (e.g., Apache Spark, Airflow, Kafka, dbt, PySpark).

  • Proficient in programming languages such as SAS, Python, and SQL for building monitoring pipelines and validation checks.

  • Experience with cloud-based data infrastructure (e.g., AWS, Azure, GCP) and data warehousing (e.g., Snowflake, Redshift, BigQuery).

  • Familiarity with MLOps practices, model metadata tracking (e.g., MLflow), and monitoring toolkits (e.g., Evidently AI, WhyLabs, Prometheus).

  • Understanding of model risk governance requirements and the role of data engineering in ensuring compliant model monitoring.

  • Ability to work in an agile environment and deliver high-quality, production-grade code in collaboration with DevOps and platform engineering teams.
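As one concrete instance of the "monitoring pipelines and validation checks" mentioned above, a population stability index (PSI) comparison between development-time and current score distributions is a widely used drift check. This is a minimal sketch with an illustrative function name and threshold, not a prescribed implementation:

```python
import math

def population_stability_index(expected: list[float], actual: list[float]) -> float:
    """PSI between two binned score distributions, each given as a list
    of bin proportions summing to 1. Values above roughly 0.25 are
    commonly treated as a significant population shift."""
    eps = 1e-6  # guard against empty bins before taking the log
    psi = 0.0
    for e, a in zip(expected, actual):
        e, a = max(e, eps), max(a, eps)
        psi += (a - e) * math.log(a / e)
    return psi
```

A check like this would typically run on a schedule inside the monitoring pipeline, with the result logged and an alert raised when the threshold is breached.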

Desired Skills/Knowledge:

  • Master’s degree or relevant advanced certification preferred

  • Strong problem-solving skills and experience automating repetitive data monitoring tasks.

  • Attention to detail and commitment to maintaining high standards of data quality, integrity, and compliance.

  • Experience building alerting mechanisms and diagnostic logging for monitoring model behaviors.

  • Excellent communication skills, with the ability to explain complex technical concepts to non-technical audiences and collaborate across teams.

  • Exposure to explainability frameworks and the role of data in enhancing model transparency and interpretability.

Eligibility Criteria:

  • Bachelor’s degree in a quantitative, technical, or data-focused field (e.g., Statistics, Mathematics, Computer Science, Data Science, Engineering) with 6+ years’ experience, OR, in lieu of a degree, 8+ years of relevant work experience in monitoring, validation, or credit risk strategy.

Work Timings:

This role qualifies for Enhanced Flexibility offered in Synchrony India and will require the incumbent to be available between 06:00 AM Eastern Time and 11:30 AM Eastern Time (timings are anchored to US Eastern hours and will adjust twice a year locally). This window is for meetings with India and US teams. The remaining hours are flexible for the employee to choose. Exceptions may apply periodically due to business needs.
We are proud to offer flexibility at Synchrony. Our way of working allows you the option to work from home or workspaces in our Regional Engagement Hubs—Hyderabad, Bengaluru, Pune, Kolkata, or Delhi/NCR.
Occasionally you may be required to commute or travel to Hyderabad or one of the Regional Engagement Hubs for in-person engagement activities such as business or team meetings, trainings, and culture events.

 

For Internal Applicants:

  • Understand the criteria or mandatory skills required for the role, before applying

  • Inform your manager and HRM before applying for any role on Workday

  • Ensure that your professional profile is updated (fields such as education, prior experience, other skills) and it is mandatory to upload your updated resume (Word or PDF format)

  • Must not be on any corrective action plan (Formal/Final Formal)

  • Only employees at L9+ who have completed 18 months in the organization and 12 months in their current role and level are eligible to apply.

Grade/Level : 11

Job Family Group:

Data Analytics