CIGNA

Software Engineering Analyst - HIH - Evernorth

Hyderabad, India | Full time

ABOUT EVERNORTH: 

Evernorth℠ exists to elevate health for all, because we believe health is the starting point for human potential and progress. As champions for affordable, predictable and simple health care, we solve the problems others don’t, won’t or can’t.

Our innovation hub in India will allow us to work with the right talent, expand our global footprint, improve our competitive stance, and better deliver on our promises to stakeholders. We are passionate about making healthcare better by delivering world-class solutions that make a real difference.

We are always looking upward. And that starts with finding the right talent to help us get there.

Position Overview

Evernorth is seeking a build-and-operate Data Engineer/Developer to code, deploy, and support data pipelines within our Data & Analytics organization. You will build ETL/ELT in Databricks (Python/Spark), write scalable SQL transformations, and integrate data from multiple sources into curated, production-ready datasets. You will own the full pipeline lifecycle: development, release, monitoring, and ongoing optimization to keep data flowing reliably for downstream systems.

You will implement end-to-end Databricks jobs from ingestion through transformation and delivery, including reusable frameworks, data-quality checks, and unit/integration tests. You will operate what you build: schedule and orchestrate runs, monitor clusters and job health, troubleshoot failures, and remediate data issues to meet delivery SLAs. You will continuously tune Spark code and pipeline design for performance and cost, and automate deployments using CI/CD and operational runbooks.

Responsibilities

  • Design, build, and deploy ETL pipelines to ingest, transform, and load data from multiple sources.

  • Develop and maintain data catalogues and metadata management to improve data discovery and governance.

  • Implement automated data-quality validations and monitoring to ensure accuracy, completeness, and consistency.

  • Monitor and troubleshoot Databricks jobs and clusters, including performance, failures, and resource utilization.

  • Tune and refactor existing ETL workflows to improve scalability, reliability, and runtime performance.

  • Define and promote engineering standards and best practices for the data transformation layer to support smooth delivery and ongoing support.

  • Apply security and compliance controls, including role-based access, encryption, and auditability for sensitive data.

  • Manage and optimize cloud services (AWS preferred) for storage, compute, and orchestration to support scalable data processing.

  • Automate pipeline scheduling and operational workflows using orchestration tools and CI/CD practices.

  • Evaluate emerging tools and patterns and deliver proof-of-concepts (POCs) to validate solutions.

  • Stay current with new technologies and apply relevant innovations to improve the platform.

  • Develop and execute test strategies (unit/integration) for pipelines to validate logic and prevent regressions.

  • Partner with cross-functional teams to deliver production-ready data solutions on time.

  • Create and maintain technical documentation, including design notes, operational runbooks, and support guides.

Qualifications

Required Skills:

  • SQL: Write and tune complex SQL on OLAP platforms (Teradata) and OLTP platforms (Oracle, DB2, PostgreSQL, SingleStore) to support high-volume transformations and reporting.

  • Programming: Build and support production data-engineering code using Scala or Python, including debugging, refactoring, and writing modular libraries.

  • Big Data & Analytics: Run Spark workloads in Databricks: build notebooks/jobs, manage clusters, and troubleshoot performance and failures in distributed processing.

  • ETL Development: Build, schedule, and operate ETL/ELT pipelines end-to-end, including ingestion, transformations, error handling, and monitoring for large-scale systems.

  • Databases: Build schemas, write optimized queries, and troubleshoot performance across relational and analytical data stores.

  • Relational: Implement and optimize SQL workloads in Oracle and PostgreSQL (indexing, query plans, partitioning, and data modelling).

  • CI/CD: Implement build/release automation for data pipelines (branching, packaging, environment promotion, and rollback) using modern DevOps tools.

  • Cloud: Build and operate data workloads on AWS (preferred), configure storage/compute, manage access, and troubleshoot cloud runtime issues.

  • Performance Tuning: Profile and optimize SQL using execution plans, indexing/partitioning strategies, statistics, and query refactoring.

Required Experience & Education: 

  • Experience: 3+ years in software engineering with a strong focus on data engineering.

  • Bachelor's degree or higher from an accredited university, or a minimum of three (3) years of software development experience in lieu of the degree requirement.

Desired Experience:  

  • Advanced proficiency with Databricks.

  • Strong knowledge of cloud architecture.

Location & Hours of Work

Full-time position, working 45 hours per week. Expected overlap with US hours as appropriate. Primarily based in the Innovation Hub in Hyderabad, India, with flexibility to work remotely as required.  

Equal Opportunity Statement

Evernorth is an Equal Opportunity Employer actively encouraging and supporting organization-wide involvement of staff in diversity, equity, and inclusion efforts to educate, inform and advance both internal practices and external work with diverse client populations.

About Evernorth Health Services

Evernorth Health Services, a division of The Cigna Group, creates pharmacy, care and benefit solutions to improve health and increase vitality. We relentlessly innovate to make the prediction, prevention and treatment of illness and disease more accessible to millions of people. Join us in driving growth and improving lives.