Huron

ETL Developer

Bangalore, India - Outer Ring Road | Full time

Huron is a global consultancy that collaborates with clients to drive strategic growth, ignite innovation, and navigate constant change. Through a combination of strategy, expertise, and creativity, we help clients accelerate operational, digital, and cultural transformation, enabling the change they need to own their future.

Join our team as the expert you are now and create your future.

We are looking for a highly skilled Data Integration Engineer to design, build, and manage scalable data pipelines and integration solutions across cloud and on-premises platforms. The role requires strong expertise in ETL/iPaaS tools, APIs, and data platforms, with exposure to AI/ML-driven automation for smarter monitoring, anomaly detection, and data quality improvement.

Responsibilities:

  • Design, develop, and optimize data integration workflows, ETL/ELT pipelines built on Informatica Intelligent Cloud Services (IICS), and APIs.
  • Work with iPaaS and ETL tools (Informatica) to integrate enterprise systems.
  • Build pipelines across cloud platforms (AWS, Azure, GCP) and modern warehouses (Snowflake, Databricks, BigQuery, Redshift).
  • Implement data quality, lineage, and governance frameworks to ensure reliable data flow.
  • Leverage AI/ML models for data anomaly detection, pipeline monitoring, and predictive quality checks.
  • Contribute to self-healing pipeline design by incorporating AI-driven automation.
  • Collaborate with architects, analysts, and business teams to integrate structured, semi-structured, and unstructured data sources.
  • Document integration patterns, best practices, and reusable frameworks.

Requirements:

  • 6–8 years of experience in data integration, ETL/ELT design, and data pipelines.
  • Strong expertise in Informatica or similar ETL/iPaaS tools.
  • Proficiency in SQL, Python, and automation scripting.
  • Experience with cloud data platforms (Snowflake, Databricks, BigQuery, etc.).
  • Familiarity with data governance practices (cataloging, lineage, DQ frameworks).
  • Exposure to AI/ML concepts applied to data quality and pipeline optimization.
  • Understanding of DevOps/CI-CD pipelines for data integration deployments.

Preferences:

  • Hands-on experience with Kafka, Spark, Airflow, or event-driven architectures.
  • Knowledge of REST APIs, microservices, and real-time data integration.
  • Conceptual understanding of, or hands-on exposure to, ML frameworks (scikit-learn, TensorFlow, PyTorch).
  • Experience contributing to AI-augmented/self-healing pipelines.
  • Bachelor’s or Master’s degree in Computer Science, Data Engineering, Information Systems, or a related field.

Position Level

Associate

Country

India