We are tech transformation specialists, uniting human expertise with AI to create scalable tech solutions.
With over 7,400 CI&Ters around the world, we’ve built partnerships with more than 1,000 clients during our 30 years of history. Artificial Intelligence is our reality.
We are looking for a highly skilled Senior Data Developer / DataOps Specialist to join our data engineering team. This role is responsible for designing, developing, and maintaining large-scale data ingestion and transformation pipelines on Databricks. You will be a key contributor to implementing modern DataOps practices, ensuring data reliability, scalability, and alignment with business requirements through the integration of data contracts and automated quality checks.
Key Responsibilities
Design, build, and optimize data ingestion and transformation pipelines using Databricks and other modern cloud-based data platforms.
Implement and enforce data contracts, ensuring schema consistency and compatibility across services.
Develop and integrate data quality checks (validation, anomaly detection, reconciliation) into pipelines.
Apply DataOps best practices, including CI/CD for data workflows, observability, monitoring, and automated testing.
Collaborate with product, analytics, and engineering teams to understand requirements and deliver reliable, production-grade data solutions.
Drive improvements in data performance, cost optimization, and scalability.
Contribute to architectural decisions around data modeling, governance, and integration patterns.
Mentor junior data engineers and developers, providing code reviews, knowledge-sharing, and best practice guidance.
Required Skills and Qualifications:
Must-have Skills:
Proven experience in building and managing large-scale data pipelines in Databricks (PySpark, Delta Lake, SQL).
Strong programming skills in Python and SQL for data processing and transformation.
Deep understanding of ETL/ELT frameworks, data warehousing, and distributed data processing.
Strong experience with data pipeline orchestration tools (e.g., Airflow, Dagster, Prefect, dbt, Azure Data Factory).
Hands-on experience with modern DataOps practices: version control (Git), CI/CD pipelines, automated testing, infrastructure-as-code.
Familiarity with cloud platforms (AWS, Azure, or GCP) and related data services.
Strong problem-solving skills with the ability to troubleshoot performance, scalability, and reliability issues.
Advanced English is essential.
Understanding of the end-to-end business context behind data requirements.
Nice-to-have Skills:
Experience with data contracts, schema evolution, and ensuring compatibility across services.
Experience with Databricks Asset Bundles.
Expertise in data quality frameworks (e.g., Great Expectations, Soda, dbt tests, or custom-built solutions).
Familiarity with dbt, Atlan, and Soda.
Integration with Power BI.
Experience with data streaming.
#LI-BM2