We are tech transformation specialists, uniting human expertise with AI to create scalable tech solutions.
With over 8,000 CI&Ters around the world, we’ve built partnerships with more than 1,000 clients during our 30 years of history. Artificial Intelligence is our reality.
We are looking for a Senior Data Developer with strong knowledge in developing and maintaining data pipelines in Databricks, integrating and transforming data from various sources.
Key Responsibilities
- Develop and maintain data pipelines in Databricks.
- Integrate data from various sources (APIs, relational databases, files, etc.).
- Collaborate with business and analytics teams to understand data requirements.
- Ensure the quality, reliability, security, and governance of ingested data.
- Follow modern DataOps practices such as code versioning, data testing, and CI/CD.
- Document data engineering processes and best practices.
Requirements
- Proven experience in building and managing large-scale data pipelines in Databricks (PySpark, Delta Lake, SQL).
- Strong programming skills in Python and SQL for data processing and transformation.
- Deep understanding of ETL/ELT frameworks, data warehousing, and distributed data processing.
- Hands-on experience with modern DataOps practices: version control (Git), CI/CD pipelines, automated testing, infrastructure-as-code.
- Familiarity with cloud platforms (AWS, Azure, or GCP) and related data services.
- Strong problem-solving skills, with the ability to troubleshoot performance, scalability, and reliability issues.
- Proficiency in Git.
- Advanced spoken English is essential.
#LI-GP1