We are tech transformation specialists, uniting human expertise with AI to create scalable tech solutions.
With over 8,000 CI&Ters around the world, we’ve built partnerships with more than 1,000 clients over our 30-year history. Artificial Intelligence is our reality.
We are looking for a Data Engineer to join our global team at CI&T. You will play a critical role in designing, developing, and supporting scalable, high-quality data pipelines and infrastructure. Working with multidisciplinary teams across Brazil, Europe, the US, and other regions, you’ll help transform data into actionable insights that support digital products and business decisions.
Main Responsibilities:
● Design, build, and maintain scalable, high-quality data pipelines for structured and unstructured data.
● Implement robust data ingestion, transformation, and storage using cloud-based technologies.
● Collaborate with stakeholders to understand business goals and translate them into data engineering solutions.
● Monitor, troubleshoot, and optimize data pipelines for reliability and performance.
● Support data validation, testing, and documentation processes.
● Contribute to the design and deployment of modern data architectures (e.g., data lakes, lakehouses, data warehouses).
● Apply Infrastructure-as-Code (IaC) practices for provisioning and managing cloud resources.
● Integrate emerging tools and frameworks to modernize existing data environments.
● Ensure security, governance, and compliance in all stages of data handling.
● Work in agile teams, contributing to continuous improvement and mentoring junior team members.
Requirements:
● Proven experience in data pipeline development using Python and SQL.
● Hands-on experience with cloud platforms such as AWS (Glue, Redshift, EMR) or Azure (ADF, Synapse).
● Proficiency with ETL/ELT tools and frameworks (e.g., Talend, Azure Data Factory, Airflow, dbt).
● Familiarity with modern data warehouses and lakehouses (e.g., Snowflake, BigQuery, Databricks).
● Experience with Infrastructure-as-Code tools (Terraform, CloudFormation, ARM templates).
● Strong understanding of data modeling, database design (SQL and NoSQL), and performance optimization.
● Knowledge of version control (Git) and task management and documentation tools (Jira, Confluence).
● Fluent English communication skills (written and spoken) for working effectively in global teams.
● Awareness of data security, compliance (e.g., GDPR), and privacy best practices.
Nice to Have:
● Experience with real-time/streaming data architectures (Kafka, Azure Event Hubs, Spark Streaming).
● Familiarity with BI and visualization tools (Power BI, Tableau, Looker).
● Knowledge of DevOps practices and CI/CD integration for data workloads.
● Exposure to data quality frameworks and observability tools.
● Cloud certification (e.g., AWS Data Analytics, Azure Data Engineer Associate).
● Familiarity with machine learning pipelines or MLOps frameworks.
● Experience with data migration or modernization from legacy systems to cloud-native architectures.