We are tech transformation specialists, uniting human expertise with AI to create scalable tech solutions.
With over 8,000 CI&Ters around the world, we’ve built partnerships with more than 1,000 clients over our 30-year history. Artificial Intelligence is our reality.
We are looking for a Senior Data Engineer to join our team on an exciting project for a major American investment client.
This position requires a hybrid work model (3x/week) at our office in Campinas, SP, Brazil.
Responsibilities:
- Build and maintain end-to-end data pipelines in AWS, often ingesting data from multiple REST APIs (see the ingestion sketch after this list).
- Design reliable, observable, and scalable pipelines, with a strong focus on data quality and monitoring.
- Explain and discuss the logic of your solutions with stakeholders, not just the tools used.
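To give a flavor of the ingestion work, here is a minimal sketch of pulling from a cursor-paginated REST API with rate-limit handling and retries; the endpoint, token handling, and response fields (data, next_cursor) are hypothetical:

```python
import time

import requests

BASE_URL = "https://api.example.com/v1/events"  # hypothetical endpoint
API_TOKEN = "..."  # in practice, fetched from a secrets manager

def fetch_all_pages(max_retries: int = 3) -> list[dict]:
    """Pull every page from a cursor-paginated REST API, with basic
    rate-limit handling and retries on transient errors."""
    records, cursor = [], None
    session = requests.Session()
    session.headers["Authorization"] = f"Bearer {API_TOKEN}"

    while True:
        params = {"cursor": cursor} if cursor else {}
        for attempt in range(max_retries):
            resp = session.get(BASE_URL, params=params, timeout=30)
            if resp.status_code == 429 or resp.status_code >= 500:
                # Rate limited or transient server error: back off and
                # retry, honoring Retry-After when the API provides it.
                time.sleep(int(resp.headers.get("Retry-After", 2 ** attempt)))
                continue
            resp.raise_for_status()  # fail loudly on non-retryable errors
            break
        else:
            raise RuntimeError("exhausted retries fetching a page")

        payload = resp.json()
        records.extend(payload["data"])      # hypothetical response shape
        cursor = payload.get("next_cursor")  # absent on the final page
        if not cursor:
            return records
```

The for/else retry pattern keeps transient failures from silently truncating a load, which matters when downstream reports depend on complete data.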
Requirements:
- Advanced English, comfortable explaining technical solutions in conversation.
- Proven experience as a Data Engineer.
- Strong hands-on experience in Python (data processing, API integration, automation) and advanced SQL (CTEs, window functions, optimization); see the SQL sketch after this list.
- Solid experience designing, building, and maintaining data pipelines on AWS, including:
- Ingestion from REST APIs (authentication, pagination, rate limiting, error handling).
- Transformation and loading into data lakes / data warehouses.
- Orchestration (e.g. Step Functions or Glue Workflows); see the orchestration sketch after this list.
- Strong hands-on experience with AWS Serverless and data ecosystem:
- Lambda, Glue (Glue Jobs / Glue Studio), DynamoDB, ECS/Fargate (where applicable).
- S3, IAM, CloudWatch (logs, metrics, alarms), Step Functions.
- Experience with PySpark or similar distributed processing frameworks (e.g. Spark on EMR/Glue); see the PySpark sketch after this list.
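To illustrate the SQL depth expected, a short sketch that uses a CTE and a window function to keep only each customer's latest event, submitted here via Athena; the table, columns, and S3 output location are hypothetical:

```python
import boto3

# CTE + window function: keep only the latest event per customer.
# Table and column names are hypothetical.
LATEST_EVENTS_SQL = """
WITH ranked AS (
    SELECT
        customer_id,
        event_type,
        event_ts,
        ROW_NUMBER() OVER (
            PARTITION BY customer_id
            ORDER BY event_ts DESC
        ) AS rn
    FROM raw.customer_events
)
SELECT customer_id, event_type, event_ts
FROM ranked
WHERE rn = 1;
"""

athena = boto3.client("athena")
athena.start_query_execution(
    QueryString=LATEST_EVENTS_SQL,
    ResultConfiguration={"OutputLocation": "s3://my-bucket/athena-results/"},
)
```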
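For orchestration, a hedged sketch of starting a Step Functions state machine that would chain the ingestion and transformation steps; the state machine ARN and input payload are placeholders:

```python
import json

import boto3

sfn = boto3.client("stepfunctions")

# Kick off one run of the pipeline state machine; the ARN and the
# input payload below are placeholders.
response = sfn.start_execution(
    stateMachineArn="arn:aws:states:us-east-1:123456789012:stateMachine:daily-ingest",
    input=json.dumps({"run_date": "2024-01-01"}),
)
print(response["executionArn"])
```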
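And for distributed processing, a minimal PySpark sketch of a transform-and-load step of the kind a Glue or EMR job might run; the S3 paths and event schema are hypothetical:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events-to-lake").getOrCreate()

# Read raw JSON landed by the ingestion step, aggregate per customer
# and day, and write partitioned Parquet to the lake (paths are
# placeholders).
events = spark.read.json("s3://raw-bucket/events/")
daily = (
    events
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("customer_id", "event_date")
    .agg(F.count("*").alias("event_count"))
)
daily.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://lake-bucket/curated/customer_daily/"
)
```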
Nice to have:
- Experience in marketing data use cases (campaigns, events, customer 360, CDP).
- Experience with Adobe Experience Platform (AEP), especially CDP.
- Experience with event-based architectures (e.g. tracking events, customer journeys).
- Good understanding of Data Engineering fundamentals:
- Data modeling (dimensional and relational), data structures.
- ETL vs ELT, batch vs streaming workflows.
- Data quality, data governance, and data observability practices (monitoring, alerting, lineage); see the quality-gate sketch after this list.
- Familiarity with Git/GitHub and CI/CD workflows for data pipelines.
- Strong problem-solving mindset: able to clearly explain the logic behind a solution, its trade-offs, and the troubleshooting steps taken.
- Strong communication skills and ability to collaborate with cross-functional teams.
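As a concrete example of the observability practices above, a hedged sketch of a row-count quality gate that publishes a custom CloudWatch metric for an alarm to watch; the namespace, metric name, and threshold are illustrative:

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

def publish_quality_metric(row_count: int, expected_min: int = 1) -> None:
    """Emit a custom metric so a CloudWatch alarm can page on empty or
    suspiciously small loads (namespace and threshold are illustrative)."""
    cloudwatch.put_metric_data(
        Namespace="DataPipelines",
        MetricData=[{
            "MetricName": "LoadedRowCount",
            "Value": float(row_count),
            "Unit": "Count",
        }],
    )
    if row_count < expected_min:
        raise ValueError(f"quality gate failed: only {row_count} rows loaded")
```

Raising after the metric is published means the alarm still fires even if the orchestrator retries and later succeeds.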
#LI-AG4
#mid-senior