CI&T

[Job - 26026] Principal Data Architect, Brazil

Brazil | Full Time
We are tech transformation specialists, uniting human expertise with AI to create scalable tech solutions.
With over 7,400 CI&Ters around the world, we’ve built partnerships with more than 1,000 clients during our 30 years of history. Artificial Intelligence is our reality.

Hey everyone! This is Thai Viapiana from CI&T's Talent Attraction Team :) We are looking for a Data Architect / Principal Data Engineer to join our team in our development center, working with a major US client in a fast-paced, high-impact environment. In this role, you will define, architect, and implement scalable data platforms and end-to-end ELT pipelines aligned with modern Lakehouse principles.

You will work closely with cross-functional teams across the US, Colombia, and Brazil to ensure that our data ecosystem is reliable, future-proof, and aligned with enterprise architecture standards. This position requires deep technical expertise, strong architectural thinking, and the ability to influence and mentor engineering teams. Fluent English communication is essential for collaborating with global stakeholders, presenting architectural recommendations, and ensuring alignment across distributed teams.

Core Technical Expertise:
Expert-level SQL, with a demonstrated ability to optimize, refactor, and validate large-scale transformations.
Advanced Python (or similar) for automation, orchestration, and pipeline development.
Hands-on architecture and engineering experience with Snowflake, including performance tuning, security, data governance, dynamic tables, and workload management.
Advanced dbt expertise, including transformation logic, testing, documentation, deployment patterns, and CI/CD integration.
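To illustrate the kind of testing discipline this role expects, here is a plain-Python sketch of the checks behind dbt's built-in unique and not_null column tests. This is not dbt itself (dbt compiles these tests to SQL run against the warehouse); the rows and column names are hypothetical.

```python
# Plain-Python illustration of the assertions behind dbt's built-in
# unique and not_null generic tests. In real dbt, these compile to SQL.
rows = [
    {"customer_id": 1, "email": "a@example.com"},
    {"customer_id": 2, "email": "b@example.com"},
    {"customer_id": 3, "email": None},
]

def not_null(rows, column):
    """Rows that would fail a not_null test on `column`."""
    return [r for r in rows if r[column] is None]

def unique(rows, column):
    """Values that would fail a unique test on `column`."""
    seen, dupes = set(), set()
    for r in rows:
        v = r[column]
        if v in seen:
            dupes.add(v)
        else:
            seen.add(v)
    return sorted(dupes)

assert not_null(rows, "customer_id") == []  # passes: no nulls
assert len(not_null(rows, "email")) == 1    # one failing row
assert unique(rows, "customer_id") == []    # passes: no duplicates
```

In dbt these checks are declared in a model's YAML and run automatically in CI, which is exactly the testing and CI/CD integration the bullet above refers to.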

Data Architecture & Modeling:
Proven production experience with Data Vault 2.0, including Hubs, Links, Satellites, PIT tables, multi-active satellites, and Business Vault patterns.
Experience with AutomateDV or equivalent frameworks is a strong asset.
Deep understanding of Data Lakehouse architectures, including medallion zone structures, incremental ingestion, and open table formats (Iceberg and Delta; Hudi is a plus).
Solid foundation in data modeling best practices, including normalized models, dimensional modeling, historization, and scalable enterprise patterns.
Ability to translate complex business requirements into robust, extensible architectural designs.
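As a taste of the Data Vault 2.0 work mentioned above, here is a minimal sketch of the hash-key convention commonly used to load Hubs and Links: a deterministic hash over normalized, delimited business keys. The delimiter, casing rules, and choice of MD5 are assumptions for illustration; real implementations standardize these per project.

```python
import hashlib

def hash_key(*business_keys: str, delimiter: str = "||") -> str:
    """Build a deterministic Data Vault-style hash key from business keys.

    Keys are trimmed and upper-cased before hashing so the same business
    entity always yields the same key regardless of source formatting.
    """
    normalized = delimiter.join(k.strip().upper() for k in business_keys)
    return hashlib.md5(normalized.encode("utf-8")).hexdigest()

# The same customer key always hashes to the same Hub key:
assert hash_key(" cust-001 ") == hash_key("CUST-001")

# A Link hash key combines the business keys of the Hubs it relates,
# and key order matters:
assert hash_key("ORD-42", "CUST-001") != hash_key("CUST-001", "ORD-42")
```

Frameworks such as AutomateDV generate this kind of logic as SQL macros rather than Python, but the underlying pattern is the same.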

Orchestration & Automation:
Experience orchestrating ELT/ETL workflows using Airflow, including DAG design, dependency strategies, and dynamic task generation.
Familiarity with modern orchestration frameworks such as Prefect, Dagster, or AWS Glue.
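The DAG design and dynamic task generation mentioned above boil down to dependency graphs. This plain-Python sketch (not Airflow code; the task and source names are hypothetical) shows the idea using the standard-library topological sorter: tasks are generated from a list of sources, fanned into a staging step, and executed in dependency order.

```python
from graphlib import TopologicalSorter  # stdlib topological sort, Python 3.9+

# Hypothetical sources; in Airflow these would become dynamically
# generated extract tasks.
sources = ["orders", "customers", "payments"]

# Map each task to the set of upstream tasks it depends on.
dag = {f"extract_{s}": set() for s in sources}         # independent extracts
dag["stage_raw"] = {f"extract_{s}" for s in sources}   # fan-in: waits on all
dag["load_vault"] = {"stage_raw"}
dag["build_marts"] = {"load_vault"}

# A valid execution order respects every dependency edge.
run_order = list(TopologicalSorter(dag).static_order())
assert run_order[-1] == "build_marts"  # marts build last
```

Airflow, Prefect, and Dagster each wrap this same graph model in scheduling, retries, and observability; the dependency-strategy thinking is what transfers between them.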

DevOps & Platform Engineering:
Comfort with CI/CD pipelines using GitHub Actions or similar tools, integrating dbt testing and Snowflake deployments.
Understanding of infrastructure automation, configuration-as-code, and environment management.

Nice to Have:
Experience with data observability platforms (Monte Carlo, Datafold, Great Expectations).
Knowledge of Docker or Kubernetes for reproducibility and scalable deployments.
Familiarity with Kafka, AMQP, or other message brokers and event-driven architectures.
Experience working with REST/GraphQL APIs, streaming ingestion (Kinesis, Firehose), or real-time processing.
Experience supporting hybrid architectures, multi-cloud designs, or enterprise Lakehouse strategies.

We’re looking for people who:
Are passionate about modern data architecture, distributed systems, and scalable design.
Naturally mentor engineers, uplift teams, and drive technical excellence.
Thrive in collaborative, multicultural environments.
Value diversity, inclusion, and respectful collaboration, bringing a team-first mindset.
Bring a data-driven, continuous-improvement mindset and are comfortable challenging the status quo.

So, are you up for the challenge? Then complete your registration and good luck!

#LI-THAI23