Why Tamara?
We’re proud to be Saudi Arabia’s first fintech unicorn.
Our mission is to help people own their dreams by building the most customer-centric financial super app in the world. There is no playbook for that; our Tamarians are writing it. Our teams are made up of innovators, problem-solvers, and learners who thrive on curiosity and collaboration.
If this sounds like you: curious, driven, and ready to build, then we’d love to meet you.
Apply now and join the next generation of Builders!
About the Program:
At Tamara, we believe exceptional talent deserves an exceptional launchpad.
Our Flagship Builders Program is designed for ambitious graduates ready to step into real responsibility from day one. This isn’t a rotational “observer” program; it’s a career accelerator built for those who want to build, own, and raise the bar early.
Designed for recent graduates and early-career talent with up to two years of experience, the program places you directly into high-impact roles across Product, Engineering, Design, and beyond. You’ll contribute immediately and grow at an accelerated pace.
From Product to Engineering, Design to Commercial, you’ll tackle meaningful challenges that shape how millions experience fintech across the region. You’ll be trusted with ownership, surrounded by high-caliber peers, and mentored by leaders who expect excellence.
Our January and June cohorts are your opportunity to move fast, think big, and start building what’s next, not someday but now.
About the role
We’re looking for a fresh graduate or early-career Data Platform Engineer on a builder track.
This role is for someone who wants to build the foundations behind analytics and AI. You will help develop and run the data platform that makes data available, secure, governed, and fast. That includes pipelines, event streaming ingestion, warehouses and lakes, and the guardrails that keep data trusted.
As AI advances, we care more about strong fundamentals than about memorizing tools. Use AI assistants to move faster, but always own correctness, reliability, and security.
Your responsibilities
- Build and maintain the data platform
  - Help develop and maintain the platform architecture (data warehouse, data lake, governance, protection).
  - Keep the platform reliable, observable, and ready to scale.
- Build robust pipelines (ELT/ETL and event-driven)
  - Build the infrastructure required for optimal extraction, transformation, and loading using SQL and big data technologies.
  - Augment batch pipelines with event-driven patterns where it makes sense: streaming ingestion, CDC, near-real-time processing, and reliable delivery (see the sketch after this list).
  - Improve pipeline performance, cost, and reliability.
- Make the platform easier to use every week
  - Identify, design, and implement internal improvements: automate manual processes, optimize data delivery, and redesign components for scalability.
- Enable analytics and AI use cases
  - Build platform primitives and data products that power self-serve analytics and AI-assisted insights (not only dashboards).
  - Support analytics tooling that uses the pipelines to deliver actionable insights into customer acquisition, operational efficiency, and key business metrics.
- Partner with stakeholders
  - Work with stakeholders across Executive, Product, Data, and Design teams to resolve data-related technical issues and support their infrastructure needs.
- Keep data secure across boundaries
  - Help keep data separated and secure across national boundaries, including across multiple data centers and through appropriate access controls.
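For a sense of what “build robust pipelines” can mean day to day, here is a minimal, illustrative sketch of a batch ELT step using only Python’s standard library, with SQLite as a stand-in warehouse. The table names, sample records, and transformation are hypothetical and not a description of Tamara’s actual stack; a streaming or CDC source could feed the same transform step instead of the static list.

```python
import sqlite3

# Hypothetical source records; in practice these would come from an API,
# an operational database, or object storage.
SOURCE_ROWS = [
    {"order_id": 1, "customer_id": "c-100", "amount_sar": 250.0, "status": "captured"},
    {"order_id": 2, "customer_id": "c-101", "amount_sar": 90.5, "status": "refunded"},
    {"order_id": 3, "customer_id": "c-100", "amount_sar": 410.0, "status": "captured"},
]


def extract_and_load(conn: sqlite3.Connection) -> None:
    """E and L: land the raw records in a staging table with minimal reshaping."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS stg_orders "
        "(order_id INTEGER, customer_id TEXT, amount_sar REAL, status TEXT)"
    )
    conn.executemany(
        "INSERT INTO stg_orders VALUES (:order_id, :customer_id, :amount_sar, :status)",
        SOURCE_ROWS,
    )


def transform(conn: sqlite3.Connection) -> None:
    """T: derive a small, query-ready data product from the staged rows."""
    conn.execute("DROP TABLE IF EXISTS fct_customer_spend")
    conn.execute(
        "CREATE TABLE fct_customer_spend AS "
        "SELECT customer_id, SUM(amount_sar) AS total_spend_sar, COUNT(*) AS orders "
        "FROM stg_orders WHERE status = 'captured' GROUP BY customer_id"
    )


if __name__ == "__main__":
    with sqlite3.connect(":memory:") as conn:
        extract_and_load(conn)
        transform(conn)
        for row in conn.execute("SELECT * FROM fct_customer_spend ORDER BY customer_id"):
            print(row)
```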
Your expertise (must have)
- Fresh graduate or < 1 year of relevant experience (internships and projects count).
- Solid programming fundamentals in Python, Java, Go, or similar.
- Strong SQL fundamentals.
- Understanding of data concepts: schemas, partitions, data quality, SLAs, and basic security/PII awareness.
- A problem-solving mindset and comfort debugging systems end-to-end.
- Clear communication and a collaborative approach.
Nice to have
- Familiarity with cloud data stack concepts (warehouses, object storage, orchestration, streaming).
- Exposure to event streaming (Kafka concepts like topics, partitions, consumer groups) or CDC; a short consumer sketch follows this list.
- Experience with dbt, Airflow, Spark, or similar tools (coursework or projects count).
- Basic Terraform/IaC and CI/CD familiarity.
- Experience using AI assistants responsibly for coding, debugging, and documentation.
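To put the event-streaming vocabulary above in context, here is a small, hypothetical consumer sketch using the kafka-python client. The topic name, group id, and broker address are assumptions for illustration, not details of Tamara’s platform, and running it requires a reachable Kafka broker.

```python
import json

from kafka import KafkaConsumer  # pip install kafka-python

# A consumer subscribes to a topic; the broker assigns it a share of that topic's
# partitions. Consumers sharing the same group_id split the partitions between
# them, which is how consumption scales out horizontally.
consumer = KafkaConsumer(
    "payment-events",                    # hypothetical topic name
    bootstrap_servers="localhost:9092",  # assumed local broker
    group_id="builders-demo",            # consumer group: members share partitions
    auto_offset_reset="earliest",        # start from the beginning if no committed offset
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

for message in consumer:
    # Each record carries its partition and offset, which is what makes replay
    # and at-least-once processing possible downstream.
    print(f"partition={message.partition} offset={message.offset} value={message.value}")
```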
What success looks like
- You ship at least one production-ready pipeline or platform improvement that reduces toil or improves reliability.
- You can debug a data issue from source to downstream consumption with clear root cause and fix.
- Teams can onboard to data sources faster, with better documentation and fewer manual steps.
- Platform reliability and data quality improve in measurable ways (latency, failures, freshness, or coverage).