Why Tamara?
We’re proud to be Saudi Arabia’s first FinTech unicorn.
Our mission is to help people own their dreams by building the most customer-centric financial super app in the world. There is no playbook for that; our Tamarians are writing it. Our teams are made up of innovators, problem-solvers, and learners, and we thrive on curiosity and collaboration.
If this sounds like you (curious, driven, and ready to build), we’d love to meet you.
Apply now and join the next generation of Builders!
About Tamara's Builders Program
At Tamara, we believe exceptional talent deserves an exceptional launchpad.
Our flagship Builders Program is designed for ambitious graduates ready to step into real responsibility from day one. This isn’t a rotational “observer” program; it’s a career accelerator built for those who want to build, own, and raise the bar early.
Designed for recent graduates and early-career talent with up to two years of experience, the program places you directly into high-impact roles across Product, Engineering, Design, and beyond. You’ll contribute immediately and grow at an accelerated pace.
From Product to Engineering, Design to Commercial, you’ll tackle meaningful challenges that shape how millions experience fintech across the region. You’ll be trusted with ownership, surrounded by high-caliber peers, and mentored by leaders who expect excellence.
Our January and June cohorts are your opportunity to move fast, think big, and start building what’s next: not someday, but now.
About the role
We’re looking for a fresh graduate or early-career Associate Data Scientist on a builder path.
This role blends product thinking, applied statistics, and production-minded analytics. You will:
- turn ambiguous problems into measurable questions
- build reliable measurement (metrics and experimentation)
- create lightweight models and decision tools that teams can use in real workflows
- partner closely with engineering, product, and risk to improve customer outcomes and business performance
- build AI-assisted decision workflows (for example, LLM-powered analysis copilots backed by a curated metrics layer, with clear guardrails and validation)
As AI advances, we value people with strong fundamentals and clear thinking. Understanding data generation, measurement, tradeoffs, and how to validate results matters more than memorizing tools. You'll learn to use AI responsibly to move faster, while still owning correctness, robustness, and interpretation.
Your responsibilities
- Define problems and measurement
  - Translate product or business questions into testable hypotheses and clear success metrics.
  - Build and maintain metric definitions and analysis templates so decisions are consistent and repeatable.
- Experimentation and causal thinking
  - Design, analyze, and interpret A/B tests (and quasi-experiments when randomization is not possible).
  - Partner with product and engineering to ensure correct tracking, guardrails, and experiment quality.
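As a sketch of the statistics behind A/B test analysis, here is a minimal two-sided two-proportion z-test; the conversion numbers are hypothetical, chosen only for illustration:

```python
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)       # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))     # two-sided normal tail
    return z, p_value

# Hypothetical experiment: 4.0% control vs 5.2% treatment conversion
z, p = two_proportion_ztest(conv_a=200, n_a=5000, conv_b=260, n_b=5000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

In practice an experimentation platform handles this, but being able to reproduce the test by hand is what "experiment quality" checks come down to.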
- Modeling for decisions (practical ML)
  - Build baseline predictive and segmentation models (for example, churn propensity, risk signals, customer clustering) with clear evaluation and limitations.
  - Focus on models that are actionable: who will use them, when, and what decision they change.
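A baseline churn-propensity model can be this simple in spirit: fit a logistic regression, hold out a test set, and report discrimination (AUC). The sketch below uses synthetic one-feature data and hand-rolled gradient descent purely to stay self-contained; in practice you would reach for scikit-learn:

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

# Synthetic stand-in for a churn dataset: one feature ("days inactive",
# standardized), where more inactivity means higher churn probability.
data = []
for _ in range(1000):
    days_inactive = random.gauss(0, 1)
    churned = random.random() < sigmoid(2.0 * days_inactive)
    data.append((days_inactive, int(churned)))

train, test = data[:750], data[750:]

# Fit a one-feature logistic regression by plain gradient descent
w, b, lr = 0.0, 0.0, 0.1
for _ in range(200):
    gw = gb = 0.0
    for x, y in train:
        err = sigmoid(w * x + b) - y
        gw += err * x
        gb += err
    w -= lr * gw / len(train)
    b -= lr * gb / len(train)

# Evaluate with AUC: the probability that a random churner is scored
# above a random non-churner on the holdout set.
pos = [w * x + b for x, y in test if y == 1]
neg = [w * x + b for x, y in test if y == 0]
auc = sum(p > n for p in pos for n in neg) / (len(pos) * len(neg))
print(f"holdout AUC = {auc:.3f}")
```

The "actionable" part is everything around this code: who sees the scores, at what threshold, and what intervention they trigger.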
- Applied AI (LLMs) for analytics and decisioning
  - Prototype small, safe LLM use cases that improve how teams explore data (for example, natural-language Q&A over a curated dataset, summarizing experiment results, or generating investigation checklists).
  - Help define evaluation and validation approaches for AI-assisted outputs (sanity checks, golden queries, offline test sets, and human review loops).
  - Partner with platform and security stakeholders to ensure AI workflows are permissioned correctly and avoid leaking sensitive data.
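One way to make "golden queries" concrete is a tiny harness that replays fixed questions with known-correct answers against the AI tool on every change. `ask_copilot`, the questions, and the numbers below are all hypothetical stand-ins for whatever system is actually built:

```python
# Minimal golden-query harness for an AI-assisted analytics tool:
# a fixed set of questions with known-correct answers, re-run on
# every change to catch regressions in AI-generated results.

GOLDEN_QUERIES = {
    "weekly active users last week": 41250,
    "orders completed yesterday": 1893,
}

def ask_copilot(question: str) -> int:
    # Stub: a real implementation would call the LLM-backed tool here.
    canned = {
        "weekly active users last week": 41250,
        "orders completed yesterday": 1800,  # deliberately drifted answer
    }
    return canned[question]

def run_golden_queries(tolerance: float = 0.01) -> list[str]:
    """Return the questions whose answers drift more than `tolerance`."""
    failures = []
    for question, expected in GOLDEN_QUERIES.items():
        got = ask_copilot(question)
        if abs(got - expected) > tolerance * expected:
            failures.append(question)
    return failures

failures = run_golden_queries()
print("failing golden queries:", failures)
```

A human review loop then decides whether a failure is a regression in the tool or a legitimate change in the underlying data.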
- Analytics that ships
  - Create reusable, well-documented analysis assets: datasets, feature tables, notebooks, dashboards, and simple services or jobs.
  - Collaborate with data engineering to productionize reliable pipelines where needed.
- Data quality and reliability
  - Validate data inputs, monitor key metrics, and investigate anomalies.
  - Document assumptions and build sanity checks so analyses and models are trustworthy.
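A sanity check on a daily metric can be as small as a robust outlier flag. The sketch below uses median/MAD rather than mean/std so a single bad day cannot inflate the scale and hide itself; the daily order counts are made up:

```python
import statistics

def flag_anomalies(series, threshold=3.0):
    """Flag points more than `threshold` robust z-scores from the median."""
    median = statistics.median(series)
    # Median absolute deviation, scaled to match std under normality
    mad = statistics.median(abs(x - median) for x in series)
    scale = 1.4826 * mad if mad else 1.0   # avoid division by zero
    return [i for i, x in enumerate(series)
            if abs(x - median) / scale > threshold]

# Hypothetical daily order counts with a tracking outage on day 9
daily_orders = [980, 1010, 995, 1003, 988, 1021, 999, 1007, 992, 120]
print("anomalous days:", flag_anomalies(daily_orders))
```

Checks like this belong next to the pipeline or dashboard they protect, so anomalies surface before a stakeholder acts on a broken number.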
- Responsible use of AI tools
  - Use AI to accelerate coding, documentation, and exploration.
  - Validate AI outputs, protect sensitive data, and follow safe data handling practices.
Your expertise (must have)
- Fresh graduate or < 1 year of relevant experience (internships, capstone projects, or part-time roles count).
- Strong SQL fundamentals (joins, aggregations, window functions).
- One programming language for data (preferably Python) with basic skills in:
  - Data manipulation (tables/dataframes)
  - Statistics fundamentals (distributions, sampling intuition, confidence basics)
  - Basic modeling (regression/classification basics and evaluation intuition)
- Strong analytical thinking:
  - Ability to define a problem, validate data, and explain results clearly.
- Strong attention to detail and commitment to accurate, reliable outputs.
- Ability to work effectively in a team-oriented environment.
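To make the window-function bar above concrete, here is a per-customer running total over a toy orders table. It uses Python's built-in `sqlite3` module (SQLite 3.25 or newer, as bundled with modern Python, supports window functions):

```python
import sqlite3

# Toy orders table to illustrate a window function: the kind of
# query expected under "strong SQL fundamentals".
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (customer TEXT, day INTEGER, amount REAL)")
con.executemany("INSERT INTO orders VALUES (?, ?, ?)", [
    ("a", 1, 10.0), ("a", 2, 15.0), ("b", 1, 7.0),
    ("a", 3, 5.0), ("b", 2, 8.0),
])

# Running total per customer, ordered by day, without collapsing rows
rows = con.execute("""
    SELECT customer, day, amount,
           SUM(amount) OVER (PARTITION BY customer ORDER BY day)
               AS running_total
    FROM orders
    ORDER BY customer, day
""").fetchall()

for row in rows:
    print(row)
```

Unlike a `GROUP BY`, the window function keeps every order row while adding the cumulative amount alongside it.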
Nice to have
- Exposure to experimentation platforms or frameworks (for example Statsig, Optimizely, internal experimentation tooling).
- Familiarity with modern analytics stacks (dbt, BigQuery, Snowflake, Looker, PowerBI, Tableau) through coursework or projects.
- Exposure to ML tooling (scikit-learn, notebooks, basic MLOps concepts) and version control (Git).
- Familiarity with LLM concepts (prompting basics, retrieval/RAG intuition, and evaluation approaches) through coursework or side projects.
- Understanding of product analytics concepts (funnels, cohorts, retention) and causal pitfalls (selection bias, confounding).
- Experience creating AI-ready data assets (clean semantic layers, metric definitions, data contracts, documentation, and sanity-check checklists).
- Knowledge of responsible data handling (PII basics, access controls, safe sharing).
What success looks like
- You can independently deliver an end-to-end analysis with clear assumptions, validation steps, and a concrete recommendation.
- You help ship at least one decision tool (experiment, metric framework, or lightweight model) that changes a product or business decision.
- Stakeholders can run repeatable analyses with less back-and-forth, and your work reduces ambiguity in key metrics.
- You can spot when results look off, debug quickly, and explain the root cause.