Why Tamara?
We’re proud to be Saudi Arabia’s first FinTech unicorn.
Our mission is to help people own their dreams by building the most customer-centric financial super app in the world. There is no playbook for that; our Tamarians are writing it. Our teams are made up of innovators, problem-solvers, and learners, and we thrive on curiosity and collaboration. If this sounds like you: curious, driven, and ready to build, we’d love to meet you.
Apply now and join the next generation of Builders!
About Tamara's Builders Program
At Tamara, we believe exceptional talent deserves an exceptional launchpad.
Our flagship Builders Program is designed for ambitious graduates ready to step into real responsibility from day one. This isn’t a rotational “observer” program; it’s a career accelerator built for those who want to build, own, and raise the bar early.
Designed for recent graduates and early-career talent with up to two years of experience, the program places you directly into high-impact roles across Product, Engineering, Design, and beyond. You’ll contribute immediately and grow at an accelerated pace.
From Product to Engineering, Design to Commercial, you’ll tackle meaningful challenges that shape how millions experience fintech across the region. You’ll be trusted with ownership, surrounded by high-caliber peers, and mentored by leaders who expect excellence.
Our January and June cohorts are your opportunity to move fast, think big, and start building what’s next - not someday, but now.
About the role
We’re looking for a fresh graduate or early-career AI Engineer on a builder track.
This role sits between software engineering and applied AI. You will build AI-powered features end-to-end: from turning a product problem into an AI approach, to building pipelines and evaluation, to shipping reliable services in production.
You’ll work with both classical ML and generative AI (LLMs, RAG, agents) where they make sense. With the rapid pace of AI, we care more about fundamentals than buzzwords. Use AI assistants to move faster, but always own correctness, privacy, safety, and user impact.
Your responsibilities
- Build AI products that ship
  - Design and implement AI-powered features and internal tools.
  - Integrate models into real systems (APIs, workflows, dashboards, and operational processes).
- Work with LLMs in a production-ready way
  - Build retrieval-augmented generation (RAG) pipelines: data sourcing, chunking, embeddings, retrieval, and prompt templates.
  - Build agent-style workflows when needed (tool use, guardrails, and deterministic fallbacks).
- Own evaluation and quality
  - Define what “good” means: success metrics, offline tests, and human-in-the-loop review.
  - Set up evaluations for quality, safety, latency, and cost.
- Make AI reliable and safe
  - Add observability: logging, tracing, and monitoring for quality regressions.
  - Implement privacy and security controls (PII handling, access control, redaction where needed).
  - Participate in incident response and postmortems when AI systems misbehave.
- Partner across teams
  - Work with product, data, design, and engineering stakeholders to translate messy problems into measurable solutions.
  - Collaborate with platform teams to use existing data pipelines, event streams, and AI tooling responsibly.
- Use AI tools thoughtfully
  - Use AI assistants for prototyping, debugging, and documentation.
  - Validate outputs, document assumptions, and protect sensitive data.
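To give a flavour of the RAG work described above, here is a minimal, illustrative sketch of the chunk → embed → retrieve → prompt-template flow. It is not how Tamara's systems are built: it uses toy bag-of-words vectors in place of a real embedding model, and the documents, chunk size, and prompt wording are invented for the example.

```python
import math
from collections import Counter

def chunk(text: str, size: int = 8) -> list[str]:
    """Split a document into fixed-size word chunks."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; a real pipeline would call an embedding model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Return the k chunks most similar to the query."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Prompt template: grounding context plus the user question."""
    ctx = "\n".join(f"- {c}" for c in context)
    return f"Answer using only this context:\n{ctx}\n\nQuestion: {query}"

# Illustrative documents, not real policy text.
docs = ["Tamara is a fintech company. Refunds are processed within five days.",
        "Late fees do not apply to the first missed installment."]
chunks = [c for d in docs for c in chunk(d)]
print(build_prompt("How long do refunds take?",
                   retrieve("are refunds processed quickly", chunks)))
```

In production each piece gets swapped for the real thing (a vector database for retrieval, a learned embedding model, a managed prompt template), but the shape of the pipeline stays the same.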
Your expertise (must have)
- Fresh graduate or < 1 year of relevant experience (internships and projects count).
- Solid programming fundamentals in Python (preferred) or another language used for backend services.
- Strong foundations in:
  - Software engineering basics (APIs, testing, reliability)
  - Data handling (SQL and/or dataframes)
  - ML/AI fundamentals (training vs. inference, evaluation, overfitting intuition)
- Clear communication and a collaborative approach.
Nice to have
- Hands-on experience with LLMs and/or RAG from projects (LangChain/LlamaIndex or similar patterns).
- Familiarity with model serving and deployment (REST/gRPC services, batch jobs, streaming consumers).
- Exposure to MLOps concepts (experiment tracking, model registry, monitoring).
- Familiarity with vector databases, search, or information retrieval.
- Understanding of responsible AI, privacy, and security (PII handling, access control, prompt injection awareness).
- Experience using AI assistants responsibly for coding and analysis.
What success looks like
- You ship at least one AI feature or internal tool that is used by real users.
- You set up a simple evaluation and monitoring loop so the system improves over time.
- Your solutions balance quality, latency, and cost, with clear tradeoffs.
- When the AI output is wrong or risky, you can debug, mitigate, and explain why.
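A “simple evaluation and monitoring loop” can start very small. The sketch below is illustrative only: `fake_model` stands in for a real LLM call, and exact-match scoring is the crudest possible quality signal; a real harness would also track token cost, safety checks, and regressions over time.

```python
import time

def evaluate(model, cases):
    """Tiny offline evaluation loop: run each case, score exact-match
    quality, and record latency per call."""
    results = []
    for query, expected in cases:
        start = time.perf_counter()
        output = model(query)
        results.append({
            "query": query,
            "correct": output.strip().lower() == expected.strip().lower(),
            "latency_s": time.perf_counter() - start,
        })
    accuracy = sum(r["correct"] for r in results) / len(results)
    return accuracy, results

# Stand-in "model" for illustration; in practice this wraps an LLM call.
def fake_model(query: str) -> str:
    return "five days" if "refund" in query.lower() else "unknown"

cases = [("How long do refunds take?", "five days"),
         ("Do late fees apply to the first missed installment?", "no")]
accuracy, results = evaluate(fake_model, cases)
print(f"accuracy={accuracy:.2f}")
```

Even a loop this small makes tradeoffs visible: once quality, latency, and (in a fuller version) cost are numbers, changes to prompts or models can be compared instead of eyeballed.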
All qualified individuals are encouraged to apply.