About Artisan
At Artisan, we’re building real AI employees: not copilots, not assistants, but autonomous teammates.
Our first, Ava, is an AI BDR. She finds and researches leads, writes emails in customers’ tone of voice, runs outbound sequences, self-optimizes, and manages email deliverability infrastructure. She learns, adapts, and improves over time, just like a human would.
We went through Y Combinator (W24) and have raised $35M+ from top investors. We’re at $7M+ ARR, with hundreds of customers including CookUnity, Quora, and SumUp.
We’re currently working on Ava 2.0, pushing the boundaries of what an AI employee can do. And we're hiring.
Role overview
You'll be the first Data Engineer on the Artisan team! We manage a database of hundreds of millions of leads and generate real-time intent signals that monitor data fields for those leads. You'll own everything data-related at Artisan.
- Design, build, and maintain scalable data pipelines that process and transform large volumes of structured and unstructured data
- Manage ingestion from third-party APIs, internal systems, and customer datasets
- Develop and maintain data models, data schemas, and storage systems optimized for ML and product performance
- Collaborate with ML engineers to prepare model-ready datasets, embeddings, feature stores, and evaluation data
- Implement data quality monitoring, validation, and observability
- Work closely with product engineers to support new features that rely on complex data flows
- Optimize systems for performance, cost, and reliability
- Contribute to early architecture decisions, infrastructure design, and best practices for data governance
- Build tooling that enables the entire team to access clean, well-structured data
Location: San Francisco, New York, or Remote USA
Team: Engineering
Reports to: CPTO, Sam Stallings
Who you are
- 3+ years of experience as a Data Engineer
- Proficiency in Python, SQL, and modern data tooling (dbt, Airflow, Dagster, or similar)
- Comfort working in fast, ambiguous environments
- Experience designing and operating ETL/ELT pipelines in production
- Experience with cloud platforms (AWS, GCP, or Azure)
- Familiarity with data lakes, warehouses, and vector databases
- Experience integrating APIs and working with semi-structured data (JSON, logs, event streams)
- Strong understanding of data modeling and optimization
- Bonus: experience supporting LLMs, embeddings, or ML training pipelines
- Bonus: startup experience
Interview process
- Introductory chat with our recruiter
- 45-minute technical interview with an engineer
- Second 45-minute technical interview with an engineer
- 30-minute interview with Sam, our CPTO
- 30-minute culture and values interview with Jaspar, our CEO
Our culture and values
- Founder mindset. Everyone acts like an owner: take initiative, think big, challenge ideas, and push for 10× outcomes
- Obsessed with impact. We apply the 80/20 rule, kill sunk costs quickly, and focus on what actually moves the needle
- Customer-first, always. Every decision is made with the customer experience at the center
- High standards, every detail. Quality matters in everything we ship, from product and code to copy and design
- Clear, direct communication. We value candor, fast responses, and feedback
- Winning team energy. We bring positive vibes, low ego, zero drama, and genuinely enjoy building together
🚀 Y Combinator Company Info
Y Combinator Batch: W24
Team Size: 37 employees
Industry: B2B Software and Services
Company Description: AI employees called Artisans, starting with an AI BDR
💰 Compensation
Salary Range: $150,000 - $220,000
Equity Range: 0.05% - 0.12%
📋 Job Details
Job Type: Full-time
Experience Level: 3+ years
Engineering Type: Backend
🛠️ Required Skills
Python, SQL, ETL