Jellyvision

Senior Data Platform Engineer

Remote, Full-Time

Who we are

Jellyvision is redefining how organizations experience benefits by bringing everything together in one modern, intelligent home. With ALEX Home, we combine our award-winning ALEX® decision support with a flexible benefits administration platform, giving employers and employees a simpler, smarter way to manage benefits year-round.

Our mission is to help organizations reduce complexity, lighten administrative burden, and drive real employee understanding and utilization without forcing rip-and-replace decisions. We meet teams where they are today and give them a clear path to what’s next.

The people behind Jellyvision are creative problem solvers who care deeply about getting it right. We debate ideas, give real feedback, and sweat the details because those details are what turn complicated problems into great experiences for real humans.

We’re a human-first company that trusts smart people to do great work. We value curiosity, kindness, and a willingness to try new things, learn fast, and try again. You won’t just show up to do a job; you’ll help build what’s next, solve real problems, and have some fun doing it.

What’s the role?

As a Senior Data Platform Engineer, you’ll be a hands-on engineer on a small, high-ownership data team. You’ll work across the full data platform - relational, warehouse, and lakehouse systems - building and operating the pipelines that power compliance, analytics, and reporting workloads.

This is a multi-hat role. Some days you’re building pipelines; other days you’re deep in schema design, improving infrastructure, or jumping into a production issue. You’ll contribute to the platform’s service layer, help evaluate new tools and approaches, and mentor other engineers on the team.

We’re looking for someone who takes ownership of what they build, communicates clearly about system state and risk, and works independently through ambiguity.

What you’ll do to be successful
1. Build and operate data pipelines
  • Design and build pipelines that move data across systems - supporting data lake ingestion, compliance workloads, and cross-domain data flows
  • Own pipeline operations end to end: monitoring, incident resolution, data quality, and documentation that lets any team member respond independently
  • Identify technical debt and reliability risks and raise them with clear context and proposed next steps

Success looks like: Pipelines run reliably. Known failure modes get fixed rather than worked around. You flag problems early and follow through on fixes.

2. Build and shape the data platform
  • Design and maintain schemas across relational, warehouse, and lakehouse layers, working with application engineers and product to get data models right
  • Build out the platform’s service layer, infrastructure-as-code, and data quality frameworks - this role spans design and implementation
  • Keep platform documentation at a level where any team member can understand what exists, how it works, and where the risks are
  • Over time, contribute to the analytics engineering layer, including modeling practices and semantic layer development

Success looks like: The parts of the platform you own are well-documented, reliable, and improving over time. Schema changes land cleanly. Infrastructure is managed as code.

3. Inform architectural decisions and tooling evaluations
  • Contribute to evaluations of the current platform against emerging architectures and tooling, helping produce trade-off analyses and recommendations
  • Bring what you see day to day in the systems you operate into the team’s improvement roadmap and technical direction
  • Track and report on platform health metrics: pipeline uptime, failure rates, data freshness, and cost trends

Success looks like: You bring informed perspectives to architectural discussions grounded in hands-on experience. Your research and prototyping help leadership make confident decisions.

4. Mentor and raise the technical bar
  • Mentor peers and junior engineers through code review, pairing, and technical guidance 
  • Help uphold engineering standards and collaborate cross-functionally with application engineering, product, and analytics as a reliable technical partner
  • Share knowledge through documentation and technical discussions

Success looks like: Engineers you’ve reviewed and paired with produce better work over time. Cross-team partners trust you for thoughtful input and follow-through.

Experience & skills you’ll need
Required:
  • 7+ years of data engineering or data platform experience with hands-on ownership of production systems
  • Experience building and operating a data lakehouse, data lake, or modern warehouse architecture (Snowflake, Databricks, or comparable)
  • Deep fluency with Apache Airflow or comparable orchestration: DAG design, task dependencies, sensors, and production operations
  • Solid understanding of open table formats (Iceberg, Delta, Hudi) and columnar storage (Parquet, Avro, ORC), including how format choices affect query performance, storage efficiency, and schema evolution
  • Strong Python: production-grade code, testing, packaging, and debugging
  • Advanced SQL: complex transformations, performance tuning, and debugging against a cloud warehouse
  • Hands-on relational schema design, ideally in a multi-tenant SaaS context
  • Terraform or comparable IaC for managing cloud data resources; CI/CD for pipeline or infrastructure deployment
  • Familiarity with AWS data infrastructure: S3, IAM, and relevant managed services
  • Experience using AI-assisted development tools (Claude Code, Cursor, Copilot, or similar) to accelerate engineering workflows
  • Demonstrated ownership of systems you’ve inherited and systems you’ve built from scratch - you can assess an unfamiliar codebase and improve it, and you’re just as effective designing something new
  • Clear written communication: you can describe a system’s state, a problem, or a recommendation in plain language
  • Experience mentoring other engineers through code review, pairing, or technical guidance
Nice to have:
  • Production experience with Apache Spark or comparable distributed processing frameworks
  • Snowflake administration: roles, resource monitors, clustering, and cost controls
  • Experience with dbt or comparable transformation frameworks: building models, understanding grain and dependencies, writing tests
  • Experience with managed ELT tools (Fivetran, Stitch, or similar) including evaluating and retiring them
  • Experience in a regulated industry (healthcare, insurance, financial services, etc.) with familiarity around compliance-driven data requirements
  • SaaS platform experience, particularly with multi-tenant data architectures
  • Familiarity with stream processing technologies (Kafka, Kinesis, Flink)
  • Experience with data cataloging or lineage tools (Monte Carlo, DataHub, Atlan, or similar)
  • Data governance experience: access control frameworks, data quality monitoring, lifecycle management
  • Experience evaluating or migrating between data platform technologies
  • Experience standing up a data platform from scratch, including making early architectural and design decisions that shaped how the system evolved

The Details 

  • Location: Remote 
  • Starting Salary: $127,000–$156,000

What Jellyvision will give you

Check out our benefits here!

Jellyvision is committed to continuous evolution and fostering a more diverse and inclusive workplace where everyone is welcomed, valued, and respected. It doesn’t matter your race, ethnicity, religion, age, disability, sexual orientation, gender, gender identity/expression, country of origin, genetic information, veteran status, marital status, pregnancy or related condition (including breastfeeding), criminal histories consistent with legal requirements, or any other basis protected by law. We just want amazing people who are willing to grow along with us.

Although we have a Chicago-based HQ that employees are welcome to work out of, whether they’re local or just visiting, this position is also eligible for remote work out of CA, CO, FL, GA, IL, IN, KY, MI, MN, NC, NY, OH, OR, PA, SC, TN, TX, UT, VA, WA, or WI.