Our Data team consists of highly skilled senior software and data professionals who collaborate to solve complex data challenges. We process billions of records daily from multiple sources using multi-stage pipelines with intricate data structures and advanced queries.
We are responsible for building data pipelines end to end—from raw data ingestion to the creation of actionable datasets—following the bronze, silver, and gold paradigm. This includes business logic, infrastructure, ETLs, optimization, and ongoing maintenance.
The data we deliver drives insights and decision-making across the organization and enhances our product offerings. We leverage technologies such as AWS, Snowflake, Iceberg, Airflow, Spark, and more.
Lead the translation of business and product requirements into scalable data models, transformations, and pipelines.
Design and own datasets across bronze, silver, and gold layers, including defining grain, aggregations, and data contracts.
Develop and maintain SQL-heavy data pipelines and Airflow DAGs (workflow logic, dependencies, backfills, Python, and lots of SQL).
Own data correctness for key business metrics (e.g., ARR), including deep root cause analysis and resolution of data issues.
Define and drive best practices for SQL, data modeling, and pipeline design across the team.
Optimize queries and data models for performance, scalability, and cost efficiency.
Collaborate closely with product managers, analysts, and BI developers to refine requirements and ensure high-quality data delivery.
Develop AI agents that accelerate data analysis for internal and external users.
Work with complex data inputs (e.g., JSON, schemas, logs) and incorporate them into robust data pipelines.
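The bronze/silver/gold flow referenced above can be sketched in plain Python. All field names, the sample payload, and the ARR formula here are illustrative assumptions, not the team's actual schema or business logic:

```python
import json

# Hypothetical raw event, of the kind a bronze layer might ingest verbatim.
RAW = '{"account_id": "a1", "plan": {"name": "pro", "monthly_price": 99.0}, "seats": 10}'

def to_bronze(raw: str) -> dict:
    """Bronze: parse the raw payload as-is, preserving source fields."""
    return json.loads(raw)

def to_silver(record: dict) -> dict:
    """Silver: flatten nested structures into a clean, typed row."""
    return {
        "account_id": record["account_id"],
        "plan_name": record["plan"]["name"],
        "monthly_price": float(record["plan"]["monthly_price"]),
        "seats": int(record["seats"]),
    }

def to_gold(rows: list[dict]) -> dict:
    """Gold: aggregate silver rows into a business metric (annualized recurring revenue)."""
    arr = sum(r["monthly_price"] * r["seats"] * 12 for r in rows)
    return {"arr": arr}

silver = to_silver(to_bronze(RAW))
print(to_gold([silver]))  # → {'arr': 11880.0}
```

In production this layering would live in Snowflake/Iceberg tables orchestrated by Airflow DAGs rather than in-process Python, but the contract at each layer (raw, cleaned, aggregated) is the same.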