Zuru

Data Engineer

Ahmedabad, India · Full Time
About Us  
  
Zuru Tech is digitizing the construction process of buildings all around the world. Our multi-national team is developing the world's first digital building fabrication platform: you design it, we build it!
At ZURU we develop the Zuru Home app, BIM software for the general public, architects, and engineers. With it, anyone can buy, design, and send to manufacturing any type of building with complete design freedom. Welcome to the future!
  
What Are You Going to Do?

📌Architect and develop complex data pipelines, ETL/ELT workflows, and data models on platforms such as Snowflake, Databricks, Azure Synapse, Redshift, BigQuery, etc.
📌Build scalable data transformation pipelines using the Medallion Architecture (Bronze → Silver → Gold layers).
📌Develop, manage, and optimize Airflow DAGs for orchestration and scheduling.
📌Implement transformation logic and semantic models using DBT, enforcing analytics engineering best practices.
📌Write, optimize, and maintain advanced SQL queries, stored procedures, and performance-tuned transformations.
📌Design and maintain reusable data ingestion and transformation frameworks, including support for geospatial data.
📌Build connectors and integrate streaming/event-driven architectures using Kafka for near real-time data pipelines.
📌Enable downstream analytics by preparing curated datasets and data models for BI consumption, including Power BI dashboards.
📌Collaborate with Architects, Senior Engineers, API teams, and Visualization teams to deliver end-to-end data solutions.
📌Conduct PoCs/PoVs to evaluate cloud data integration tools and modern data engineering technologies.
📌Ensure strong data quality, lineage, governance, metadata management, and cloud security standards.
📌Work within Agile/DevOps methodologies to deliver iterative, high-quality solutions.
📌Troubleshoot and proactively resolve complex pipeline and performance issues.
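To give a flavor of the Medallion layering mentioned above, here is a minimal sketch in plain Python (the data, field names, and helper functions are purely illustrative; in practice these transformations would run on platforms like Databricks or Snowflake via Spark or DBT):

```python
# Hypothetical Bronze → Silver → Gold sketch with made-up order data.
raw_bronze = [
    {"order_id": "1", "amount": "10.5", "region": "EU"},
    {"order_id": "2", "amount": "bad", "region": "EU"},   # malformed row
    {"order_id": "3", "amount": "4.0", "region": "US"},
]

def to_silver(rows):
    # Silver layer: validated, typed records; malformed rows are dropped.
    out = []
    for r in rows:
        try:
            out.append({"order_id": int(r["order_id"]),
                        "amount": float(r["amount"]),
                        "region": r["region"]})
        except ValueError:
            continue
    return out

def to_gold(rows):
    # Gold layer: business-level aggregate (revenue per region).
    totals = {}
    for r in rows:
        totals[r["region"]] = totals.get(r["region"], 0.0) + r["amount"]
    return totals

gold = to_gold(to_silver(raw_bronze))  # {"EU": 10.5, "US": 4.0}
```

The same Bronze (raw as-landed), Silver (cleaned and typed), Gold (curated aggregates) progression applies whatever the engine.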
  
What Are We Looking For?
✔5+ years of hands-on experience as a Data Engineer on cloud-based data transformation and platform modernization projects.
✔Strong experience with at least one full lifecycle implementation of a cloud data lake/data warehouse using Snowflake, Databricks, Redshift, Synapse, or BigQuery.
✔Hands-on experience with Medallion Architecture or other layered data modelling approaches.
✔Proficient in Airflow for workflow orchestration and DBT for SQL-based transformations and modelling.
✔Strong skills in advanced SQL, data modelling (star and snowflake schemas), ETL/ELT, and performance tuning.
✔Experience supporting BI teams by creating curated datasets, semantic models, and optimized schemas for tools like Power BI.
✔Proficient in Python and PySpark for building scalable data processing pipelines.
✔Experience with cloud object storage (S3, ADLS, GCS, MinIO) and cloud security (IAM/RBAC, networking, resource monitoring).
✔Familiarity with relational and NoSQL databases, distributed frameworks (Spark, Hadoop), and modern data integration patterns.
✔Strong analytical and problem-solving skills with the ability to handle complex data challenges independently.
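The star-schema modelling called for above can be sketched with Python's built-in sqlite3 module (table and column names here are hypothetical examples, not a prescribed schema):

```python
import sqlite3

# Illustrative star schema: a fact table joined to a single dimension.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE dim_region (region_id INTEGER PRIMARY KEY, region_name TEXT);
CREATE TABLE fact_sales (sale_id INTEGER PRIMARY KEY, region_id INTEGER, amount REAL);
INSERT INTO dim_region VALUES (1, 'EU'), (2, 'US');
INSERT INTO fact_sales VALUES (1, 1, 10.5), (2, 1, 2.0), (3, 2, 4.0);
""")

# A typical BI-style query: aggregate facts, label them via the dimension.
rows = con.execute("""
SELECT d.region_name, SUM(f.amount) AS revenue
FROM fact_sales f JOIN dim_region d USING (region_id)
GROUP BY d.region_name ORDER BY d.region_name
""").fetchall()
# rows == [('EU', 12.5), ('US', 4.0)]
```

Separating narrow fact tables from descriptive dimensions is what keeps these queries simple and fast for tools like Power BI.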


Required Skills
✔Cloud Platforms: AWS / Azure / GCP (any one).
✔Data Platforms: Snowflake, Databricks, Redshift, Synapse, BigQuery.
✔Orchestration & Transformation: Airflow, DBT, CI/CD, DevOps.
✔Streaming & Monitoring: Kafka, ELK, Grafana.
✔Distributed Processing: Spark, Hadoop ecosystem.
✔Programming: Python, PySpark.
✔BI & Visualization Support: Power BI, DAX basics (optional but beneficial).
✔Data Modelling: Dimensional modelling, Medallion Architecture, SQL optimization.


What Do We Offer?
   
💰 Competitive compensation  
💰 Annual Performance Bonus  
⌛️ 5 Working Days with Flexible Working Hours   
🌎 Annual trips & Team outings   
🚑 Medical Insurance for self & family   
🚩 Training & skill development programs   
🤘🏼 Work with a global team and make the most of its diverse knowledge
🍕 Lively discussions over multiple pizza parties

A lot more! Come and discover us!