Line of Service
Advisory
Industry/Sector
Not Applicable
Specialism
Data, Analytics & AI
Management Level
Manager
Job Description & Summary
At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth.
A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance, that help our clients drive innovation, growth, and change and keep pace with the changing nature of customers and technology. We make impactful decisions by combining mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge.
Responsibilities:
Design, develop, and maintain scalable data pipelines using Python and Spark (PySpark/Scala) across Azure and GCP platforms (see the illustrative sketches after this list).
Build and manage data workflows using Databricks Workflows and Apache Airflow / Cloud Composer DAGs.
Develop and optimize Delta Lake tables on Databricks, ensuring data reliability, performance, and governance.
Implement and manage Databricks Unity Catalog for data access control and metadata management.
Work with BigQuery for large-scale data warehousing and analytics.
Develop event-driven and batch data processing solutions using Pub/Sub, Cloud Dataflow, and Cloud Functions.
Implement ML pipelines on Databricks, including experiment tracking using MLflow.
Collaborate with data scientists, analytics teams, and business stakeholders to deliver end-to-end data solutions.
Ensure best practices for data quality, security, cost optimization, and performance tuning.
Support CI/CD pipelines and production deployments for data engineering workloads.
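For illustration only, a minimal PySpark batch job of the kind described above, reading raw files and writing a governed Delta table (the source path, columns, and target table name are hypothetical placeholders):

# Illustrative sketch only: land raw CSV data as a managed Delta table.
# Paths, columns, and the catalog.schema.table name are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_bronze_to_silver").getOrCreate()

# Read raw files (placeholder location)
raw = (spark.read
       .option("header", "true")
       .csv("abfss://raw@example.dfs.core.windows.net/orders/"))

# Basic cleansing: type casts, de-duplication, load timestamp
silver = (raw
          .withColumn("order_ts", F.to_timestamp("order_ts"))
          .withColumn("amount", F.col("amount").cast("double"))
          .dropDuplicates(["order_id"])
          .withColumn("_ingested_at", F.current_timestamp()))

# Write as a Delta table registered in the metastore / Unity Catalog
(silver.write
 .format("delta")
 .mode("overwrite")
 .saveAsTable("main.sales.orders_silver"))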
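A minimal Airflow / Cloud Composer DAG sketch for orchestrating such a job on a daily schedule; the task callables and IDs are hypothetical placeholders, not part of the role description:

# Illustrative sketch only: a daily ingest task followed by a quality check.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def ingest_orders(**context):
    # Placeholder for a call into the PySpark job sketched above
    print("ingesting orders for", context["ds"])


def validate_orders(**context):
    # Placeholder for row-count / schema checks
    print("validating orders for", context["ds"])


with DAG(
    dag_id="orders_daily",
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=10)},
) as dag:
    ingest = PythonOperator(task_id="ingest_orders", python_callable=ingest_orders)
    validate = PythonOperator(task_id="validate_orders", python_callable=validate_orders)

    ingest >> validate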
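And a minimal MLflow experiment-tracking sketch of the kind used in Databricks ML pipelines; the experiment path, table, and feature columns are hypothetical, and spark is assumed to be the session provided by the Databricks runtime:

# Illustrative sketch only: track a baseline model run with MLflow.
import mlflow
import mlflow.sklearn
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Experiment path is hypothetical; on Databricks the tracking URI is workspace-managed.
mlflow.set_experiment("/Shared/orders_demand_forecast")

# spark is assumed to be provided by the Databricks runtime; table and columns are placeholders.
df = spark.table("main.sales.orders_silver").toPandas()
X, y = df[["amount"]], df["quantity"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run(run_name="rf_baseline"):
    model = RandomForestRegressor(n_estimators=200, random_state=42)
    model.fit(X_train, y_train)

    mlflow.log_param("n_estimators", 200)
    mlflow.log_metric("r2_test", model.score(X_test, y_test))
    mlflow.sklearn.log_model(model, artifact_path="model")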
Mandatory skill sets:
Azure Databricks
Python, PySpark
Advanced Spark (PySpark / Scala)
Databricks Delta Tables
Databricks Workflows
Unity Catalog
Apache Airflow / Cloud Composer DAGs
GCP Native Services:
BigQuery
Cloud Composer
Cloud Functions
Pub/Sub
Preferred skill sets:
Machine Learning on Databricks
MLflow for experiment tracking and model lifecycle management
Google Cloud Dataflow
Cloud Data Fusion
Experience with event-driven architecture
Exposure to multi-cloud (Azure + GCP) data platforms
Knowledge of data governance, security, and compliance best practices
Years of experience required:
6 to 10+ Years
Education qualification:
BE, B.Tech, ME, M.Tech, MBA, MCA (60% or above)
Education (if blank, degree and/or field of study not specified)
Degrees/Field of Study required: Bachelor of Engineering, Master's Degree
Degrees/Field of Study preferred:
Certifications (if blank, certifications not specified)
Required Skills
GCP Dataflow
Optional Skills
Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Airflow, Apache Hadoop, Azure Data Factory, Coaching and Feedback, Communication, Creativity, Data Anonymization, Data Architecture, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Databricks Unified Data Analytics Platform, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling {+ 32 more}
Desired Languages (If blank, desired languages not specified)
Travel Requirements
Not Specified
Available for Work Visa Sponsorship?
No
Government Clearance Required?
No
Job Posting End Date
May 14, 2026