Line of Service
Advisory

Industry/Sector
Not Applicable

Specialism
Data, Analytics & AI

Management Level
Senior Associate

Job Description & Summary
At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals.

Why PwC
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.
At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.
We are seeking an experienced Databricks Engineer to design, build, and optimize large-scale data and analytics platforms on Databricks. The ideal candidate will be responsible for developing high-performing Spark and Delta Lake workflows, implementing best practices across governance and observability, and driving performance tuning, cost efficiency, and automation for enterprise-scale workloads.
Responsibilities
· Design and develop high-performance Spark and Delta Lake workflows for data engineering and analytics workloads.
· Lead the modernization and migration of ETL/ELT pipelines to Databricks lakehouse architecture.
· Implement Databricks best practices across Unity Catalog, governance, data quality, and observability.
· Optimize clusters, SQL warehouses, autoscaling, and serverless compute for performance and cost efficiency.
· Perform advanced Spark performance tuning (partitioning, shuffle optimization, caching, AQE, skew mitigation, I/O improvements).
· Develop and maintain observability frameworks and dashboards for Spark jobs, pipelines, and compute usage.
· Automate cluster provisioning, job orchestration, and CI/CD through reusable frameworks and scripts.
· Analyze billing and utilization metrics to drive FinOps-based cost optimizations.
· Evaluate and adopt emerging Databricks features such as Delta Live Tables, Serverless Compute, and Lakehouse Federation.
· Partner with architects and data engineering teams to ensure scalability, security, and operational reliability.
Required Skills & Experience
· Strong expertise in Delta Lake internals, schema evolution, versioning, and data lifecycle management.
· Proven ability in performance tuning and workload optimization on Databricks.
· Experience with Unity Catalog, Delta Live Tables, and Databricks SQL Warehouses.
· Experience in building automation frameworks for deployment and orchestration.
· Familiarity with FinOps, cost governance, and utilization tracking in Databricks.
· Excellent analytical, problem-solving, and communication skills.
Mandatory skill sets:
Databricks
Preferred skill sets:
· Exposure to Azure services (ADLS, ADF, Synapse) or AWS Glue/EMR.
· Experience with MLflow, Feature Store, or MLOps on Databricks.
Years of experience required:
- 5+ years of hands-on experience with Databricks or Apache Spark.
Education qualification:
B.E, B.Tech, MCA, M.Tech, M.E
- Bachelor's degree in computer science, information technology, engineering, or information systems, and 8+ years' experience in software engineering or a related area at a technology, retail, or data-driven company.
Education (if blank, degree and/or field of study not specified)
Degrees/Field of Study required: Bachelor of Technology, Bachelor of Engineering, MBA (Master of Business Administration)

Degrees/Field of Study preferred:

Certifications (if blank, certifications not specified)
Required Skills
Databricks Platform

Optional Skills
Accepting Feedback, Active Listening, Analytical Thinking, Applied Macroeconomics, Business Case Development, Business Data Analytics, Business Intelligence and Reporting Tools (BIRT), Business Intelligence Development Studio, Communication, Competitive Advantage, Continuous Process Improvement, Creativity, Data Analysis and Interpretation, Data Architecture, Database Management System (DBMS), Data Collection, Data Pipeline, Data Quality, Data Science, Data Visualization, Embracing Change, Emotional Regulation, Empathy, Geopolitical Forecasting {+ 24 more}

Desired Languages (If blank, desired languages not specified)
Travel Requirements
Available for Work Visa Sponsorship?
Government Clearance Required?
Job Posting End Date