CUSHMAN & WAKEFIELD

Lead Data Engineer

London, London | Full time

Job Description

The Lead Data Engineer helps architect and lead the development of enterprise-scale data platforms and advanced analytics solutions across multiple business units and subject areas. In addition, they assist the project delivery team with scoping and planning integration projects and contribute to the design, implementation, and support of the data artifacts in the data lakehouse and databases. This role combines deep technical expertise with leadership and mentoring responsibilities, driving innovation and best practices across the data engineering function.

Key Responsibilities

  • Support the Solution and Data Architect(s) in leading the design, development, and optimization of scalable data platforms and analytics solutions that support the global data strategy.
  • Architect and implement robust, scalable, metadata-driven pipelines in Databricks using Delta Lake, Unity Catalog, and declarative workflows.
  • Oversee the development and maintenance of Azure SQL database objects using advanced T-SQL for data quality and insight generation.
  • Drive performance, reliability, and observability of real-time data processing using Spark Structured Streaming and Change Data Feed strategies.
  • Own CI/CD processes and infrastructure automation using Databricks Asset Bundles, Azure DevOps, and Infrastructure as Code.
  • Define and enforce standards for reusable, metadata-driven integration patterns, including Unity Catalog governance, Declarative Pipeline design, and data quality across the engineering team.
  • Manage and mentor a small team of data engineers, providing technical leadership and pair-programming support, conducting design/code reviews, guiding technical decisions, and aligning data engineering efforts with strategic goals.
  • Collaborate with cross-functional stakeholders to translate business and technical requirements into scalable data solutions, assisting the project delivery team to scope, plan, and deliver new and enhanced data integrations.
  • Champion innovation, best practices, and continuous improvement in data engineering, while mentoring team members and fostering technical growth.
  • Evaluate and introduce emerging technologies to enhance platform capabilities.


Essential Knowledge & Experience

  • Extensive hands-on experience (6+ years) in cloud-native data engineering, including 3+ years with Databricks and Delta Lake in Azure, combined with solid experience (2+ years) in a technical leadership or architecture role.
  • Deep practical experience with the Databricks ecosystem, including Delta Lake, Unity Catalog, Declarative Pipelines, Spark Structured Streaming, Change Data Feed, Serverless SQL, and cluster management and optimization.
  • Expert-level SQL skills, with a track record of designing performant queries, optimizing data models, and driving insight generation from large datasets.
  • Proficiency in multiple programming languages used in data engineering and analytics (e.g., PySpark, Scala, R, Python), with experience mentoring others in their use.
  • Deep understanding of cloud-native data architecture, including Lakehouse design principles, data integration patterns, and automation strategies.
  • Proven ability to design and deliver scalable, production-grade data pipelines, with a focus on reliability, maintainability, and performance.
  • Strong experience with CI/CD, version control (Git), and Infrastructure as Code, ideally including Databricks Asset Bundles, with a track record of leading DevOps practices in data engineering teams.
  • Skilled in requirements gathering and solution design, translating business needs into technical specifications and guiding teams through delivery.
  • Demonstrated leadership in troubleshooting and resolving complex data and code issues, driving root cause analysis and long-term fixes.
  • Excellent communication and stakeholder engagement skills, with experience presenting architectural decisions, documenting processes, and influencing cross-functional teams.
  • Commitment to continuous improvement, staying current with emerging technologies and fostering a culture of learning and innovation within the team.


Desirable Knowledge & Experience

  • Hands-on experience with machine learning pipelines or integrating data engineering with advanced analytics and AI workflows.
  • Experience designing and implementing data mesh or domain-oriented data architectures in large-scale environments.
  • Test-driven development using common testing frameworks (e.g. pytest, NUnit, tSQLt, NBi).
  • Exposure to real-time integrations or IoT solutions, e.g. Event Hubs, Service Bus, IoT Hub, and Event Grid in Azure; IoT Core and Pub/Sub in Google Cloud.
  • Familiarity with data governance frameworks, including data lineage, cataloging, and compliance (e.g., GDPR, HIPAA).
  • Background in delivering data-driven solutions for commercial real estate applications.
  • Degree in IT, engineering, computer science, business IT, or any quantitative discipline.


Skills & Personal Qualities

  • Strong communicator in English, both verbal and written, with the ability to engage effectively across global teams and all levels of the organisation.
  • Analytical and curious mindset, driven to solve complex problems and understand intricate technical systems and architectures.
  • Organised and detail-oriented, with a diligent, quality-focused approach and a strong sense of ownership.
  • Collaborative and proactive, able to build relationships, resolve conflicts, and contribute meaningfully within cross-functional teams.
  • Adaptable and dependable, capable of managing multiple priorities, meeting deadlines, and working independently or as part of a team.
  • Continuous learner, demonstrating initiative in technical self-improvement and staying current with evolving tools and practices.
