Job Family Group:
Information Technology

Are you ready to do something remarkable? Ready to collaborate with an incredible team to solve problems that improve people’s lives? Meet Ingevity.
At Ingevity, we develop innovations that purify, protect and enhance the world around us. Our products enable oil to flow better, crops to grow fuller, roads to last longer and ensure that the air we all breathe is cleaner.
Our people come from all different backgrounds and help reimagine new possibilities daily. We understand there is no challenge too big and no contribution too small. We seek out new ideas for tackling complex problems and celebrate achieving the improbable. We value each person’s unique talents and synergize them to create meaningful impact and sustainable solutions for our customers and our world.
Bold. Energetic. Ingenious. Genuine. If these qualities describe you, we’d love for you to join Ingevity!
Data Engineer
Ingevity is expanding the maturity and reach of its enterprise data platform to support advanced analytics, AI-enabled workflows, and the next generation of intelligent business applications. We are seeking a Data Engineer to help build and operate the data pipelines, curated data products, and governance-enabled data foundations that power our organization.
This role will work closely with the Enterprise Data Architect to implement scalable data solutions on our Microsoft Fabric platform, ensuring data is reliable, well-modeled, and discoverable across the enterprise. The position plays a critical role in enabling AI readiness, supporting emerging use cases such as agent-driven workflows, enterprise search, and Copilot-enabled business processes.
This position offers an opportunity to grow into a future data architecture leadership role by contributing both to hands-on data engineering and to the evolution of our enterprise data architecture.
You will collaborate with functional stakeholders, analytics teams, and IT partners to ensure our data platform supports trusted insights, operational intelligence, and responsible AI innovation.
Here’s how you will make an impact:
Design, build, and maintain reliable and scalable data pipelines within the Microsoft Fabric data platform and dbt Cloud environments to support analytics, reporting, and AI use cases (an illustrative sketch follows this list).
Develop and manage data transformation workflows using dbt Cloud, implementing modular, maintainable, and well-documented data models.
Support the creation and maintenance of curated data products and semantic layers that enable self-service analytics through Power BI and other enterprise tools.
Partner closely with the Enterprise Data Architect to implement enterprise data models, architectural standards, and data platform best practices.
Ensure data pipelines and data products meet high standards for data quality, reliability, performance, and governance.
Implement and support metadata-driven development practices, enabling improved discoverability and reuse of trusted data assets.
Develop and maintain integrations and ingestion pipelines using managed connectors and ingestion frameworks supporting enterprise systems such as SAP ERP and other operational platforms.
Support operational monitoring and troubleshooting of data pipelines, ensuring high platform reliability and data availability.
Contribute to data quality frameworks, including validation, monitoring, and automated checks across critical datasets.
Collaborate with analytics teams and functional stakeholders to translate data requirements into well-modeled, governed data assets.
Support enterprise data governance initiatives, including access controls, lineage visibility, and audit-ready data management practices.
Assist in documenting current-state and future-state data architecture patterns, contributing to the evolution of the organization’s enterprise data platform.
Contribute to the development of machine learning (ML) models to support enterprise-wide decision-making.
Contribute to emerging AI enablement patterns, ensuring trusted enterprise data is available to support use cases such as RAG (Retrieval-Augmented Generation), enterprise knowledge systems, and Copilot-based workflows built with Copilot Studio.
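To give a concrete flavor of the pipeline work described above, here is a minimal, illustrative PySpark sketch of a bronze-to-silver transformation of the kind a Fabric lakehouse notebook might run. The table names, columns, and validity rules are hypothetical examples, not Ingevity’s actual data model.

```python
# Illustrative only: a minimal bronze-to-silver transformation.
# Table and column names ("bronze.sales_orders", "order_id", etc.) are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Read raw (bronze) orders landed by an ingestion pipeline.
bronze_orders = spark.read.table("bronze.sales_orders")

# Basic cleansing and conforming for the curated (silver) layer:
# deduplicate, normalize types, and drop records failing simple validity checks.
silver_orders = (
    bronze_orders
    .dropDuplicates(["order_id"])
    .withColumn("order_date", F.to_date("order_date"))
    .withColumn("net_amount", F.col("net_amount").cast("decimal(18,2)"))
    .filter(F.col("order_id").isNotNull() & (F.col("net_amount") >= 0))
)

# Persist as a governed table for downstream dbt models and Power BI.
silver_orders.write.mode("overwrite").saveAsTable("silver.sales_orders")
```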
Here is what you’ll need to succeed in this role:
Bachelor’s degree required, preferably in Management Information Systems, Engineering, Computer Science, Business, or a related discipline.
5+ years of experience in data engineering, analytics engineering, or enterprise data platform development.
Complete fluency in SQL, including at least one of T-SQL or Spark SQL.
Strong experience building and maintaining data pipelines and transformation workflows in modern cloud data environments.
Experience working with Microsoft Fabric, Databricks, Azure data services, or similar modern data platforms.
Experience with dbt (dbt Cloud preferred) for transformation, modeling, and analytics engineering workflows.
Experience supporting analytics platforms such as Power BI, including working with curated datasets and semantic models.
Experience integrating enterprise systems such as SAP ERP into analytics or data platform environments.
Experience working with data modeling concepts including dimensional modeling, curated data layers, and data product design.
Experience supporting data governance practices, including access control, lineage, and auditability.
Familiarity with data ingestion frameworks or managed connectors used to integrate operational systems into data platforms.
Scripting or programming experience (e.g., Python) supporting automation, data pipeline notebooks, and data ingestion and task automation via REST APIs, along with data engineering libraries such as PySpark and Pandas (see the ingestion sketch after this list).
Strong collaboration skills with the ability to work across technical teams and functional stakeholders.
Ability to operate in a fast-paced environment where data platform capabilities are evolving and expanding.
Strong problem-solving skills, curiosity, and a desire to continuously improve data engineering practices.
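As a small example of the Python scripting noted above, the sketch below pulls records from a REST endpoint and lands them as a Pandas DataFrame. The endpoint URL and page-based pagination scheme are assumptions for illustration only.

```python
# Illustrative only: ingesting records from a hypothetical REST API with Python.
import requests
import pandas as pd

BASE_URL = "https://api.example.com/v1/assets"  # hypothetical endpoint

def fetch_all(url: str, page_size: int = 500) -> pd.DataFrame:
    """Page through the API and return all records as a single DataFrame."""
    rows, page = [], 1
    while True:
        resp = requests.get(
            url, params={"page": page, "per_page": page_size}, timeout=30
        )
        resp.raise_for_status()
        batch = resp.json()  # assumed to return a JSON list per page
        if not batch:
            break
        rows.extend(batch)
        page += 1
    return pd.DataFrame(rows)

if __name__ == "__main__":
    df = fetch_all(BASE_URL)
    print(f"Fetched {len(df)} records")
```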
Here’s what will set you apart:
Experience working with Microsoft Fabric lakehouse/warehouse architectures; knowledge of data modeling techniques such as entity-relationship modeling and business process mapping, and of data architectures such as Medallion and Data Mesh.
Familiarity with Simplement, Fivetran, or similar enterprise data integration tooling.
Experience supporting self-service analytics environments and data product ecosystems.
Experience implementing and deploying CI/CD processes such as GitHub Actions and Fabric deployment pipelines, and working in CI/CD environments such as GitHub and Azure DevOps.
Experience building data quality frameworks and automated validation processes (a brief sketch follows this list).
Exposure to AI enablement patterns, including machine learning techniques and pipelines, enterprise search, Retrieval-Augmented Generation (RAG), knowledge graph or metadata enrichment, and AI agents or workflow automation.
Familiarity with Copilot Studio or similar tools used to build AI-enabled workflows and agents.
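For the automated validation item above, here is a minimal sketch of the kind of check a data quality framework might run against a curated dataset. The table name and rules are hypothetical and reuse the earlier illustrative example.

```python
# Illustrative only: simple automated data quality checks against a curated table.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.read.table("silver.sales_orders")  # hypothetical curated table

checks = {
    "order_id is never null": df.filter(F.col("order_id").isNull()).count() == 0,
    "order_id is unique": df.count() == df.select("order_id").distinct().count(),
    "net_amount is non-negative": df.filter(F.col("net_amount") < 0).count() == 0,
}

failures = [name for name, passed in checks.items() if not passed]
if failures:
    # A real framework would alert or fail the pipeline run; here we just raise.
    raise ValueError(f"Data quality checks failed: {failures}")
print("All data quality checks passed")
```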
Please note: This is not a position that Ingevity will consider for employment sponsorship. This means that Ingevity will not sponsor in any NIV category (including TN, E-3, H-1B, O-1) or submit the position in the H-1B Registration.
Ingevity is a company made up of extraordinary people of every race, religion and background, all worthy of the same dignity. Our differences are one of our great strengths. Join us in building a culture of increasing diversity and respect – a culture where everyone belongs.
Ingevity is an Equal Opportunity Employer, Minorities/Women/Veterans/Disabled.
Recruiting Agencies: Ingevity does not accept unsolicited resumes and therefore will not be responsible for any fees associated with unsolicited resumes.