N-iX

Middle Big Data Engineer

Ukraine · Full Time

N-iX is seeking a proactive and skilled Middle Big Data Engineer to join our vibrant and collaborative team. In this role, you will be responsible for designing, developing, and maintaining sophisticated data pipelines, Ontology Objects, and Foundry Functions within Palantir Foundry. You will work closely with cross-functional stakeholders to deliver scalable, reliable, and secure data solutions that support data-driven decision-making across the organization.

The ideal candidate has a strong foundation in cloud-based data engineering and modern data architectures, along with a passion for solving complex data challenges, particularly in regulated and data-intensive domains.

Required Qualifications

  • 3+ years of professional experience in data engineering or a closely related field.
  • Strong proficiency in Python and PySpark for data processing and pipeline development.
  • Solid experience working with big data and distributed processing technologies.
  • Hands-on experience with cloud-based data engineering services (AWS, Azure, or GCP).
  • Strong understanding of data modeling, ETL/ELT concepts, and data pipeline architecture.
  • Experience designing and maintaining reliable, scalable data workflows.
  • Ability to work effectively in cross-functional, collaborative environments.
  • English proficiency at Upper-Intermediate level or higher.

Nice to Have

  • Experience in the pharmaceutical or life sciences domain.
  • Prior hands-on experience with Palantir Foundry.
  • Familiarity with data governance frameworks and regulated data environments.
  • Experience working with complex, large-scale datasets.

Key Responsibilities

  • Collaborate with cross-functional teams (engineering, analytics, product, and business stakeholders) to understand data requirements and translate them into scalable technical solutions.
  • Design, implement, and maintain end-to-end data pipelines in Palantir Foundry, ensuring data integrity, reliability, and performance.
  • Develop and maintain Ontology Objects, data models, schemas, and flow diagrams to support consistent and reusable data assets.
  • Build, optimize, and support ETL/ELT pipelines to collect, process, and integrate data from multiple sources into downstream systems and applications.
  • Apply data governance, security, and access control best practices to protect sensitive and regulated data.
  • Monitor pipeline performance, identify bottlenecks, and implement improvements to reduce latency and improve efficiency.
  • Troubleshoot and resolve data pipeline issues to ensure continuous availability and accuracy of data.
  • Maintain clear technical documentation and effectively communicate design decisions and technical solutions.
  • Stay up to date with emerging technologies, tools, and industry trends, proactively suggesting improvements to data engineering practices.

Technologies & Tools

You will work with technologies including (but not limited to):

  • Palantir Foundry
  • Python
  • PySpark
  • SQL
  • TypeScript
  • Big data technologies (e.g. Apache Spark, Hadoop, Kafka, BigQuery)
  • Cloud data services (e.g. AWS Glue, Azure Data Factory, Google Cloud Dataflow)

We offer*:

  • Flexible working format: remote, office-based, or a mix of both
  • A competitive salary and strong compensation package
  • Personalized career growth
  • Professional development tools (mentorship program, tech talks, training sessions, centers of excellence, and more)
  • Active tech communities with regular knowledge sharing
  • Education reimbursement
  • Memorable anniversary presents
  • Corporate events and team-building activities
  • Other location-specific benefits

*not applicable for freelancers