TAKEDA

Data Engineering Professional II

IND - Bengaluru Full time

By clicking the “Apply” button, I understand that my employment application process with Takeda will commence and that the information I provide in my application will be processed in line with Takeda’s Privacy Notice and Terms of Use.  I further attest that all information I submit in my employment application is true to the best of my knowledge.

Job Description


OBJECTIVES/PURPOSE

As a Data Engineer, you will be responsible for building and maintaining data systems and constructing data tables and data models optimized for operations and analysis, supporting requirements across multiple business functions and downstream systems.

 

ACCOUNTABILITIES

 

  • Develop and maintain scalable data models and pipelines using AWS native technologies to support increasing data sources, volumes, and complexity.
  • Collaborate with several business functions to improve data models and their quality, supporting the development of digital products and fostering data-driven decision-making across the organization.
  • Implement processes and systems to ensure data reconciliation, monitor data quality, and ensure production data is accurate and available for key stakeholders, downstream systems, digital products, and business processes.
  • Write unit, integration, and performance test scripts, contribute to engineering documentation, and maintain the engineering wiki.
  • Perform data analysis to troubleshoot and resolve data-related issues.
  • Work closely with Agile Scrum teams, including frontend and backend engineers, product managers, scrum masters, and quality engineers, to deliver integrated and scalable data products.
  • Collaborate with enterprise teams, including Enterprise Architecture, Security, and Enterprise Data Backbone Engineering, to design and develop data integration patterns and models supporting various digital products.
  • Partner with DevOps and the Cloud Center of Excellence to deploy data pipeline solutions in the Takeda AWS environments, meeting security and performance standards.
  • Support and align with Data Trustees, Data Stewards, and Master Data Management functions following Data Governance principles.

 

EDUCATION, BEHAVIOURAL COMPETENCIES AND SKILLS:

 

Essential

  • Bachelor’s Degree from an accredited institution in Engineering, Computer Science, or a related field.
  • 5+ years of experience in data engineering, software development, data warehousing, and data lake/mesh architectures, including developing data products that follow FAIR data principles.
  • Strong expertise in data integration, data modeling, and modern data technologies (GraphQL, SQL, NoSQL, Python, PySpark) and AWS cloud technologies (e.g., DMS, Lambda, Databricks, SQS, Step Functions, data streaming).
  • Extensive experience in database administration, schema design and dimensional modeling, and SQL optimization.
  • Excellent written and verbal communication skills, with the ability to collaborate effectively with cross-functional teams.
  • Understanding of good engineering practices (DevSecOps, source-code versioning with Git, etc.).
  • Proficient working in Agile Scrum teams and using Jira and Confluence.
  • Proficiency with version control (e.g., Git, GitHub, GitLab) and familiarity with CI/CD pipelines (e.g., Jenkins, GitHub Actions).
  • Good understanding of software security principles and secure coding practices.
  • Knowledge of automated testing tools, frameworks, and practices (unit testing, integration testing, E2E testing).
  • Familiarity with relational and non-relational databases (PostgreSQL, MongoDB, DynamoDB).
  • Knowledge of the software development lifecycle (SDLC).

 

Nice To Have

  • Experience with streaming technologies such as Spark Streaming or Kafka.
  • Infrastructure as Code (IaC) experience, preferably with Terraform.
  • Experience designing and developing API data integrations using SOAP / REST / FAST.

 

Locations

IND - Bengaluru

Worker Type

Employee

Worker Sub-Type

Regular

Time Type

Full time