ABOUT AMGEN
Amgen harnesses the best of biology and technology to fight the world’s toughest diseases, and make people’s lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting-edge of innovation, using technology and human genetic data to push beyond what’s known today.
ABOUT THE ROLE
Role Description:
This role is responsible for designing, building, and maintaining data solutions, and for analyzing and interpreting data to provide actionable insights that drive business decisions. It involves working with large datasets, developing reports, supporting and executing data governance initiatives, and visualizing data to ensure data is accessible, reliable, and efficiently managed. The ideal candidate has strong technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes.
Roles & Responsibilities:
Design and maintain upstream data pipelines that deliver analytics-ready datasets optimized for Tableau reporting.
Develop and optimize data models (e.g., star schema, snowflake, extracts) to support efficient Tableau dashboards and self-service analytics.
Manage Tableau Server/Cloud administration, including permissions, row-level security, publishing workflows, and content governance.
Implement end-to-end testing and validation to ensure consistency and accuracy between source data, pipelines, and Tableau dashboards.
Leverage advanced Tableau capabilities such as parameters, level-of-detail (LOD) expressions, and complex calculations to meet sophisticated reporting needs.
Design, develop, and maintain data solutions for data generation, collection, and processing
Be a key team member who assists in the design and development of data pipelines
Create data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems
Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions
Take ownership of data pipeline projects from inception to deployment, managing scope, timelines, and risks
Collaborate with cross-functional teams to understand data requirements and design solutions that meet business needs
Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency
Implement data security and privacy measures to protect sensitive data
Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions
Collaborate and communicate effectively with product teams
Collaborate with Data Architects, Business SMEs, and Data Scientists to design and develop end-to-end data pipelines to meet fast-paced business needs across geographic regions
Identify and resolve complex data-related challenges
Adhere to best practices for coding, testing, and designing reusable code/components
Explore new tools and technologies that will help improve ETL platform performance
Participate in sprint planning meetings and provide estimates for technical implementation
Design and develop data pipelines leveraging Databricks, PySpark, and SQL to ingest, transform, and process large-scale datasets (see the illustrative sketch after this list).
Engineer solutions for both structured and unstructured data to enable advanced analytics and insights.
Implement automated workflows for data ingestion, transformation, and deployment using Databricks Jobs and notebooks, with ongoing monitoring and scheduling.
Apply performance optimization techniques, including Spark job tuning, caching, partitioning, and indexing, to improve scalability and efficiency.
Build integrations with multiple data sources, such as SQL databases, APIs, and cloud storage platforms, ensuring seamless connectivity and reliability.
Collaborate effectively with global teams across time zones to maintain alignment, resolve issues, and deliver on shared objectives.
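For illustration only, the following is a minimal PySpark sketch of the kind of pipeline work described above: ingesting raw data, applying transformations, caching an intermediate result, and writing a partitioned, analytics-ready output. The storage paths, column names, and the daily-sales example are hypothetical assumptions, not details of the role.

```python
from pyspark.sql import SparkSession, functions as F

# Hypothetical example: build a daily sales summary for reporting.
spark = (
    SparkSession.builder
    .appName("sales-ingest-sketch")
    .getOrCreate()
)

# Ingest: read raw CSV from cloud storage (path is a placeholder).
raw = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv("s3://example-bucket/raw/sales/")
)

# Transform: deduplicate and derive columns used downstream.
clean = (
    raw.dropDuplicates(["order_id"])
    .withColumn("order_date", F.to_date("order_ts"))
    .withColumn("net_amount", F.col("gross_amount") - F.col("discount"))
)

# Cache the cleansed frame because it feeds multiple aggregates.
clean.cache()

# Aggregate into an analytics-ready table (e.g., a Tableau data source).
daily_sales = (
    clean.groupBy("order_date", "region")
    .agg(
        F.sum("net_amount").alias("total_net_amount"),
        F.countDistinct("order_id").alias("order_count"),
    )
)

# Write partitioned Parquet; partitioning by date supports pruning in queries.
(
    daily_sales
    .repartition("order_date")
    .write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-bucket/curated/daily_sales/")
)

spark.stop()
```

In a Databricks environment, a job of this kind would typically be scheduled and monitored through Databricks Jobs and notebooks, as noted in the responsibilities above.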
Basic Qualifications and Experience:
Bachelor’s or Master’s degree and 2 to 4 years of Computer Science, IT, or related field experience
Functional Skills:
Must-Have Skills:
Hands-on experience with big data technologies and platforms such as Databricks and Apache Spark (PySpark, Spark SQL), including workflow orchestration and performance tuning for big data processing
Strong experience with Tableau Desktop and Tableau Server/Cloud, including building, publishing, and optimizing dashboards.
Proficiency in writing and optimizing SQL queries specifically for Tableau reporting needs.
Experience ensuring Tableau performance and scalability, including tuning extracts, managing joins, and applying best practices for visualization.
Good-to-Have Skills:
Proven ability to design and maintain data models optimized for Tableau (e.g., star schema, snowflake, extracts).
Excellent problem-solving skills and the ability to work with large, complex datasets.
Strong understanding of data governance frameworks, tools, and best practices.
Knowledge of data protection regulations and compliance requirements (e.g., GDPR, CCPA).
Experience with ETL tools such as Apache Spark, and with Python packages for data processing and machine learning model development.
Strong understanding of data modeling, data warehousing, and data integration concepts.
Knowledge of Python/R, Databricks, SageMaker, and cloud data platforms.
Experience implementing automated orchestration and monitoring of data pipelines using Databricks Jobs, Apache Airflow, or similar workflow tools.
Familiarity with performance optimization techniques for big data processing, such as Spark job tuning, caching, partitioning, and indexing.
Exposure to multi-source integration involving APIs, SQL databases, and cloud storage platforms.
Demonstrated ability to collaborate across global teams and time zones, ensuring alignment and delivery in distributed environments.
Soft Skills:
Excellent critical-thinking and problem-solving skills
Strong communication and collaboration skills
Demonstrated awareness of how to function in a team setting
Demonstrated presentation skills
EQUAL OPPORTUNITY STATEMENT
Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status.
We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.