Assist in extracting, transforming, and loading (ETL) ERP and operational data into a unified data platform.
Support data integration and transformation activities using Databricks (PySpark/SQL).
Help design and develop Power BI dashboards for cross-divisional analytics.
Write and optimize SQL queries and data pipelines for performance and scalability.
Support creation and maintenance of data documentation, including data dictionaries, process flows, and metadata tracking.
Assist in implementing data quality checks, validation scripts, and documentation of data issues.
Learn and contribute to CI/CD processes for analytics workflows and dashboards.
Work with the team to use Git for version control and collaborative code management.
Participate in team discussions on data architecture, automation, and dashboard enhancement opportunities.
What you will need:
Required Skills:
Pursuing or recently completed a Bachelor's or Master's degree in Computer Science, Engineering, Data Analytics, Statistics, or a related field.
Basic understanding of SQL, Python, and data visualization tools (preferably Power BI).
Familiarity with data engineering concepts and cloud platforms (Azure; Databricks experience is a plus).
Strong analytical and problem-solving mindset.
Good communication skills and ability to work collaboratively in a team setting.
Preferred Skills:
Exposure to Azure Cloud.
Experience using version control tools (Git).
Understanding of databases, data modeling, and data quality principles.