With over 100 years of rich history and a strong position as a local bank with regional and international expertise, a career with our family offers you the opportunity to be part of this exciting growth journey, to reset our future and shape our destiny as a proudly African group.
Job Summary
Work as part of an integrated (run & build) tribe in lower-complexity environments to provide enterprise-wide application support across multiple stakeholder groups by maintaining and optimising enterprise-grade applications (tech products & services).
Job Description
Team Context
Data Engineering is responsible for the central data platform that receives and distributes data across the bank. This is a multi-platform environment and leverages a blend of custom, commercial and open-source tools to manage and support thousands of critical data-related jobs. These jobs are supported and updated in line with changes across the landscape to avoid disruption to downstream data consumers.
Responsibilities
Manage an assigned team through day-to-day support tasks
Oversee the team's development plans and provide mentorship
Provide guidance and peer review
Support pipelines end to end
Build and deploy enhancements and new developments or new data pipelines
Identify and drive optimisation opportunities across the environment
Manage the handover of new applications, ensuring that required standards and practices are met
Improve recovery time in the event of production failures
Test prototypes and oversee handover to the Data Operations teams
Attend and contribute to regular team and user meetings
Code and program Hadoop applications
Enable high-speed querying of data
Job Experience & Skills Required:
3+ years' experience working in a big data environment, building and optimising big data pipelines, architectures and data sets with, for example, Java, Scala, Python, Hadoop, Apache Spark and Kafka
Minimum of one year's experience with the Scala programming language
Minimum of one year's experience managing a team
Cross-domain knowledge
Familiarity with the Hadoop ecosystem and its components
Good knowledge of Hadoop concepts
Solid working experience in big data development using SQL or Python
Experience in big data development using Spark
Experience in Hadoop, HDFS and MapReduce
Experience in database design, development and data modelling
The following additional knowledge, skills and attributes are preferred:
Good knowledge of back-end programming, specifically Java
Experience developing in a Linux environment and familiarity with its basic commands
Understanding of Cloud technologies and migration techniques
Understanding of data streaming and the intersection of batch and real time data
Ability to write reliable, manageable, and high-performance code
Basic knowledge of SQL, database structures, principles and theories
Knowledge of workflow schedulers
Strong collaboration and communication skills
Strong analytical and problem-solving skills
Experience in Quality Assurance
Experience in Stakeholder Management
Experience in Testing
Education
Bachelor's Degree: Information Technology
Absa Bank Limited is an equal opportunity, affirmative action employer. In compliance with the Employment Equity Act 55 of 1998, preference will be given to suitable candidates from designated groups whose appointments will contribute towards achievement of equitable demographic representation of our workforce profile and add to the diversity of the Bank.
Absa Bank Limited reserves the right not to make an appointment to the post as advertised.