Develops, constructs, tests, and maintains data architectures such as databases and large-scale processing systems, with a focus on infrastructure and data flow management. Data engineers are responsible for building and maintaining data pipelines (ETL processes), ensuring data quality and reliability, designing and managing data warehouses and databases, and integrating data from various sources. They create the systems and structures that allow data to be collected, stored, and processed efficiently.
Core Job Responsibilities
Assists in the deployment and monitoring of data pipelines and workflows under guidance from senior engineers.
Supports basic data quality and integration initiatives through testing and validation processes.
Builds basic analytic data sets for exploration and modeling.
Assists in the implementation of data security measures, compliance standards, and policies.
Documents data processes, procedures, and workflows for knowledge sharing.
Assists with data pipeline development, data cleaning, and preprocessing.
Works with senior engineers to understand how business needs are converted into analytical/technical requirements; provides input to data gathering.
Assists in the scoping, planning, and delivery of projects.
Assists in data collection and cleansing efforts, focusing on building and maintaining the infrastructure processes that automate data collection and cleansing.
Understands the fundamentals of big data storage and distribution systems and relevant software tools, their constraints, advantages and disadvantages.
Maintains awareness of trends and best-practice approaches.
Supervisory / Management Responsibility
Receives specific direction and guidance from more experienced team members. Adapts to changing priorities as they arise.
Completes own work assignments of low complexity with guidance from senior-level engineers.
Collaborates for results and participates in idea-sharing and solution development.
Escalates issues to more senior colleagues as required.
Works well with others in a collaborative, goal-driven environment.
Position Accountability/ Scope
Contributes to multiple projects to support analytical tasks, with a moderate to high level of support provided by colleagues.
Relevant skills include data mining, informatics, programming, information retrieval, databases and data structures, software engineering, and systems design and analysis.
Familiar with a range of mainstream commercial and open-source data tools, their constraints, advantages, disadvantages, and areas of application; has intermediate skill in using at least one such tool.
Familiar with data and statistical programming languages (such as SQL, SAS, R, and Python). Basic programming skills, with the ability to interpret an existing script of moderate complexity.
Conducts data acquisition from relational databases and flat files.
Wrangles low-complexity data, selecting appropriate techniques, such as parsing or an appropriate algorithm, to create a data structure relevant to the problem.
Seeks mentorship opportunities in the organization to gain a diverse perspective.
Bachelor's Degree (± 16 years)
Computer Science, Information Technology, Data Analytics, Data Science or similar discipline including Databases, Mathematics, Statistics, Physics, or Engineering is preferred
Minimum 1 year
Work experience with degree; or sufficient transferable experience to demonstrate functional equivalence to a degree
Training using mathematical modeling and statistical analysis
Basic experience with database applications
Basic knowledge of several tools (such as SAS, R, Python, or SQL)
Prior experience/education in life sciences or healthcare preferred
The base pay for this position is
$81,500.00 – $141,300.00
In specific locations, the pay range may vary from the range posted.
Abbott is an Equal Opportunity Employer of Minorities/Women/Individuals with Disabilities/Protected Veterans.
EEO is the Law link - English: http://webstorage.abbott.com/common/External/EEO_English.pdf
EEO is the Law link - Español: http://webstorage.abbott.com/common/External/EEO_Spanish.pdf