Objectstream

Sr. Data Engineer

Oklahoma City | Full Time

Objectstream is an award-winning small business dedicated to providing innovative products and services in many areas, including information technology, management consulting, and logistics, for customers including the FAA, DOD, and state governments. We accomplish this mission by forging long-term trusted relationships with our employees, customers, and strategic partners; building an organizational culture that promotes empowerment and accountability; assuring a talented, well-trained, and qualified workforce; continuously improving our efficiency and productivity; and being socially and environmentally responsible corporate citizens. We pride ourselves on fostering a collaborative and innovative work environment where employees are encouraged to share ideas and take ownership of their projects.

We are currently seeking a talented Sr. Data Engineer to join our dynamic team supporting our client in Oklahoma City, Oklahoma.

Job Description – Objectstream is seeking a Senior Data Engineer to lead a strategic data migration initiative that moves structured and semi-structured data from Microsoft Azure Blob Storage into Salesforce, with the user-facing layer delivered through Salesforce Experience Cloud. This role sits at the intersection of cloud data engineering, data quality management, and CRM integration, requiring both technical depth and a sharp analytical mindset.

The source data landscape is heterogeneous: schemas may be inconsistent, field names will differ, data types may require transformation, and business logic must be applied before records can be loaded into Salesforce objects. You will own the end-to-end migration pipeline, from raw ingestion and profiling through cleansing, transformation, schema mapping, and final load validation.

KEY RESPONSIBILITIES

Data Ingestion & Profiling

  • Connect to Azure Blob Storage containers and extract files across formats including CSV, Parquet, JSON, and Avro
  • Profile source datasets to identify schema drift, null patterns, cardinality anomalies, and referential integrity gaps
  • Document data dictionaries and lineage maps for all source-to-target field mappings
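To give candidates a concrete sense of the profiling work described above (this sketch is illustrative only and not part of the formal requirements), the first pass over a source file often boils down to per-column null rates and cardinality. All file and column names below are hypothetical:

```python
import io
import pandas as pd

def profile_dataframe(df: pd.DataFrame) -> pd.DataFrame:
    """Summarize dtype, null rate, and distinct-value count per column --
    a starting point for spotting schema drift and cardinality anomalies
    across successive file drops."""
    return pd.DataFrame({
        "dtype": df.dtypes.astype(str),
        "null_rate": df.isna().mean(),
        "distinct": df.nunique(dropna=True),
    })

# Small in-memory CSV standing in for a file pulled from a blob container
csv_text = "account_id,email\nA1,a@x.com\nA2,\nA2,b@x.com\n"
report = profile_dataframe(pd.read_csv(io.StringIO(csv_text)))
```

A report like this, produced for every source file, feeds directly into the data dictionary and source-to-target mapping documents.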

Schema Mapping & Transformation

  • Design and implement cross-schema mapping logic where source table structures do not align with Salesforce standard or custom objects
  • Build transformation pipelines using Azure Data Factory or equivalent ETL tooling to normalize, reshape, and enrich source data
  • Apply business rules and data derivation logic to generate missing or computed fields required by Salesforce objects
  • Handle many-to-one and one-to-many relationship resolution between source tables and Salesforce object hierarchies (Accounts, Contacts, Cases, custom objects)
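As a minimal illustration of the cross-schema mapping described above (field names and business rules here are hypothetical, not taken from any actual client schema), a declarative source-to-Salesforce map with per-field transforms might look like:

```python
from datetime import datetime

# Hypothetical map: Salesforce field -> (source column, transform).
FIELD_MAP = {
    "LastName":  ("surname",    str.strip),
    "Email":     ("email_addr", str.lower),
    "Birthdate": ("dob",        lambda v: datetime.strptime(v, "%m/%d/%Y").date().isoformat()),
}

def map_record(source: dict) -> dict:
    """Reshape one source row into a Salesforce Contact-shaped dict,
    applying per-field transforms and deriving missing fields."""
    target = {}
    for sf_field, (src_col, transform) in FIELD_MAP.items():
        raw = source.get(src_col)
        target[sf_field] = transform(raw) if raw not in (None, "") else None
    # Derived field: a business rule filling a value absent from the source
    target["LeadSource"] = "Migration"
    return target

mapped = map_record({"surname": " Smith ", "email_addr": "J.Smith@Example.COM", "dob": "02/09/1980"})
```

Keeping the mapping in data rather than code makes the source-to-target documentation and the pipeline itself harder to drift apart.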

Data Cleaning & Quality Assurance

  • Detect and remediate duplicate records, invalid formats, out-of-range values, broken foreign keys, and encoding inconsistencies
  • Implement data quality scoring and rejection workflows with clear audit trails for records that fail validation thresholds
  • Collaborate with business stakeholders to define and codify data quality rules and acceptable transformation logic
  • Produce data quality reports pre- and post-migration to demonstrate completeness and accuracy
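The rejection workflow above can be sketched in a few lines: each record is scored against a rule set, and failures are routed to a reject stream with an audit trail of reasons. The rules shown are hypothetical examples, not the client's actual validation logic:

```python
import re

# Hypothetical validation rules: each returns an error code or None.
RULES = [
    lambda r: None if r.get("Email") and re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", r["Email"]) else "bad_email",
    lambda r: None if r.get("LastName") else "missing_last_name",
]

def validate(records):
    """Split records into accepted and rejected lists, attaching the
    failed-rule codes to each record for the audit trail."""
    accepted, rejected = [], []
    for rec in records:
        errors = [e for e in (rule(rec) for rule in RULES) if e]
        (rejected if errors else accepted).append({**rec, "_errors": errors})
    return accepted, rejected

ok, bad = validate([
    {"Email": "a@x.com", "LastName": "Lee"},
    {"Email": "not-an-email", "LastName": "Kim"},
])
```

The same accepted/rejected counts roll up naturally into the pre- and post-migration quality reports.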

Salesforce Integration & Loading

  • Load transformed data into Salesforce using Bulk API 2.0, Data Loader, or Salesforce CLI with appropriate error handling and retry logic
  • Work within the Salesforce Experience Cloud data model to ensure migrated records are correctly scoped to the right communities, portals, and sharing rules
  • Manage external ID strategies, upsert operations, and record ownership assignment during migration
  • Coordinate with Salesforce admins and developers to validate object relationships, validation rules, triggers, and process automation that may affect data loading
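For orientation, a Bulk API 2.0 upsert involves creating an ingest job (POST to the `jobs/ingest` endpoint per Salesforce's Bulk API 2.0 documentation), uploading a CSV batch, and closing the job. The sketch below only builds the request payload and CSV body; the HTTP calls, authentication, and retry logic are omitted, and the object and field names are hypothetical:

```python
import csv
import io

def upsert_job_payload(sobject: str, external_id: str) -> dict:
    """Request body for creating a Bulk API 2.0 upsert ingest job."""
    return {
        "object": sobject,
        "operation": "upsert",
        "externalIdFieldName": external_id,
        "contentType": "CSV",
        "lineEnding": "LF",
    }

def to_csv_batch(records: list[dict], fields: list[str]) -> str:
    """Serialize records to the CSV body uploaded as the job's batch data."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fields, lineterminator="\n")
    writer.writeheader()
    writer.writerows(records)
    return buf.getvalue()

payload = upsert_job_payload("Contact", "Legacy_Id__c")
body = to_csv_batch([{"Legacy_Id__c": "A1", "LastName": "Lee"}], ["Legacy_Id__c", "LastName"])
```

Driving upserts off an external ID field like this is what makes re-running a failed load safe.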

Pipeline Engineering & Orchestration

  • Design idempotent, resumable migration pipelines capable of incremental and full-load modes
  • Implement orchestration and scheduling using Azure Data Factory or Apache Airflow
  • Build monitoring, alerting, and logging infrastructure for pipeline health and data throughput
  • Write and maintain infrastructure-as-code and pipeline configuration in version-controlled repositories
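One simple way to make a pipeline resumable, as described above, is a file-backed checkpoint that records completed steps so a re-run skips work that already succeeded. This is a minimal sketch of the idea, not a prescribed implementation:

```python
import json
import tempfile
from pathlib import Path

class Checkpoint:
    """File-backed set of completed step names; re-runs become no-ops
    for steps already recorded, making the pipeline resumable."""
    def __init__(self, path: Path):
        self.path = path
        self.done = set(json.loads(path.read_text())) if path.exists() else set()

    def run(self, step_name: str, fn):
        if step_name in self.done:   # already completed on a prior run
            return
        fn()
        self.done.add(step_name)
        self.path.write_text(json.dumps(sorted(self.done)))

with tempfile.TemporaryDirectory() as d:
    calls = []
    cp = Checkpoint(Path(d) / "checkpoint.json")
    cp.run("extract", lambda: calls.append("x"))
    cp.run("extract", lambda: calls.append("x"))   # skipped: already recorded
```

In practice the checkpoint state would live in durable storage rather than a local file, and each step would also need to be idempotent on its own (e.g., upserts keyed on external IDs).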

Stakeholder Collaboration

  • Communicate migration progress, blockers, and data quality findings to technical and non-technical stakeholders
  • Partner with Salesforce Experience Cloud administrators to align portal data structures with migrated records
  • Contribute to post-migration hypercare activities, including reconciliation validation and end-user data access verification

REQUIRED QUALIFICATIONS

  • Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field
  • 8+ years of experience in data engineering, ETL development, or data migration roles
  • Demonstrated experience migrating data into Salesforce, including use of Bulk API 2.0, Data Loader, or similar tooling
  • Hands-on experience with Azure Blob Storage and Azure Data Factory
  • Proficiency in Python for data transformation, particularly pandas, PySpark, or similar libraries
  • Strong SQL skills with experience querying, profiling, and transforming large datasets
  • Experience with cross-schema or heterogeneous data mapping where source and target structures differ significantly
  • Familiarity with Salesforce data model fundamentals: standard objects, relationships, sharing model, and record types
  • Understanding of Salesforce Experience Cloud portal data scoping and community user access patterns
  • Experience implementing data quality frameworks: validation rules, rejection workflows, and audit logging
  • Proficiency with Git and version-controlled development practices

PREFERRED QUALIFICATIONS

  • Salesforce Certified Administrator or Platform App Builder certification
  • Experience with Salesforce CLI (sf / sfdx) for data operations and org management
  • Familiarity with Apache Airflow or similar workflow orchestration platforms
  • Experience with dbt for data transformation documentation and lineage
  • Experience with data cataloging and lineage tools
  • Familiarity with Salesforce MuleSoft

WHAT SUCCESS LOOKS LIKE

  • All target Salesforce objects populated with complete, accurate data within agreed migration milestones
  • Full field-level lineage documented from Azure Blob source to Salesforce target
  • Experience Cloud portal users can access migrated records with correct sharing and visibility rules applied
  • Migration pipelines are idempotent, re-runnable, and fully logged for audit purposes
  • Post-migration reconciliation report signed off by business stakeholders

BENEFITS OF WORKING AT OBJECTSTREAM

  • Collaborative, innovation-focused culture
  • Competitive salary and benefits package