DigiCert

Sr. Data Engineer

United States | Remote | Full-Time

Who we are

We're a leading, global security authority that's disrupting our own category. Our encryption is trusted by the major ecommerce brands, the world's largest companies, the major cloud providers, entire countries' financial systems, entire internets of things, and even the little things, like surgically embedded pacemakers. We help companies put trust, an abstract idea, to work. That's digital trust for the real world.

Job summary

We’re revitalizing our engineering culture and embracing modern software design and delivery practices. The UltraDNS Data Services team is seeking a Senior Data Engineer to design the next generation of our data infrastructure. You’ll build and scale real-time analytics and cloud-native data-processing systems that handle hundreds of billions of DNS transactions daily. Your work will directly enhance DigiCert’s ability to deliver insights, reliability, and performance to customers across the globe.

What you will do

  • Design, build, and optimize large-scale data ingestion, transformation, and streaming pipelines for DNS exhaust data collected from our global edge infrastructure.
  • Implement real-time and near real-time analytics pipelines using technologies such as Kafka, Flink, Spark Streaming, or Kinesis.
  • Design, develop, and maintain robust data models and warehouse structures (e.g., Snowflake, BigQuery, Redshift, ClickHouse) to support high-throughput analytical workloads.
  • Build tools and frameworks for data quality, validation, and observability across distributed, cloud-based systems.
  • Optimize data storage, retention, and partitioning strategies to balance cost and query performance.
  • Work with data visualization and analytics tools such as Grafana and Tableau to surface operational metrics and data insights.
  • Collaborate with software and platform engineering teams to integrate real-time data into their services and customer-facing analytics.
  • Ensure data security and compliance in multi-tenant, high-volume cloud environments.

What you will have

  • Four-year degree in IT, Computer Science, or a related field, or equivalent professional experience.
  • 7+ years of experience in data engineering or large-scale data infrastructure development.
  • Strong engineering background with experience building and operating distributed data systems capable of processing millions of transactions per second.
  • Proven experience with stream processing frameworks such as Kafka Streams, Flink, Spark Structured Streaming, or equivalent technologies.
  • Strong proficiency in languages supporting data pipeline development, such as Python, Go, or Scala.
  • Deep understanding of ETL/ELT design patterns and data warehouse technologies (e.g., Snowflake, BigQuery, Redshift, ClickHouse, Databricks).
  • Advanced SQL skills, including query optimization, schema design, and partitioning strategies for large-scale analytical datasets.
  • Extensive hands-on experience with cloud-based data services such as Athena, Glue, S3, Kinesis, and Lambda, as well as data lakehouse technologies like Apache Iceberg, Parquet, and Delta Lake.

Nice to have

  • Understanding of DNS data concepts and familiarity with DNS traffic, telemetry, or network-level observability data.
  • Familiarity with CI/CD best practices and infrastructure-as-code tools such as Terraform and CloudFormation.
  • Hands-on experience with containerization and orchestration (e.g., Docker, Kubernetes).
  • Familiarity with machine learning concepts and the integration of data pipelines that support future ML or inference workflows.

Benefits

  • Generous time off policies
  • Top-shelf benefits
  • Education, wellness and lifestyle support

#LI-KK1