What You'll Do:
In this role, you will create solutions to complex technical challenges while coding, testing, troubleshooting, debugging, and documenting the systems you develop.
This role will drive a team responsible for expanding OCC’s analytics capabilities by making internal corporate data accessible and usable to analysts throughout the organization.
This role will drive the design and building of our internal analytics data warehouse and the maintenance of the supporting extract, load, and transform processes. This role will also demonstrate expertise in OCC’s corporate data sets and support teams across the organization in successfully harnessing this data.
This role will drive a team to collaborate with business users and technical teams across the organization to facilitate data-driven decision making, enabling exploration and analysis of historical and near real-time data using on-premises and cloud-based tools and technologies.
This role will drive the team responsible for gathering requirements and designing solutions that address the problem at hand while also anticipating yet-to-be-asked analytical questions. It will also collaborate with, and mentor, team members in defining and owning the processes for developing and maintaining our analytics platform so that it meets the company’s security and IT standards.
The candidate must be able to solve problems creatively, communicate effectively, and proactively engage in technical decisions to achieve objectives.
The candidate must also be a team player who works well with business, technical, and non-technical professionals in an agile environment.
Primary Duties and Responsibilities:
To perform this job successfully, an individual must be able to perform each primary duty satisfactorily.
Develop and maintain Java-based data pipeline components under the guidance of senior engineers, writing clean, tested, and well-documented code for batch and event-driven workloads using Spring Batch and Spring Boot.
Implement and maintain Kafka consumer and producer services, learning partition strategy and offset management concepts while contributing to schema governance practices using Protobuf.
Support the build and maintenance of data lake solutions on AWS S3, assisting with Apache Iceberg table configuration, including basic partitioning, snapshot management, and schema evolution tasks.
Write and optimize SQL queries against Trino/Starburst to support analytical access to settlement, position, and trade data, developing proficiency with federated query patterns and performance tuning techniques.
Assist in integrating and maintaining HashiCorp Vault for secrets management across platform services, learning dynamic secret issuance and least-privilege access patterns within Kubernetes workloads.
Contribute to observability efforts by instrumenting Java services with OpenTelemetry and building Splunk dashboards and alerts to monitor latency, error rates, and throughput in production environments.
Participate in Kubernetes-based deployment pipelines using Rancher and Harness, authoring and updating Helm chart configurations and supporting reliable zero-downtime rollouts with guidance from senior staff.
Leverage AI-assisted development tools, including Claude Code, to accelerate feature development, test generation, code review, and documentation while building good engineering habits in a regulated environment.
Participate in on-call rotations and production incident response under mentorship, contributing to root cause analysis write-ups and post-mortems to develop operational discipline and system reliability skills.
Collaborate with business users and technical teams across the organization to understand data needs, gather requirements, and contribute to solutions that make internal data accessible and useful for analysts.
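To give a concrete sense of the Kafka-related duties above, the following is a minimal, illustrative sketch of a Spring Boot consumer service. The topic name, consumer group, package, and payload handling are assumptions made for illustration only, not OCC's actual implementation; a real service would also configure serializers and deserialize the Protobuf payload with its generated class.

```java
// Illustrative sketch only. Assumes spring-boot-starter and spring-kafka are on the
// classpath and that spring.kafka.* connection properties are configured externally.
package com.example.positions;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@SpringBootApplication
public class PositionConsumerApplication {
    public static void main(String[] args) {
        SpringApplication.run(PositionConsumerApplication.class, args);
    }
}

@Component
class PositionUpdateListener {

    // Hypothetical topic and consumer group; offsets are committed by the listener
    // container after each record is processed successfully.
    @KafkaListener(topics = "position-updates", groupId = "analytics-ingest")
    public void onMessage(ConsumerRecord<String, byte[]> message) {
        // In a real pipeline the byte[] payload would be deserialized with the
        // generated Protobuf class for the governed schema (not shown here).
        System.out.printf("partition=%d offset=%d key=%s size=%d%n",
                message.partition(), message.offset(), message.key(),
                message.value().length);
    }
}
```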
Supervisory Responsibilities:
None
Qualifications:
The requirements listed are representative of the knowledge, skill, and/or ability required. Reasonable accommodations may be made to enable individuals with disabilities to perform the primary functions.
2 years of professional software engineering experience with exposure to production systems, ideally including distributed systems, data pipelines, or backend service development in an enterprise setting.
Demonstrated intellectual curiosity and a strong desire to learn fast; ability to ramp up quickly in large codebases, absorb new technologies, and deliver meaningful contributions in a short period of time.
Foundational understanding of distributed systems concepts such as message delivery semantics, basic fault-tolerance patterns, and event-driven architecture, with enthusiasm to deepen this knowledge on the job.
Some hands-on cloud experience, preferably with AWS services such as S3, IAM, or EC2, with a willingness to develop production-level cloud-native skills including networking, security, and high-availability design.
Working experience within Agile delivery models, including participation in sprint planning, code reviews, backlog refinement, and cross-team communication, with a collaborative and team-oriented attitude.
Ability to operate in a compliance-sensitive environment, showing willingness to learn data lineage documentation, access control governance, and open-source vulnerability remediation practices.
Capacity to stay composed and focused under pressure in time-sensitive production situations, with a desire to develop operational maturity across SLA adherence, data correctness, and system availability.
Genuine openness to adopting AI-assisted engineering workflows such as Claude Code as part of daily development, with curiosity about how intelligent tooling can improve developer productivity and code quality.
Clear written and verbal communication skills, with the ability to ask good questions, document work thoroughly, and participate productively in technical discussions with peers and stakeholders.
A team-first mindset with the ability to work constructively with business, technical, and non-technical professionals in a fast-paced agile environment, and a proactive approach to problem-solving.
Technical Skills:
Solid Java programming fundamentals including object-oriented design, basic concurrency concepts, and experience with Spring Boot; exposure to Spring Batch or similar batch-processing frameworks is a plus.
Working knowledge of Kafka concepts including producers, consumers, and topics; familiarity with message serialization formats such as Protobuf or Avro, and interest in learning schema governance patterns.
Hands-on experience with AWS services, especially S3; familiarity with data lake concepts and an eagerness to develop proficiency with Apache Iceberg table formats, lifecycle management, and storage optimization.
Ability to write analytical SQL queries and interest in developing skills with distributed query engines such as Trino/Starburst; experience querying large datasets and optimizing basic query performance.
Foundational Kubernetes knowledge including core constructs such as deployments, ConfigMaps, and Secrets; exposure to Helm charts and CI/CD pipelines.
Basic familiarity with secrets management concepts and tools such as HashiCorp Vault, or willingness to learn dynamic secret issuance, Kubernetes authentication methods, and least-privilege access policy authoring.
Experience or strong interest in application observability, including logging best practices, distributed tracing concepts, and building dashboards; familiarity with Splunk or similar monitoring tools is a plus.
Comfort with Git version control workflows including branching strategies, pull requests, and code review; exposure to CI/CD tooling such as Jenkins or similar pipeline orchestration platforms.
Practical curiosity about AI-assisted development tools such as Claude Code, with readiness to use them for code generation, refactoring, unit test authoring, and documentation in a disciplined engineering environment.
Exposure to relational database systems such as PostgreSQL, including basic schema design and query writing; familiarity with connection pooling and indexing concepts is beneficial but not required.
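As an illustration of the analytical SQL and distributed query engine skills listed above, the sketch below runs a simple aggregation against Trino/Starburst over JDBC from Java. The host, catalog, schema, table, and column names are hypothetical, and the Trino JDBC driver is assumed to be on the classpath.

```java
// Illustrative sketch only: a federated analytical query via the Trino JDBC driver.
package com.example.analytics;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.Properties;

public class DailySettlementQuery {
    public static void main(String[] args) throws Exception {
        // Assumed connection details; the catalog/schema path points at an Iceberg catalog.
        Properties props = new Properties();
        props.setProperty("user", "analytics_reader");
        props.setProperty("SSL", "true");
        String url = "jdbc:trino://trino.example.internal:443/iceberg/settlements";

        // Hypothetical table: daily settlement amounts aggregated per clearing member.
        String sql = """
                SELECT clearing_member, settlement_date, SUM(amount) AS total_amount
                FROM daily_settlements
                WHERE settlement_date >= DATE '2024-01-01'
                GROUP BY clearing_member, settlement_date
                ORDER BY settlement_date, clearing_member
                """;

        try (Connection conn = DriverManager.getConnection(url, props);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(sql)) {
            while (rs.next()) {
                System.out.printf("%s %s %s%n",
                        rs.getString("clearing_member"),
                        rs.getDate("settlement_date"),
                        rs.getBigDecimal("total_amount"));
            }
        }
    }
}
```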
Education and/or Experience:
Bachelor's degree in a quantitative discipline (e.g., Statistics, Computer Science, Electrical Engineering) or equivalent professional experience; Master’s degree preferred.
2 years of experience as a software engineer, data engineer, analytics engineer, business intelligence analyst, data analyst, data scientist, or research analyst.
Certificates or Licenses:
None Required
About Us
The Options Clearing Corporation (OCC) is the world's largest equity derivatives clearing organization. Founded in 1973, OCC is dedicated to promoting stability and market integrity by delivering clearing and settlement services for options, futures and securities lending transactions. As a Systemically Important Financial Market Utility (SIFMU), OCC operates under the jurisdiction of the U.S. Securities and Exchange Commission (SEC), the U.S. Commodity Futures Trading Commission (CFTC), and the Board of Governors of the Federal Reserve System. OCC has more than 100 clearing members and provides central counterparty (CCP) clearing and settlement services to 19 exchanges and trading platforms. More information about OCC is available at www.theocc.com.
Benefits
A highly collaborative and supportive environment developed to encourage work-life balance and employee wellness.
Visit https://www.theocc.com/careers/thriving-together for more information about the specific benefit components.
Compensation
Salary Range
$72,600.00 - $112,900.00
Incentive Range
6% to 10%
This position is eligible for an annual discretionary incentive compensation award, for which the target range is listed above (see Incentive Range). The amount of such award, if any, will be based on various factors, including without limitation, both individual and company performance.
Step 1
When you find a position you're interested in, click the 'Apply' button. Please complete the application and attach your resume.
Step 2
You will receive an email notification to confirm that we've received your application.
Step 3
If you are called in for an interview, a representative from OCC will contact you to set up a date, time, and location.
For more information about OCC, please visit www.theocc.com.
OCC is an Equal Opportunity Employer