Q2eBanking

Data Engineer

Cary, North Carolina | Full Time

As passionate about our people as we are about our mission.

Why Join Q2?

Q2 is a leading provider of digital banking and lending solutions to banks, credit unions, alternative finance companies, and fintechs in the U.S. and internationally. Our mission is simple: build strong and diverse communities through innovative financial technology—and we do that by empowering our people to help create success for our customers.

What Makes Q2 Special?

Being as passionate about our people as we are about our mission. We celebrate our employees in many ways, including our “Circle of Awesomeness” award ceremony and a dedicated day of employee celebration. We invest in the growth and development of our team members through ongoing learning opportunities, mentorship programs, internal mobility, and meaningful leadership relationships. We also know that nothing builds trust and collaboration like having fun. Each year we host Dodgeball for Charity at Q2 Stadium in Austin, inviting other local companies to play alongside the community organizations we support, raising money and awareness together.

Our Team 

The Risk & Fraud team at Q2 helps our customers take a proactive stance against fraud while managing the risks inherent to their business. We build and enhance products that evolve with the ever-changing fraud landscape, delivering tangible value to our customers. Our solutions allow financial institutions to focus more of their time and energy on their mission: serving their customers and communities.


The Role 

In this role, you will take ownership of building and operating our data architecture to support new and evolving fraud solutions. You’ll play a key role in ensuring data is reliable, scalable, and accessible to power the models, agents, and UIs that directly impact our customers’ ability to detect and prevent fraud.

This is an opportunity to work on production systems with real-world impact while continuing to grow your skills in data engineering, cloud platforms, and distributed systems.


Your Key Responsibilities

  • Design, build, and maintain scalable data pipelines and workflows in a cloud environment
  • Deliver clean, well-structured datasets to support fraud analytics, machine learning models, and agentic solutions
  • Contribute to improving our data architecture, including ingestion, storage, and access patterns
  • Own data operations by monitoring data workflows, triaging failures, and resolving data issues
  • Enhance observability and performance by implementing monitoring and optimizing pipelines for reliability, scalability, and cost efficiency
  • Partner with product managers, data scientists, and engineers to translate fraud and risk requirements into data solutions
  • Write maintainable code; participate in code reviews; and help improve testing, deployment, and documentation standards

Requirements 

Must Haves 

  • Typically requires a Bachelor’s degree in (relevant degree) and a minimum of 2 years of related experience; or an advanced degree without experience; or equivalent work experience.  
  • Experience building and maintaining data pipelines and workflows in production environments
  • Proficiency in SQL and working with relational and/or analytical data stores
  • Experience with Python
  • Familiarity with data modeling, transformation, and orchestration concepts
  • Experience with data warehouses and distributed data processing systems
  • Experience with version control (e.g., Git) and CI/CD practices
  • Ability to troubleshoot data issues, debug pipelines, and work through ambiguous problems

Nice to Have

  • Experience with tools such as Apache Airflow, dbt, Kafka, Airbyte, or Fivetran
  • Experience with Snowflake or similar cloud data warehouses
  • Experience with SQL Server, PostgreSQL, or NoSQL systems like DynamoDB
  • Familiarity with infrastructure as code tools (e.g., Terraform)
  • Experience with Docker and/or Kubernetes 
  • Exposure to platforms such as Databricks, AWS Glue, Amazon SageMaker, or Snowpark

A Typical Day

  • Production Support: Start the day by reviewing production data pipeline executions, investigating and resolving failures
  • Development: Build and orchestrate data pipelines, defining data flow, transformations, and dataset relationships
  • Observability: Monitor and optimize data pipelines for performance and efficiency
  • Collaboration: Work closely with teams and stakeholders to understand data requirements and ensure platform solutions meet business needs

Technologies We Use

  • Data Movement and Pipelines: Apache Airflow, dbt, Kafka, Airbyte
  • Data Warehouse: Snowflake
  • Databases: SQL Server, PostgreSQL, DynamoDB
  • Languages: Python, C#, Golang, Bash, SQL
  • CI/CD and Infrastructure as Code: GitLab, Azure DevOps, Terraform, Argo CD
  • Containerization: Kubernetes, Docker
  • Data Tools: PySpark, Snowpark, AWS Glue, Pandas, Databricks, SageMaker

This position requires fluent written and oral communication in English.

Applicants must be authorized to work for any employer in the U.S. We are unable to sponsor or take over sponsorship of an employment Visa at this time.

Benefits & Perks

  • Hybrid Work Opportunities

  • Flexible Time Off 

  • Career Development & Mentoring Programs 

  • Health & Wellness Benefits, including competitive health insurance offerings and generous paid parental leave for eligible new parents 

  • Community Volunteering & Company Philanthropy Programs 

  • Employee Peer Recognition Programs – “You Earned It”

Click here to find out more about the benefits we offer.

Our Culture & Commitment:

We’re proud to foster a supportive, inclusive environment where career growth, collaboration, and wellness are prioritized. Our benefits go beyond healthcare, offering resources for physical, mental, and professional well-being. Q2 employees are encouraged to give back through volunteer work and nonprofit support via our Spark Program. We believe in making an impact, both in the industry and in the community.

We are an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, genetic information, or veteran status.


Applicants in California or Washington State may not be exempt from federal and state overtime requirements.