Positively disrupting an industry that has not seen real innovation in over 50 years, Tekion has challenged the paradigm with the first and fastest cloud-native automotive platform: the revolutionary Automotive Retail Cloud (ARC) for retailers, Automotive Enterprise Cloud (AEC) for manufacturers and other large automotive enterprises, and Automotive Partner Cloud (APC) for technology and industry partners. Tekion connects the entire automotive retail ecosystem through one seamless platform. The platform uses cutting-edge technology, big data, machine learning, and AI to bring together OEMs, retailers/dealers, and consumers. With its highly configurable integrations and greater customer engagement capabilities, Tekion is enabling the best automotive retail experiences ever. Tekion employs close to 3,000 people across North America, Asia, and Europe.
Job Description
We are seeking a Staff SDET to own the quality architecture for our AI-native platform. This is not a test execution role — it is a quality engineering leadership role. You will design and own the test strategy across all layers of the hexagonal architecture (domain core, adapters, integration, end-to-end), establish quality gates for AI model outputs, build the golden dataset evaluation framework, and set the engineering bar for Senior SDETs and QA engineers on both Scrum teams. Nothing ships without your sign-off.
Key Responsibilities
• Own the complete quality architecture: define test strategies at each hexagonal layer — domain core unit tests, adapter integration tests (Testcontainers), service-to-service contract tests (Pact), and end-to-end scenario tests.
• Design and build the golden dataset evaluation framework: seed realistic multi-tenant dealer/customer data across MongoDB, PostgreSQL, Cosmos DB, Elasticsearch, and Kafka, and define assertion criteria for scoring accuracy, suppression correctness, and attribution logic.
• Establish AI quality gates: define evaluation criteria for ML model outputs (scoring, recommendations), LLM response quality, RAG retrieval accuracy (precision@k, recall), and agentic workflow correctness.
• Architect test infrastructure: Testcontainers configurations for MongoDB, PostgreSQL, Cosmos DB, Kafka, Elasticsearch, and Redis/Aerospike — enabling fully local integration test runs.
• Own CI/CD quality pipeline: define what automated tests run at commit, PR, staging, and pre-release gates — with blocking quality thresholds.
• Define and track quality metrics: test coverage per hexagonal layer, defect leakage, flakiness rate, MTTR, and automation ROI.
• Lead root cause analysis for production escapes — own post-mortems and preventive measure implementation.
• Mentor Senior SDETs and QA engineers across both teams; conduct design reviews for test frameworks and golden dataset designs.
• Influence backend and frontend engineers on testability: drive API design, logging, observability, and error handling that makes services intrinsically testable.
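To give candidates a concrete sense of the AI quality gates mentioned above, here is a minimal sketch of a RAG retrieval check (precision@k and recall@k) gating a pipeline. The function names, document IDs, and thresholds are illustrative examples, not Tekion's actual framework:

```python
def precision_at_k(retrieved, relevant, k):
    """Fraction of the top-k retrieved chunks that are actually relevant."""
    top_k = retrieved[:k]
    if not top_k:
        return 0.0
    hits = sum(1 for doc_id in top_k if doc_id in relevant)
    return hits / len(top_k)

def recall_at_k(retrieved, relevant, k):
    """Fraction of all relevant chunks that appear in the top-k results."""
    if not relevant:
        return 0.0
    hits = sum(1 for doc_id in retrieved[:k] if doc_id in relevant)
    return hits / len(relevant)

# Hypothetical quality gate: fail the build if retrieval quality degrades.
retrieved = ["d1", "d7", "d3", "d9"]   # ranked retriever output
relevant = {"d1", "d3", "d5"}          # golden labels for this query
assert precision_at_k(retrieved, relevant, k=3) >= 0.6
assert recall_at_k(retrieved, relevant, k=4) >= 0.6
```

In practice these per-query scores would be averaged over a golden dataset of queries, with the aggregate compared against a blocking threshold in CI.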
Skills & Experience
• 8+ years of QA Engineering, Test Automation, or SDET experience with strong hands-on engineering depth.
• Deep expertise designing test strategies for hexagonal architecture: proven ability to test domain cores independently from adapters, and adapters independently from each other.
• Strong programming skills in Java (primary) and Python — able to write production-quality test frameworks, not just test scripts.
• Expert-level Testcontainers experience across MongoDB, PostgreSQL, Kafka, Elasticsearch, and Redis/Aerospike.
• Experience with contract testing (Pact or similar) for service boundary validation and service virtualization.
• Deep Kafka testing expertise: consumer group behavior, event ordering guarantees, idempotency, and dead-letter queue validation.
• Experience designing golden dataset strategies: multi-tenant data seeding, realistic event sequences, and deterministic assertion design.
• Experience integrating quality gates into CI/CD pipelines with meaningful blocking thresholds.
• Strong understanding of AI/ML output validation: scoring correctness, RAG retrieval quality metrics, and agentic workflow eval design.
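The Kafka testing expertise listed above (idempotency, dead-letter handling) boils down to behavior that can be pinned with a deterministic test. The sketch below is a hypothetical in-memory stand-in for a real consumer, where the event list substitutes for a topic and the set substitutes for an idempotency store; names are illustrative:

```python
def consume(events, handler):
    """Idempotent consumer sketch: dedupe by event ID, dead-letter failures.

    `events` is a list of (event_id, payload) pairs; a real consumer would
    read from Kafka, but the dedupe/DLQ logic under test is the same.
    """
    processed_ids = set()          # stands in for an idempotency store
    results, dead_letters = [], []
    for event_id, payload in events:
        if event_id in processed_ids:
            continue               # redelivery: at-least-once, handled once
        try:
            results.append(handler(payload))
            processed_ids.add(event_id)
        except Exception as exc:
            dead_letters.append((event_id, payload, str(exc)))
    return results, dead_letters

def double(payload):
    """Toy handler that rejects non-numeric payloads."""
    if not isinstance(payload, int):
        raise ValueError("non-numeric payload")
    return payload * 2

# Deterministic scenario: e1 is redelivered, e3 is a poison message.
events = [("e1", 10), ("e2", 20), ("e1", 10), ("e3", "bad")]
results, dlq = consume(events, double)
assert results == [20, 40]             # e1 handled once, e2 handled
assert [e[0] for e in dlq] == ["e3"]   # poison message routed to DLQ
```

The same assertions, run against real consumers over Testcontainers-managed Kafka, are what the role would design and own.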
Preferred Skills
• Experience with performance, load, and chaos testing (Gatling, k6) against Kafka, Elasticsearch, and MongoDB at scale.
• Background in AI/ML quality engineering: eval frameworks (RAGAS or similar), model regression detection, and LLM safety testing.
• Prior experience owning quality for a platform serving multiple engineering teams (quality platform ownership).
• Familiarity with Cosmos DB testing patterns and eventual consistency validation strategies.
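Eventual-consistency validation (the Cosmos DB bullet above) typically reduces to poll-until-true assertions rather than immediate reads, so lagging replicas do not produce flaky failures. A minimal sketch with hypothetical names:

```python
import time

def eventually(predicate, timeout=5.0, interval=0.1):
    """Poll `predicate` until it returns True or `timeout` elapses.

    Asserting eventually(...) instead of asserting immediately tolerates
    replication lag while still bounding how long a test will wait.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return predicate()  # one final check at the deadline

# Simulated replication lag: the read model "catches up" after two polls.
calls = {"n": 0}
def replica_caught_up():
    calls["n"] += 1
    return calls["n"] >= 3

assert eventually(replica_caught_up, timeout=2.0, interval=0.01)
```

Against a real store, `predicate` would re-query the secondary and compare against the golden dataset's expected state.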
Perks & Benefits
• Competitive compensation and generous stock options.
• Medical insurance coverage.
• Work with some of the brightest minds from Silicon Valley's most dominant and successful companies.
Tekion is proud to be an Equal Employment Opportunity employer. We do not discriminate based upon race, religion, color, national origin, gender (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender identity, gender expression, age, status as a protected veteran, status as an individual with a disability, victim of violence or having a family member who is a victim of violence, the intersectionality of two or more protected categories, or other applicable legally protected characteristics.
For more information on our privacy practices, please refer to our Applicant Privacy Notice here.