Leidos

Principal Responsible AI Engineer

6314 | Remote/Teleworker US | Full time
Job Description

The Leidos Chief Data & Analytics Office (CDAO) is a high-growth organization at the center of the company's technology strategy. Our Operational AI (Ops.AI) division is seeking a motivated and talented Principal Responsible AI Engineer to join our team. This role is critical for ensuring the ethical, transparent, and secure development of AI that will power our nation's most mission-critical applications. Building trust into our AI systems is vital to accelerating innovation, technology adoption, and improving mission outcomes.

This is an exciting opportunity for a hands-on builder who excels at integrating Responsible AI principles into production-ready solutions. You will be a key technical contributor responsible for the entire lifecycle of our Responsible AI initiatives, from design and development to deployment and monitoring. You will work with a team of experts to build the scalable, high-performance, and trustworthy AI systems that are essential for our success.

Primary Responsibilities

  • System Development: Develop and implement enterprise-scale Responsible AI systems and governance frameworks, ensuring they meet the ethical, performance, and security requirements for mission-critical applications.
  • Framework Architecture: Contribute to the architecture and implementation of a centralized "Responsible AI Framework" to ensure compliance, manage model access, and provide a unified interface for governance and risk management.
  • Monitoring and Auditing: Implement and manage robust monitoring systems to track model performance, fairness, bias, and ethical compliance, and to optimize the cost-effectiveness of AI systems in production.
  • Strategic Collaboration: Work closely with principal engineers, data scientists, and systems architects to translate strategic designs into hardened, production-grade solutions that embed fairness, accountability, and transparency principles.
  • Governance and Guardrails: Establish and maintain robust AI governance frameworks and guardrails to ensure data integrity, filter inputs/outputs, prevent bias, mitigate deployment risks, and protect against adversarial attacks.
  • Best Practices: Apply and promote software engineering best practices, including robust version control, comprehensive automated testing, and mature CI/CD processes for AI systems.
  • Continuous Learning: Stay current with industry trends in Responsible AI, Explainability (XAI), operational AI, and MLOps to continuously evolve the team's capabilities and technical implementation.

Basic Qualifications

  • A Bachelor's degree in Computer Science, Engineering, or a related quantitative field with 12+ years of professional experience, OR a Master's degree with 10+ years of relevant experience.
  • Demonstrated programming proficiency in Python and hands-on experience with major ML libraries and frameworks (e.g., TensorFlow, PyTorch, Scikit-learn).
  • Experience with software engineering best practices and tools, including version control, automated testing, and CI/CD pipelines.
  • Solid understanding of the full machine learning lifecycle, from data preparation and model training to deployment and monitoring.
  • A strong understanding of Responsible AI principles, ethical AI practices, and techniques for bias detection and mitigation.
  • An understanding of cybersecurity principles as they apply to AI systems, including threat modeling and vulnerability assessment.
  • Must be a U.S. citizen with the ability to obtain and maintain a U.S. security clearance.

Preferred Qualifications

  • Experience working within the national security, defense, or intelligence communities.
  • Experience with MLOps platforms such as MLflow, Kubeflow, or Amazon SageMaker.
  • Experience with containerization and orchestration technologies (e.g., Docker, Kubernetes).
  • Familiarity with Infrastructure-as-Code (IaC) tools like Terraform or CloudFormation.
  • Hands-on experience with a major cloud platform (AWS, Azure, or GCP).
  • Knowledge of AI ethics, responsible AI practices, and federal compliance standards.
  • Deep familiarity with AI governance and security frameworks such as the NIST AI Risk Management Framework (AI RMF) and MITRE ATLAS.

At Leidos, we don’t want someone who "fits the mold"; we want someone who melts it down and builds something better. This is a role for the restless, the over-caffeinated, the ones who ask, “what’s next?” before the dust settles on “what’s now.”

If you’re already scheming step 20 while everyone else is still debating step 2… good. You’ll fit right in.

Original Posting:

January 7, 2026

For U.S. Positions: While subject to change based on business needs, Leidos reasonably anticipates that this job requisition will remain open for at least 3 days with an anticipated close date of no earlier than 3 days after the original posting date as listed above.

Pay Range:

$131,300.00 - $237,350.00

The Leidos pay range for this job level is a general guideline only and not a guarantee of compensation or salary. Additional factors considered in extending an offer include (but are not limited to) responsibilities of the job, education, experience, knowledge, skills, and abilities, as well as internal equity, alignment with market data, applicable bargaining agreement (if any), or other law.