As generative AI systems move into regulated, high‑risk environments, early success in pilot projects often breaks down at scale. In Retrieval‑Augmented Generation (RAG) and agentic AI systems, instabilities in embeddings and retrieval can lead to inconsistent answers, hallucinations, and compounding errors, undermining trust and making regulatory acceptance impossible.
This session introduces Trust by Design (TbD), a practical framework that adapts proven Quality by Design (QbD) principles from pharmaceutical development (ICH Q8) to AI systems. Instead of treating trust as an afterthought, TbD embeds trustworthiness directly into system design.

Using real‑world examples, the talk shows how AI output quality can be defined, measured, and controlled through concepts familiar to regulators: Target Product Profiles, Critical Quality Attributes, Critical Process Parameters, and validated design spaces. Particular focus is placed on output stability as a critical quality attribute for trustworthy AI.

The session offers regulators, policymakers, and industry leaders a shared, evidence‑based approach to governing AI where trust is non‑negotiable.
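As a concrete illustration of output stability as a measurable quality attribute, one could run the same query repeatedly and score pairwise semantic agreement between the answers. The sketch below is a minimal, hypothetical example, not part of the TbD framework itself; `generate_answer` and `embed` are assumed stand-ins for a RAG pipeline and an embedding model.

```python
import itertools
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def output_stability(query, generate_answer, embed, n_runs=10, threshold=0.9):
    """Run the same query n_runs times and score pairwise agreement.

    generate_answer and embed are hypothetical stand-ins for a RAG
    pipeline and an embedding model. The stability score is the
    fraction of answer pairs whose embedding cosine similarity
    clears the threshold; 1.0 means perfectly stable output.
    """
    answers = [generate_answer(query) for _ in range(n_runs)]
    vectors = [embed(answer) for answer in answers]
    pairs = list(itertools.combinations(vectors, 2))
    agreeing = sum(1 for u, v in pairs if cosine(u, v) >= threshold)
    return agreeing / len(pairs)
```

A score like this could then be monitored against an acceptance range, much as a Critical Quality Attribute is held within a validated design space.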
Key takeaways for the audience:
- Why hallucinations and instability originate early in AI pipelines, especially in embeddings and retrieval
- How agentic AI systems amplify small errors into major risks at scale (see the sketch after this list)
- How to translate proven QbD and ICH Q8 concepts into a practical AI governance framework
- How Trust by Design enables explainability, reproducibility, and regulatory confidence in AI systems
- Why Trust by Design should be applied selectively—only where trust is non‑negotiable
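To make the amplification point concrete, consider a simple independence model: if each step in an agentic chain fails with probability p, the chance that at least one step fails grows quickly with chain length. The per-step error rate below is an illustrative assumption, not a result from the session.

```python
def chain_failure_rate(p_step: float, n_steps: int) -> float:
    """Probability that at least one of n independent steps fails:
    1 - (1 - p)^n."""
    return 1 - (1 - p_step) ** n_steps

# A 2% per-step error rate looks tolerable in a single-step pilot,
# but compounds across a longer agentic workflow:
for n in (1, 5, 10, 20):
    rate = chain_failure_rate(0.02, n)
    print(f"{n:>2} steps -> {rate:.1%} chance of at least one failure")
# 1 step -> 2.0%, 5 -> 9.6%, 10 -> 18.3%, 20 -> 33.2%
```

Real pipelines are not this clean (errors correlate, and some steps can catch upstream mistakes), but the model shows why an error rate that is acceptable in a pilot can become unacceptable at scale.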