CORDIS - EU research results

First automated risk management platform to enable safety, fairness, explainability, and continuous monitoring of generative AI systems

Periodic Reporting for period 1 - QuantPi (First automated risk management platform to enable safety, fairness, explainability, and continuous monitoring of generative AI systems)

Reporting period: 2024-02-01 to 2025-01-31

Artificial Intelligence (AI) is the most profound transformation humanity has ever seen. In the era of AI-first organizations, all company processes and applications will be driven by AI. With the rise of generative AI, intelligent systems determine not only the formats of human interaction but also its content. At the same time, these extremely powerful generative AI systems can be dangerous black boxes. The number of serious incidents is exploding, putting the existence of entire organizations at risk. Controlling AI black boxes with today’s testing tools is impossible: existing testing approaches are highly complex and computationally expensive, and cannot be applied to generative AI systems. Full risk assessments with quantifiable evidence of risk likelihood are expensive and time-consuming. Large multinationals are at high risk of non-compliance with existing and upcoming AI regulations, e.g. the EU AI Act or the US Algorithmic Accountability Act.

The QuantPi platform is a first-of-its-kind, model-agnostic, plug-and-play solution for risk management of generative AI. It automatically conducts technical risk assessments within the most important risk dimensions, enabling companies that develop or operate generative AI systems to identify, assess and mitigate major risks across dimensions such as data quality, model performance, fairness, robustness, and explainability. With a minimal number of queries to the AI system, our proprietary technology PiCrystal detects unintended system behaviour and provides quantitative assessments on concrete testing metrics; by avoiding redundant calculations, it delivers results more than five times faster than assessing each risk individually. This information is translated into technical documentation as well as certification and audit-readiness reports. Our solution assesses conformance with the content of more than 100 standards and regulations, including the EU AI Act, ISO/IEC 23894, and the NIST AI RMF, and assists companies in their pursuit of responsible and trustworthy AI systems.
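PiCrystal itself is proprietary and its interface is not public, but the idea of scoring many risk metrics while minimizing queries to the AI system can be sketched as follows. All names here (`assess`, the toy model, the toy metrics) are illustrative assumptions, not QuantPi's actual API: the model is queried once per prompt, and every metric is then scored on the shared outputs.

```python
# Hypothetical sketch: PiCrystal's API is proprietary; all names below are illustrative.
from typing import Callable, Dict, List

def assess(model: Callable[[str], str],
           prompts: List[str],
           metrics: Dict[str, Callable[[List[str], List[str]], float]]) -> Dict[str, float]:
    """Query the model once per prompt, then score every risk metric on the
    shared outputs -- instead of issuing one round of queries per metric."""
    outputs = [model(p) for p in prompts]  # single pass over the AI system
    return {name: fn(prompts, outputs) for name, fn in metrics.items()}

# Toy model and two toy metrics (stand-ins for real risk dimensions).
echo_model = lambda p: p.upper()
metrics = {
    "non_empty_rate": lambda ps, os: sum(bool(o) for o in os) / len(os),
    "length_ratio":   lambda ps, os: sum(len(o) / len(p) for p, o in zip(ps, os)) / len(ps),
}
report = assess(echo_model, ["hello", "world"], metrics)
```

Sharing one set of model outputs across all metrics is one plausible way to realize the "avoiding redundant calculations" claim above; the real platform presumably does considerably more (caching, query planning) than this minimal sketch.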

Our mission is to enable and accelerate the AI transformation of all types of organizations without sacrificing security and transparency. Our goal is to reduce time-to-value by allowing organizations to focus on the opportunities of AI while QuantPi ensures that the associated risk is minimized.

This mission translates into the following objectives:
- Accelerate time-to-value by streamlining the implementation of AI solutions, enabling faster deployment and helping organisations achieve positive ROI on AI.
- Prevent reputational damage by ensuring the reliability and ethical use of generative AI technologies, helping to safeguard the reputation of our clients and maintain customer trust.
- Mitigate legal risk by translating regulations into technical specifications to ensure AI compliance and by supporting businesses in navigating the legal complexities associated with AI implementation.
During the first year of the project, we performed the following activities related to product development:

- Developed and used PiCrystal as a tool to automatically generate test suites and assess generative AI systems.
- Conducted proof of value (PoV) projects.
- Collected performance data in operational environments.
- Collected user feedback on the UX/UI of AI Hub and PiCrystal from different stakeholders such as data scientists, legal experts, product owners, AI governance leads and C-level executives.
- Created a structured approach to test generative AI systems by using scenarios represented by data.
- Developed the concept of embedders, metrics, and perturbers to facilitate the analysis and testing of AI behaviour.
- Enabled parametrization and customization of test suites by allowing the specification of appropriate embedders, metrics, and perturbers based on the AI system's requirements.
- Implemented various features and functionalities in response to customer and partner requests, reflecting customer-centric development.
- Developed cloud integration.
- Started the ISO certification process.
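The embedder/metric/perturber decomposition described above can be illustrated with a minimal sketch. QuantPi's actual implementation is not public; the class, component names, and toy components below are all assumptions made for illustration: a test suite is parametrized with an embedder, a set of perturbers, and a set of metrics, and scores how much a model's answer shifts in embedding space when the prompt is perturbed.

```python
# Hypothetical sketch of the embedder/metric/perturber decomposition;
# all names are illustrative, not QuantPi's real interface.
from dataclasses import dataclass
from typing import Callable, Dict, List

Embedder = Callable[[str], List[float]]           # text -> vector
Perturber = Callable[[str], str]                  # prompt -> perturbed prompt
Metric = Callable[[List[float], List[float]], float]  # (vec, vec) -> score

@dataclass
class TestSuite:
    embedder: Embedder
    perturbers: Dict[str, Perturber]
    metrics: Dict[str, Metric]

    def run(self, model: Callable[[str], str], prompt: str) -> Dict[str, float]:
        """Compare the model's answer on the original prompt against its
        answers on perturbed variants, scored in embedding space."""
        base = self.embedder(model(prompt))
        results = {}
        for p_name, perturb in self.perturbers.items():
            shifted = self.embedder(model(perturb(prompt)))
            for m_name, metric in self.metrics.items():
                results[f"{p_name}/{m_name}"] = metric(base, shifted)
        return results

# Toy components: character-count embedding, upper-casing perturbation, L1 distance.
embed = lambda text: [float(text.count(c)) for c in "abcde"]
suite = TestSuite(
    embedder=embed,
    perturbers={"upper": str.upper},
    metrics={"l1_dist": lambda u, v: sum(abs(a - b) for a, b in zip(u, v))},
)
scores = suite.run(lambda p: p, "abc abc")  # identity model for demonstration
```

Swapping in a different embedder, perturber, or metric changes the test suite without touching the harness, which is one way to read the "parametrization and customization of test suites" item above.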
During the first year of the project, our PiCrystal-based solution has enabled clients and users to assess generative AI systems across the most important risk dimensions, such as data quality, model performance, fairness, explainability, and robustness. Furthermore, the user feedback we collected on the UX/UI of AI Hub and PiCrystal has allowed us to implement various features and functionalities in response to customer and partner requests, reflecting customer-centric development.

Our solution has an important impact for the region, since it contributes to strengthening the EU’s role as a global leader in shaping ethical AI practices (especially with the implementation of the EU AI Act) and governance frameworks. Our technology directly supports initiatives that promote digital innovation, data privacy, cybersecurity, and the development of digital skills, boosting the growth of the European AI market. Additionally, we contribute to building a fairer and more inclusive economy by addressing AI-related risks, ensuring transparency, and protecting individuals' rights, thus fostering trust and enhancing the well-being of people in the digital age.