Case studies
These case studies show how our independent AI assurance services strengthen reliability, fairness, and privacy, helping organizations improve performance while reducing systemic and reputational risk.
Deep learning for social services
The evaluation of Allegheny County’s homelessness risk tool examined performance and potential disparities across protected groups. The insights led to stronger monitoring, clearer procedures, and better guidance for teams using the system in practice.
Detecting bias in AI hiring systems
The FINDHR project introduced practical tools, guidelines, and auditing frameworks designed to reduce discrimination in AI-assisted hiring. These resources support more transparent, inclusive, and accountable recruitment practices across Europe.
Auditing an AI-based cybersecurity application
This assessment of a high-risk cybersecurity model revealed critical gaps in data quality, governance, and transparency. The work strengthened compliance with the EU AI Act and GDPR while significantly improving the system’s accuracy, fairness, and reliability.
Audit of an AI-based wellbeing support application
Safety, fairness, and privacy were tested across sensitive, real-world scenarios involving an AI wellbeing companion. The review sharpened crisis-response protocols, reduced subtle bias, and reinforced safeguards throughout the user experience.
Responsible AI for wellbeing apps
An ethics and algorithmic review of two digital wellbeing apps identified key opportunities to strengthen data transparency, accessibility, and inclusivity. The assessment supported the integration of responsible AI practices throughout product development.