Free Tool Launch Makes AI Safer, More Trustworthy: The PHAWM Project

As artificial intelligence becomes increasingly embedded in everyday life, concerns about its safety, reliability, and trustworthiness have grown in parallel. From healthcare diagnostics to content generation, AI systems are making decisions that affect millions of people, yet the mechanisms for auditing these systems remain largely inaccessible to ordinary users.

A new initiative from researchers at the University of Strathclyde and partner institutions across the UK aims to change that. The PHAWM project has launched a free, publicly available tool designed to enable ordinary users to conduct in-depth audits of the strengths and weaknesses of any AI-driven application.

The Need for AI Auditing

AI can be an invaluable resource for processing and understanding vast amounts of often complex information. Yet significant concerns remain about its safety and reliability. As Dr Yashar Moshfeghi, Principal Investigator at Strathclyde, explains: “This increasing prevalence means it is important that people using AI, particularly those who do not have technical knowledge around it, are able to do so with confidence and reassurance, for either professional or personal use.”

The challenge is real. AI systems can perpetuate bias, make errors, or behave in unpredictable ways. Without tools to audit these systems, users are forced to trust them blindly or avoid them altogether. Neither option is satisfactory.

The PHAWM tool addresses this gap by providing a framework for systematic auditing that is accessible to non-experts. It actively involves audiences who are usually excluded from the audit process, including those affected by an AI application's decisions, to produce better outcomes for end users.

The EU AI Act Context

The launch comes at a time when regulatory frameworks for AI are evolving rapidly. The EU AI Act, introduced in 2024, seeks to balance AI innovation with protections against unintended negative consequences. It establishes a risk-based approach to AI regulation, with stricter requirements for high-risk applications.

Tools like PHAWM complement such regulatory efforts by providing the means to implement them. Regulation without the tools to comply is empty; tools without regulation lack teeth. Together, they create a framework for accountable AI development.

How the Tool Works

The tool and its accompanying guiding framework have been developed through extensive workshops with the project’s partners and other stakeholders in the health and cultural heritage sectors. These are two of the four areas, along with media content and collaborative content generation, that the project was set up to investigate.

The choice of these sectors is strategic. Healthcare AI applications have direct implications for patient safety. Cultural heritage applications raise questions about representation and bias. Media content involves issues of misinformation and manipulation. Collaborative content generation touches on creativity and authorship.

By focusing on these areas, the PHAWM project is addressing some of the most consequential applications of AI. The insights gained will be applicable across domains.

The Research Partnership

PHAWM is supported by £3.5 million in funding from Responsible AI UK (RAI UK) and brings together more than 30 researchers from seven UK universities and 28 partner organisations. This scale of collaboration is necessary to address the multifaceted challenges of AI auditing.

The team is also developing comprehensive training and support for certification to help organisations adopt PHAWM’s auditing tools as effectively as possible. This goes beyond simply releasing a tool; it is about building an ecosystem of practice around responsible AI.

Free and Accessible

The tool and framework are free to download from PHAWM’s website. This is a significant choice. By making the tool freely available, the project ensures that access is not limited to well-resourced organisations. Small businesses, non-profits, and individual users can all benefit.

This democratisation of AI auditing is essential if AI is to be truly trustworthy. Trust cannot be imposed from above; it must be earned through transparency and accountability. By giving users the means to audit AI systems themselves, the PHAWM project empowers them to make informed decisions.

Implications for India

While the PHAWM project is based in the UK, its implications are global. India, with its rapidly growing AI ecosystem, stands to benefit from such tools. Indian companies developing AI applications can use PHAWM to audit their systems and build trust with users. Indian regulators can draw on the framework to inform their approach to AI governance.

The principles underlying PHAWM—transparency, accountability, inclusion—are universal. They apply as much to AI systems deployed in Mumbai as to those in Manchester.

The Broader Challenge

AI auditing is not a one-time activity. As AI systems learn and adapt, their behaviour can change. What was safe yesterday may not be safe tomorrow. Continuous auditing, ongoing monitoring, and regular updates are necessary.

The PHAWM project recognises this. By developing training and certification programmes, it is building capacity for sustained engagement with AI auditing, not just a one-off check.

Conclusion: A Step Toward Trustworthy AI

The launch of the PHAWM tool is a significant step toward making AI safer and more trustworthy. By empowering ordinary users to audit AI systems, it addresses a critical gap in the AI ecosystem. By making the tool free and accessible, it ensures that the benefits of AI auditing are widely shared.

As Dr Moshfeghi puts it, the tool will help users “take full advantage of AI’s potential and minimise their exposure to its risks.” That is exactly the balance we need to strike as AI becomes ever more embedded in our lives.

The PHAWM project demonstrates that responsible AI is not just about regulation from above but about empowerment from below. When users have the tools to hold AI systems accountable, those systems become more reliable, more trustworthy, and more beneficial for everyone.

Q&A: Unpacking the PHAWM AI Auditing Tool

Q1: What is the PHAWM project and what does it offer?

PHAWM is a research project supported by £3.5 million in funding from Responsible AI UK, bringing together over 30 researchers from seven UK universities and 28 partner organisations. It has launched a free, publicly available tool that enables ordinary users to conduct in-depth audits of the strengths and weaknesses of any AI-driven application, making AI safety accessible to non-experts.

Q2: Why is such a tool necessary?

AI systems can perpetuate bias, make errors, or behave unpredictably. Without auditing tools, users must trust AI systems blindly or avoid them. The tool addresses this gap by providing a systematic framework for auditing that is accessible to non-experts. It actively involves audiences usually excluded from audit processes, including those affected by AI decisions, to produce better outcomes for end users.

Q3: How does the tool relate to the EU AI Act?

The EU AI Act, introduced in 2024, seeks to balance AI innovation with protections against unintended negative consequences through a risk-based regulatory approach. Tools like PHAWM complement such regulations by providing practical means to implement them. Regulation without compliance tools is empty; tools without regulation lack teeth. Together, they create a framework for accountable AI development.

Q4: What sectors were involved in developing the tool?

The tool and framework were developed through extensive workshops with partners and stakeholders in health and cultural heritage sectors—two of four areas the project investigates (along with media content and collaborative content generation). These sectors were chosen because healthcare AI affects patient safety, cultural heritage raises questions about representation, media involves misinformation, and collaborative content touches on authorship.

Q5: What are the implications for India?

India’s rapidly growing AI ecosystem stands to benefit significantly. Indian companies can use PHAWM to audit their systems and build user trust. Indian regulators can draw on the framework to inform AI governance approaches. The principles of transparency, accountability, and inclusion are universal, applying as much to AI systems in Mumbai as in Manchester. The free, accessible nature of the tool ensures wide availability.
