AI model audits need a ‘trust, but verify’ approach to enhance reliability


The following is a guest opinion post by Samuel Pearton, CMO at Polyhedra.

Reliability remains a mirage in the ever-expanding realm of AI models, holding back mainstream AI adoption in critical sectors like healthcare and finance. AI model audits are essential to restoring reliability in the AI industry, helping regulators, developers, and users strengthen accountability and compliance.

But AI model audits can themselves be unreliable, since auditors must independently review the pre-processing (data preparation), in-processing (model training), and post-processing (deployment and inference) stages. A ‘trust, but verify’ approach improves the reliability of audit processes and helps society rebuild trust in AI.

Traditional AI Model Audit Systems Are Unreliable

AI model audits help explain how an AI system works, assess its potential impact, and provide evidence-based reports for industry stakeholders.

For instance, companies use audit reports when procuring AI models, drawing on due diligence, risk assessment, and comparisons between competing vendor models. These reports also confirm that developers have taken the necessary precautions at every stage and that the model complies with existing regulatory frameworks.

But AI model audits are prone to reliability issues stemming from how the audit process itself is structured and from a shortage of qualified auditors.

According to the European Data Protection Board’s (EDPB) AI auditing checklist, an audit stemming from a “controller’s implementation of the accountability principle” and an “inspection/investigation carried out by a Supervisory Authority” can differ, creating confusion among enforcement agencies.

The EDPB’s checklist covers implementation mechanisms, data verification, and the impact on data subjects through algorithmic audits. But the report also acknowledges that audits assess systems as they exist and don’t question “whether a system should exist in the first place.”

Beyond these structural problems, audit teams need up-to-date domain knowledge of data science and machine learning. They also need complete training, testing, and production sampling data, which is typically spread across multiple systems, creating complex workflows and interdependencies.

Any knowledge gap or coordination error between team members can cascade and invalidate the entire audit. As AI models grow more complex, auditors will carry the added responsibility of independently verifying and validating reports before running aggregated conformity and remediation checks.

The AI industry’s progress is rapidly outpacing auditors’ capacity and capability to conduct forensic analysis and assess AI models. This leaves a void in audit methods, skill sets, and regulatory enforcement, deepening the trust crisis in AI model audits.

An auditor’s primary task is to enhance transparency by evaluating risks, governance, and underlying processes of AI models. When auditors lack the knowledge and tools to assess AI and its implementation within organizational environments, user trust is eroded.

A Deloitte report outlines three lines of AI defense. In the first line, model owners and management bear primary responsibility for managing risks. In the second line, policy and compliance teams provide the oversight needed for risk mitigation.

The third line of defense is the most important: auditors assess the first two lines to evaluate their operational effectiveness. They then submit a report to the board of directors, collating data on the AI model’s best practices and compliance.

To enhance the reliability of AI model audits, both the people and the underlying technology must adopt a ‘trust, but verify’ philosophy throughout the audit process.

A ‘Trust, But Verify’ Approach to AI Model Audits

‘Trust, but verify’ is a Russian proverb that U.S. President Ronald Reagan popularized during nuclear arms treaty negotiations between the United States and the Soviet Union. Reagan’s insistence on “extensive verification procedures that would enable both sides to monitor compliance” is a useful template for restoring reliability to AI model audits.

In a ‘trust, but verify’ system, AI model audits require continuous evaluation, with results verified before they are trusted. In effect, no one can audit an AI model once, prepare a report, and assume it remains correct indefinitely.

So, even with stringent verification procedures and validation mechanisms for all key components, an AI model audit is never final. In a research paper, Penn State engineer Phil Laplante and NIST Computer Security Division member Rick Kuhn call this the ‘trust but verify continuously’ AI architecture.

Constant evaluation and continuous AI assurance, built on this ‘trust but verify continuously’ architecture, are critical for AI model audits. AI models often require re-auditing and post-event reevaluation, for example, because a system’s mission or context can change over its lifespan.

A ‘trust, but verify’ method during audits helps detect model performance degradation through fault detection techniques. With continuous monitoring in place, audit teams can deploy testing and mitigation strategies, and auditors can act on degradation as soon as it appears.
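To make this concrete, here is a minimal Python sketch, not taken from the article or the cited research, of one way continuous monitoring could flag degradation: comparing a rolling window of live accuracy against the baseline certified at audit time. The class name, tolerance, and window size are illustrative assumptions.

```python
from collections import deque

class DegradationMonitor:
    """Flags performance degradation against an audited baseline.

    Illustrative sketch: the audited accuracy, tolerance, and window
    size are assumptions, not values from any audit standard.
    """

    def __init__(self, audited_accuracy: float, tolerance: float = 0.05,
                 window_size: int = 500):
        self.audited_accuracy = audited_accuracy  # accuracy certified at audit time
        self.tolerance = tolerance                # allowed drop before alerting
        self.window = deque(maxlen=window_size)   # rolling record of recent outcomes

    def record(self, prediction, ground_truth) -> None:
        """Store whether the latest labeled prediction was correct."""
        self.window.append(prediction == ground_truth)

    def degraded(self) -> bool:
        """True when rolling accuracy falls below the audited baseline."""
        if len(self.window) < self.window.maxlen:
            return False  # not enough evidence yet
        rolling_accuracy = sum(self.window) / len(self.window)
        return rolling_accuracy < self.audited_accuracy - self.tolerance

# Example: an audit certified 92% accuracy; alert once live accuracy
# drops below 87% across the last 500 labeled predictions.
monitor = DegradationMonitor(audited_accuracy=0.92)
```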

Per Laplante and Kuhn, “continuous monitoring of the AI system is an important part of the post-deployment assurance process model.” Such monitoring is possible through automatic AI audits where routine self-diagnostic tests are embedded into the AI system.
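One hedged sketch of what such embedded self-diagnostics might look like: the deployed system periodically re-scores a fixed set of ‘canary’ inputs whose expected outputs were signed off during the audit, and any mismatch signals drift from audited behavior. The names and the canary mechanism here are assumptions for illustration, not Laplante and Kuhn’s specification.

```python
from typing import Callable, Sequence

def run_self_diagnostics(model: Callable[[str], str],
                         canaries: Sequence[tuple[str, str]]) -> list[str]:
    """Re-score audit-approved canary inputs and report mismatches.

    Hypothetical helper: in practice, the canaries would come from the
    audit's signed-off test suite, and this routine would run on a
    schedule inside the deployed system.
    """
    failures = []
    for input_text, expected in canaries:
        actual = model(input_text)
        if actual != expected:
            failures.append(
                f"canary {input_text!r}: expected {expected!r}, got {actual!r}"
            )
    return failures

# Run on a schedule (e.g., hourly); an empty list means the embedded
# self-audit passed:
# failures = run_self_diagnostics(deployed_model, audit_canaries)
```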

Since internal diagnostics may themselves face trust issues, a ‘trust elevator’ mixing human and machine systems can monitor the AI. These systems strengthen AI audits by supporting post-mortem and black-box recording analysis for retrospective, context-based verification of results.
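A minimal sketch of the black-box recording idea, assuming a simple hash chain: every model decision is appended to a tamper-evident log that an auditor can later replay and verify in context. The field names and in-memory storage are illustrative; a real recorder would persist entries to write-once storage.

```python
import hashlib
import json
import time

class BlackBoxRecorder:
    """Append-only, hash-chained log of model decisions.

    Illustrative sketch: entry fields are assumptions; a production
    recorder would write to tamper-resistant, write-once storage.
    """

    GENESIS = "0" * 64  # starting value for the hash chain

    def __init__(self):
        self.entries = []
        self.last_hash = self.GENESIS

    def record(self, model_input, model_output, context: dict) -> None:
        """Append one decision, chained to the previous entry's hash."""
        entry = {
            "timestamp": time.time(),
            "input": model_input,
            "output": model_output,
            "context": context,        # mission/context metadata for later review
            "prev_hash": self.last_hash,
        }
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(payload).hexdigest()
        self.entries.append(entry)
        self.last_hash = entry["hash"]

    def verify_chain(self) -> bool:
        """Auditor-side check: recompute every hash to detect tampering."""
        prev = self.GENESIS
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            if body["prev_hash"] != prev:
                return False
            payload = json.dumps(body, sort_keys=True).encode()
            if hashlib.sha256(payload).hexdigest() != entry["hash"]:
                return False
            prev = entry["hash"]
        return True
```

Because each entry commits to the hash of the one before it, any after-the-fact edit breaks verify_chain(), which is what makes retrospective, context-based verification meaningful.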

An auditor’s primary role is to referee AI models and keep them from crossing trust thresholds. A ‘trust, but verify’ approach enables audit team members to verify trustworthiness explicitly at each step. This addresses the reliability gap in AI model audits, restoring confidence in AI systems through rigorous scrutiny and transparent decision-making.
