Risk rewired: Are we teaching AI the wrong things about uncertainty?

We worry about corrupt data. But what if the bigger risk is corrupted thinking?

Agentic artificial intelligence (AI) doesn’t just learn from historical data. It learns from how we define risk, assign responsibility, and reward behavior. If our default mindset is tactical and reactive, we’re hard-coding that worldview into the systems we build.

This isn’t just a data issue – it’s a design flaw. Unless we rethink our approach to uncertainty, we won’t just automate today’s blind spots. We’ll institutionalize them. Instead of building AI that is adaptive, predictive, and resilient, we’ll train it to mirror our limitations.

To manage risk in an age of autonomous systems, we need to rewire how we think about risk itself.

The illusion of control

Picture this. A generative AI model, designed to guide retail investors, scans market data and personalizes stock recommendations. One day, without warning, it starts issuing sell alerts on high-performing stocks. Investors panic. Markets shift. And someone, somewhere, makes a fortune.

Behind the scenes, the algorithm has been subtly corrupted. This is algorithmic poisoning, where attackers don’t just tamper with the data, but rewrite the decision logic itself. And in an age of agentic AI, where systems don’t just assist but act, these risks become harder to trace and faster to scale.
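To make that concrete, here is a minimal, invented Python sketch – the function names, signal, and threshold are hypothetical – of what an attack on decision logic looks like. The data is untouched; a single inverted comparison turns strength into a sell signal.

```python
def recommend(signal_strength: float, threshold: float = 0.6) -> str:
    """Intended logic: strong positive signals trigger a buy alert."""
    return "buy" if signal_strength > threshold else "hold"

def recommend_tampered(signal_strength: float, threshold: float = 0.6) -> str:
    """Corrupted logic: the same strong signal now triggers a sell alert."""
    return "sell" if signal_strength > threshold else "hold"

# Identical market data, opposite advice - nothing in the dataset would flag it.
print(recommend(0.9), recommend_tampered(0.9))  # buy sell
```

A data audit would find nothing wrong here, because nothing in the data is wrong. That is what makes logic-level tampering so much harder to trace than a corrupted training set.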

There are no more black swans. Only white ones. Events we once considered rare must now be expected.

Cybercriminals move faster, hit harder, and scale wider than ever before. AI is both the amplifier and the accelerant. Traditional risk models – built on linear assumptions and backward-looking probability – are no longer fit for this terrain.

Autonomous agents, exposed assumptions

Gartner named agentic AI its top strategic technology trend for 2025. By 2029, autonomous agents are expected to resolve 80 percent of common customer service issues without human input, transforming cost structures and decision speed across industries.

But this efficiency exposes new risks. Agentic AI isn’t just a tool; it’s an actor. In the wrong hands, or with the wrong incentives, it can become a risk vector in its own right.

Data poisoning is already a problem. But algorithmic manipulation is the next frontier – where attackers exploit the model’s internal logic by rewriting ethical constraints, undermining safety protocols, or subtly steering decisions. These threats don’t require elite skills or state-level backing. Off-the-shelf tools and open-source code make them widely accessible.
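The simpler of the two attacks is easy to demonstrate. Below is a minimal, hypothetical sketch of label-flipping data poisoning using scikit-learn; the synthetic data stands in for market signals, and none of it reflects any real incident. Random flipping is the crudest form – real attacks are targeted and far stealthier – but even this typically degrades the poisoned model.

```python
# Minimal illustration of label-flipping data poisoning (hypothetical example).
# A model trained on tampered labels quietly absorbs the attacker's intent.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in training data: features could be market signals, labels buy(1)/sell(0).
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The attacker flips the labels of a small fraction of training rows.
poison_frac = 0.15
idx = rng.choice(len(y_train), size=int(poison_frac * len(y_train)), replace=False)
y_poisoned = y_train.copy()
y_poisoned[idx] = 1 - y_poisoned[idx]

clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)
poisoned = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

print(f"clean accuracy:    {clean.score(X_test, y_test):.3f}")
print(f"poisoned accuracy: {poisoned.score(X_test, y_test):.3f}")
```

Algorithmic manipulation resists this kind of few-line demonstration precisely because it targets the decision logic rather than the rows of a dataset – which is also why data-quality checks alone cannot catch it.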

The bigger issue is the growing disconnect between how fast AI is deployed and how slowly risk governance is evolving. Most organizations are still using yesterday’s frameworks to contain tomorrow’s threats.

We’re teaching systems to move faster without teaching them to reason better about risk.

The digital autobahn meets governance gridlock

Cybercriminals operate on a digital autobahn – fast, fluid, and increasingly automated. Deepfakes, phishing-as-a-service, synthetic identity fraud, and ransomware are deployed by networks that function like agile startups. They experiment, scale, and iterate faster than most organizations can update their firewalls.

Meanwhile, many businesses are stuck in governance gridlock – slowed by legacy systems, budget constraints, and fragmented oversight. AI and cybersecurity are treated as adjacent concerns, managed by disconnected teams.

This fragmentation creates systemic blind spots. Responsible AI cannot be retrofitted. It must be designed in – governing how data is sourced, how models are trained, how decisions are explained, and how systems are monitored. And it must be owned by more than the risk team.

At the EY organization, we’ve developed a Responsible AI framework that brings together governance, performance, transparency, security, fairness, and data quality. But the deeper shift is cultural: from compliance-driven thinking to cross-functional trust by design.

Asia isn’t just ground zero – it’s the proving ground

Asia-Pacific is often described as “ground zero” for cyberattacks. But it may also be the most important testing ground for resilient, adaptive AI governance.

The region’s mix of regulatory regimes, cultures, and levels of technological maturity creates a complex governance landscape. Some jurisdictions lead in AI adoption but are still developing legal guardrails. Others emphasize privacy and consumer protection, but are taking a slower path on AI innovation.

There is no single standard or checklist that works across all markets. Businesses must adopt layered, adaptable approaches: aligning to global standards while responding to local expectations.

Firms operating across the region have a built-in stress test for their agentic AI systems. In effect, they are training their AI to handle ambiguity. That’s not a liability, but a leadership advantage.

Confidence is the new velocity

In our latest EY/Institute of International Finance global risk survey, 75 percent of chief risk officers said cybersecurity was their top priority – outstripping all other concerns.

But boardroom action still lags boardroom awareness. Most directors understand that AI is reshaping the risk landscape. They’ve read the headlines about hallucinating chatbots, cloned voices, data breaches, and disappearing audit trails. But many still treat cybersecurity and AI governance as compliance issues, not strategic enablers.

That’s a mistake. AI now sits at the heart of customer experience, brand trust, supply chain operations, and investor confidence. The cost of failure isn’t just financial. It is potentially existential.

Boards shouldn’t be asking, “Are we compliant?” They should be asking, “Are we confident?”

In a world of white swans, hope is not a strategy. The organizations that succeed won’t be those that move fastest. They’ll be the ones that design with foresight – embedding guardrails, empowering cross-functional governance, and building cultures where everyone understands and bears responsibility for the risks.


#AgenticAI #AIrisks #Cybersecurity #DataPoisoning #ResponsibleAI
