Risk rewired: Are we teaching AI the wrong things about uncertainty?


We worry about corrupt data. But what if the bigger risk is corrupted thinking?

Agentic artificial intelligence (AI) doesn’t just learn from historical data. It learns from how we define risk, assign responsibility, and reward behavior. If our default mindset is tactical and reactive, we’re hard-coding that worldview into the systems we build.

This isn’t just a data issue – it’s a design flaw. Unless we rethink our approach to uncertainty, we won’t just automate today’s blind spots. We’ll institutionalize them. Instead of building AI that is adaptive, predictive, and resilient, we’ll train it to mirror our limitations.

To manage risk in an age of autonomous systems, we need to rewire how we think about risk itself.

The illusion of control

Picture this. A generative AI model, designed to guide retail investors, scans market data and personalizes stock recommendations. One day, without warning, it starts issuing sell alerts on high-performing stocks. Investors panic. Markets shift. And someone, somewhere, makes a fortune.

Behind the scenes, the algorithm has been subtly corrupted. This is algorithmic poisoning, where attackers don’t just tamper with the data, but rewrite the decision logic itself. And in an age of agentic AI, where systems don’t just assist but act, these risks become harder to trace and faster to scale.

There are no more black swans. Only white ones. Events we once considered rare must now be expected.

Cybercriminals move faster, hit harder, and scale wider than ever before. AI is both the amplifier and the accelerant. Traditional risk models – built on linear assumptions and backward-looking probability – are no longer fit for this terrain.

Autonomous agents, exposed assumptions

Gartner named agentic AI its top strategic technology for 2025. By 2029, autonomous agents are expected to resolve 80 percent of customer service issues without human input, transforming cost structures and decision speed across industries.

But this efficiency exposes new risks. Agentic AI isn’t just a tool; it’s an actor. In the wrong hands, or with the wrong incentives, it can become a risk vector in its own right.

Data poisoning is already a problem. But algorithmic manipulation is the next frontier – where attackers exploit the model’s internal logic by rewriting ethical constraints, undermining safety protocols, or subtly steering decisions. These threats don’t require elite skills or state-level backing. Off-the-shelf tools and open-source code make them widely accessible.
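To make the data-poisoning half of this concrete, here is a minimal sketch (hypothetical data, standard library only) of label-flipping: an attacker who can inject a handful of mislabeled training examples shifts a simple nearest-centroid classifier's decision boundary, turning a "hold" signal into a "sell" signal without touching the model's code at all.

```python
# Minimal label-flipping data-poisoning sketch. All data points and labels
# here are invented for illustration, not drawn from any real system.

def centroid(points):
    # Component-wise mean of a list of equal-length tuples
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def train(data):
    # data: list of (features, label) pairs; model: one centroid per class
    by_label = {}
    for x, y in data:
        by_label.setdefault(y, []).append(x)
    return {y: centroid(xs) for y, xs in by_label.items()}

def predict(model, x):
    # Classify by nearest class centroid (squared Euclidean distance)
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(model, key=lambda y: dist2(model[y], x))

# Clean training data: "sell" signals cluster low, "hold" signals cluster high.
clean = [((0.1, 0.2), "sell"), ((0.2, 0.1), "sell"),
         ((0.9, 0.8), "hold"), ((0.8, 0.9), "hold"), ((0.85, 0.95), "hold")]

# Attacker injects a few boundary-adjacent points with flipped labels.
poisoned = clean + [((0.6, 0.6), "sell"), ((0.55, 0.65), "sell")]

query = (0.6, 0.6)
print(predict(train(clean), query))     # -> hold
print(predict(train(poisoned), query))  # -> sell
```

Two injected points out of seven are enough to flip the prediction for this query. Algorithmic manipulation goes a step further than this sketch: rather than corrupting the training set, the attacker alters the decision logic (the equivalent of rewriting `predict` itself), which is why it leaves fewer traces in the data.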

The bigger issue is the growing disconnect between how fast AI is deployed and how slowly risk governance is evolving. Most organizations are still using yesterday’s frameworks to contain tomorrow’s threats.

We’re teaching systems to move faster without teaching them to reason better about risk.

The digital autobahn meets governance gridlock

Cybercriminals operate on a digital autobahn – fast, fluid, and increasingly automated. Deepfakes, phishing-as-a-service, synthetic identity fraud, and ransomware are deployed by networks that function like agile startups. They experiment, scale, and iterate faster than most organizations can update their firewalls.

Meanwhile, many businesses are stuck in governance gridlock – slowed by legacy systems, budget constraints, and fragmented governance. AI and cybersecurity are treated as adjacent concerns, managed by disconnected teams.

This fragmentation creates systemic blind spots. Responsible AI cannot be retrofitted. It must be designed in – governing how data is sourced, how models are trained, how decisions are explained, and how systems are monitored. And it must be owned by more than the risk team.

At the EY organization, we’ve developed a Responsible AI framework that brings together governance, performance, transparency, security, fairness, and data quality. But the deeper shift is cultural: from compliance-driven thinking to cross-functional trust by design.

Asia isn’t just ground zero – it’s the proving ground

The Asia-Pacific is often described as “ground zero” for cyber attacks. But it may also be the most important testing ground for resilient, adaptive AI governance.

The region’s diversity in regulatory, cultural, and technological maturity creates a complex governance landscape. Some jurisdictions lead in AI adoption but are still developing legal guardrails. Others emphasize privacy and consumer protection, but are taking a slower path on AI innovation.

There is no single standard or checklist that works across all markets. Businesses must adopt layered, adaptable approaches: aligning to global standards while responding to local expectations.

Firms operating across the region have a built-in stress test for their agentic AI systems. In effect, they are training their AI to handle ambiguity. That’s not a liability, but a leadership advantage.

Confidence is the new velocity

In our latest EY/Institute of International Finance global risk survey, 75 percent of chief risk officers said cybersecurity was their top priority – outstripping all other concerns.

But boardroom action still lags boardroom awareness. Most directors understand that AI is reshaping the risk landscape. They’ve read the headlines about hallucinating chatbots, cloned voices, data breaches, and disappearing audit trails. But many still treat cybersecurity and AI governance as compliance issues, not strategic enablers.

That’s a mistake. AI now sits at the heart of customer experience, brand trust, supply chain operations, and investor confidence. The cost of failure isn’t just financial. It is potentially existential.

Boards shouldn’t be asking, “Are we compliant?” They should be asking, “Are we confident?”

In a world of white swans, hope is not a strategy. The organizations that succeed won’t be those who move the fastest. They’ll be the ones who design with foresight – embedding guardrails, empowering cross-functional governance, and building cultures where everyone understands and bears responsibility for the risks.


#AgenticAI #AIrisks #Cybersecurity #DataPoisoning #ResponsibleAI
