Deloitte report: Fewer than two-thirds of organisations in SEA believe their employees have the capabilities to use AI responsibly
A new report co-developed by Deloitte Access Economics and Deloitte AI Institute, AI at a crossroads: Building trust as the path to scale, reveals critical insights for C-suite and technology leaders on how they can develop effective artificial intelligence (AI) governance amidst accelerating adoption and growing risk management challenges.
The report is based on a survey of nearly 900 senior leaders across 13 Asia Pacific geographies, including six Southeast Asia (SEA) geographies – namely, Indonesia, Malaysia, Philippines, Singapore, Thailand, and Vietnam – whose responses were assessed against Deloitte’s AI Governance Maturity Index to identify what good AI governance looks like in practice, according to a statement on Wednesday.
With investments in AI projected to reach $110 billion by 2028 in the Asia Pacific region alone, the report emphasises the need for robust governance frameworks that enable businesses to adopt AI more effectively, build customer trust, and create paths to value and scale.
Commenting on the report, Dr. Elea WURTH, Lead Partner, Trustworthy AI Strategy, Risk & Transactions, Deloitte Asia Pacific and Australia, said, “Effective AI governance is not just a compliance issue; it is essential for unlocking the full potential of AI technologies. Our findings reveal that organisations with robust governance frameworks are not only better equipped to manage risks but also experience greater trust in their AI outputs, increased operational efficiency and ultimately greater value and scale.”
Navigating risks from AI adoption
Amongst the SEA geographies covered in the survey, security vulnerabilities, including cyber or hacking risks, were the most commonly cited concerns associated with using AI. Other top concerns relate to privacy, such as breaches of confidential or personal data and the invasion of privacy through pervasive surveillance.
This trend was especially pronounced in Singapore, where nearly all respondents flagged security vulnerabilities (96 percent) and privacy breaches (94 percent) as areas of concern. It also tracks closely with the finding that 35 percent of Singapore respondents reported an increase in incidents at their organisations in the last financial year, the highest amongst all SEA geographies.
Chris Lewin, AI & Data Capability Leader, Deloitte Asia Pacific and Southeast Asia, said, “The rapid pace and scale of AI adoption has meant that organisations are encountering AI-related risks in real-time as they experiment and roll out the technology. Given that Southeast Asia and the wider Asia Pacific region are hotbeds for cyberattacks, business leaders are understandably most concerned about security vulnerabilities, which can arise from the AI solutions themselves, the vast amount of data used by these solutions, or a combination of both. What we have found, however, is that organisations who have implemented incident responses and remediation plans are less likely to be concerned about such risks. This highlights the critical importance of effective governance to address concerns about AI use.”
Building Trustworthy AI
Developing trustworthy AI solutions is essential for senior leaders to navigate the risks of rapid AI adoption and fully embrace and integrate this transformative technology. Deloitte’s Trustworthy AI Framework outlines seven dimensions necessary to build trust in AI solutions: they must be transparent and explainable, fair and impartial, robust and reliable, respectful of privacy, safe and secure, responsible, and accountable. These criteria should be applied to AI solutions from ideation through design, development, procurement, and deployment.
The survey reveals that across Asia Pacific, organisations with mature AI governance frameworks report a 28 percent increase in staff using AI solutions, and have deployed AI in three additional areas of the business. These businesses achieve nearly 5 percentage points higher revenue growth compared to those with less established governance.
Key recommendations from the report include:
Prioritise AI governance to realise returns from AI: Continuous evaluation of AI governance is required across the organisation’s policies, principles, procedures, and controls. This includes monitoring changing regulations for specific locations and industries to remain at the forefront of AI governance standards.
Understand and leverage the broader AI supply chain: Organisations need to understand their own use of AI as well as interactions with the broader ‘AI supply chain’ − including developers, deployers, regulators, platform providers, end users, and customers − and perform regular audits throughout the AI solution lifecycle.
Build risk managers, not risk avoiders: Developing employees’ skills and capabilities helps organisations better identify, assess, and manage potential risks, so that issues are prevented or mitigated rather than AI use being avoided altogether.
Communicate and ensure AI transformation readiness across the business: Organisations should be transparent about their long-term AI strategy, the associated benefits and risks, and provide training for teams on using AI models while reskilling those whose roles may be affected by AI.
“The erosion of consumer confidence and damage to brand reputation can have lasting effects, making it essential for businesses to effectively manage AI and cybersecurity. Consumers prefer companies that align AI use with ethical standards such as transparency, and 45 percent of those surveyed believe strong governance enhances their organisation’s reputation.
“However, our research shows that organisations tend to overestimate their readiness in terms of AI governance. Urgent action is required by senior leaders to enhance their current AI governance practices to unlock the benefits of AI, as well as to prepare for emerging AI regulations that will impact future business success,” said Deloitte Asia Pacific’s Consulting Businesses Leader Rob HILLARD.
Human judgement as fundamental to Trustworthy AI
Given that the rapid pace of AI adoption is driven by employees, who often outpace their leaders – a previous Deloitte study on Generation AI found that more than 70 percent of young employees and students in Southeast Asia have already adopted generative AI – the report also highlights the critical role of human judgement and action (or reaction) in successful AI governance.
Employees – whether they are designing, deploying, or using the AI solutions – have valuable insights about the functionality and potential risks related to using AI solutions. However, fewer than two-thirds of SEA respondents – and in the case of Singapore, only half (50%) – believe that employees in their organisations have the required level of skills and capabilities to use AI solutions responsibly.
“Based on our findings, however, the people and skills pillar is an area where organisations consistently score the lowest on average. Training is, of course, a powerful tool to bridge this gap, and we have observed that more than three-quarters of SEA respondents are investing in employee upskilling. The only exception is Singapore, where the skills gap is the widest and nearly seven in 10 organisations have needed to close the gap through hiring, possibly due to the market’s demand for highly specialised and technical roles,” added Chris.
#AI #AIgovernance #TrustworthyAI #Cybersecurity #AIadoption