Using Gen AI in healthcare puts lives on the line – here’s how to mitigate the risk

As Gen AI tools proliferate in the workplace and become as much of a staple as the document software we use every day, many end users have come to trust them to deliver faultless answers. Unfortunately, this assumption does not hold: recent studies have shown that Gen AI models often provide answers that are less accurate than their stated confidence suggests.

This mismatch between actual accuracy and projected confidence is especially concerning when we examine the adoption of AI in healthcare, where the stakes are immeasurably higher. According to McKinsey & Company, 72 percent of healthcare organizations are either using or planning to use AI. In Singapore, the Ministry of Health intends to allocate S$200 million for AI advancements over the next five years, deploying GenAI to automate tasks such as medical documentation and summarization.

It behooves healthcare organizations to address inaccuracies in AI models, because what is at risk is not just efficiency or cost savings for a business, but potentially irreparable harm to human lives, well-being, and health.

AI models are only as good as the data fed to them. AI models used in a healthcare context therefore need to be rigorously tested, supplied with reliable data, and supported by robust data management systems to mitigate errors.

Hallucinations hold immense risk

The challenge with using GenAI built on open-source public LLMs lies in its propensity for hallucinations: errors where the AI generates inaccurate or entirely fabricated information. Despite significant efforts by hyperscalers to address this issue with new products and techniques, none has been able to eliminate hallucinations or guarantee consistent thresholds of accuracy.

While GenAI applications hold great promise, they also carry significant risk. An inaccurate GenAI-derived insight could lead to dire consequences for patient care. GenAI providers shield themselves from liability by including disclaimers in their licensing agreements, cautioning against the use of these tools in high-stakes scenarios. Yet healthcare practitioners are already leveraging GenAI for tasks such as interpreting X-rays, MRIs, and CT scans, and generating statistical summaries of patient data.

This increases the risk that erroneous conclusions produced by GenAI models are mistakenly accepted as accurate. It is therefore imperative for healthcare organizations to fully recognize these risks and implement rigorous verification processes for any AI-generated diagnosis or recommendation. Responsibility goes beyond ensuring accuracy; it also encompasses protecting sensitive patient information. Without robust safeguards in place, the use of GenAI may expose data vulnerabilities, including unauthorized access or breaches of confidentiality.

Robust data management mitigates this risk

To address these challenges, developers are exploring ways to improve confidence calibration in GenAI systems. Confidence calibration ensures that the AI system’s reported confidence aligns closely with the actual accuracy of its outputs, reducing the risk of over-reliance on incorrect responses. Achieving this requires three key actions (a brief calibration sketch follows the list):

  1. Integrating explicit feedback mechanisms where human users or automated systems provide real-time corrections to the AI.
  2. Refining training protocols using diverse, representative datasets to minimize biases and improve model robustness.
  3. Conducting rigorous post-deployment accuracy tests to ensure consistent performance across various conditions.
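
How might an organization check whether a model’s stated confidence actually matches its accuracy? One widely used measure is the Expected Calibration Error (ECE), which bins predictions by reported confidence and compares each bin’s average confidence with its observed accuracy. The sketch below is a minimal Python illustration under assumed inputs; the function name, sample numbers, and bin count are chosen for clarity and do not come from any particular GenAI product.

# A minimal sketch of one way to measure confidence calibration:
# Expected Calibration Error (ECE). All names and numbers here are
# illustrative assumptions, not output from any specific vendor tool.
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Average gap between stated confidence and observed accuracy,
    weighted by how many predictions fall in each confidence bin."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if not in_bin.any():
            continue
        bin_conf = confidences[in_bin].mean()   # what the model claimed
        bin_acc = correct[in_bin].mean()        # what actually happened
        ece += in_bin.mean() * abs(bin_conf - bin_acc)
    return ece

# Hypothetical example: a model that reports ~90% confidence
# but is right only ~80% of the time is poorly calibrated.
conf = [0.92, 0.88, 0.91, 0.95, 0.87, 0.90, 0.93, 0.89, 0.94, 0.86]
hits = [1,    1,    0,    1,    0,    1,    1,    0,    1,    1]
print(f"ECE: {expected_calibration_error(conf, hits):.3f}")

A well-calibrated model scores close to zero; a large gap, as in this made-up example, signals that reported confidence should not be taken at face value without the verification processes described above.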

These efforts depend heavily on access to high-quality data: data that is accurate, up to date, and diverse. Hybrid data platforms are increasingly becoming the backbone of this process, serving as a single source of truth and enabling organizations to consolidate, validate, and analyze data seamlessly across private data centers, public clouds, and hybrid environments. By providing real-time data access, these platforms become foundational enablers for transforming data, implementing feedback mechanisms, refining training, and conducting performance tests.

Features such as robust data cataloging and lineage tracking enhance transparency by revealing the origin and movement of data, thereby building trust in the insights generated by AI models.
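
As a simple illustration of what lineage tracking captures, the sketch below models each dataset as a record that remembers which sources it was derived from and by which transformation, so a training set can be traced back to its origin. The class, field names, and example chain are hypothetical; real hybrid data platforms expose far richer catalog and lineage APIs.

# A minimal, hypothetical sketch of lineage metadata: each dataset records
# its upstream sources and the transformation that produced it, so an
# AI model's training data can be traced back to its origin.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DatasetRecord:
    name: str
    transformation: str = "raw ingest"
    sources: list = field(default_factory=list)
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def lineage(self, depth=0):
        """Walk upstream and print the chain of origin for auditing."""
        print("  " * depth + f"{self.name} <- {self.transformation} ({self.created_at})")
        for src in self.sources:
            src.lineage(depth + 1)

# Hypothetical chain: raw EHR extract -> de-identified copy -> training set.
raw = DatasetRecord("ehr_extract_2025_01")
deid = DatasetRecord("ehr_deidentified", "remove direct identifiers", [raw])
train = DatasetRecord("radiology_training_set", "join with imaging labels", [deid])
train.lineage()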

Equally critical is ensuring security and compliance. Hybrid platforms must prioritize robust security practices, including encryption (both at rest and in transit), role-based access controls, and integration with enterprise identity management systems. These measures safeguard sensitive patient information, protect data integrity, and ensure that AI models are trained and operate on secure, trustworthy data. Compliance with regulatory standards further solidifies these safeguards, making hybrid platforms indispensable for mitigating risks in AI-driven healthcare.
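
To make the role-based access control point concrete, here is a deliberately simplified sketch of how a platform might gate access to patient data by role and log each decision for audit. The roles, permission names, and function are illustrative assumptions, not any specific product’s API.

# A simplified, hypothetical sketch of role-based access control (RBAC):
# every request for patient data is checked against the caller's role
# before anything is returned, and the decision is logged for audit.
ROLE_PERMISSIONS = {
    "clinician": {"read_patient_record", "read_imaging"},
    "data_scientist": {"read_deidentified_data"},
    "billing": {"read_billing_summary"},
}

def authorize(role: str, permission: str) -> bool:
    allowed = permission in ROLE_PERMISSIONS.get(role, set())
    # In a real platform this audit trail would go to tamper-evident storage.
    print(f"AUDIT role={role} permission={permission} allowed={allowed}")
    return allowed

if authorize("data_scientist", "read_patient_record"):
    print("returning identified record")   # never reached for this role
else:
    print("access denied: de-identified data only")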

The future of healthcare depends on our ability to harness the power of data to improve patient outcomes. Scalable, secure, and flexible data ecosystems are the key to unlocking new possibilities in care delivery, research, and innovation, paving the way for transformative advancements in the healthcare landscape.

 

#GenAI #AIinHealthcare #HealthTech #DataSecurity #AIAccuracy
