AI Governance in Healthcare
Presented by
Dr. Harvey Castro, MD, MBA
Chief AI Officer, Phantom Space Corporation | ER Physician | 2025 Global AI Ambassador
AI Assists. Licensed Humans Decide.
A physician-led educational platform exploring how AI systems like ChatGPT™, Claude, Gemini, and Med-PaLM should be used responsibly, governed carefully, and overseen by licensed healthcare professionals in clinical practice.

Physician-Created & Physician-Controlled Educational Platform

This website is created and controlled by Dr. Harvey Castro, MD, a board-certified Emergency Medicine physician, and is designed for licensed healthcare professionals, clinicians, policymakers, and the healthcare community. This site is not affiliated with, endorsed by, sponsored by, or operated by OpenAI, Anthropic, Google, or any AI company. AI system names (ChatGPT™, Claude, Gemini, GPT-4) are trademarks of their respective owners and are referenced solely for educational and descriptive purposes to discuss generative AI in healthcare. The word "chat" is used in its common, generic sense as a verb meaning "to converse" (e.g., "let's chat about your health"), not as a brand identifier. This site does not provide medical advice, diagnosis, or treatment.


Governance & Accountability

The Guardrails of Care

Governance is not a barrier to innovation; it is the foundation of safety. This section provides educational content on the critical importance of licensed human oversight in AI-assisted clinical decision-making and robust accountability frameworks for AI in healthcare. We explore escalation thresholds for AI-driven recommendations, the necessity of auditability in clinical AI systems, and the significant risks of bias and hallucination. Our focus is on establishing clear lines of responsibility when AI fails, ensuring that licensed healthcare professional oversight remains central to patient safety.

Human-in-the-Loop

AI should never be the final decision-maker. We advocate for "Licensed Human-Verified Output," in which every AI suggestion is reviewed by a licensed healthcare professional.

Accountability Frameworks

Who is responsible when AI fails? Liability cannot be outsourced to an algorithm.

Bias & Hallucination

LLMs can sound confident while being factually wrong. Education on identifying "fluent errors" is critical.

Auditability

Frameworks for tracking and reviewing AI system decisions to maintain transparency.
Clinical AI Literacy
Understanding the Tool, Not Just the Hype
We offer educational resources for clinicians, health system leaders, policymakers, and educators on the practical realities of using large language models in healthcare. This includes a clear-eyed assessment of what generative AI systems (ChatGPT™, Claude, Gemini, GPT-4, Med-PaLM 2) can and cannot do, an analysis of their common failure modes, and a discussion of the risks of over-trust and automation bias across all platforms.

We argue that in the context of patient care, strong governance matters more than model size or capability.
Research-Backed Insights

38% → 70%

AI adoption growth among physicians (2023-2024)

1.47%

Hallucination rate in clinical LLMs

15%+

Error rates in some medical AI models on analytical tasks

For Clinicians

Capability vs. reliability, failure modes, automation bias, and prompt engineering for safety

For Health System Leaders

Organizational readiness, deployment strategies, risk management, and system-level governance

For Policymakers & Educators

Regulatory frameworks, accountability standards, patient protection, and educational curricula

A Human Accountability Layer

Healthcare does not need more hype. It needs guardrails. As large language models (LLMs) enter clinical environments, the risk of "blind automation" grows. This platform exists to keep human judgment central to patient care, regardless of which AI system is in use, and provides the educational resources to help you question, validate, and govern these tools responsibly. It serves as a guardrail, not a hype engine: a corrective force that ensures clinicians are seen and protected.

5+

TEDx Talks

100+

Speaking Engagements

20+

Years Experience

Frameworks

Evidence-based governance frameworks, risk mitigation strategies, and research-backed approaches to responsible AI deployment in healthcare.

Explore Resources

Book Dr. Castro to Speak

Leading AI Governance Frameworks

Framework

Year

Core Focus

Key Recommendations

World Health Organization (WHO)

2021, revised 2024
Ethics, human rights, and governance for AI in health, with recent focus on Large Multimodal Models (LMMs)
Protect autonomy, ensure transparency, promote equity, and establish government oversight with mandatory audits

American Medical Association (AMA)

2024 toolkit
"Augmented intelligence" to support clinicians and improve patient care
Implement risk-based oversight, establish clear liability for developers, and avoid mandatory AI use without clinical validation

NIST AI Risk Management Framework

2023, updated 2025
Voluntary, cross-sector framework for managing AI risks throughout the AI lifecycle
Adopt four-function core (Govern, Map, Measure, Manage) to create trustworthy and responsible AI systems

EU AI Act

2024, effective 2025
Risk classification (low to high) with strict requirements for high-risk medical AI
Prohibits high-risk AI without human oversight in medical contexts; mandates transparency and accountability

HHS AI Strategy

2024, effective 2025
Ethical directives for U.S. health departments positioning AI as core to healthcare transformation
Focus on patient safety, data security, and responsible deployment across federal health programs

AI Risks & Mitigation Strategies

Risk Category

Description

Mitigation Strategy

Hallucination & Inaccuracy

AI generates plausible but false or unsubstantiated information. Studies show hallucination rates of 1.47% in clinical note generation, with some medical models exceeding 15% on analytical tasks.
Human-in-the-Loop (HITL) validation, robust testing protocols, and use of chain-of-thought reasoning to enable self-verification

Automation Bias

Over-reliance on AI outputs, leading to errors in clinical judgment. Clinicians may accept flawed AI recommendations and cease searching for confirmatory evidence.
Clinician training on AI limitations, accountability frameworks, and system designs that encourage critical evaluation of AI suggestions

Data Bias & Health Equity

AI models perpetuate or amplify existing health disparities due to biased training data that underrepresents certain demographic groups.
Diverse and representative data sourcing, fairness audits, external validation across different populations, and continuous monitoring

Liability & Accountability

Lack of clarity about who is responsible for AI-related errors: developers, institutions, or clinicians.
Clear governance policies defining liability for developers, institutions, and clinicians, as advocated by organizations like the AMA

Resources

Downloads

Downloadable resources for healthcare professionals, policymakers, and organizations implementing AI governance frameworks.

AI Assists. Humans Decide.

Comprehensive presentation on AI risks in healthcare, including statistics on adoption growth (38% → 70%), hallucination rates (1.47%), and the critical importance of human oversight in clinical decision-making.

Download PDF

AI Governance & Human Decisions

Framework for implementing responsible AI governance in healthcare organizations, including practical guidelines, policy recommendations, and best practices for maintaining human accountability.

Download PDF

Books & Publications

Early Thought Leadership in Responsible AI

"ChatGPT and Healthcare: The Key to the New Future of Medicine"

Published 2023 - Foundational text on LLMs in medicine

A foundational text on the capabilities, risks, and ethical considerations of using LLMs in medicine. This book was among the first to call for strict human oversight and patient-centric design.

This publication is an educational work, not a technical manual or product.

"ChatGPT and Healthcare: Unlocking The Potential Of Patient Empowerment"

Published February 2023

Explores how AI can empower patients while maintaining the critical role of healthcare providers in decision-making and care delivery.

Educational resource for understanding patient-centered AI applications.

Coming Soon

"ChatGPT AI and Healthcare: The Key to the New Future of Medicine"

2nd Edition - Co-authored with David Rhew, MD

An expanded and updated edition exploring the evolving landscape of AI in healthcare. Co-authored with Dr. David Rhew, this comprehensive work examines the key frameworks, governance strategies, and human-centered approaches needed to unlock AI's potential while maintaining patient safety and clinical excellence.

Building on the foundation of the first edition with new insights, case studies, and practical guidance for the future of medicine.

More Books by Dr. Harvey Castro

Explore additional publications on AI, healthcare innovation, and leadership