Category Archives: Research News


DBDS’ Nigam Shah: Healthcare Leaders and UK Government Discuss Safe Deployment of AI

Healthcare leaders are currently engaged in vigorous discussions about the potential risks and responsibilities that come with the deployment of artificial intelligence (AI) in the sector. While some segments of the public fear dystopian scenarios akin to those seen in science fiction narratives, with robots taking control, healthcare professionals deem these anxieties exaggerated. Nonetheless, they stress the necessity for safe and responsible implementation of AI technologies.

Tina Hernandez-Boussard’s paper on bias risks cited by White House AI Fact Sheet

A recently published paper, “Guiding Principles to Address the Impact of Algorithm Bias on Racial and Ethnic Disparities in Health and Health Care,” co-authored by Tina Hernandez-Boussard, PhD, was cited in a White House Fact Sheet released Jan. 29, 2024. The fact sheet reports progress on President Biden’s Executive Order to “ensure that America leads the way in seizing the promise and managing the risks of artificial intelligence.”

Visit JAMA Netw Open for the original article.

https://www.whitehouse.gov/briefing-room/statements-releases/2024/01/29/fact-sheet-biden-harris-administration-announces-key-ai-actions-following-president-bidens-landmark-executive-order/


Nigam Shah and partners roll out beta version of Stanford Medicine SHC and SoM Secure GPT

Dear Stanford School of Medicine Community,

Stanford Medicine is committed to being digitally driven, adopting cutting-edge technologies that advance our missions of research, education, and patient care. As part of this commitment, Stanford Medicine Technology & Digital Solutions (TDS) actively collaborates with teams across Stanford University and Silicon Valley to evaluate leading AI technologies and bring the power of AI to Stanford Medicine. Our driving focus is to offer tools that uphold rigorous principles of fairness, usefulness, and reliability.

Today we are pleased to announce the rollout of a beta version of Stanford Medicine SHC and SoM Secure GPT – a new resource for you to access large language models (LLMs) to support your work. Built and supported by Stanford Medicine TDS, SHC and SoM Secure GPT is powered by GPT 4.0 and provides a safe, secure environment that you can use to ask questions, summarize text and files, and help solve a range of complex problems.

You can try out this new offering here: SHC and SoM Secure GPT

Below are several important things to keep in mind as you explore this new resource:

  1. SHC and SoM Secure GPT is the only LLM cleared for sensitive data: Public versions of LLMs (including ChatGPT) are not approved to handle Protected Health Information (PHI) or Personally Identifiable Information (PII). As a reminder, to protect the privacy and security of our patients’ information as well as our proprietary information, do not share any such data with ChatGPT or any other LLM or chat-based tool. SHC and SoM Secure GPT is the only LLM approved for use with this kind of data.
  2. Not for clinical decision-making: While LLMs are powerful, they are not 100% accurate, and factual errors – often called hallucinations or confabulations – do occur. This tool is not to be used for clinical decision-making.
  3. Files and text are not saved in SHC and SoM Secure GPT: Any text or files uploaded into SHC and SoM Secure GPT are not saved and do not update or modify the model in any manner. Content sent to SHC and SoM Secure GPT is queryable only by the individual who uploaded it.
  4. Information presented may not be current: Stanford Medicine TDS does not curate the source data used by SHC and SoM Secure GPT. Information and answers presented by SHC and SoM Secure GPT are not guaranteed to represent the most up-to-date information available on the web. Users remain responsible for ensuring the accuracy and relevance of results.

Have questions? Contact the TDS Help Desk at 650-723-3333.

To share general feedback on this service, please use this form.

To report bugs or request a feature enhancement, please use this form.

TDS is proud to offer this beta version of SHC and SoM Secure GPT, contributing to Stanford Medicine’s ongoing leadership in health care AI – from co-founding RAISE Health this past summer and shaping national guidelines for responsible AI in health care to guiding the adoption of AI in clinical practice.

Through this new offering and other recent projects – such as our implementation of generative AI on the School of Medicine website and DEPLOYR, which uses academic research from the SoM and SHC to train and deploy custom, researcher-developed machine learning models – we want to underscore our commitment to supporting you with AI tools that help you be at your best.

We believe that harnessing the power of AI will enhance Stanford Medicine’s capabilities across all areas of our tripartite mission. As we embark on this new strategic journey, we look forward to collaborating to continue to bring the power of AI to Stanford Medicine.

In partnership,

Michael A. Pfeffer, MD, FACP
Chief Information Officer and Associate Dean
Stanford Health Care and School of Medicine
Clinical Professor of Medicine
Stanford University School of Medicine

Nigam H. Shah, MBBS, PhD
Chief Data Scientist, Stanford Health Care
Professor of Medicine and Associate Dean for Research
Stanford University School of Medicine

Christopher (Topher) Sharp, MD
Chief Medical Information Officer, Stanford Health Care
Clinical Professor of Medicine
Stanford University School of Medicine

Christian Lindmark
Chief Technology Officer
Stanford Health Care and School of Medicine

Gretchen Brown, MSN, RN, NEA-BC
Chief Nursing Information Officer
Stanford Health Care