Nigam Shah

Nigam Shah and partners roll out beta version of Stanford Medicine SHC and SoM Secure GPT

Dear Stanford School of Medicine Community,

Stanford Medicine is committed to being digitally driven, adopting cutting-edge technologies that advance our missions of research, education, and patient care. As part of this commitment, Stanford Medicine Technology & Digital Solutions (TDS) actively collaborates with teams across Stanford University and Silicon Valley to evaluate leading AI technologies and bring the power of AI to Stanford Medicine. Our driving focus is to offer tools that uphold rigorous principles of fairness, usefulness, and reliability.

Today we are pleased to announce the rollout of a beta version of Stanford Medicine SHC and SoM Secure GPT – a new resource for you to access large language models (LLMs) to support your work. Built and supported by Stanford Medicine TDS, SHC and SoM Secure GPT is powered by GPT-4 and provides a safe, secure environment that you can use to ask questions, summarize text and files, and help solve a range of complex problems.

You can try out this new offering here: SHC and SoM Secure GPT

Below are several important things to keep in mind as you explore this new resource:

  1. SHC and SoM Secure GPT is the only LLM cleared for sensitive data: Public versions of LLMs (including ChatGPT) are not approved to handle Protected Health Information (PHI) or Personally Identifiable Information (PII). As a reminder, to ensure the privacy and security of our patients’ information as well as our proprietary information, do not share any such data with ChatGPT or any other LLM or chat-based tool. SHC and SoM Secure GPT is the only LLM approved for use with this kind of data.
  2. Not for clinical decision-making: While LLMs are powerful, they are not 100% accurate, and factual errors – often called hallucinations or confabulations – do occur. This tool is not to be used for clinical decision-making.
  3. Files and text are not saved in SHC and SoM Secure GPT: Any text or files uploaded by individuals into SHC and SoM Secure GPT are not saved and do not update or modify the model in any manner. Content sent to SHC and SoM Secure GPT is queryable only by the individual who uploaded it.
  4. Information presented may not be current: Stanford Medicine TDS does not curate the source data used by SHC and SoM Secure GPT. Information and answers presented by SHC and SoM Secure GPT are not guaranteed to represent the most up-to-date information available on the web. Users remain responsible for ensuring the accuracy and relevance of results.

Have questions? Contact the TDS Help Desk at 650-723-3333.

To share general feedback on this service, please use this form.

To report bugs or request a feature enhancement, please use this form.

TDS is proud to offer this beta version of SHC and SoM Secure GPT, contributing to Stanford Medicine’s ongoing leadership in health care AI – from co-founding RAISE Health this past summer and shaping national guidelines for responsible AI in health care to guiding the adoption of AI in clinical practice.

Through this innovative new offering and other recent projects – such as our implementation of generative AI on the School of Medicine website and DEPLOYR, a platform through which academic research from SoM and SHC is used to train and deploy custom researcher-developed machine learning models – we want to underscore our commitment to supporting you with AI tools that help you be at your best.

We believe that harnessing the power of AI will enhance Stanford Medicine’s capabilities across all areas of our tripartite mission. As we embark on this new strategic journey, we look forward to collaborating to continue to bring the power of AI to Stanford Medicine.

In partnership,

Michael A. Pfeffer, MD, FACP
Chief Information Officer and Associate Dean
Stanford Health Care and School of Medicine
Clinical Professor of Medicine
Stanford University School of Medicine

Nigam H. Shah, MBBS, PhD
Chief Data Scientist, Stanford Health Care
Professor of Medicine and Associate Dean for Research
Stanford University School of Medicine

Christopher (Topher) Sharp, MD
Chief Medical Information Officer, Stanford Health Care
Clinical Professor of Medicine
Stanford University School of Medicine

Christian Lindmark
Chief Technology Officer
Stanford Health Care and School of Medicine

Gretchen Brown, MSN, RN, NEA-BC
Chief Nursing Information Officer
Stanford Health Care

New paper from Julia Salzman in BioRxiv: Ultra-efficient, unified discovery from microbial sequencing with SPLASH and precise statistical assembly

Bacteria comprise >12% of Earth’s biomass and profoundly impact human and planetary health [1]. Many key biological functions of microbes, and functions differentiating strains, are conferred or modified by genome plasticity including mobilization of genetic elements, phage integration, and CRISPR arrays….
Nima Aghaeepour

Nima Aghaeepour’s study of preterm birth risks featured in Stanford Medicine

Wearable device data reveals that reduced sleep and activity in pregnancy is linked to premature birth risk

Data from wearables show that deviations from normal sleep and activity in pregnancy are connected to a risk for premature delivery, a Stanford Medicine-led study found.


A lack of sleep and reduced physical activity during pregnancy are linked to risk of preterm birth, according to new research led by the Stanford School of Medicine.

In the study, which was published online Sept. 28 in npj Digital Medicine, the researchers collected data from devices worn by more than 1,000 women throughout pregnancy. With a machine learning algorithm, the scientists sifted through participants’ activity information to detect fine-grained changes in sleep and physical activity patterns.

Read the story here: https://med.stanford.edu/news/all-news/2023/09/smartwatch-sleep-premature-birth.html

 


James Zou featured in WSJ on ChatGPT and medicine

Should You Use ChatGPT for Medical Advice?

Yes, patients and doctors can use chatbots for certain types of questions, experts say. But beware of the shortcomings.

If you have chest pain, should you ask a chatbot, like ChatGPT, for medical advice? Should your doctor turn to AI for help with a diagnosis?

These are the types of questions that chatbots are raising for the healthcare industry and the people it serves.

Read it here: https://www.wsj.com/tech/ai/chatgpt-medical-advice-767b4aa1?mod=hp_jr_pos2