Humphrey Visual Field Measurements in Glaucoma Patients: Error Model and Implications for Clinical Trial Design

TITLE: Humphrey Visual Field Measurements in Glaucoma Patients: Error Model and Implications for Clinical Trial Design

INVESTIGATORS:

Laurel Stell (1)

Jeffrey Goldberg (2)

Gala Beykin (2)

  1. Biomedical Data Science
  2. Ophthalmology

DATE: Wednesday, 2 November 2022

TIME: 1:30–3:00 PM

LOCATION: Conference Room X303, Medical School Office Building, 1265 Welch Road, Stanford, CA

WEBPAGE: https://dbds.stanford.edu/data-studio/

ABSTRACT

The Data Studio Workshop brings together a biomedical investigator with a group of experts for an in-depth session to solicit advice about statistical and study design issues that arise while planning or conducting a research project. This week, the investigator(s) will discuss the following project with the group.

INTRODUCTION

The Humphrey Visual Field (HVF) exam is widely used to diagnose glaucoma and to monitor its progression. It measures sensitivity to light of varying brightness at an array of points on the retina. Its test-retest variability increases considerably as sensitivity decreases. Furthermore, the mapping of retinal locations to the optic nerve, where glaucoma damage occurs, produces unusual spatial relationships in the sensitivity measurements. Clinical trials of potential glaucoma treatments seek to show improvement, or at least slower decline, in sensitivity at a “sufficient” number of locations compared to untreated controls, but glaucoma generally progresses slowly. All of these factors can result in prohibitively large sample sizes or long trial durations.

HYPOTHESIS & AIM

We have performed exploratory analysis of HVF exams, seeking to model the test-retest variability and spatial relationships.  We hope to leverage such insights to improve clinical trial inclusion criteria and statistical tests for treatment effect.
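
As a rough illustration of the kind of exploratory analysis we have in mind, the sketch below estimates how the test-retest standard deviation varies with mean sensitivity across repeated exams. The file name and column names (eye, location, exam_date, sensitivity_db) are hypothetical placeholders, not our actual data schema.

```python
import pandas as pd

# Hypothetical long-format table of HVF exams: one row per eye, test
# location, and exam date, with sensitivity in dB. File and column names
# are placeholders, not the actual dataset schema.
hvf = pd.read_csv("hvf_exams.csv")  # columns: eye, location, exam_date, sensitivity_db

# For each eye x location, summarize repeated exams: mean sensitivity and
# test-retest standard deviation across exams.
retest = (
    hvf.groupby(["eye", "location"])["sensitivity_db"]
       .agg(mean_db="mean", sd_db="std", n_exams="count")
       .reset_index()
)
retest = retest[retest["n_exams"] >= 2]

# Bin by mean sensitivity to see how variability grows as sensitivity falls.
retest["db_bin"] = pd.cut(retest["mean_db"], bins=range(0, 40, 5))
print(retest.groupby("db_bin", observed=True)["sd_db"].describe())
```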

DATASET

We have HVF data from 140 glaucomatous eyes of 92 patients acquired from multiple sources. Thirty of the eyes were in a test-retest study that performed weekly exams for three months (Artes et al., 2014). Another 51 eyes of 30 patients were in a Phase 1b trial of eye drops (Goldberg et al., 2022). The other exams were selected opportunistically. Each eye has at least two exams within 105 days.

STATISTICAL MODELS

We will describe test-retest variability and spatial relationships in HVF sensitivity data based on our dataset.  We will also discuss potential measures of treatment effect.  We are seeking advice on statistical models for testing treatment effect.
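
As one concrete, entirely hypothetical way to connect such an error model to trial design, the sketch below simulates a two-arm trial in which true sensitivity at each location declines linearly, treatment slows that decline, and test-retest noise grows as sensitivity falls; power for a simple endpoint (mean change from baseline, compared with a two-sample t-test) is then estimated by repeated simulation. All parameter values and the noise_sd function are illustrative placeholders, not fitted estimates or a proposed analysis.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def noise_sd(sensitivity_db):
    # Hypothetical heteroscedastic error model: test-retest SD grows as
    # sensitivity falls (illustrative numbers, not fitted estimates).
    return np.clip(6.0 - 0.15 * sensitivity_db, 1.0, 6.0)

def simulate_trial(n_per_arm, n_locations=52, baseline_db=25.0,
                   decline_db_per_year=-1.0, slowing=0.5,
                   years=2.0, n_exams=5):
    """Simulate one two-arm trial and test a simple endpoint (mean change
    from baseline across locations) with a two-sample t-test."""
    def arm(n, slope):
        times = np.linspace(0.0, years, n_exams)
        truth = baseline_db + slope * times                     # (n_exams,)
        truth = np.broadcast_to(truth, (n, n_locations, n_exams))
        obs = truth + rng.normal(0.0, noise_sd(truth))          # add test-retest noise
        return (obs[..., -1] - obs[..., 0]).mean(axis=1)        # per-eye mean change

    control = arm(n_per_arm, decline_db_per_year)
    treated = arm(n_per_arm, decline_db_per_year * (1.0 - slowing))
    return stats.ttest_ind(control, treated).pvalue < 0.05

# Crude power estimate: fraction of simulated trials with p < 0.05.
power = np.mean([simulate_trial(n_per_arm=100) for _ in range(200)])
print(f"Estimated power with 100 eyes per arm: {power:.2f}")
```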

STATISTICAL QUESTIONS

(1) What are optimal trial inclusion criteria, taking into account the test-retest variability?

(2) How can we leverage spatial relationships and multiple exams to check the inclusion criteria?

(3) Which statistical models are appropriate for treatment effect?

(4) Any suggestions for clinical trial design and sample size calculations?

Piloting a Standardized Psychosocial Assessment Tool (BATHE) in Genetic Counseling

TITLE: Piloting a Standardized Psychosocial Assessment Tool (BATHE) in Genetic Counseling

DATE: Wednesday, 9 November 2022

TIME: 1:30–3:00 PM

LOCATION: Conference Room X303, Medical School Office Building, 1265 Welch Road, Stanford, CA

WEBPAGE: http://med.stanford.edu/dbds/resources/data-studio.html

INVESTIGATORS:

MaryAnn Campion (1)

Chloe Reuter (2)

Tia Moscarello (2)

Mimi Nguyen (1)

Beth Pollard (1)

(1) Department of Genetics

(2) Stanford Center for Inherited Cardiovascular Disease (SCICD)

ABSTRACT

The Data Studio Workshop brings together a biomedical investigator with a group of experts for an in-depth session to solicit advice about statistical and study design issues that arise while planning or conducting a research project. This week, the investigator(s) will discuss the following project with the group.

INTRODUCTION

Significant rates of psychological distress have been found in patients receiving genetic counseling (GC) across a range of settings. These needs are often profound and unmet. Although genetic counselors (GCs) routinely assess patients’ emotional and psychological state, there is neither a universal definition of “psychosocial assessment” in genetic counseling nor a broadly applicable assessment tool. The tools that do exist tend to be questionnaire-based, specific to a single clinical indication, or too time-consuming for clinical use; they often require additional pre-appointment paperwork from patients and/or were developed without deliberate attention to diversity. We can look to other healthcare settings to borrow tools that standardize the psychosocial assessment. One such tool, developed in primary care, is the BATHE method: a structured technique consisting of four questions that explore the patient’s background (B), emotional affect (A), the most troubling aspect of the situation (T), and how they are handling it (H), all paired with empathic responses (E). Evidence from the primary care literature shows that BATHE reduces patient anxiety and improves patient empowerment, and that providers find it concise and easy to learn, without increasing consultation time.

HYPOTHESIS & AIMS

For Aim 2 (Is the BATHE method feasible and acceptable among genetic counselors?), we have developed a 2-hour workshop (for in-person or virtual delivery) to train genetic counselors on the BATHE method. We plan to lead 3–4 BATHE training workshops for ~200 practicing GCs (the final number depending on a power analysis) across diverse clinical specialties. We will use pre- and post-workshop questionnaires to gather the data described below and assess the acceptability, appropriateness, and feasibility of BATHE among GCs.

DATA

We will collect data at three time points: T1 (prior to the workshop), T2 (immediately following the workshop), and T3 (two months post workshop).

At T1, the data collected will include: clinical practice demographics and job satisfaction; self-reported practices of psychosocial assessment; and the modified GC Self-Efficacy Scale (Caldwell et al., 2018).

At T2, the data collected will include: the modified GC Self-Efficacy Scale (Caldwell et al., 2018); prospective acceptability, appropriateness, and feasibility (AAF); the Acceptability of Intervention Measure (AIM); the Intervention Appropriateness Measure (IAM); and the Feasibility of Intervention Measure (FIM; Weiner et al., 2017). The AAF, AIM, IAM, and FIM are measures of implementation outcomes that are often considered “leading indicators” of implementation success (Proctor et al., 2011).

At T3, the data collected will include: the modified GC Self-Efficacy Scale (Caldwell et al., 2018); changes in clinical practice; and retrospective acceptability, appropriateness, and feasibility (AAF).

Our planned data tables are available at https://docs.google.com/spreadsheets/d/1Tnv_1Mj7Bip7Dsc1wT6SKSoVDJRX9n8ET8tuw-ZiOfs/edit#gid=0.
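
For concreteness, here is a minimal sketch of one way these measures could be laid out for analysis: a single long-format table keyed by participant and time point, which can be pivoted to wide format for paired comparisons. All column names and values are hypothetical placeholders, not the actual survey fields.

```python
import pandas as pd

# Hypothetical long-format layout: one row per participant per time point,
# with columns for the instruments collected at that point. Column names
# and values are illustrative placeholders, not actual survey fields.
surveys = pd.DataFrame({
    "participant_id": ["P01", "P01", "P01", "P02", "P02"],
    "time_point":     ["T1",  "T2",  "T3",  "T1",  "T2"],  # P02 missing T3
    "self_efficacy":  [3.8,   4.2,   4.4,   3.1,   3.6],
    "aim":            [None,  4.5,   None,  None,  4.0],   # AIM collected at T2
    "retro_aaf":      [None,  None,  4.1,   None,  None],  # retrospective AAF at T3
})

# Pivot to one row per participant for paired comparisons across time points.
wide = surveys.pivot(index="participant_id", columns="time_point",
                     values="self_efficacy")
print(wide)
```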

STATISTICAL ANALYSIS PLAN

These are the research questions that we hope to answer coupled with our best guess at the most appropriate statistical method(s) for each.

Q1: Is the BATHE method acceptable, appropriate, and feasible (AAF) among genetic counselors?

For this question, we will compute descriptive statistics for AAF at T2 and T3 and perform a paired-samples t-test to determine whether there is a difference between participants’ prospective AAF (T2) and their retrospective AAF (T3).
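
A minimal sketch of that comparison, assuming each participant’s prospective and retrospective AAF scores have already been aligned into two arrays (all values and variable names below are hypothetical):

```python
import numpy as np
from scipy import stats

# Hypothetical AAF summary scores, one pair per participant who completed
# both the T2 (prospective) and T3 (retrospective) questionnaires.
aaf_t2 = np.array([4.2, 3.8, 4.5, 4.0, 3.6, 4.1])
aaf_t3 = np.array([4.0, 3.9, 4.3, 4.2, 3.4, 4.0])

# Descriptive statistics at each time point.
print("T2 mean (SD):", aaf_t2.mean(), aaf_t2.std(ddof=1))
print("T3 mean (SD):", aaf_t3.mean(), aaf_t3.std(ddof=1))

# Paired-samples t-test: does AAF differ between T2 and T3?
result = stats.ttest_rel(aaf_t2, aaf_t3)
print("t =", result.statistic, "p =", result.pvalue)
```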

Q2: What is the relationship between participants’ clinical practice demographics, job satisfaction, and psychosocial assessment practices, AND their AAF of BATHE?

Q2a: Is there a relationship between participants’ clinical practice demographic data (T1) and their prospective AAF (T2)?

Q2b: Is there a relationship between participants’ clinical practice demographic data (T1) and their retrospective AAF (T3)?

Q2c: Is there a relationship between participants’ psychosocial assessment practices (T1) and their prospective AAF (T2)?

Q2d: Is there a relationship between participants’ psychosocial assessment practices (T1) and their retrospective AAF (T3)?

For Q2a–d, we plan to compute correlations for measurement data and to perform Pearson’s chi-squared test for categorical data.
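
A sketch of both analyses on hypothetical data: a Pearson correlation for two continuous (measurement) variables, and a chi-squared test on a contingency table for two categorical variables. The variable names and values are placeholders only.

```python
import numpy as np
import pandas as pd
from scipy import stats

# Hypothetical continuous measures: years in practice (T1) vs prospective
# AAF score (T2), one value per participant.
years_in_practice = np.array([2, 5, 8, 12, 3, 20, 7, 15])
prospective_aaf = np.array([4.5, 4.2, 3.9, 3.6, 4.4, 3.5, 4.0, 3.8])
r, p = stats.pearsonr(years_in_practice, prospective_aaf)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")

# Hypothetical categorical measures: clinical specialty (T1) vs whether the
# participant rated BATHE as acceptable (T2), tested on a contingency table.
responses = pd.DataFrame({
    "specialty": ["cardiology", "cancer", "prenatal", "cancer", "cardiology",
                  "prenatal", "cancer", "cardiology"],
    "acceptable": ["yes", "yes", "no", "yes", "no", "yes", "yes", "yes"],
})
table = pd.crosstab(responses["specialty"], responses["acceptable"])
chi2, p, dof, expected = stats.chi2_contingency(table)
print(f"chi-squared = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```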

Q3: What is the relationship between self-efficacy and AAF at the timepoints?

Q3a: What is the relationship between GCs’ self-efficacy and AAF of BATHE (T1, T2, T3) at any time point?

For Q3a, we will perform paired-samples t-tests to determine whether participants’ self-efficacy scores change over time (comparing T1, T2, and T3).
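
A sketch of that comparison on hypothetical scores, restricted to participants with data at all three time points and using pairwise paired t-tests (T1 vs T2, T2 vs T3, T1 vs T3):

```python
import numpy as np
from scipy import stats

# Hypothetical modified GC Self-Efficacy Scale scores for participants who
# completed all three questionnaires (rows: participants; columns: T1-T3).
scores = np.array([
    [3.5, 4.1, 4.3],
    [3.9, 4.0, 4.2],
    [3.2, 3.8, 3.7],
    [4.0, 4.4, 4.5],
    [3.6, 3.9, 4.1],
])

pairs = [("T1", "T2", 0, 1), ("T2", "T3", 1, 2), ("T1", "T3", 0, 2)]
for a, b, i, j in pairs:
    res = stats.ttest_rel(scores[:, i], scores[:, j])
    print(f"{a} vs {b}: t = {res.statistic:.2f}, p = {res.pvalue:.3f}")
```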

Q3b: Is there a relationship between participants’ self-efficacy scores (T1 or T2) and their prospective AAF (T2)?

Q3c: Is there a relationship between participants’ self-efficacy scores (T1, T2, or T3) and their retrospective AAF (T3)?

For Q3b and Q3c, we will compute correlations for measurement data and perform Pearson’s chi-squared test for categorical data.

Q4: What is the relationship between changes of clinical practices (T3) and AAF of BATHE (T2 and T3)?

For Q4, we will compute correlations for measurement data and perform Pearson’s chi-squared test for categorical data.

STATISTICAL QUESTIONS

1. Do the analyses above look correct? If not, what do you recommend?

2. Are we planning to run too many analyses?

3. When looking for associations between several variables, should we:

a. run separate individual correlations (for measurement data) and chi-square analyses (for categorical data)?

b. run an ANOVA if comparing multiple groups (e.g. genetic counselors representing different specialties)?

c. run something else?

4. For measurement data, do we run the same analyses if the data is on a sliding scale versus Likert versus numerical?

5. What tests should we run for “check all that apply” and/or when “other” is an option in the survey (in T1)?

6. For categorical data, does it matter how many categories participants can choose from?

7. What is the best way to conduct a power analysis to determine the minimum sample size?

8. How do we handle incomplete data (e.g. if we only get T1 and T2 on some participants)?

9. Can you recommend a statistician who might want to assist us along the way?

Identification of Wnt Ligands Regulating Planar Cell Polarity in the Developing Mouse Cochlea

TITLE: Identification of Wnt Ligands Regulating Planar Cell Polarity in the Developing Mouse Cochlea

INVESTIGATORS:

Ippei Kishimoto (1)

Erin Su (1)

Alan G. Cheng (1)

  1. Department of Otolaryngology, Head & Neck Surgery

DATE: Wednesday, 16 November 2022

TIME: 1:30–3:00 PM

LOCATION: Conference Room X303, Medical School Office Building, 1265 Welch Road, Stanford, CA

WEBPAGE: https://dbds.stanford.edu/data-studio/

ABSTRACT

The Data Studio Workshop brings together a biomedical investigator with a group of experts for an in-depth session to solicit advice about statistical and study design issues that arise while planning or conducting a research project. This week, the investigator(s) will discuss the following project with the group.

INTRODUCTION

A vital part of the mammalian inner ear is the cochlea, a spiral-shaped sensory organ whose hair cell (HC) bundles detect sound vibrations in the cochlear fluid. Hearing function critically depends upon the unidirectional polarization of HCs and their stereocilia. The planar cell polarity (PCP) pathway organizes cells within the plane of a tissue; hair cells, which are precisely oriented with their hair bundles aligned radially, are a prime example of development that requires the PCP pathway. We previously showed that cochlear PCP during development, including cochlear extension, hair cell orientation, and core PCP protein polarization, is regulated by Wnt secretion from the embryonic cochlear epithelium. However, which of the 19 Wnt ligands contribute to cochlear PCP remains unknown.

HYPOTHESIS & AIM

The aim of this study is to elucidate the roles of individual Wnt family members in the establishment of cochlear PCP. We propose the following hypotheses:

(1) The establishment of cochlear PCP requires specific Wnt ligands

(2) Epithelium- and mesenchyme-derived Wnt ligands are both required for cochlear PCP establishment

DATASET

To test these hypotheses, we generated conditional knockout (cKO) mice for specific target Wnts (Wnt4, Wnt5a, Wnt7a, Wnt7b, etc.), in which expression of the target Wnt is deleted in the cochlear epithelium or the periotic mesenchyme. We then examine three readouts of cochlear PCP (cochlear extension, hair cell orientation, and core PCP protein polarization) to determine which Wnts contribute to the establishment of cochlear PCP.

We would like advice about the second readout, hair cell orientation. To assess whether these cKO mice have abnormal hair cell orientation, we measure the orientation of each hair cell (in degrees, from -180° to 180°) relative to a fixed reference line and compare the measurements between control (WT) and experimental (cKO) groups. In a typical dataset, the control and experimental groups each consist of 3 to 5 mice, and each mouse contributes about 20 to 40 values (e.g., 23.45°, -4.543°, 9.212°, 11.098°, -9.101°, …).
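
Because the measurements are angles, one option for summarizing each mouse before any group comparison is circular statistics; the sketch below uses scipy.stats.circmean and circstd on hypothetical angle lists in degrees. This is only an illustration of a per-mouse summary, not a recommendation of a specific test.

```python
import numpy as np
from scipy.stats import circmean, circstd

# Hypothetical hair cell orientations (degrees) for two mice, one array per
# mouse; real datasets have 3-5 mice per group and 20-40 angles per mouse.
mouse_angles = {
    "control_1": np.array([23.45, -4.543, 9.212, 11.098, -9.101, 3.2]),
    "cko_1": np.array([35.1, -20.4, 14.9, 27.7, -18.3, 6.5]),
}

for mouse, angles in mouse_angles.items():
    # high/low tell scipy that the angles wrap around at +/-180 degrees.
    mean_deg = circmean(angles, high=180, low=-180)
    sd_deg = circstd(angles, high=180, low=-180)
    print(f"{mouse}: circular mean = {mean_deg:.1f} deg, circular SD = {sd_deg:.1f} deg")
```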

STATISTICAL MODELS

We would like to test whether the average direction or the variation of angles differs significantly between control and experimental groups. To compare the variation of angles, we have so far simply compared the standard deviation (SD) of the angles between the two groups: for example, a control group of 5 mice yields 5 SDs (8.44, 6.86, 6.50, 5.35, and 8.34), an experimental group of 4 mice yields 4 SDs (6.79, 6.62, 7.55, and 6.94), and we compare the two sets with an unpaired t-test. However, this approach did not seem to detect small differences and could not reproduce previously reported results.
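
For reference, a minimal reproduction of that comparison (a sketch only), using the per-mouse SDs quoted above and scipy's unpaired t-test:

```python
from scipy import stats

# Per-mouse SDs of hair cell orientation (degrees), as described above:
# 5 control mice vs 4 cKO mice, compared with an unpaired t-test.
control_sds = [8.44, 6.86, 6.50, 5.35, 8.34]
cko_sds = [6.79, 6.62, 7.55, 6.94]

res = stats.ttest_ind(control_sds, cko_sds)
print(f"t = {res.statistic:.2f}, p = {res.pvalue:.3f}")
```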

STATISTICAL ISSUES

To compare the average direction or the variation of angles between control and experimental groups, what statistical comparison methods are appropriate?

Data Studio Office Hour

DATE: Wednesday, 30 November 2022

TIME: 1:30–3:00 PM

LOCATION: Conference Room X303, Medical School Office Building, 1265 Welch Road, Stanford, CA

REGISTRATION FORM: https://redcap.stanford.edu/surveys/?s=WMH74XCX33

DESCRIPTION:

The Data Studio Office Hour brings together a series of biomedical investigators with a group of experts for brief individualized sessions to solicit advice about a statistical and study design issue that arises while planning or conducting a research project.

This week, Data Studio holds office hours for your data science needs. Biomedical Data Science faculty are available to provide assistance with your research questions. If you need help with bioinformatics software and pipelines, check out the Computational Services and Bioinformatics Facility (http://cmgm-new.stanford.edu/) and the Genetics Bioinformatics Service Center (http://med.stanford.edu/gbsc.html).

Reserve a Data Studio Office Hour session by completing the Registration Form. Sessions are about 30 minutes long but might be extended at the discretion of the coordinator. If you register for a session, please be present at the start time on Wednesday.