Trial Title:
Project 1: Self-Triage by 2D Full-field Digital Mammography or Synthetic Images
NCT ID:
NCT05960188
Condition:
Breast Cancer Screening
Keywords:
Mammography
Digital Breast Tomosynthesis
Artificial Intelligence
Radiology
Medical Image Perception
Study type:
Interventional
Study phase:
N/A
Overall status:
Recruiting
Study design:
Allocation:
N/A
Intervention model:
Single Group Assignment
Intervention model description:
We are asking a group of clinicians for their opinions.
Primary purpose:
Diagnostic
Masking:
Single (Participant)
Masking description:
At the end of the study, the participant is debriefed about their performance.
Intervention:
Intervention type:
Behavioral
Intervention name:
AI Opinion
Description:
For each case, we give the radiologist a numeric score reflecting the AI's rating of the
abnormality of the case.
Summary:
One method of breast cancer screening involves radiologists reading digital breast
tomosynthesis (DBT) images. DBT consists of a 3D stack of x-ray "slices" through the
breast. The exam is accompanied by a 2D image, similar to a standard mammogram (a single
x-ray of the breast). In
a screening setting, most cases are normal. Sometimes it is obvious that a case is normal
from a quick look at the 2D image. It would speed up the process of screening if readers
could dismiss a clearly normal case on the basis of the 2D image alone, without looking
at the DBT images. Obviously, the investigators would only want to "triage" cases in this
way if they were almost perfectly sure that no cancers would be missed. In this study,
the investigators examine radiologists' willingness to triage cases and the accuracy of
their answers. In addition, the investigators ask about the impact of an Artificial
Intelligence (AI) opinion. Would it be possible to triage an image on the basis of the AI
opinion alone?
Radiologists will look at each case for up to five seconds and offer an opinion (on a
1-10 scale) about how sure they are that a case is normal. Next, they will see the
opinion of the AI. Finally, they will say (using a 1-10 scale) how willing they would be
for the AI to triage this case without human intervention.
This study is the start of an effort to understand the conditions under which
radiologists might be willing to declare a case "normal" with little or no human
examination.
Detailed description:
NOTE: This registration is linked to a Human Subjects registration in ASSIST. That, in
turn, is part of an NCI Grant, CA207490. The grant describes many proposed experiments
and notes that many others might be done as follow-up studies. At the suggestion of the
NIH, the investigators grouped these experiments into three "studies", each covering multiple
experiments. The experiment described here is part of "Study ID 386408 Project 1:
Radiologist Studies". It is not possible to register a set of experiments through the PRS
system on ClinicalTrials.gov, and it is not possible to file an annual report for the grant (RPPR)
without an NCT number for projects that have started collecting participants.
Accordingly, the investigators are describing one experiment here that would be part of
the "Project 1" bundle of studies.
The core idea of Project 1 is that it might be possible for observers (Os) to reliably eliminate a
set of cases in screening mammography after just a brief look at a 2D image or,
potentially, after an AI system takes a brief look at the image. That is, the clinician
and/or the AI would look at the image and know "for sure" there is nothing there and
would be willing to dismiss or "triage" the case on the basis of this brief look. If the
reader was not completely sure, the case would get more scrutiny.
As a start at looking at this issue, the investigators wanted to estimate how willing
clinicians would be to triage a case and how they would interact with an AI that was
asked to triage cases. A challenge for any implementation of triage will be to get
clinicians (to say nothing of lawyers, et al) to accept the idea of not looking at an
image/case or of looking briefly at, say, the 2D image and being willing not to look at
the 3D digital breast tomosynthesis (DBT) images. The experiment the investigators report here
is intended to be a start on studying this issue. There is a continuum of cases from
"obviously normal" to "obviously abnormal". The investigators wanted to estimate the
point on that continuum below which a case is so normal that readers would be willing to
let the computer triage the case and/or would be willing to triage a case themselves. It
is also possible that some cases are so abnormal that the computer alone could recall the
patient for further examination, though the investigators are not studying that form of
triage here. The investigators hypothesize that these triage points will
be related to both the computer's rating of normality and the reader's rating.
Method: A bilateral 2D mammogram is presented for 5 seconds. The time limit is intended
to limit the normal scrutiny that a radiologist would give to the case. The investigators
want a decision based on the "gist" of the case. To mimic the low prevalence of disease
in screening mammography, only 4 of 150 cases are positive. Readers are told that the
cases mimic a screening setting so they know that positive cases will be rare, but they
are not told the actual prevalence.
The investigators ask radiologists to answer two questions about each of up to 150 single
image "cases". ("Up to 150" because radiologists can and do quit at without completing
all cases. Using a rating scale method, the investigators ask:
1. How sure are participants that these images are from a normal case?
Next, the investigators tell readers that "The computer rated these images as X out
of 10 (10: highest probability of cancer present)." This AI value is a real rating
of abnormality, generated by Transpara version 1.7.0, which returns a
probability-of-malignancy score for the examination. The ratings were obtained by
Sarah Verboom of Radboud University.
Then the investigators ask whether the reader thinks it would be reasonable for the
computer to triage the case without further human inspection.
2. How willing would participants be for the computer to make the decision about this
case alone without having participants look at it?
Os answered using a sliding bar that served as a rating scale. The selected rating was
displayed on top of the sliding bar.
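For readers who want to follow the flow of a single trial, the sketch below outlines the
per-case procedure in plain Python. It is illustrative only: the case list, AI scores,
and console prompts are hypothetical placeholders standing in for the laboratory's actual
presentation software and graphical sliding bar; only the structure of the procedure
(5-second display, normality rating, AI score, triage-willingness rating) follows the
description above.

    import random
    import time

    # Hypothetical case list: 4 positive cases among 150, mimicking screening prevalence.
    cases = [{"id": i, "positive": False, "ai_score": random.randint(1, 10)} for i in range(150)]
    for i in random.sample(range(150), 4):
        cases[i]["positive"] = True

    def get_rating(prompt):
        """Console stand-in for the sliding-bar 1-10 rating scale used in the study."""
        while True:
            try:
                value = int(input(prompt + " (1-10): "))
                if 1 <= value <= 10:
                    return value
            except ValueError:
                pass
            print("Please enter an integer from 1 to 10.")

    results = []
    for case in cases:
        # Present the bilateral 2D mammogram for 5 seconds (placeholder for the image display).
        print(f"\n[Showing case {case['id']} for 5 seconds...]")
        time.sleep(5)

        # Question 1: how sure the reader is that this is a normal case.
        normal_rating = get_rating("How sure are you that this is a normal case?")

        # Reveal the AI's abnormality score (in the study, a Transpara 1.7.0 rating).
        print(f"The computer rated these images as {case['ai_score']} out of 10 "
              "(10: highest probability of cancer present).")

        # Question 2: willingness to let the computer triage the case without human inspection.
        triage_rating = get_rating("How willing would you be for the computer to decide alone?")

        results.append({"case": case["id"],
                        "normal_rating": normal_rating,
                        "triage_rating": triage_rating})
        # Readers may stop early, which is why each reader completes "up to 150" cases.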
Participants:
The first observers for this study were tested at a "pop-up lab" organized at the European
Congress of Radiology (ECR, Vienna, March 2023). The investigators have been organizing
these labs as an opportunity for researchers from many labs to come to big meetings like
ECR where they might be able to test radiologists in larger numbers than at home. The
upside is access to readers. The downside is that the investigators can typically get
only 15-30 min of a reader's time. Thus, the readers for this experiment were a
population of convenience. The investigators tested 15 readers. These varied widely in
experience. The investigators asked how many screening cases they estimated that they
read each year. This varied from 0 (students who had learned about mammography but were
not in practice) to 8000. Readers also varied in how many cases they were willing to read
for us before running out of time/patience. The range was 19 to 148 (avg. 83 cases). At
this stage, the investigators are underpowered to say with any conviction whether these
variables have an important impact on the results. This is a chronic problem with testing
experts like radiologists. It is extremely difficult to collect as much data as one would
wish. Nevertheless, these data can give us information about the factors that will
determine the success or failure of image triage.
Criteria for eligibility:
Criteria:
Inclusion Criteria:
- Radiologists or radiology trainees
- Some experience reading mammography
Exclusion Criteria:
- Visual acuity less than 20/25 with correction
Gender:
All
Minimum age:
18 Years
Maximum age:
N/A
Healthy volunteers:
No
Locations:
Facility:
Name:
Visual Attention Lab / Brigham and Women's Hospital
Address:
City:
Boston
Zip:
02215
Country:
United States
Status:
Recruiting
Contact:
Last name:
Jeremy M Wolfe, PhD
Phone:
617-851-1166
Email:
jwolfe@bwh.harvard.edu
Contact backup:
Last name:
Ava A Mitra, BA
Phone:
617-525-3681
Email:
amitra@bwh.harvard.edu
Start date:
March 1, 2023
Completion date:
September 1, 2028
Lead sponsor:
Agency:
Brigham and Women's Hospital
Agency class:
Other
Source:
Brigham and Women's Hospital
Record processing date:
ClinicalTrials.gov processed this data on November 12, 2024
Source: ClinicalTrials.gov page:
https://clinicaltrials.gov/ct2/show/NCT05960188