Reality Check

AI and other technologies offer the potential to redefine research and transform health care

AI, or artificial intelligence, refers to the ability of a machine or computer system to perform tasks that would normally require human intelligence, such as learning, problem-solving, decision-making, and language understanding.

VR, or virtual reality, is a computer-generated simulation of a three-dimensional environment that can be interacted with in a seemingly real or physical way by a person using special equipment, such as a headset with a screen or gloves fitted with sensors.

MR, or mixed reality, is a term that refers to the combination of virtual reality and real-world environments. In MR, virtual objects are seamlessly integrated into the real world, creating a hybrid environment where both virtual and real-world elements coexist.

AR, or augmented reality, is a technology that superimposes digital information or images onto the physical world. AR allows users to see and interact with digital content in the context of the real world. For example, AR apps might allow users to see virtual objects or information overlaid on the camera view of their smartphone or other device.

What you have read thus far is not the product of a human writer. Rather, it was composed by OpenAI’s ChatGPT, a chatbot launched in November 2022 that has starred in dozens of news stories — many of them predicting the end of academia as we know it due to the bot’s uncanny ability to write students’ papers for them. It has already appeared as a co-author of several published research papers. It even passed a version of the U.S. licensing exam for physicians.

The human writer who has resumed control of this story decided to take ChatGPT out for a spin by asking it to define the alphabet soup of new technologies that are remaking much of biomedical research and clinical practice. At the Miller School and UHealth – the University of Miami Health System, AI in particular is playing a major role in groundbreaking discoveries. Rather than becoming a self-aware threat to its human masters, as often depicted in science fiction, AI is proving to be a valuable tool that can help shape medicine’s tomorrows.

Faster, Better Readings

Researchers at Sylvester Comprehensive Cancer Center are finding this especially true for developing new diagnostic capabilities and treatments for a variety of cancers. The process of diagnosing tumors during surgery, for example, hasn’t changed much over the decades. “You take the tissue specimens, freeze them, slice, stain, put it on a slide, then send it to the pathologist,” said Michael Ivan, M.D., associate professor of neurological surgery. More than half an hour later, the pathologist calls and tells you what’s on the slide.

But that process is changing — not the steps involved, but their speed and accuracy. Dr. Ivan worked with colleagues at New York University and the University of Michigan to develop and validate stimulated Raman histology, or SRH, technology, which helps diagnose benign and malignant tumors in under 90 seconds using machine learning, AI, and optical imaging without sacrificing accuracy.

In most surgeries on malignant tumors, removing the entire tumor makes a difference in the patient’s survival. SRH uses laser light to illuminate aspects of tumors that cannot be seen through standard methods. The AI component provides information throughout the procedure so that, for example, the surgeon can be sure that the surgical resection boundaries are clear of cancer.

Another team of Sylvester researchers has developed an automated way to calculate metabolic tumor volume, or MTV, in diffuse large B-cell lymphoma.

“Metabolic tumor volume can have a profound impact on patient outcomes, but until now, there has not been an automated way to calculate it,” said Juan Pablo Alderuccio, M.D., associate professor of medicine. “Now, we have developed an artificial intelligence-based system to get accurate tumor volumes both faster and with less human intervention.”

Lymphoma patients with larger tumor volumes tend to have more challenging prognoses. As a result, clinicians have wanted to stratify patients based on MTV, but this has been quite difficult. Existing software is only semi-automated, meaning that radiologists must pay close attention to the results and often fill in missing information, which can be a time-intensive process.

“When we read a PET scan, there are both pathologic lesions and normal structures that light up,” said Russ Kuker, M.D., associate professor of radiology. “Radiologists review these images and differentiate between real lesions and benign processes. Most software can’t tell the difference between something that is benign or has normal physiologic activity and a tumor.”

“We know that patients with higher MTV do worse,” Dr. Alderuccio explained. “Those patients may be more likely to receive intensified or experimental therapies to improve outcomes. Patients with low metabolic tumor volume might present better outcomes to standard therapies, and shorter treatment programs may become an option for them. This machine learning approach could give us better opportunities to make those distinctions, helping to integrate MTV in clinical trials and patient care.”
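
To make the idea concrete, here is a deliberately simplified sketch of how an MTV number can be computed from a PET volume. It is not the Sylvester team's algorithm: it assumes a hypothetical NumPy array of standardized uptake values (SUV), applies a single fixed threshold (a convention sometimes used in lymphoma studies), and sidesteps the hard problem Dr. Kuker describes, separating true lesions from normal physiologic uptake.

```python
import numpy as np

def metabolic_tumor_volume(suv, voxel_volume_ml, suv_threshold=4.0):
    """Toy MTV estimate: sum the volume of all voxels whose SUV exceeds a
    fixed threshold. Real pipelines must also exclude normal physiologic
    uptake (brain, heart, bladder), which is assumed away here."""
    lesion_mask = suv >= suv_threshold          # boolean mask of "hot" voxels
    return lesion_mask.sum() * voxel_volume_ml  # MTV in milliliters

# Hypothetical example: a 3D PET volume with 4 mm isotropic voxels
suv = np.random.default_rng(0).gamma(shape=1.5, scale=1.0, size=(64, 64, 64))
voxel_volume_ml = 0.4 ** 3                      # 4 mm cube = 0.064 mL
print(f"Estimated MTV: {metabolic_tumor_volume(suv, voxel_volume_ml):.1f} mL")
```

In broad terms, an automated system replaces the fixed threshold and the manual cleanup with a trained segmentation model that improves as more annotated scans accumulate.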

Dr. Ivan agrees, noting that machine learning and AI algorithms are not a one-time thing. “It’s a continually developing and improving system,” he said. “The more information it has, the better it becomes. The next iteration of SRH will identify what kind of tumor it is as well as the genetic mutations within it. Each subtype of tumor responds to different drugs. The earlier you know the subtype, the better chance you have of picking the right medication for the patient.”

Gaining on Glioblastoma

A third research group at Sylvester has already taken that next step in identifying subtypes. Leading an international collaboration of investigators, they have developed a new machine learning algorithm that can combine large datasets from multiple sources to pinpoint cancer-driving enzymes called kinases and potentially target them for treatment. Their research could have profound implications for future treatment of glioblastoma multiforme, or GBM, an aggressive, usually fatal type of brain cancer.

The AI algorithm, known as SPHINKS (Substrate PHosphosite-based Inference for Network of KinaseS), deploys deep machine learning to help the researchers identify and experimentally validate two protein kinases (PKCδ and DNAPKcs) as the culprits associated with tumor progression in two GBM subtypes and as potential therapeutic targets for certain breast, lung, and pediatric cancers.
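
The published SPHINKS method is considerably more sophisticated, but its underlying intuition, inferring which kinases are active from the measured phosphorylation of their substrate sites, can be illustrated in miniature. The sketch below is a generic substrate-averaging score, not the actual algorithm; the kinase-substrate map and phosphosite values are hypothetical.

```python
from statistics import mean

# Hypothetical phosphoproteomic measurements: log-fold change of each
# phosphosite in a tumor sample versus normal tissue.
phosphosite_lfc = {
    "MARCKS_S159": 2.1, "PRKD1_S910": 1.8, "GSK3B_S9": 0.2,
    "H2AX_S139": 2.4, "XRCC4_S260": 1.9, "AKT1S1_T246": 0.3,
}

# Hypothetical kinase -> known substrate-site map (e.g., from a curated database).
kinase_substrates = {
    "PKCdelta": ["MARCKS_S159", "PRKD1_S910", "GSK3B_S9"],
    "DNAPKcs":  ["H2AX_S139", "XRCC4_S260"],
    "AKT1":     ["AKT1S1_T246", "GSK3B_S9"],
}

# Score each kinase by the average change across its substrates: a crude
# stand-in for the network-based inference SPHINKS performs.
scores = {
    kinase: mean(phosphosite_lfc[s] for s in sites if s in phosphosite_lfc)
    for kinase, sites in kinase_substrates.items()
}
for kinase, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{kinase}: inferred activity score {score:.2f}")
```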

“Our work represents translational science that offers immediate opportunities to change the way glioblastoma patients are routinely managed in the clinic,” said Antonio Iavarone, M.D., Sylvester’s deputy director and leader of the research team. SPHINKS gives oncologists a new tool to apply the correct treatments to homogeneous cancer subtypes. In addition, by using tumor organoids grown in the laboratory from patient samples, the team showed that targeted drugs interfering with the two kinases can block tumor growth.

“We can now stratify glioblastoma patients based on biological features,” Dr. Iavarone said. “Reading the genome alone has not been enough. We have needed more comprehensive data to identify tumor vulnerabilities.”

He and his colleagues believe their findings may enable the development of “basket trials,” which would include patients with the same biological subtype but not necessarily the same cancer types.

“If patients with glioblastoma or breast or lung cancer have similar molecular features, they could be included in the same trial,” Dr. Iavarone said. “Rather than conducting multiple trials for a single agent, we could conduct one and potentially bring more effective drugs to patients more quickly.”

Mixed Reality for DUMMIES

Harvey will soon take the next step in simulation

“Harvey,” the world’s first cardiopulmonary simulator, was created in 1968 by Michael S. Gordon, M.D., Ph.D., at the Miller School. Today, the Gordon Center for Simulation and Innovation in Medical Education remains the world’s longest continuously operating academic center that develops, builds, and sells medical simulators.

Last year, Harvey made news when the Gordon Center introduced the manikin in multiple skin tones. According to S. Barry Issenberg, M.D., director of the Gordon Center and professor of medicine and medical education at the Miller School, Harvey’s next iteration will feature mixed reality.

“You walk into a room and Harvey is there,” he said. “You put on a headset and click to start the lesson.” Mixed Reality Harvey will appear differently depending on the case being presented. It could be a 22-year-old woman or an 8-year-old boy.

“You do the same things with Mixed Reality Harvey in the virtual world that you do with human patients in the real world,” Dr. Issenberg said. “You pick up a virtual stethoscope and hear the heart sounds in your earpiece and feel the pulse transmitted through the haptic glove. Or you may place an ultrasound probe on the chest and the corresponding image appears in your field of view. All your actions are being recorded by the system, so now you aren’t only right or wrong, but the system also knows where you felt and didn’t feel. The next time, it focuses on where your deficiencies were.”

Mixed Reality Harvey prototypes are currently in use at the Miller School; in 24 months, they will be made available to other medical schools.

Monitoring Cancer Conversations

“I may represent the most nontraditional use of AI in the cancer center space,” said Tracy Crane, Ph.D., co-leader of the Cancer Control Research Program and director of Lifestyle Medicine and Digital Health for Survivorship at Sylvester, and associate professor of medicine in the Division of Medical Oncology. That’s because her team isn’t focusing on AI’s role in cancer diagnosis or treatment. Instead, they are using machine learning and natural language processing to better understand sentiment and word choice in conversations between patient and provider.

Can AI be trained to listen to voice inflections and word choice to help determine whether people are making good decisions? That’s the central question. Imagine a cancer patient on a telehealth call. The patient has been providing data on their exercise, diet, sleep, heart rate, blood pressure, and more through wearable devices. During the call, their provider may have an earpiece or a chatbot with AI communicating with them. The AI might say something like, “They haven’t been out of the house all week; ask about exercise,” or “When they talk about their fear of cancer returning, their heart rate increases.”

This AI would also be used to monitor and train counselors. Typically, a third party monitors only about 10% of the calls to check for elements such as empathy, autonomy, and open-ended questions. By contrast, AI can listen in on all calls. “We are training models to listen for protocol fidelity in every call instead of manually gathering and listening to only 10%,” Dr. Crane said. “These models need to be refined, but over the next year we’ll be ready to introduce them into clinical practice.”
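
The team’s models are not described in detail here, but the shape of the task, scanning call transcripts for markers of counseling fidelity such as open-ended questions and empathic reflections, can be sketched with a simple rule-based pass. The keyword lists below are hypothetical stand-ins; a production system would rely on trained language models rather than keyword matching.

```python
import re

# Hypothetical markers; a deployed system would use trained classifiers,
# not keyword lists, to score each counselor utterance.
OPEN_ENDED = re.compile(r"^(how|what|tell me|describe)\b", re.IGNORECASE)
EMPATHY_CUES = ("that sounds", "i hear", "it makes sense", "i understand")

def fidelity_report(counselor_turns):
    """Return crude per-call counts of open-ended questions and empathy cues."""
    open_q = sum(bool(OPEN_ENDED.match(t.strip())) for t in counselor_turns)
    empathy = sum(any(cue in t.lower() for cue in EMPATHY_CUES) for t in counselor_turns)
    return {"turns": len(counselor_turns), "open_ended_questions": open_q, "empathy_cues": empathy}

call = [
    "How has your energy been since the last session?",
    "That sounds exhausting. I hear how worried you are.",
    "Did you walk yesterday?",
]
print(fidelity_report(call))
```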

The Future of Office Visits

Ever wonder if there is a better way to handle routine office visits? “You go to the doctor, you’re sweating, your blood pressure is high,” said Azizi Seixas, Ph.D., interim chair of the Department of Informatics and Health Data Science, associate professor of psychiatry and behavioral sciences, and founding director of the Media and Innovation Lab, or MIL. “In the future, you won’t necessarily have to go to the doctor.” Instead, you may receive a MILBox at your home, containing devices you would wear and set up so that your vital signs, sleep patterns, and environmental data will be collected. “Not only do we obtain your digital biomarkers,” Dr. Seixas said, “but we also monitor you for a week or two — whatever it takes to get a realistic picture.”

The MILBox is part of a current research project at the Miller School in collaboration with Amazon Web Services and Open Health Network. The program creates an individual’s “digital twin” using data collected from the MILBox’s in-home and on-body sensors. Using data generated from multiple readings of the patient and their environment over time, a digital twin will be developed that can allow providers to virtually evaluate treatment options and potential outcomes before applying them to the actual patient.
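
Stripped to its data-handling core, a digital twin can be thought of as a running, per-patient summary of sensor streams against which each new reading is compared. The sketch below illustrates only that idea, with invented field names; it is not the MILBox, Amazon Web Services, or Open Health Network implementation.

```python
from dataclasses import dataclass, field
from statistics import mean, stdev

@dataclass
class DigitalTwin:
    """Toy per-patient model: accumulate daily wearable readings and flag
    values that drift far from the patient's own baseline."""
    patient_id: str
    readings: dict = field(default_factory=dict)  # metric -> list of values

    def add_reading(self, metric: str, value: float) -> None:
        self.readings.setdefault(metric, []).append(value)

    def is_anomalous(self, metric: str, value: float, z: float = 2.0) -> bool:
        history = self.readings.get(metric, [])
        if len(history) < 7:                       # need at least a week of baseline
            return False
        mu, sigma = mean(history), stdev(history)
        return sigma > 0 and abs(value - mu) > z * sigma

twin = DigitalTwin("patient-001")
for bp in [118, 121, 119, 122, 120, 117, 123]:     # a week of systolic readings
    twin.add_reading("systolic_bp", bp)
print(twin.is_anomalous("systolic_bp", 150))        # True: well outside this patient's baseline
```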

Just as the appropriate mix of technology is still under development, so are compliance and financial considerations, Dr. Seixas said. The confidentiality of patient data will be protected, and he foresees providers, health systems, and payers — even a direct-to-consumer model — as revenue sources.

“The goal of the research program is not to avoid in-office visits but to fill in the gaps of care. When the patient is not at the office we can monitor and capture a more realistic picture of the patient,” Dr. Seixas said. “This can lead to more precise, personalized, and contextualized care.”

Dr. Seixas said providers want to spend more time with patients but are overwhelmed with paperwork and administrative tasks. “They only spend 10 or 15 minutes with you. We are saying, ‘Let’s remove those barriers and add intelligence using AI.’ It’s going to be more accurate, anyway.”

Once the data is collected, it can be triangulated with previous data for returning patients. The physician might say, “You need to increase your sleep,” and then the AI can identify the barriers to good sleep. “Maybe you are busy or don’t have social support,” Dr. Seixas said. “Or we can find out why you aren’t exercising. The data collection makes it increasingly more personalized. This is what the future of health care is going to look like.”

Developing Digital Therapists

If you were to gather every provider trained in mental health and every child with a mental health disorder, there would be more than 900 children for each provider, a gap that has widened during the pandemic, according to Daniel Armstrong, Ph.D., director of the Mailman Center for Child Development. “We can’t produce a workforce fast enough,” he said.

So how can technology address mental health? That’s one of the priorities of telehealth pioneers at the Mailman Center using a grant from the U.S. Department of Health and Human Services. Dr. Armstrong explained: “Can we use AI to provide assessment and treatment of children with developmental disabilities using virtual technology in the home? Instead of bringing a child into an unfamiliar office environment, with parental consent and appropriate child assent, we may be able to develop standardized approaches using live video to assess a child’s sleep, behavior at the kitchen table, or play in the living room in their home environment.”

Presently there are no standard tools for doing so, he said, but “by using an AI model to review the natural environment data, we can likely improve the accuracy of diagnosis and develop more relevant treatments, which may in turn lead to better outcomes.”

Dr. Armstrong offered another example: “To evaluate children with autism, we insist on bringing them into the office, a place that is strange and often frightening. Not only are offices unprepared after the pandemic for an onslaught of in-person assessments, but technology may offer a faster, more efficient process by standardizing telehealth assessments in natural environments and building apps that help consolidate data and develop concise treatment plans. That must happen in the context of the developmental trajectory of the child, so that our approach can be adapted to the needs of a 6-month-old, a 6-year-old, or a 10-year-old.”

Spinal Surgery’s New View

An AR headset allows surgeons to see two things at once

During a conventional spinal surgery, the surgeon will need to look up from time to time to view X-ray images and the surgical instruments. During that time, the surgeon is not looking at the patient.

Imagine, instead, that the surgeon is wearing an augmented reality headset that contains a navigation system. Just as your GPS can determine your car’s exact location, the Augmedics xvision Spine System is designed to locate the position of surgical tools in real time and then to superimpose them on the patient’s CT data. The data is then projected onto the surgeon’s retina using the headset, enabling a simultaneous look at both the patient and the data.
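
Under the hood, this style of surgical navigation is a coordinate-transformation problem: the tracked position of an instrument in the operating room must be mapped into the coordinate frame of the patient’s CT scan before it can be drawn in the headset. The sketch below applies a generic rigid transform (rotation plus translation) to illustrate that single step; it is not the Augmedics xvision implementation, and the matrix values are hypothetical.

```python
import numpy as np

def to_ct_frame(tool_tip_room, rotation, translation):
    """Map a tracked tool-tip position from room coordinates into CT
    coordinates with a rigid transform: p_ct = R @ p_room + t."""
    return rotation @ tool_tip_room + translation

# Hypothetical calibration obtained from patient registration (fiducials or
# surface matching); real systems estimate R and t intraoperatively.
rotation = np.array([[0.0, -1.0, 0.0],
                     [1.0,  0.0, 0.0],
                     [0.0,  0.0, 1.0]])       # 90-degree rotation about z
translation = np.array([12.0, -4.5, 30.0])    # millimeters

tool_tip_room = np.array([100.0, 50.0, 20.0])
print("Tool tip in CT coordinates (mm):", to_ct_frame(tool_tip_room, rotation, translation))
```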

“Spine surgeons have to process a lot of data,” said Michael Y. Wang, M.D., professor of neurological surgery and chief of neurosurgery at UHealth Tower. “We look at information like the relationship of screws being implanted into the spine, and we have to look at the body at the same time.” The AR headset can superimpose these things. “You don’t lose focus, orientation, or time,” he said. “We can see the map of the human body like the way you project Google Maps onto your screen.”

In 2020, Dr. Wang performed the first surgery in the Southeastern U.S. using the Augmedics technology. The system is currently approved for use in open and minimally invasive spine implant procedures.

“We are one of the first centers in the world to have it,” Dr. Wang said. “We’ve been using it in ultra-minimally invasive spine surgeries. It’s like flying a fifth-generation fighter plane, because we have a virtual map. It’s true X-ray vision. We see everything.”

What’s in the future for this gear? “Hardware-wise, the technology on our heads weighs a couple pounds,” Dr. Wang said. “In around a year, the system will look more like Google glasses, and the software will allow an interface based on MRI, which means we can see nerves, tissue, and muscle.”

Currently, the data is based on a CT scan. “Today we have the visual piece,” Dr. Wang said. “Seeing is knowing.” In the future, he believes the system will also indicate how things sound and feel. “Now we use electrical signals to figure out where nerves are, but what if you can tell where a nerve is going to be?” he asked. “What about a probe that lets us know we are three millimeters away from a nerve?”

Data’s Holy Grail

An AI-ready database containing millions of images of tumors can help diagnose cancer quickly and efficiently. But what else in that database could help identify potential treatments? “The Sylvester Data Portal is our effort to organize all of the data,” said Stephan C. Schürer, Ph.D., professor of molecular and cellular pharmacology, associate director for data science at Sylvester, and director of digital drug discovery at the Institute for Data Science and Computing at the University of Miami. Without combining the research with patient data, “we have no chance” of identifying novel treatments, he said. “Beyond large-scale multimodal omics and patient data, groups at Sylvester integrate and analyze where people live, their sociodemographic status, their cultural background, and what food they eat, including access to things like fresh, affordable vegetables. A lot of cultural aspects contribute to risk.”

Combining research data, patient data, and public data results in a better picture of the patient population, and a better prediction of which therapies will work. “It’s all about the data,” Dr. Schürer said. “Bringing it together, standardized, harmonized, and available to generate new knowledge, such as in the form of predictive models that can be used for diagnoses or to guide developing new therapies — that is the Holy Grail.”

Taking treatment a step further, “we can envision AI avatars,” Dr. Schürer said. “You could have an avatar on a screen that can have a conversation with the patient, and the avatar can be optimized to address the patient it is talking to.” One day, he added, provided patients are willing, he imagines virtual clinicians speaking interactively with patients and answering questions about a recommended cancer screening, explaining how a diagnostic test works, offering follow-up options in case there is a positive result, or even obtaining consent before a visit and explaining exactly what the consent means. This would free up nurses and other staff, and could result in a better experience for the patient, because the avatar can spend as much time as the patient needs. “AI,” he said, “can help with that bottleneck.”

Another bottleneck that AI can address: radiologists’ time. In the future, it will help radiologists drive efficiency and cost-effectiveness “so we can spend more time on the complex cases,” said Alex McKinney, M.D., chair of the Department of Radiology. “It’s important, because a radiologist’s time is expensive.” Being able to work remotely in an AI environment saves time and money. “When we have a complex case, we spend 20 to 30 minutes, three or four times a day” looking at the images, he added. AI can do it faster.

According to Thiago Braga, M.D., director of practical informatics for radiology at the Miller School, radiologists have a “pressing need” for AI. “Instead of going to the charts and trying to understand the risk factors, you are going to talk to AI that will give you the information you need.” It will be like talking to Siri, he said: “Hey, AI, give me all the cardiovascular risk factors of this patient,” and AI will deliver that information. “The radiologist is poised to do that,” he said. “This will be a big change in the practice of radiology.”

Appropriate Use, Appropriate User

AI technology is also playing a growing role in back-office functions. Nicholas Lancho, M.B.A., manager of data protection for UHealth Privacy, described how AI can protect patients’ personal health information even as it helps diagnose and treat them.

“Our patient privacy monitoring system, Protenus, is an AI-driven platform that utilizes advanced analytics to detect potential data breaches and inappropriate accesses,” he said. “It tracks how you search for the patient’s record, measures the time spent in the patient’s record, and uses our Workday data to determine whether there may be a clinical or business-related reason for the access. This AI system is state of the art when it comes to privacy monitoring programs.”

This presents the two faces of digital health, according to Desert Horse-Grant, chief transformation officer at Sylvester. “Anything we do should benefit patients, to help them be the healthiest they can be,” she said. “At the same time, we need to keep in mind the counterbalance. Connecting to devices disconnects us from other humans and nature. As we partner with engineers and start businesses, we must be sure not to leave out the human considerations. We need to remember the reason why we started.”

When AI was just a gleam in a developer’s eye, Hollywood had already gone full bore with intelligent machines taking over the world. “That moment when a machine achieves consciousness is far better for movies than for reality,” said ethicist Kenneth Goodman, Ph.D., director of the Miller School’s Institute of Bioethics and Health Policy. “Computers compute — they don’t make decisions. If we were as worried about people using power inappropriately as we are about computers doing so, the world would be a better place.”

The question is, what is appropriate use, and who is an appropriate user? “Accountability and responsibility are not entirely clear,” Dr. Goodman said. “Someone wrote the code, someone tested the code, and someone is using it at the bedside to render a diagnosis or prognosis. When something goes wrong, who is at fault?”

Indeed, AI is not perfect, and it remains to be seen how comfortable people will be with various AI tools being used in their health care. In a recent Pew Research Center study, for example, 60% of those polled said they would be uncomfortable with their provider using AI. While some technologies offer clear benefits — faster diagnoses, for example — others likely still need to be refined before patients will accept them. Again, in the Pew study, only 38% believed AI would improve their care. Clearly there is a patient education challenge moving forward, and there are ethical concerns that must be addressed.

What is needed, Dr. Goodman believes, is a governance model that includes shared responsibility. “As we discuss appropriate use and who is an appropriate user, we need to have mechanisms to refine that on an ongoing basis,” he said. “Getting it right as often as possible will require ongoing review and evaluation of systems, and that review should be conducted by humans. We want the benefits of the technology, and we want to minimize the harms and risks. Addressing the questions of appropriate use, appropriate user, and the governance by which we determine those things is a delicious teaching moment.”

‘Is Your Headset On?’

Urology is turning science fiction into science fact

In 1935, American science fiction writer Stanley G. Weinbaum introduced the concept of virtual reality when he published “Pygmalion’s Spectacles,” a story about a pair of goggles that transported the wearer into a simulated environment.

Thirty-three years later, Ivan Sutherland, Ph.D., an electrical engineer at Harvard University, co-created the Sword of Damocles, widely regarded as the world’s first virtual reality headset. The device, which featured simple virtual wireframe 3D shapes, was cumbersome and too heavy for anyone to wear. The goggles were suspended from the ceiling, which limited the user’s mobility. Nonetheless, virtual reality was born.

Today, Ranjith Ramasamy, M.D., associate professor of clinical urology at the Miller School’s Desai Sethi Urology Institute, is looking at whether virtual reality, augmented reality, or both will alleviate the anxiety and discomfort sometimes associated with urological procedures.

Dr. Ramasamy’s randomized study of men undergoing vasectomies is comparing three modes: total immersion in a VR environment, such as waves on a beach; an AR experience where the real world is combined with digital content, such as a magic show that patients can watch through their headset; and a control group wearing no headset at all.

“During the procedure, we ask about anxiety and pain, and we monitor heart rate and other vitals using a Fitbit,” Dr. Ramasamy said. Anecdotally, he reported that patients wearing the headsets are happy. “So far,” he said, “the enthusiasm is pretty palpable.”

VR or AR could become routine in urology, Dr. Ramasamy said: “In the future, it will be like the way we give local anesthesia. We’ll ask, ‘Is your headset on?’”

If the study determines that VR and/or AR reduce pain and anxiety in urological procedures, he added, it could encourage more men to visit a urologist and result in better health. And it could have potential for other specialties.

Virtual reality, like regular reality, works best when it’s well organized. “We have established an informal faculty working group whose purpose is to bring people together to work on similar problems,” said Nick Tsinoremas, Ph.D., professor of biochemistry and molecular biology and vice provost of research computing and data.

“At a large and diverse university like UM,” he said, “different people are working on a lot of different things. The idea is to create an environment in which we can learn from each other.”

In 2018, AR innovator Magic Leap and UM entered a partnership to award Magic Leap grants to students, faculty, and staff for developing new uses for augmented reality. These grants fund research not only in health care, but also in engineering, design, and the arts. “We are creating a playground, if you will, where we bring people together to exchange ideas,” Dr. Tsinoremas said.

Your human writer also asked the ChatGPT chatbot to describe some of the ways the Miller School is using these new technologies. Here is how its response began:

“The Miller School of Medicine is at the cutting edge of technology and research, and is leading the way in the study of artificial intelligence, virtual reality, augmented reality, and mixed reality. Researchers at the Miller School are exploring the potential of these technologies to revolutionize health care, from personalizing treatments to improving diagnosis accuracy. …”

Vague, but not incorrect. Next, the writer asked about these technologies at his children’s nursery school and at a school of cosmetology drawn at random from a phone directory. Alas, according to ChatGPT, a preschool and a beauty academy are doing the same cutting-edge work as the Miller School. In other words, this AI app can construct meaningful language without necessarily revealing the truth. It won’t let misinformation get in the way of good syntax. AI has a great future as a tool in medicine, but it’s hardly ready to take over the world.

Josh Baxt contributed to this story.

UNIVERSITY OF MIAMI MEDICINE
SPRING 2023