Dr. Ahmed Ghazi, a urologist and director of the simulation innovation lab at the University of Rochester (N.Y.) Medical Center, once thought autonomous robotic surgery wasn’t possible. He changed his mind after seeing a research group successfully complete a running suture on one of his lab’s tissue models with an autonomous robot.
It was surprisingly precise—and impressive, Ghazi said. But “what’s missing from the autonomous robot is the judgment,” he said. “Every single patient, when you look inside to do the same surgery, is very different.” Ghazi suggested thinking about autonomous surgical procedures like an airplane on autopilot: the pilot’s still there. “The future of autonomous surgery is there, but it has to be guided by the surgeon,” he said.
It’s also a matter of ensuring AI surgical systems are trained on high-quality and representative data, experts say. Before implementing any AI product, providers need to understand what data the program was trained on and what data it considers to make its decisions, said Dr. Andrew Furman, executive director of clinical excellence at ECRI. Providers must also weigh what data were input for the software or product to make a particular decision, and ask whether “those inputs are comparable to other populations,” he said.
To create a model capable of making surgical decisions, developers need to train it on thousands of previous surgical cases. That could be a long-term outcome of using AI to analyze video recordings of surgical procedures, said Dr. Tamir Wolf, co-founder and CEO of Theator, another company that does just that.
While the company’s current product is designed to help surgeons prepare for a procedure and review their performance, its vision is to use insights from that data to underpin real-time decision support and, eventually, autonomous surgical systems.
UC San Diego Health is using a video-analysis tool developed by Digital Surgery, an AI and analytics company Medtronic acquired earlier this year. The acquisition is part of Medtronic’s strategy to bolster its AI capabilities, said Megan Rosengarten, vice president and general manager of surgical robotics at Medtronic.
“There’s a lot of places where we’re going to build upon that,” Rosengarten said. She described a likely evolution from AI providing recommendations for nonclinical workflows, to offering intraoperative clinical decision support, to automating aspects of nonclinical tasks, and possibly to automating aspects of clinical tasks.
Autonomous surgical robots aren’t a specific end goal Medtronic is aiming for, she said, though the company’s current work could serve as building blocks for automation.
Intuitive Surgical, creator of the da Vinci system, isn’t actively looking to develop autonomous robotic systems, according to Brian Miller, the company’s senior vice president and general manager for systems, imaging and digital. Its AI products so far use the technology to create 3D visualizations from images and extract insights from how surgeons interact with the company’s equipment.
To develop an automated robotic product, “it would have to solve a real problem” identified by customers, Miller said, which he hasn’t seen. “We’re looking to augment what the surgeon or what the users can do,” he said.