Photo courtesy of Moon Surgical
The use of robotics in surgery began in the late 1980s, and as the technology has progressed, robotic surgery has become the standard of care for minimally invasive surgery.
Anne Osdoit, CEO of French-American surgical robotics company Moon Surgical, sat down with MobiHealthNews to discuss the organization's new commercial surgical robotics offering, dubbed Maestro System, and AI's place within its product.
MobiHealthNews: Can you tell me about Moon Surgical and Maestro?
Anne Osdoit: Moon Surgical is a surgical robotics company. We are focused specifically on soft tissue surgery, what we call laparoscopy: basically, keyhole surgery performed by threading instruments into the body through small ports.
Over the past 40 years, this has evolved into the standard of care for abdominal, thoracic, gynecologic and urologic surgery. As the field shifted from open surgery toward laparoscopy, the requirements in the operating room also increased, because you've distributed and made more complex the management of vision and access to tissue. You have all these different instruments inside these ports in the abdomen, and someone has to hold and manage them.
So, the surgeon is not operating by themselves anymore. You need surgical assistants maneuvering these instruments and anticipating what the surgeon's needs are going to be.
What we've built at Moon Surgical is a system that addresses exactly that: a system that manages the vision and the tissue exposure for surgeons in any laparoscopy procedure and delivers the benefits of robotics that surgeons love, which are complete control over all the instruments, stable vision, the confidence it gives them, efficiency-related benefits and, ultimately, better clinical outcomes. And it does this with a form factor and a solution that is really designed to be at the bedside with the surgeon, preserve the existing surgical technique and instrumentation, and not modify the workflow in depth.
We had an initial version of the system, which was used on 60 patients in the U.S. and Europe over the past year and a half. What we've announced recently is the second generation of the system, the commercial version, which features a number of data-driven improvements. That version was recently used in the clinic for the first time in Nice, France, and has been used to date on about 15 procedures.
MHN: Do you utilize AI within the robotic system?
Osdoit: I think it's fair to say that, from now on, most medical devices are going to be connected, communicating and leveraging some sort of dataset.
Our approach to artificial intelligence has been to integrate it gradually into our product, and only after the product received its initial regulatory clearances, just to be very clear. We are focusing on use cases and features that deliver very high value for surgeons and users but also have a chance to get through regulatory requirements.
I think there's a lot of excitement and hype in the robotics space around artificial intelligence and data-driven approaches, but the reality is there are few products, probably not a single one out there, that integrate it as a commercial feature, because validation has been a challenge, and regulatory approval as well.
I'll give you two examples of how we think about it. The first is that you can, for instance, automate the setup of the system using artificial intelligence. Every single surgeon is going to have their preferences in terms of where they place the ports for the surgery. Every surgeon is going to have a different height. They will position and angulate the bed in a different way.
Of course, the patient's abdomen, when it's inflated, is going to have a different form factor. The room setup might be different as well. The procedure type might mandate specific placement of the arms of a robot. That is something you can set by default: you can start from default positions for your system, but the system can also learn over time to improve and automate that initial setup of the robot.
And if you think about it, it might seem like an ancillary feature and benefit, but it's actually absolutely central, right? Because when you take traditional robots, what they call the docking time for a Da Vinci System is something like 45 minutes. The turnaround time, you know, between cases is a lot longer than the cases themselves. And it's a big pain point and bottleneck for surgical robot adoption.
So that's one example, and that is a part of our initial commercial system. It does not have very critical regulatory implications because it's something that is done before the procedure, before the system is even brought very close to the patient.
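To make the setup-learning idea concrete, here is a minimal, hypothetical sketch, not Moon Surgical's software; the names, fields and simple averaging strategy are all illustrative assumptions. It records the configurations that worked in past cases and proposes a per-surgeon, per-procedure starting setup, falling back to factory defaults when there is no history:

```python
# Hypothetical sketch of learned setup defaults. Not Moon Surgical's
# implementation; all names and the averaging strategy are assumptions.
from collections import defaultdict
from dataclasses import dataclass
from statistics import mean

@dataclass
class SetupRecord:
    surgeon_id: str
    procedure: str                 # e.g. "cholecystectomy"
    bed_height_cm: float
    bed_tilt_deg: float
    arm_joint_angles: list[float]  # pose that worked for this case

class SetupRecommender:
    def __init__(self):
        # (surgeon, procedure) -> list of past setups that worked
        self._history = defaultdict(list)

    def record_case(self, rec: SetupRecord) -> None:
        self._history[(rec.surgeon_id, rec.procedure)].append(rec)

    def suggest(self, surgeon_id: str, procedure: str,
                factory_default: SetupRecord) -> SetupRecord:
        past = self._history.get((surgeon_id, procedure))
        if not past:
            return factory_default  # no history yet: use defaults
        # Average the setups that worked before for this surgeon/procedure.
        return SetupRecord(
            surgeon_id=surgeon_id,
            procedure=procedure,
            bed_height_cm=mean(r.bed_height_cm for r in past),
            bed_tilt_deg=mean(r.bed_tilt_deg for r in past),
            arm_joint_angles=[mean(joint) for joint in
                              zip(*(r.arm_joint_angles for r in past))],
        )
```

A production system would presumably weight recent cases, validate suggested poses against safety limits, and account for the room- and patient-specific factors Osdoit mentions.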
Another example: I mentioned earlier that the vision has to be managed for the surgeon, and we need to anticipate what they are going to want to see on the screen. Basically, what they want to see are the tips of their handheld instruments. The main risk is for those tips to be outside of the screen, because then you don't know what's happening, and that is where you can have uncontrolled motion and damage to tissue.
So, keeping those handheld instruments at the center of the screen is absolutely critical for safety, but that is also something you can learn and automate on the fly, keeping the instruments centered in the field of view the surgeon is looking at. This is a lot more involved in terms of potential implications for the patient. It is not part of our initial commercial system, but it is clearly something we've been working on, something we've had an opportunity to try in our preclinical work, and the surgeons get very excited about it.
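For readers who want to picture that second example, here is a simplified, hypothetical illustration of instrument centering as a proportional visual-servoing loop, a standard robotics technique; the detection step and camera interface are stand-ins, and nothing here is Moon Surgical's implementation. It takes detected instrument-tip positions in the endoscope image and returns a camera correction that drifts the tips back toward the center of the frame:

```python
# Hypothetical instrument-centering loop. Assumes tool tips have already
# been detected in image coordinates; not Moon Surgical's implementation.
from dataclasses import dataclass

@dataclass
class Point:
    x: float
    y: float

def centroid(tips: list[Point]) -> Point:
    return Point(sum(p.x for p in tips) / len(tips),
                 sum(p.y for p in tips) / len(tips))

def centering_command(tips: list[Point], width: int, height: int,
                      gain: float = 0.005, deadband_px: float = 20.0):
    """Return (pan_rate, tilt_rate) nudging the camera so the
    instrument tips drift back toward the center of the frame."""
    if not tips:
        # Tips lost from view: the safety-critical case Osdoit describes.
        # A real system would alert the surgeon; this sketch just stops.
        return 0.0, 0.0
    c = centroid(tips)
    err_x = c.x - width / 2
    err_y = c.y - height / 2
    # Deadband keeps the view steady when the tips are near center.
    if abs(err_x) < deadband_px and abs(err_y) < deadband_px:
        return 0.0, 0.0
    return -gain * err_x, -gain * err_y  # proportional correction
```

The deadband reflects a design concern specific to surgery: the camera should hold still while the surgeon works, moving only when the tips genuinely drift toward the edge of the frame.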