Innovation Activities in the Field of Surgery

Surgical innovation has radically changed how surgery is performed. Minimally invasive laparoscopic surgery, which places less burden on the patient than open abdominal surgery, has been developed in step with advances in imaging technology and in the therapeutic devices used for incision and hemostasis. System integration has also advanced, making it possible to control multiple devices in the operating room from a single controller. Furthermore, advances in robotic technology have enabled surgeons to control therapeutic devices precisely. However, the goals of innovation have not yet been fully achieved. The next innovation can be summed up in a concept we call Information Rich.

In this concept, artificial intelligence (AI) is fully utilized in surgery within an IoT environment that connects multiple devices over a network. Connectivity and information support are the keys to this concept.
We are developing a platform to implement this concept and have completed a prototype*. The platform integrates multiple devices that, rather than operating independently, work together through the platform while exchanging data among themselves.

* The prototype has not yet been approved in accordance with the Pharmaceuticals and Medical Devices Act.
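
To illustrate the kind of integration described above, the following is a minimal sketch, in Python, of devices exchanging data through a shared platform hub; the class, topic names, and message fields are hypothetical and are not taken from the actual prototype.

# Minimal, hypothetical sketch of platform-style device integration.
from collections import defaultdict
from typing import Callable, Dict, List

class PlatformBus:
    """A simple publish/subscribe hub through which devices exchange data."""
    def __init__(self) -> None:
        self._subscribers: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, message: dict) -> None:
        for handler in self._subscribers[topic]:
            handler(message)

bus = PlatformBus()
# A display unit reacts to data published by another device on the platform.
bus.subscribe("endoscope/status", lambda msg: print("display unit received:", msg))
# The endoscope publishes its status for any connected device to consume.
bus.publish("endoscope/status", {"zoom": 1.0, "light_source": "on"})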


Information Rich: The concept is to use AI to provide surgeons with the information they need for surgery, at exactly the right time, to help them see and make decisions.

Specifically, the platform operates as follows. The display unit consists of a main screen for the operating surgeon and a sub screen for the nurse. When the surgeon and the nurse enter the operating room, the system reads the IDs of their personal devices, authenticates them, and displays the information needed for preparation. The surgeon and the nurse then perform a safety check through interaction with the system.
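
As an illustration of this flow, here is a minimal Python sketch of device-ID authentication followed by the display of preparation items; the IDs, roles, and checklist contents are placeholders invented for the example, not the prototype's actual data.

# Hypothetical sketch of operating-room check-in and preparation display.
REGISTERED_STAFF = {
    "ID-001": {"name": "Operating surgeon", "role": "surgeon"},
    "ID-002": {"name": "Scrub nurse", "role": "nurse"},
}

PREPARATION_ITEMS = {
    "surgeon": ["Review the surgical plan", "Confirm the endoscope arm position"],
    "nurse": ["Confirm the instrument tray", "Confirm the patient ID band"],
}

def authenticate(device_id):
    """Look up a personal-device ID and return the staff record if it is registered."""
    return REGISTERED_STAFF.get(device_id)

def show_preparation(staff):
    """Display the preparation items for the authenticated person's role."""
    print(f"{staff['name']} authenticated. Preparation items:")
    for item in PREPARATION_ITEMS[staff["role"]]:
        print(" -", item)

for device_id in ("ID-001", "ID-002"):
    staff = authenticate(device_id)
    if staff is not None:
        show_preparation(staff)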


Autonomous View Control: Enable seeing what needs to be seen

The endoscope that captures images of the surgical field is mounted on an automated arm. In current practice, the endoscope is controlled by a surgeon other than the operating surgeon. In this system, however, the AI understands the surgical steps and automatically inserts the endoscope into the body through a trocar (a tubular device placed through the abdominal wall) to view the inside of the cavity. When the surgeon inserts forceps through another port, the endoscope's field of view follows the forceps and captures the inside of the cavity. When the surgeon points to an area with the forceps and gives a verbal order, that area is enlarged on the screen. We call this technology, which enables the surgeon to see what needs to be seen, Autonomous View Control.
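
The following is a minimal Python sketch of this behavior under simplifying assumptions: the camera target tracks the forceps tip each control cycle, and a verbal "zoom in" command enlarges the current area. The coordinates, gain, and zoom limits are illustrative values, not parameters of the actual system.

# Hypothetical sketch of Autonomous View Control: the camera target follows
# the forceps tip, and a verbal command enlarges the area being viewed.
from dataclasses import dataclass

@dataclass
class CameraState:
    target: tuple  # (x, y, z) point the endoscope is aimed at
    zoom: float    # 1.0 = no magnification

def follow_forceps(state: CameraState, forceps_tip: tuple, gain: float = 0.5) -> CameraState:
    """Move the camera target a fraction of the way toward the forceps tip
    each cycle so the view follows the instrument smoothly."""
    new_target = tuple(t + gain * (f - t) for t, f in zip(state.target, forceps_tip))
    return CameraState(target=new_target, zoom=state.zoom)

def handle_voice_command(state: CameraState, command: str) -> CameraState:
    """Enlarge the area around the current target when asked to zoom in."""
    if command == "zoom in":
        return CameraState(target=state.target, zoom=min(state.zoom * 1.5, 4.0))
    return state

state = CameraState(target=(0.0, 0.0, 0.0), zoom=1.0)
state = follow_forceps(state, forceps_tip=(10.0, 5.0, 2.0))
state = handle_voice_command(state, "zoom in")
print(state)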


Intraoperative Image Navigation: Enable seeing what cannot be seen

Furthermore, the endoscopic images work in conjunction with 3D computer graphics (3DCG): using augmented reality (AR) technology, information such as the tumor location, blood vessels, the cross section, and the blood vessels remaining after the lesion is cut is overlaid on the images. When a CT or ultrasound image is needed, the surgeon simply gives an order, and the AI selects an appropriate image from the accumulated images according to the position of the forceps and overlays it on the endoscopic image. If bleeding occurs from an organ, the system detects the bleeding point and displays it. Recorded images going back to before the bleeding can also be displayed to understand the circumstances in which it occurred. We call this technology, which enables the surgeon to see what cannot be seen, Intraoperative Image Navigation.
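
As a simplified illustration of the image-selection step, the sketch below picks the accumulated image closest to the current forceps position and hands it off for overlay; the modalities, coordinates, and file names are invented for the example.

# Hypothetical sketch of selecting a preoperative image by forceps position.
import math

# Accumulated images, each indexed by the anatomical position it depicts
# (coordinates and labels are illustrative only).
ACCUMULATED_IMAGES = [
    {"modality": "CT", "position": (12.0, 40.0, 8.0), "file": "ct_slice_031.png"},
    {"modality": "CT", "position": (15.0, 42.0, 9.0), "file": "ct_slice_032.png"},
    {"modality": "US", "position": (14.0, 41.0, 8.5), "file": "us_frame_107.png"},
]

def select_image(forceps_position, modality):
    """Return the stored image of the requested modality closest to the forceps."""
    candidates = [img for img in ACCUMULATED_IMAGES if img["modality"] == modality]
    return min(candidates, key=lambda img: math.dist(img["position"], forceps_position))

# On a verbal order such as "show CT", overlay the nearest CT slice.
chosen = select_image(forceps_position=(14.5, 41.5, 8.7), modality="CT")
print("Overlay", chosen["file"], "on the endoscopic image")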


Decision Making Support: AI helps make decisions

The AI knows which step of the planned operation is taking place at any given time. Going forward, it will also predict complications and suggest changes to the planned procedure. We call this AI technology, which helps the surgeon make decisions, Decision Making Support, and we plan to implement it as well.
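
A minimal sketch of this idea is shown below, assuming a planned sequence of steps and a small set of illustrative rules that map intraoperative observations to suggestions; the steps, rules, and wording are hypothetical and far simpler than what a real system would require.

# Hypothetical sketch of Decision Making Support: track the current step of a
# planned procedure and raise a suggestion when an observation indicates risk.
PLANNED_STEPS = ["port placement", "mobilization", "vessel ligation", "resection", "closure"]

# Illustrative rules: observations that suggest deviating from the plan.
RISK_RULES = {
    "unexpected bleeding": "Consider hemostasis before continuing to the next step.",
    "dense adhesions": "Consider switching to an alternative dissection plane.",
}

def support(current_step, observation):
    """Return a suggestion if the observation is risky, otherwise confirm the plan."""
    step_index = PLANNED_STEPS.index(current_step)
    suggestion = RISK_RULES.get(observation)
    if suggestion:
        return f"Step {step_index + 1} ({current_step}): {suggestion}"
    next_step = PLANNED_STEPS[step_index + 1] if step_index + 1 < len(PLANNED_STEPS) else "none"
    return f"Step {step_index + 1} ({current_step}) proceeding as planned; next: {next_step}."

print(support("vessel ligation", "unexpected bleeding"))
print(support("mobilization", "no abnormal findings"))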

The system dynamically generates and displays work records during surgery. This information, together with postoperative information, is analyzed, and the analysis results are used to make surgery safer and its quality more consistent.
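
The sketch below illustrates one way such a work record might be accumulated as time-stamped events during a case; the class and event fields are assumptions made for the example.

# Hypothetical sketch of generating a work record dynamically during surgery.
from datetime import datetime

class WorkRecord:
    """Accumulates time-stamped events (steps started, devices used) for one case."""
    def __init__(self, case_id):
        self.case_id = case_id
        self.events = []

    def log(self, event_type, detail):
        self.events.append({"time": datetime.now().isoformat(timespec="seconds"),
                            "type": event_type, "detail": detail})

    def summary(self):
        return {"case_id": self.case_id, "event_count": len(self.events)}

record = WorkRecord(case_id="CASE-0001")
record.log("step", "resection started")
record.log("device", "electrosurgical unit activated")
print(record.summary())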

This project is being accelerated by funding received from the Japan Agency for Medical Research and Development (AMED). The project consists of three parts: “Information-Assisted Endoscopic Surgery System,” “Autonomous View Control,” and “Active Device Control.” We will cooperate with the National Cancer Center Japan, Oita University, the University of Tokyo, and Fukuoka Institute of Technology to implement the project.

The purpose of this system is, of course, to improve surgical quality; in other words, to increase safety. However, interviews with more than 300 surgeons in Japan and abroad revealed that they also have high expectations for another kind of new value: management value. It is said that more than 60% of hospitals in Japan are unprofitable. For surgical departments that are expected to contribute to profitability, reducing costs and improving efficiency is a pressing issue. Our system is designed to dynamically capture the surgical process, the devices used, and other data, which means that all costs can be "visualized." Improving operations on that basis has the potential to create new value, such as cost reduction and greater efficiency.
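
As a simplified illustration of this cost "visualization," the sketch below aggregates per-device costs for one case from usage counts captured during surgery; the device names and unit costs are illustrative figures only.

# Hypothetical sketch of visualizing costs from device-usage data.
DEVICE_UNIT_COST = {"stapler cartridge": 150.0, "energy device": 400.0, "trocar": 80.0}

def visualize_costs(usage):
    """Aggregate per-device costs and a total for one case."""
    per_device = {name: count * DEVICE_UNIT_COST[name] for name, count in usage.items()}
    per_device["total"] = sum(per_device.values())
    return per_device

print(visualize_costs({"stapler cartridge": 3, "energy device": 1, "trocar": 4}))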

The current prototype still has many functions that must be improved, and some functions are only mock-ups. However, we know the direction in which to take the project and the approach with which to implement it.

We will proceed with the development project in cooperation with partner organizations and companies while keeping in mind that this platform will be able to contribute to medical services.

ANSWERS BEYOND SIGHT

Your collaboration proposals are welcome.