Invention for Virtual Reality Training, Simulation, and Collaboration in a Robotic Surgical System

Invented by Pablo Eduardo Garcia Kilroy, Eric Mark Johnson, Bernard Fai Kin Siu, Haoran Yu, Verb Surgical Inc

The market for virtual reality (VR) training, simulation, and collaboration in a robotic surgical system is rapidly expanding, revolutionizing the way surgeons are trained and improving patient outcomes. With the advancements in technology and the increasing demand for minimally invasive surgeries, the integration of VR in robotic surgical systems has become a game-changer in the medical field.

Robotic surgical systems, such as the da Vinci Surgical System, have been widely adopted in hospitals worldwide. These systems allow surgeons to perform complex procedures with enhanced precision, control, and flexibility. However, the learning curve for surgeons to master these systems can be steep, requiring extensive training and practice. This is where VR training and simulation come into play.

Virtual reality provides a realistic and immersive environment for surgeons to practice their skills before performing actual surgeries. Surgeons can use VR to simulate various surgical scenarios, allowing them to gain confidence and expertise in using the robotic system. They can practice manipulating the robotic arms, using the surgical instruments, and performing intricate maneuvers, all in a risk-free virtual environment.

One of the key advantages of VR training is the ability to recreate complex anatomical structures. Surgeons can visualize and interact with 3D models of organs, tissues, and blood vessels, gaining a better understanding of the patient's anatomy. This level of realism helps surgeons plan and strategize their surgical approach, reducing the risk of complications during the actual procedure.

Collaboration is another significant aspect of VR in robotic surgical systems. Surgeons can connect with colleagues and experts in different locations, enabling real-time collaboration and guidance during surgeries. This feature is particularly beneficial in remote areas where access to specialized surgical expertise may be limited. Surgeons can receive guidance, share their screens, and communicate with other professionals, enhancing the overall quality of care.

The market for VR training, simulation, and collaboration in robotic surgical systems is projected to witness substantial growth in the coming years. According to a report by MarketsandMarkets, the global market for surgical simulators is expected to reach $2.81 billion by 2022, with a compound annual growth rate of 15.2%. The increasing adoption of robotic surgical systems, coupled with the need for efficient training methods, is driving this growth.

Several companies are actively developing VR solutions for robotic surgical training and simulation. For example, FundamentalVR offers a haptic VR platform that combines realistic visuals with touch feedback, allowing surgeons to feel the sensation of operating on virtual patients. Osso VR provides a VR training platform specifically designed for orthopedic surgeons, enabling them to practice procedures and improve their skills.

In conclusion, the market for VR training, simulation, and collaboration in robotic surgical systems is witnessing significant growth and innovation. The integration of VR technology in these systems is revolutionizing surgical training, enabling surgeons to gain expertise in a risk-free environment. The ability to simulate complex surgical scenarios and collaborate with experts remotely is improving patient outcomes and expanding access to specialized surgical care. As technology continues to advance, the potential for VR in robotic surgical systems is vast, promising a future where surgeons are better equipped to provide the highest standard of care.

The Verb Surgical Inc invention works as follows

Described herein are a virtual reality system that provides a virtual robotic surgical environment and methods of using it. The virtual reality system offers different user modes that allow different types of interaction between the user and the virtual robotic surgical environment. One variation of a method for navigating the virtual robotic surgical environment involves displaying a first-person perspective view of the environment from a first vantage point, displaying a first window view of the environment from a second vantage point, and displaying a second window view from a third vantage point. In response to a user input associating the first and second window views, a trajectory between the second and third vantage points can be generated by sequentially linking the first and second window views.

Background for Virtual Reality Training, Simulation, and Collaboration in a Robotic Surgical System

Laparoscopic surgery is a form of minimally invasive surgery (MIS) that uses techniques to minimize tissue damage. Laparoscopic procedures, for example, typically involve making a few small incisions in the patient's abdomen and inserting one or more surgical instruments (e.g., an end effector, a camera) through the incisions into the patient. The camera can then be used to visualize the surgical procedure as it is performed using the introduced surgical instruments.

In general, MIS offers multiple benefits, such as reduced scarring and pain for patients, shorter recovery times, and lower medical costs associated with recovery. In some embodiments of MIS, one or more robotic arm systems are used to manipulate surgical instruments in response to commands from an operator. A robotic arm can, for instance, support various devices at its distal end, such as imaging devices, surgical end effectors, and cannulae that provide access to a patient's body cavity and organs.

Robotic surgical systems are complex systems that perform complex procedures. To operate a robotic surgical system successfully, an operator (e.g., a surgeon) may need significant training and/or experience. Such training and experience allows MIS procedures to be planned more effectively (e.g., determining the optimal number, orientation, and location of robotic arms; determining the optimal number and location of incisions; selecting optimal sizes and types of surgical instruments; and determining the order of actions during a procedure).

The design of robotic surgical systems is also complex. For example, improvements in hardware, such as robotic arms, are prototyped physically and then tested, and improvements in software, such as control algorithms for robotic arms, may likewise require physical embodiments. This cycle of prototyping and testing is usually expensive and time-consuming.

In general, a system that provides a virtual robotic surgical environment can include a virtual reality processor (e.g., a processor in a computer executing instructions stored in memory), a head-mounted display wearable by the user, and one or more handheld controllers that the user can manipulate to interact with the virtual robotic surgical environment. In some variations, the virtual reality processor can be configured to generate the virtual robotic surgical environment from at least one configuration file describing a virtual component (e.g., a virtual robotic component) in the virtual environment. The head-mounted display may include an immersive display for presenting the virtual robotic surgical environment to the user (e.g., as a first-person perspective view of the environment). In certain variations, the virtual reality system can additionally include an external display to show the virtual robotic surgical environment; if both displays are present, they can be synchronized to show the same or similar content. The virtual reality system can thus generate a virtual robotic surgical environment in which the user can navigate a virtual operating room and interact with virtual objects via the head-mounted display and/or handheld controllers. The virtual reality system, including variations described further herein, may be useful in applications relating to robotic surgery, such as training, simulation, and/or collaboration among multiple persons.
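The patent does not disclose a configuration file format, but the idea of building the virtual environment from a file describing its components can be sketched as follows. JSON, and all the field and function names below, are illustrative assumptions.

```python
import json

# Hypothetical configuration describing virtual components of the
# environment; the format and fields here are assumptions, not the
# patent's actual schema.
CONFIG_JSON = """
{
  "operating_room": {"width_m": 6.0, "depth_m": 5.0},
  "components": [
    {"type": "robotic_arm", "id": "arm1", "num_joints": 7,
     "base_position": [1.2, 0.0, 0.8]},
    {"type": "patient_table", "id": "table1",
     "base_position": [0.0, 0.0, 0.7]}
  ]
}
"""

def load_virtual_environment(config_text):
    """Parse a configuration file into a dict of virtual components keyed by id."""
    config = json.loads(config_text)
    return {c["id"]: c for c in config["components"]}

env = load_virtual_environment(CONFIG_JSON)
print(env["arm1"]["num_joints"])  # 7: each component is addressable by id
```

A real system would map such entries onto renderable 3D models; the point here is only that each virtual component is declared once in a file and instantiated by the processor.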

In some variations, the virtual reality system may interface with a real (non-virtual) operating room, enabling visualization of the robotic surgical environment. Such a system may include a virtual reality processor configured to generate a virtual robotic surgical environment comprising at least one virtual robotic component, and at least one sensor in the robotic surgical environment. The sensor is in communication with the virtual reality processor and configured to detect the status of the robotic component corresponding to the virtual robotic component. The virtual reality processor can receive the detected status of the robotic component and modify the virtual robotic component, based at least in part on the detected status, so that the virtual robotic component mimics the robotic component.

The user can thus interact with a virtual environment that reflects the conditions of a real operating room. For example, the detected positions of robotic components during a surgical procedure can be compared against their expected positions, as determined by surgical preplanning in a virtual setting, and any discrepancy may prompt the surgeon to make adjustments to avoid collisions.

In some variations, one or more sensors can be configured to detect characteristics of a robotic component, such as its position, orientation, or speed. For example, one or more sensors in the robotic surgical environment could be configured to detect the position and/or orientation of a robotic component such as a robotic arm. The virtual reality processor can use the detected position and orientation to move or otherwise modify the virtual robotic arm corresponding to the real robotic arm, so that a user viewing the virtual robotic surgical environment sees the adjusted virtual robotic arm. As another example, one or more sensors can be configured to detect a collision involving the robotic component, and the system may then alert the user to the collision.
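The sensor-to-virtual-component mirroring described above can be sketched minimally as below. The class, function names, and the distance-based collision check are assumptions for illustration; the patent does not specify how collisions are detected.

```python
# Mirror a detected robotic-arm state onto its virtual counterpart.
class VirtualArm:
    def __init__(self):
        self.position = (0.0, 0.0, 0.0)
        self.orientation = (0.0, 0.0, 0.0)  # roll, pitch, yaw in radians

def on_sensor_reading(virtual_arm, detected_position, detected_orientation):
    """Update the virtual arm so it mimics the detected state of the real arm."""
    virtual_arm.position = detected_position
    virtual_arm.orientation = detected_orientation

def check_collision(position, obstacle_position, min_clearance=0.05):
    """Flag an alert if the arm comes within a minimum clearance of an obstacle."""
    dist = sum((a - b) ** 2 for a, b in zip(position, obstacle_position)) ** 0.5
    return dist < min_clearance

arm = VirtualArm()
on_sensor_reading(arm, (0.5, 0.2, 0.9), (0.0, 0.1, 0.0))
print(check_collision(arm.position, (0.52, 0.2, 0.9)))  # True: within 5 cm
```

In practice the reading would arrive over a sensor bus and the collision test would use the arm's full geometry, but the update-then-check loop is the essential pattern.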

The virtual reality system allows different types of interaction between the user and the virtual robotic surgical environment. One variation of a method for navigating the virtual robotic surgical environment involves displaying a first-person perspective view of the environment from a first vantage point, displaying a first window view of the environment from a second vantage point, and displaying a second window view from a third vantage point. The first and second window views can be displayed in respective regions of the first-person perspective view. In addition, the method can include, in response to a user input associating the first and second window views, sequentially linking them to generate a trajectory between the second and third vantage points. Window views of the virtual robotic surgical environment can be displayed at different scale factors (e.g., zoom levels).

The method can further include displaying a first-person perspective view of the virtual environment from the vantage point of a selected window view, in response to a user input indicating the selection. The window views can thus function as portals facilitating transportation between different vantage points within the virtual environment.
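The portal navigation described above, with window views linked into a trajectory, can be sketched as follows. The data structures and function names are illustrative assumptions, not the patent's implementation.

```python
# Window views as portals: linking views builds a trajectory of vantage
# points; selecting a view teleports the first-person perspective.
class WindowView:
    def __init__(self, vantage_point):
        self.vantage_point = vantage_point  # (x, y, z) in the virtual room

def link_views(trajectory, first_view, second_view):
    """Sequentially link window views into a trajectory of vantage points."""
    for view in (first_view, second_view):
        if not trajectory or trajectory[-1] != view.vantage_point:
            trajectory.append(view.vantage_point)
    return trajectory

def teleport(window_view):
    """Selecting a window view moves the first-person view to its vantage point."""
    return window_view.vantage_point

first = WindowView((2.0, 1.0, 1.7))   # second vantage point
second = WindowView((4.0, 3.0, 1.7))  # third vantage point
path = link_views([], first, second)
print(path)  # [(2.0, 1.0, 1.7), (4.0, 3.0, 1.7)]
```

Repeated linking extends the trajectory, so a user could chain several window views into a tour of the virtual operating room.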

As another example of interaction between the user and the virtual surgical environment, one variation of a method for facilitating visualization of the environment includes displaying a first-person perspective view of the environment from a first vantage point within the environment, receiving a user input indicating placement of a virtual camera at a second vantage point within the environment, generating a virtual camera perspective view of the environment from the second vantage point, and displaying that camera view in a region of the first-person perspective view. The camera view can, for instance, give the user a supplemental view of the virtual environment, allowing the user to monitor various aspects of the environment simultaneously while maintaining primary focus on the main first-person view. In some variations, the method may include receiving a user input selecting a virtual camera type (e.g., a movie camera configured for placement outside a virtual patient, or an endoscopic or 360-degree camera configured for placement inside the virtual patient). The method may also include displaying a virtual model of the selected virtual camera type at the second vantage point within the virtual robotic surgical environment. Other examples of user interaction with the virtual environment are also described herein.
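The camera placement and picture-in-picture display can be sketched as below. The camera type names come from the text; the dict-based frame composition and inset region are assumptions.

```python
# Place a virtual camera at a second vantage point and compose its view
# as an inset region of the primary first-person display.
CAMERA_TYPES = {"movie", "endoscopic", "360"}

def place_camera(camera_type, vantage_point):
    if camera_type not in CAMERA_TYPES:
        raise ValueError(f"unknown camera type: {camera_type}")
    return {"type": camera_type, "vantage": vantage_point}

def render_frame(primary_vantage, camera):
    """Compose the primary first-person view with the camera view inset."""
    return {
        "primary_view_from": primary_vantage,
        "inset": {"view_from": camera["vantage"], "region": "bottom_right"},
    }

cam = place_camera("endoscopic", (0.1, 0.0, 0.9))   # inside the virtual patient
frame = render_frame((2.0, 2.0, 1.7), cam)
print(frame["inset"]["view_from"])  # (0.1, 0.0, 0.9)
```

A renderer would rasterize both viewpoints each frame; the sketch only captures that the two views come from independent vantage points and share one display.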

In another variation, the virtual reality system may simulate a robotic surgical environment in which a user operates both a robotically controlled surgical instrument, using a handheld controller, and a manual laparoscopic surgical instrument (e.g., while adjacent a patient table, or "over the bed"). For example, a virtual reality system simulating robotic surgery may include a virtual reality controller configured to generate a virtual robotic surgical environment comprising at least one virtual robotic arm and at least one virtual manual laparoscopic tool. A first handheld device coupled to the virtual reality controller is used to manipulate the at least one virtual robotic arm in the simulated environment, while a second handheld device with a tool feature controls the virtual manual laparoscopic tool. In some variations, for example, the tool feature can include a tool shaft and a shaft adapter connecting the tool shaft to the handheld portion of the second handheld device. The second handheld device can be either a functional manual laparoscopic instrument or a mock-up (e.g., a facsimile or genericized version) whose movements of the tool feature may be mapped to corresponding movements of the virtual manual laparoscopic tool.

The second handheld device can be modular. For example, the tool feature may be removable from the handheld portion of the second handheld device, thereby enabling the second handheld device to function as a laparoscopic handheld device (for controlling a virtual manual laparoscopic tool) when the tool feature is attached to the handheld portion, as well as a non-laparoscopic handheld device (e.g., for controlling a robotically controlled tool or robotic arm) when the tool feature is detached from the handheld portion. In some variations, the handheld portion of the second handheld device can be substantially identical to the first handheld device.

The handheld portion of the second handheld device can include an interactive feature, such as a button or trigger, that activates a function of the virtual manual laparoscopic tool when the user engages the interactive feature. For example, in a variation in which the virtual manual tool is a virtual laparoscopic stapler, the trigger on the handheld portion may be mapped to firing a virtual staple in the virtual environment. Other aspects of the system can further approximate the setup of the manual tool in the virtual environment. For example, the virtual reality system could include a mock patient abdomen with a cannula configured to receive the tool feature of the second handheld device, thereby further simulating the feel of a manual laparoscopic instrument.
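The mapping of an interactive feature to a virtual tool function can be sketched as a dispatch table. The stapler class, mapping table, and handler are hypothetical names for illustration.

```python
# Map an engaged controller feature (e.g., a trigger) to a function of the
# virtual laparoscopic tool, here a virtual stapler.
class VirtualStapler:
    def __init__(self):
        self.staples_fired = 0

    def fire_staple(self):
        self.staples_fired += 1

def handle_controller_event(tool, mapping, feature):
    """Dispatch an engaged controller feature to its mapped virtual action."""
    action = mapping.get(feature)
    if action is not None:
        action(tool)

stapler = VirtualStapler()
mapping = {"trigger": VirtualStapler.fire_staple}  # trigger -> fire a staple
handle_controller_event(stapler, mapping, "trigger")
handle_controller_event(stapler, mapping, "unmapped_button")  # silently ignored
print(stapler.staples_fired)  # 1
```

Keeping the mapping as data rather than hard-coded branches makes it easy to rebind the same handheld portion to different virtual tools, which matches the modular design described above.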

Generally, a computer-implemented method for operating a virtual robotic surgical environment includes generating, with a client application, a virtual robotic surgical environment comprising at least one virtual robotic component, and passing information between the client application and a server application to effect movements of the virtual robotic component. In response to a user input to move the at least one virtual robotic component in the virtual surgical environment, the method can include passing status information about the virtual robotic component from the client application to the server application, generating an actuation command with the server application based on the status information and the user input, and then moving the virtual robotic component according to the actuation command. The client application and server application can run on a shared processor device or on separate processor devices.
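The client/server split described above can be sketched with two plain functions standing in for the two applications. The message shapes are assumptions; the patent does not specify them.

```python
# Client application owns the virtual environment and reports status;
# server application turns status plus user input into actuation commands.
def server_generate_actuation(status, user_input):
    """Server application: derive per-joint targets from status and input."""
    delta = user_input["joint_delta"]
    return {"joint_targets": [q + d for q, d in zip(status["joint_angles"], delta)]}

def client_apply_actuation(status, command):
    """Client application: move the virtual component per the actuation command."""
    status["joint_angles"] = command["joint_targets"]
    return status

status = {"component": "arm1", "joint_angles": [0.0, 0.5, 1.0]}
user_input = {"joint_delta": [0.25, 0.0, -0.5]}

command = server_generate_actuation(status, user_input)  # client -> server
status = client_apply_actuation(status, command)         # server -> client
print(status["joint_angles"])  # [0.25, 0.5, 0.5]
```

Because the two sides only exchange status and command messages, the same loop works whether the applications share a processor or run on separate devices, as the text notes.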

In some variations, an application programming interface (API) may be invoked to facilitate communication between the client and server applications. The API can include definitions of data structures for virtual robotic components and other virtual components in the virtual environment, such as a virtual robotic arm, virtual robotic arm segments (e.g., links), a virtual patient table, a virtual cannula, and/or a virtual surgical instrument. The API can also include a data structure for a virtual touchpoint that allows manipulation of a virtual robotic component, such as a virtual robotic arm, or of another virtual component.
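The patent names these data structures but not their fields, so the following sketch invents plausible fields purely for illustration.

```python
# Hypothetical API data structures for virtual components; field names
# are assumptions, only the structure names come from the text.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class VirtualTouchpoint:
    """A point on a virtual component a user can grab to manipulate it."""
    position: Tuple[float, float, float]
    owner_id: str

@dataclass
class VirtualArmLink:
    """One segment (link) of a virtual robotic arm."""
    length_m: float
    joint_angle: float = 0.0

@dataclass
class VirtualRoboticArm:
    arm_id: str
    links: List[VirtualArmLink] = field(default_factory=list)
    touchpoints: List[VirtualTouchpoint] = field(default_factory=list)

arm = VirtualRoboticArm(
    arm_id="arm1",
    links=[VirtualArmLink(0.4), VirtualArmLink(0.3)],
    touchpoints=[VirtualTouchpoint((0.0, 0.0, 0.7), "arm1")],
)
print(len(arm.links))  # 2
```

Shared struct definitions like these are what let the client and server applications exchange status and actuation messages without ambiguity.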

For instance, the method could include passing status information regarding a virtual robotic arm, such as its position and orientation (e.g., the pose of the virtual robotic arm). The client application passes this status information to the server application, which then generates an actuation command based on kinematics associated with the virtual robotic arm.

As described herein, the virtual reality system has many applications and uses. In one variation, the virtual reality system can be used to accelerate the R&D cycle during development of a robotic surgical system by allowing candidate designs to be simulated without building physical prototypes. A method for designing a robotic surgical system can include, for example, generating a virtual model of the robotic surgical system, testing the virtual model in a virtual operating room environment, modifying the virtual model based on the test results, and then producing a real model based on the modified virtual model. The virtual model can be tested by performing a simulated surgical procedure using a virtual robotic arm and a virtual surgical instrument attached to the arm, for example through the client application described herein. During testing, the system can detect a collision event involving the virtual robotic arm and, in response, modify the virtual model. The modified virtual model can then be tested again to confirm whether the modification reduced the likelihood of the collision event occurring during the virtual surgical procedure. In this way, robotic surgical system designs can be tested and iterated before physical prototypes are built.

In another variation, the virtual reality system can be used to test a control mode for a robotic surgical component. For example, a method for testing a control mode can include generating a virtual robotic surgical environment comprising at least one virtual robotic component corresponding to the robotic surgical component, emulating the control mode in the virtual environment, and moving the virtual robotic component in accordance with the emulated control mode in response to user input. In some variations, moving the virtual component can include passing status information about the at least one virtual component from a first application (a virtual operating environment application) to a second application (a kinematics application), generating an actuation command with the kinematics application based on the status information and the emulated control mode, passing the actuation command from the second application back to the first application, and then moving the at least one virtual component in the virtual environment according to the actuation command.
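The two-application test loop can be sketched as below. The specific control modes and function names are assumptions chosen for illustration, not modes named in the text.

```python
# Virtual operating environment app passes status to a kinematics app,
# which returns an actuation command under the emulated control mode.
def kinematics_app(status, control_mode, user_input):
    """Generate an actuation command from status under the emulated mode."""
    if control_mode == "hold":
        return {"joint_targets": status["joint_angles"]}  # hold current pose
    if control_mode == "jog":
        return {"joint_targets": [q + user_input["step"]
                                  for q in status["joint_angles"]]}
    raise ValueError(f"unsupported control mode: {control_mode}")

def environment_app_step(status, control_mode, user_input):
    command = kinematics_app(status, control_mode, user_input)  # app 1 -> app 2
    status["joint_angles"] = command["joint_targets"]           # app 2 -> app 1
    return status

status = {"joint_angles": [0.0, 0.25, 0.5]}
status = environment_app_step(status, "jog", {"step": 0.25})
print(status["joint_angles"])  # [0.25, 0.5, 0.75]
```

Swapping the body of `kinematics_app` is all it takes to emulate a different control mode against the same virtual environment, which is the point of the split.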

For instance, the control mode to be tested could be a trajectory-following control mode for a robotic arm, in which the virtual reality system emulates the movement of the arm along a predefined trajectory. When emulating a trajectory-following control mode, the kinematics application may generate an actuation command comprising actuated commands for each of a plurality of virtual joints in the virtual robotic arm. The virtual operating environment application can then implement this set of actuated commands to move the virtual robotic arm in the virtual space, thereby allowing testing of collisions, volume or workspace of movement, and so on.
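Per-joint command generation for trajectory following can be sketched as below. Linear interpolation between poses is an assumption; the patent does not specify how the trajectory is parameterized.

```python
# Emulated trajectory following: generate one per-joint command set for
# each step along a trajectory from a start pose to an end pose.
def trajectory_commands(start_angles, end_angles, num_steps):
    """Yield per-joint targets for each step along the trajectory."""
    for step in range(1, num_steps + 1):
        t = step / num_steps  # fraction of the trajectory completed
        yield [a + (b - a) * t for a, b in zip(start_angles, end_angles)]

start = [0.0, 0.0, 0.0]
end = [1.0, 0.5, -0.5]
path = list(trajectory_commands(start, end, 4))
print(len(path))  # 4 command sets, one per step
print(path[-1])   # final step reaches the target pose: [1.0, 0.5, -0.5]
```

Each yielded command set is what the virtual operating environment application would apply in turn, checking for collisions or workspace violations at every step.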

The following sections describe in detail further variations and examples of virtual reality systems, their user interfaces and interaction modes, and the applications and uses of such systems.
