Abstract
Laparoscopic surgery is becoming popular as a minimally invasive operation and is widely used for many kinds of surgical procedures. In this surgery, the surgeon inserts an endoscopic camera, which has a limited viewing volume, and forceps into the patient's body. Support systems such as surgical navigation are therefore needed.
To this end, we are studying an AR laparoscopic surgery navigation system that supports the surgeon in finding target organs by using a semi-automatic registration method to produce an overlay video image of virtual organs on real organs. In this paper, we focus on a guidance method that uses the endoscopic view image and the endoscope position. We introduce our registration method for the position and orientation of the endoscope camera. Our method tracks the tool's position and orientation using markers, and uses the camera reference frame derived from this pose to generate the overlay image of virtual and real organs. The results suggest that our method has potential as an efficient guidance method for an AR surgical navigation system.
1 Introduction
Laparoscopic surgery is becoming popular as a minimally invasive operation and is widely used for many kinds of surgical procedures. It allows the surgeon to access the target organs in the patient's body without making large incisions in the skin, which reduces post-procedure complications and post-operative trauma for the patient. However, there is a higher risk of damaging internal organs, nerves, and major blood vessels such as arteries and veins, because the surgeon must operate within a limited manipulation space and viewing angle, seeing only what the endoscope camera shows. Therefore, effective systems are needed both for surgical training and for preoperative support of the surgeon, such as navigation systems.
There has been much research on laparoscopic surgery simulation and navigation systems. Some surgical simulation studies focus on very specific topics, and such simulators are beginning to appear on the market [1,2,3,4,5]. Research on navigation systems for laparoscopic surgery has also produced many useful results [6,7,8], and some systems have reached the market. The most influential laparoscopic surgery systems, such as the da Vinci [9], aim to support the entire surgical operation; they are not designed to help the surgeon find the affected part of an organ, or the nerves and blood vessels around the target organ. Other systems focus on guiding the surgeon to these structures using VR/AR technology [7, 8]. However, most of them require the operator to manually adjust the registration between virtual and real organs in the endoscopic view image.
Therefore, we aim to develop an AR laparoscopic surgery navigation system that supports the surgeon in finding these structures by using a semi-automatic registration method to produce an overlay video image of virtual organs on real organs. In particular, we focus on a guidance method that uses the endoscopic view image and the endoscope camera position. In this paper, we introduce our method for measuring the position and orientation of the endoscope camera, which uses specific markers for tracking. We also introduce a prototype system that allows a user to operate the endoscope camera while viewing the overlay image of virtual and real organs produced by our method.
2 Guidance Method
2.1 Overview
An overview of our proposed method is shown in Fig. 1. First, the system measures the position and orientation of the endoscope camera during laparoscopic surgery. Next, it generates a 3D virtual scene containing a 3D CG organ model. Finally, the system displays the 3D CG organ model overlaid on the video image from the camera. Because the endoscope camera is inside the patient's body, it cannot be measured directly from outside. Therefore, our system measures the camera's position and orientation via the opposite end of the endoscope, which remains visible outside the body.
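The overlay step described above amounts to projecting points of the virtual organ model into the endoscope image using the measured camera pose. The following is a minimal sketch assuming a simple pinhole camera model; all names and parameter values here are illustrative, not taken from the paper:

```python
import numpy as np

def project_points(model_pts, R_cam, t_cam, f, cx, cy):
    """Project 3D model points (world frame) into the image of a
    pinhole camera with orientation R_cam, position t_cam, focal
    length f, and principal point (cx, cy)."""
    # Transform world points into the camera frame: p_c = R^T (p_w - t)
    pts_c = (model_pts - t_cam) @ R_cam
    # Perspective division onto the image plane
    u = f * pts_c[:, 0] / pts_c[:, 2] + cx
    v = f * pts_c[:, 1] / pts_c[:, 2] + cy
    return np.stack([u, v], axis=1)

# Example: camera at the origin looking down +Z, one point 100 mm ahead
pts = np.array([[0.0, 0.0, 100.0]])
uv = project_points(pts, np.eye(3), np.zeros(3), f=500.0, cx=360.0, cy=240.0)
print(uv)  # the on-axis point lands at the principal point (360, 240)
```

A real system would render the full organ mesh with this camera pose and alpha-blend it over the captured video frame.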
2.2 Tracking Method
Two kinds of tracking methods are in general use. One uses unique markers and detects their position and orientation. The other uses no markers and relies purely on image processing, such as feature detection. The markerless approach is useful for tracking existing tools such as an endoscopic camera, but it costs more to detect the object precisely than a marker-based method. Therefore, our system uses marker-based tracking to keep the tracking robust.
Figure 2 shows the marker and the prototype endoscopic camera tool used in our system. The design aims for simplicity and robustness. It uses a hexagonal mount for attaching the markers, so that at least two markers can always be detected when measuring the object's position and rotation. The mount was fabricated precisely in advance using a 3D CAD system and a 3D printer. Figure 3 shows the mount used in our method.
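The "at least two markers" property of a hexagonal mount can be checked geometrically. Assuming markers on six faces with outward normals 60° apart (an assumption about the mount layout, not stated explicitly in the paper), any in-plane viewing direction is within 60° of at least two face normals:

```python
import math

def visible_faces(view_deg, max_off_axis_deg=60.0):
    """Count hexagon faces whose outward normal is within
    max_off_axis_deg of the direction toward the camera."""
    count = 0
    for k in range(6):
        normal_deg = 60.0 * k  # six outward normals, 60 degrees apart
        # Smallest angular difference between view direction and normal
        diff = abs((view_deg - normal_deg + 180.0) % 360.0 - 180.0)
        if diff <= max_off_axis_deg:
            count += 1
    return count

# Sweep viewing directions around the mount in 0.1-degree steps
assert all(visible_faces(a / 10.0) >= 2 for a in range(3600))
print("at least two faces detectable from every in-plane direction")
```

This is why a hexagonal prism is a convenient shape: between any two adjacent normals the view direction is at most 60° from each, keeping both markers at a detectable angle.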
To detect the camera position and orientation precisely during the operation, our method defines a relative vector between the markers and the camera device, which is at the tip of the endoscope [10]. Figure 4 shows an overview of the coordinate systems.
Our registration method assumes a tracking-device coordinate system \( \Sigma _{d} \) and an endoscopic-camera coordinate system \( \Sigma _{c} \). To acquire the relative vector, the endoscopic camera is first set at the origin \( P_{table}^{d} \) of the fixed marker \( M_{table} \) on a flat table. The position \( P_{cam}^{d} \) and orientation \( R_{cam}^{d} \) of the marker \( M_{cam} \) attached to the camera are measured in \( \Sigma _{d} \). The relative vector \( P_{rel}^{c} \) is then calculated by
in \( \Sigma _{c} \). To convert \( P_{rel}^{c} \) to \( P_{rel}^{d} \) in \( \Sigma _{d} \),
Therefore, the position \( P_{tip}^{d} \) of the camera device at the tip of the endoscope is calculated by
The orientation \( R_{tip}^{d} \) of the camera device at the tip of the endoscope is calculated by
where \( R_{n}^{c} \) is a constant matrix determined for each marker.
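Under standard rigid-body assumptions, the calibration and per-frame computation described above can be sketched as follows. The exact equation forms are assumptions consistent with the surrounding definitions, not formulas taken verbatim from the paper:

```python
import numpy as np

def calibrate_relative_vector(P_table_d, P_cam_d, R_cam_d):
    """Calibration step: with the camera tip held at the fixed-marker
    origin P_table_d, express the tip's offset from the camera marker
    in the marker's own frame (assumed form: R^T (P_table - P_cam))."""
    return R_cam_d.T @ (P_table_d - P_cam_d)

def tip_pose(P_cam_d, R_cam_d, P_rel_c, R_n_c):
    """Per-frame step: rotate the stored relative vector into the
    tracking-device frame and add the marker position; the tip
    orientation is the marker orientation composed with the constant
    per-marker matrix R_n_c."""
    P_rel_d = R_cam_d @ P_rel_c   # relative vector expressed in Sigma_d
    P_tip_d = P_cam_d + P_rel_d   # tip position in Sigma_d
    R_tip_d = R_cam_d @ R_n_c     # tip orientation in Sigma_d
    return P_tip_d, R_tip_d

# Round trip: calibrate, then recover the tip at the calibration pose
P_table = np.array([0.0, 0.0, 0.0])
P_cam = np.array([10.0, 0.0, 0.0])
R_cam = np.eye(3)
P_rel = calibrate_relative_vector(P_table, P_cam, R_cam)
P_tip, _ = tip_pose(P_cam, R_cam, P_rel, np.eye(3))
print(P_tip)  # recovers the calibration point (0, 0, 0)
```

Because the offset is stored in the marker's frame, it stays valid as the endoscope is moved and rotated during the operation.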
3 Implementation Result
3.1 Implementation
We implemented a prototype system incorporating our proposed method and conducted a preliminary experiment. The configuration of our prototype system is as follows.
Computer.
- CPU: Intel Core i7-4710MQ, 2.50 GHz
- Memory: 32 GB
- OS: Microsoft Windows 8.1 x64

Tracking device.
- Model: Claron Technology MicronTracker 3 (H3-60)
- Measurement rate: 16 Hz
- Sensor resolution: 1280 × 960 pixels
- Lens: 6 mm, 50 × 38 degrees
- Accuracy of single marker: 0.20 mm RMS (20,000 averaged positions at depths of 40–100 cm)

Prototype endoscopic camera.
- Camera model: AVC-301B1
- Camera resolution: 2.5 megapixels
- Camera lens: 70 degrees
- Camera size: 12 mm × 12 mm
- Video capture (VC) model: I-O Data Inc. GV-USB2
- VC resolution: 720 × 480 pixels
- VC capture frame rate: 30 fps
The system overview is shown in Fig. 5. The system uses a training tool for laparoscopic surgery and a 3D organ model designed from CT scan data. The tracking device is installed so that it sees the target object from above.
3.2 Experiment
The purpose of this experiment is to confirm the alignment accuracy between the actual and the measured position and orientation. Figure 6 shows the experimental environment.
The experimental task for position is to measure 4 points placed at equal intervals of 50 mm on each orthogonal axis. As a result, the average deviation of position was approximately 0.81 mm, and the average deviation of orientation was approximately 0.75°.
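A natural way to obtain such a figure is the mean Euclidean distance between the commanded grid points and the measured ones. The following is a minimal sketch with made-up sample data; the numbers here are illustrative only and do not reproduce the paper's measurements:

```python
import math

def average_deviation(true_pts, measured_pts):
    """Mean Euclidean distance between paired 3D points (in mm)."""
    dists = [math.dist(t, m) for t, m in zip(true_pts, measured_pts)]
    return sum(dists) / len(dists)

# Illustrative data: 4 points at 50 mm intervals on one axis, each
# measured with a uniform 0.8 mm offset (values are not from the paper)
true_pts = [(50.0 * i, 0.0, 0.0) for i in range(4)]
measured = [(50.0 * i + 0.8, 0.0, 0.0) for i in range(4)]
print(round(average_deviation(true_pts, measured), 2))  # 0.8
```

The orientation deviation would be computed analogously, e.g. as the mean angular difference between measured and reference rotations.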
3.3 Pilot Operation for Laparoscopic Surgery Navigation
We confirmed the pilot operation of a laparoscopic surgery navigation task using our prototype system. Figure 7 shows an example operation scene. The system uses a kidney model produced on a 3D printer from CT scan data of an actual patient. While the user operates the endoscopic camera, the system generates an overlay video image that superimposes the 3D kidney model on the video image from the endoscope camera. An example of the overlay image is shown in Fig. 8.
4 Conclusion
We introduced our study of a guidance method for developing an AR laparoscopic surgery navigation system. We proposed a measurement method for the endoscope camera that can track its position and orientation during a surgical operation, using specific markers placed on a hexagonal mount attached to the endoscope. We developed a prototype navigation system based on our measurement method and confirmed the method's accuracy through an experiment. Furthermore, we confirmed the potential effectiveness of the guidance method, which generates an overlay video image showing a 3D virtual kidney model on the actual kidney model.
In future work, we will conduct more detailed experiments, improve the accuracy of the overlay video image, and plan an experiment in the context of an actual surgical operation.
References
Qian, K., Bai, J., Yang, X., Pan, J., Zhang, J.: Virtual reality based laparoscopic surgery simulation. In: Proceedings of the 21st ACM Symposium on Virtual Reality Software and Technology, pp. 69–78 (2015). http://doi.org/10.1145/2821592.2821599
Coles, T.R., Meglan, D., John, N.W.: The role of haptics in medical training simulators: A survey of the state of the art. IEEE Trans. Haptics 4(1), 51–66 (2011). http://doi.org/10.1109/TOH.2010.19
Kalavakonda, N., Chandra, S., Thondiyath, A.: Development of virtual reality based robotic surgical trainer for patient-specific deformable anatomy. In: Proceedings of the 2015 Conference on Advances in Robotics - AIR 2015, pp. 1–5 (2015). http://doi.org/10.1145/2783449.2783465
Delingette, H., Ayache, N.: Hepatic surgery simulation. Commun. ACM 48(2), 31–36 (2005). http://doi.org/10.1145/1042091.1042116
Nicolau, S.A., Goffin, L., Soler, L.: A low cost and accurate guidance system for laparoscopic surgery. In: Proceedings of the ACM Symposium on Virtual Reality Software and Technology - VRST 2005, pp. 124–133 (2005). http://doi.org/10.1145/1101616.1101642
Desai, J.P., Tholey, G., Kennedy, C.W.: Haptic feedback system for robot-assisted surgery. In: PerMIS 2007 - Workshop on Performance Metrics for Intelligent Systems, pp. 188–195 (2007). http://doi.org/10.1145/1660877.1660904
Hughes-Hallett, A., Mayer, E.K., Marcus, H.J., Cundy, T.P., Pratt, P.J., Darzi, A.W., Vale, J.A.: Augmented reality partial nephrectomy: Examining the current status and future perspectives. Urology 83(2), 266–273 (2014). http://doi.org/10.1016/j.urology.2013.08.049
Su, L.M., Vagvolgyi, B.P., Agarwal, R., Reiley, C.E., Taylor, R.H., Hager, G.D.: Augmented reality during robot-assisted laparoscopic partial nephrectomy toward real-time 3D-CT to stereoscopic video registration. Urology 73(4), 896–900 (2009). http://doi.org/10.1016/j.urology.2008.11.040
Intuitive Surgical - Products: http://www.intuitivesurgical.com/products/
Koeda, M., et al.: Depth camera calibration and knife tip position estimation for liver surgery support system. In: Stephanidis, C. (ed.) HCI 2015. CCIS, vol. 528, pp. 496–502. Springer, Cham (2015). doi:10.1007/978-3-319-21380-4_84
Acknowledgement
This work was supported by JSPS KAKENHI Grant Number JP15K00291.
Cite this paper
Onishi, K., Miki, Y., Okuda, K., Koeda, M., Noborio, H. (2017). A Study of Guidance Method for AR Laparoscopic Surgery Navigation System. In: Marcus, A., Wang, W. (eds) Design, User Experience, and Usability: Designing Pleasurable Experiences. DUXU 2017. Lecture Notes in Computer Science(), vol 10289. Springer, Cham. https://doi.org/10.1007/978-3-319-58637-3_43