1 Introduction

Laparoscopic surgery has become popular as a minimally invasive technique and is widely used for many kinds of surgical operation. It allows the surgeon to access the target organs in the patient's body without making large incisions in the skin, which decreases post-procedure complications and the post-operative trauma of the patient. However, there is a higher risk of damaging internal organs, nerves, and major blood vessels such as arteries and veins, because the surgeon has to operate within a limited manipulation space and with the restricted viewing angle of the endoscope camera. Effective systems are therefore needed both for surgical operation training and for supporting the surgeon during the operation, such as a navigation system.

There has been much research on laparoscopic surgery simulation and navigation systems. Some of the work on surgical simulation focuses on very specific topics and is beginning to appear on the market [1,2,3,4,5]. Research on navigation systems for laparoscopic surgery has likewise produced many effective results [6,7,8], and some of these have also reached the market. The highest-impact laparoscopic surgery systems, such as da Vinci [9], aim at supporting the total surgical operation; they are not designed to help the surgeon find the affected part of the organ or the nerves and blood vessels around the target organ. Other systems guide the surgeon to these structures by using VR/AR technology [7, 8]. However, most of them require the operator to manually adjust the registration between the virtual organs and the real organs in the endoscopic view image.

Therefore, we aim to develop an AR laparoscopic surgery navigation system that helps the surgeon find these structures, using a semi-automatic registration method to produce an overlay video image of virtual organs on the real organs. In particular, we focus on a guidance method that uses the endoscopic view image and the endoscope camera position. In this paper, we introduce our method for measuring the position and orientation of the endoscope camera, which uses specific markers for tracking. We also introduce a prototype system that allows a user to operate the endoscope camera while viewing the overlay image of virtual and real organs produced by our method.

2 Guidance Method

2.1 Overview

Figure 1 shows an overview of our proposed method. First, the system measures the position and orientation of the endoscope camera during laparoscopic surgery. Next, it generates a 3D virtual scene containing a 3D CG organ model. Finally, the system displays the 3D CG organ model overlaid on the video image from the camera. In this situation the endoscope camera is inside the patient's body and cannot be measured directly from outside. Therefore, our system has to measure the camera position and orientation through the opposite end of the endoscope, which remains visible outside the body.

Fig. 1. Our proposed method overview.
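The final step of this pipeline, compositing the rendered organ model onto the endoscope video frame, can be illustrated in code. The following is a minimal sketch, assuming the virtual scene has been rendered into an RGBA layer of the same size as the video frame; the function name is our own illustration, not part of the actual system.

```python
import numpy as np

def composite_overlay(video_rgb, organ_rgba):
    """Alpha-blend a rendered organ layer over an endoscope video frame.

    video_rgb:  (H, W, 3) uint8 frame from the endoscope camera.
    organ_rgba: (H, W, 4) uint8 rendering of the 3D CG organ model,
                produced with the virtual camera at the measured pose.
    """
    alpha = organ_rgba[..., 3:4].astype(np.float32) / 255.0
    blended = (organ_rgba[..., :3].astype(np.float32) * alpha
               + video_rgb.astype(np.float32) * (1.0 - alpha))
    return blended.astype(np.uint8)
```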

2.2 Tracking Method

In general, there are two kinds of tracking method. One uses a unique marker and detects its position and orientation. The other uses no marker and relies on image processing alone, such as feature detection techniques. Markerless tracking is useful for tracking existing tools such as an endoscopic camera; however, it costs more to detect the object precisely than a marker-based method. Therefore, our system uses marker-based tracking to make the tracking robust.

Figure 2 shows the marker and the prototype endoscopic camera tool used in our system. The design aims for simplicity and robustness. A hexagonal mount is used for attaching the markers so that at least two of them can be detected from any direction when measuring the object's position and rotation. The mount was fabricated precisely in advance using a 3D CAD system and a 3D printer. Figure 3 shows the mount used in our method.

Fig. 2. Prototype endoscopic camera.

Fig. 3. Hexagonal marker mount.
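To illustrate marker-based pose detection, the sketch below uses OpenCV's ArUco module as a stand-in. Our actual system relies on the Micron Tracker and its own marker set, so the dictionary, marker size, and function below are illustrative assumptions only.

```python
# Illustrative stand-in for marker-based pose detection (not the Micron
# Tracker API): detect square fiducial markers and recover each marker's
# pose relative to a calibrated camera.
import cv2
import numpy as np

def detect_marker_poses(image, camera_matrix, dist_coeffs, marker_len=0.02):
    aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
    detector = cv2.aruco.ArucoDetector(aruco_dict,
                                       cv2.aruco.DetectorParameters())
    corners, ids, _ = detector.detectMarkers(image)
    poses = {}
    if ids is None:
        return poses
    # 3D corners of a square marker of side marker_len, in the marker frame
    # (order: top-left, top-right, bottom-right, bottom-left).
    half = marker_len / 2.0
    obj_pts = np.array([[-half,  half, 0], [ half,  half, 0],
                        [ half, -half, 0], [-half, -half, 0]],
                       dtype=np.float32)
    for marker_id, c in zip(ids.flatten(), corners):
        ok, rvec, tvec = cv2.solvePnP(obj_pts,
                                      c.reshape(4, 2).astype(np.float32),
                                      camera_matrix, dist_coeffs,
                                      flags=cv2.SOLVEPNP_IPPE_SQUARE)
        if ok:
            R, _ = cv2.Rodrigues(rvec)          # rotation vector -> 3x3 matrix
            poses[int(marker_id)] = (R, tvec.reshape(3))
    return poses
```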

To detect the camera position and orientation precisely during the operation, our method defines the relative vector between the markers and the camera device, which is at the tip of the endoscope camera [10]. Figure 4 shows an overview of the coordinate systems.

Fig. 4. Calculating the relative vector.

Our method uses a concrete registration procedure involving the tracking device coordinate system \( \Sigma _{d} \) and the coordinate system \( \Sigma _{c} \) of the marker attached to the endoscopic camera. To acquire the relative vector, the tip of the endoscopic camera is set at the origin point \( P_{table}^{d} \) of the fixed marker \( M_{table} \) on a flat table. The position \( P_{cam}^{d} \) and orientation \( R_{cam}^{d} \) of the marker \( M_{cam} \) attached to the camera are measured in \( \Sigma _{d} \). The relative vector \( P_{rel}^{d} \) is calculated by

$$ P_{rel}^{d} = P_{table}^{d} - P_{cam}^{d} $$
(1)

in \( \Sigma _{d} \). To convert \( P_{rel}^{d} \) into \( P_{rel}^{c} \) expressed in \( \Sigma _{c} \),

$$ P_{rel}^{c} = \left( R_{cam}^{d} \right)^{-1} \cdot P_{rel}^{d} $$
(2)

Therefore, the position \( P_{tip}^{d} \) of the camera device at the tip of the endoscopic camera is calculated by

$$ P_{tip}^{d} = R_{cam}^{d} \cdot P_{rel}^{c} + P_{cam}^{d} $$
(3)

The orientation \( R_{tip}^{d} \) of the camera device at the tip of the endoscopic camera is calculated by

$$ R_{tip}^{d} = R_{n}^{c} \cdot R_{cam}^{d} $$
(4)

where \( R_{n}^{c} \) is a constant matrix determined for each marker.
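Equations (1)-(4) translate directly into code. The following is a minimal NumPy sketch, assuming the tracking device reports each marker pose as a rotation matrix \( R_{cam}^{d} \) and position \( P_{cam}^{d} \) in \( \Sigma _{d} \); the function names are our own.

```python
import numpy as np

def calibrate_relative_vector(p_table_d, R_cam_d, p_cam_d):
    """Eqs. (1)-(2): express the tip offset in the camera-marker frame."""
    p_rel_d = p_table_d - p_cam_d            # Eq. (1), in Sigma_d
    p_rel_c = R_cam_d.T @ p_rel_d            # Eq. (2); R^-1 = R^T for rotations
    return p_rel_c

def tip_pose(R_cam_d, p_cam_d, p_rel_c, R_n_c):
    """Eqs. (3)-(4): tip position and orientation in Sigma_d."""
    p_tip_d = R_cam_d @ p_rel_c + p_cam_d    # Eq. (3)
    R_tip_d = R_n_c @ R_cam_d                # Eq. (4)
    return R_tip_d, p_tip_d
```

The relative vector is acquired once at calibration time with the tip at \( P_{table}^{d} \); at runtime, `tip_pose` is evaluated for every new marker measurement.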

3 Implementation Results

3.1 Implementation

We implemented a prototype system based on our proposed method and conducted a preliminary experiment. The configuration of our prototype system is as follows.

Computer.

  • CPU: Intel Core i7-4710MQ, 2.50 GHz

  • Memory: 32 GB

  • OS: Microsoft Windows 8.1 x64

Tracking device.

  • Model: Claron Technology Micron Tracker 3 (H3-60)

  • Measurement rate: 16 Hz

  • Sensor resolution: 1280 × 960 pixel

  • Lens: 6 mm, 50 × 38 degrees

  • Accuracy of single marker: 0.20 mm RMS

  • (20,000 averaged positions at depths of 40–100 cm)

Prototype endoscopic camera.

  • Camera model: AVC-301B1

  • Camera resolution: 2.5 Megapixel

  • Camera lens: 70 degrees

  • Camera size: 12 mm × 12 mm

  • Video capture (VC) model: I-O Data Inc. GV-USB2

  • VC resolution: 720 × 480 pixel

  • VC capture frame rate: 30 fps

Figure 5 shows the system overview. The setup uses a training tool for laparoscopic surgery and a 3D organ model designed from CT scan data. The tracking device is installed so that it sees the target object from above.

Fig. 5. System overview.
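To give an idea of how the overlay is formed geometrically, the sketch below projects the organ model's vertices onto the video frame with OpenCV, given the tip pose from Sect. 2.2 and calibrated endoscope intrinsics. This is a hedged sketch, not our exact implementation: drawing projected vertices stands in for full CG rendering, and the parameter names `K` (camera matrix) and `dist` (distortion coefficients) are assumptions.

```python
import cv2
import numpy as np

def draw_organ_overlay(frame, organ_vertices, R_tip, p_tip, K, dist):
    """Project organ model vertices onto the endoscope image.

    frame:          (H, W, 3) uint8 endoscope video frame.
    organ_vertices: (N, 3) float array in the tracking device frame.
    R_tip, p_tip:   tip pose in the tracking device frame (Sect. 2.2).
    """
    # Extrinsics mapping tracker coordinates into the endoscope camera frame:
    # X_c = R_tip^T * (X_d - p_tip).
    rvec, _ = cv2.Rodrigues(R_tip.T)
    tvec = -R_tip.T @ p_tip
    pts, _ = cv2.projectPoints(organ_vertices, rvec, tvec, K, dist)
    for (u, v) in pts.reshape(-1, 2):
        if 0 <= u < frame.shape[1] and 0 <= v < frame.shape[0]:
            cv2.circle(frame, (int(u), int(v)), 1, (0, 255, 0), -1)
    return frame
```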

3.2 Experiment

The purpose of this experiment is to confirm the alignment accuracy between the actual and the measured position and orientation. Figure 6 shows the experimental environment.

Fig. 6. Experimental environment.

The experimental task for position is to measure 4 points placed at equal intervals of 50 mm on each orthogonal axis. As a result, we confirmed that the average deviation of position is approximately 0.81 mm and the average deviation of orientation is approximately 0.75°.
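Deviations of this kind can be computed with standard error metrics. The sketch below is our own illustration under this reading of the task, not necessarily the exact evaluation code: mean Euclidean deviation for position and mean rotation angle for orientation.

```python
import numpy as np

def mean_position_deviation(measured, truth):
    """Mean Euclidean deviation (mm) between (N, 3) point arrays."""
    return float(np.linalg.norm(measured - truth, axis=1).mean())

def mean_orientation_deviation(R_measured, R_truth):
    """Mean rotation angle (degrees) between paired 3x3 rotation matrices."""
    angles = []
    for Rm, Rt in zip(R_measured, R_truth):
        # Angle of the residual rotation Rm * Rt^T via its trace.
        cos_t = np.clip((np.trace(Rm @ Rt.T) - 1.0) / 2.0, -1.0, 1.0)
        angles.append(np.degrees(np.arccos(cos_t)))
    return float(np.mean(angles))
```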

3.3 Pilot Operation for Laparoscopic Surgery Navigation

We confirmed the pilot operation of a laparoscopic surgery navigation task using our prototype system. Figure 7 shows an example operation scene. The system uses a kidney model produced on a 3D printer from CT scan data of an actual patient. While the user operates the endoscopic camera, the system generates an overlay video image in which the 3D kidney model is superimposed on the video image from the endoscope camera. Figure 8 shows an example of the overlay image.

Fig. 7. Pilot navigation system.

Fig. 8. Example of our system view image.

4 Conclusion

We introduced our study of a guidance method for developing an AR laparoscopic surgery navigation system. We proposed a measurement method that tracks the position and orientation of the endoscope camera during a surgical operation, using specific markers placed on a hexagonal mount attached to the endoscope camera. We developed a prototype navigation system based on our measurement method and confirmed its accuracy through an experiment. Furthermore, we confirmed the potential effectiveness of the guidance method, which generates an overlay video image showing the 3D virtual kidney model on the actual kidney model.

In future work, we need to conduct more detailed experiments, improve the accuracy of the overlay video image, and conduct an experiment in the setting of an actual surgical operation.