1 Introduction

In the last few decades, computer simulation has become an important tool for investigating various phenomena in cardiac biology, including studies of single ion channel properties [9], action potentials of the myocyte [3, 5], dynamics of action potential propagation in tissue [2], and subcellular calcium dynamics [7]. In spite of advances in computational technology, the simulation of action potential waves in three-dimensional (3D) cardiac tissue with a realistic geometry is still considered a “large-scale simulation.”

General-purpose computing on GPUs (GPGPU) is a recently emerging technology [1, 4, 8] that uses GPUs, instead of CPUs, to perform large simulations in parallel. GPUs are massively parallel single-instruction, multiple-data processing units. Each GPU may contain 128–240 “stream processors,” whereas today’s CPUs contain 2, 4, or 8 cores. In this paper, we demonstrate that the GPU is about 30–40 times faster than the CPU, enabling it to perform whole-heart electrophysiology simulations within a practical time.

In this study, we chose to simulate the propagation of the action potential in cardiac tissue, which is modeled as the propagation of a wave in an excitable medium. Because many phenomena in physics, chemistry, and biology are described by the same class of excitable-media models, the technique presented here can be applied well beyond cardiac tissue.

2 Methods

We used two test models. The first was a 2D homogeneous sheet, and the second was an anatomic rabbit ventricular model with ‘fiber rotation’ [10], that is, an anisotropy that varies from point to point in the heart. Each model was simulated on both GPUs and CPUs.

The GPU simulations were performed with a single NVIDIA GeForce 8800 GT with 1 GB of graphics random-access memory (RAM) and an NVIDIA GeForce 9800 GX2 with 1 GB of graphics RAM. These graphics cards were installed in a system with a dual-core 2.0 GHz AMD Opteron processor and 4 GB of error-correcting code (ECC) RAM. The operating system was openSUSE 10.2. Our programs were written in C++ and built with the GNU C++ compiler version 4.1.2 and NVIDIA CUDA version 1.1.

The CPU simulations were performed on an 8-node high-performance computing (HPC) cluster. Each node has two dual-core 2.0 GHz AMD Opteron processors (i.e., 4 cores per node) and 4 GB of ECC RAM. The operating system was Fedora Core 5, and we used the Intel C++ compiler version 10.1. To parallelize across the cluster, we used the Message Passing Interface (MPI) 1.0. The FORTRAN version of this code was used in some of our previous studies [10].

All 2D simulations, and all 3D simulations with one GPU, were performed with the NVIDIA GeForce 8800 GT. 3D simulations with two or four GPUs were performed with NVIDIA GeForce 9800 GX2 cards.

Because these GPUs support only single-precision arithmetic, all floating-point calculations were performed in single precision in both the GPU and CPU simulations, ensuring a like-for-like comparison.

The code that runs on the GPU is called a “kernel.” When a kernel is executed, the work is carried out by a large number of threads running independently in parallel, much as in a CPU-based parallel implementation. As in a CPU implementation, it was necessary to synchronize all threads after each ordinary differential equation (ODE) or partial differential equation (PDE) kernel execution. We refer to these as intra-GPU threads, since they control the processing within a single GPU.
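As an illustration of this structure (not the authors’ actual code, which appears in Fig. 1 and the Appendix), a CUDA kernel definition, its launch, and the subsequent synchronization might look like the following minimal sketch; the kernel names, arguments, and block size are assumptions.

```cuda
#include <cuda_runtime.h>

// Illustrative only: one thread is assigned to each grid point, and the host
// waits for every thread to finish before the next kernel is launched.
__global__ void ode_kernel(float *V, float *gates, float dt, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // global thread index
    if (i < n) {
        // ... integrate the ionic-current ODEs at grid point i ...
    }
}

__global__ void pde_kernel(const float *V, float *Vnew, float dt, float dx,
                           int nx, int ny)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < nx * ny) {
        // ... explicit finite-difference diffusion update at grid point i ...
    }
}

void launch_ode(float *dV, float *dGates, float dt, int n)
{
    int threads = 256;                               // block size (illustrative)
    int blocks  = (n + threads - 1) / threads;
    ode_kernel<<<blocks, threads>>>(dV, dGates, dt, n);
    cudaDeviceSynchronize();   // CUDA 1.1-era code would call cudaThreadSynchronize()
}
```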

In addition to managing threads within each GPU, it was also necessary to create inter-GPU threads to control each GPU. For instance, the NVIDIA GeForce 9800 GX2 graphics card has two GPUs on one card; to utilize each GPU, a corresponding thread must be created from the main program.
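A minimal sketch of such inter-GPU host threads is shown below, assuming POSIX threads and one worker per device; the job structure, worker body, and domain partitioning are illustrative rather than taken from the paper.

```cuda
#include <pthread.h>
#include <cuda_runtime.h>

// Illustrative only: one host (CPU) thread is created per GPU; each thread
// binds itself to its device with cudaSetDevice() and then runs its share of
// the tissue.
struct GpuJob { int device; /* ... description of this GPU's sub-domain ... */ };

static void *gpu_worker(void *arg)
{
    GpuJob *job = static_cast<GpuJob *>(arg);
    cudaSetDevice(job->device);          // bind this host thread to one GPU
    // ... allocate device memory and run the ODE/PDE kernels for this sub-domain ...
    return 0;
}

void run_on_gpus(int n_gpus)             // n_gpus <= 4 in this sketch
{
    pthread_t threads[4];
    GpuJob    jobs[4];
    for (int g = 0; g < n_gpus; ++g) {
        jobs[g].device = g;
        pthread_create(&threads[g], 0, gpu_worker, &jobs[g]);
    }
    for (int g = 0; g < n_gpus; ++g)
        pthread_join(threads[g], 0);     // wait for every GPU's thread to finish
}
```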

As with a CPU cluster with distributed memory, the distributed GPU memory must also be managed explicitly. However, unlike CPUs, which can move data from one CPU to another directly, GPUs can only communicate through the CPU (host) memory; that is, data is transferred from one GPU to the other via main RAM: GPU1↔RAM↔GPU2.
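This staging pattern might be sketched as follows; the buffer names, the one-row halo, and the barrier discipline are assumptions, not the authors’ actual data layout.

```cuda
#include <cuda_runtime.h>

// Illustrative halo exchange via host RAM (GPU1 <-> RAM <-> GPU2). CUDA 1.x
// had no direct GPU-to-GPU copy, so a boundary row is staged through a host
// buffer shared by the two host threads.

// Called by the host thread bound to the sending GPU.
void push_boundary_to_host(const float *d_boundary, float *h_staging, int n)
{
    cudaMemcpy(h_staging, d_boundary, n * sizeof(float), cudaMemcpyDeviceToHost);
}

// Called by the host thread bound to the receiving GPU, after a host-side
// barrier guarantees that the staging buffer has been filled.
void pull_boundary_from_host(float *d_ghost, const float *h_staging, int n)
{
    cudaMemcpy(d_ghost, h_staging, n * sizeof(float), cudaMemcpyHostToDevice);
}
```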

The cardiac tissue was modeled using the following partial differential equation:

$$ \frac{\partial V}{\partial t} = -\frac{I}{C_m} + \nabla \cdot (D \nabla V), $$

where V is the transmembrane voltage, I is the total ionic current, C_m is the transmembrane capacitance, and D is the diffusion tensor. The cell model used in this study was phase I of the Luo–Rudy action potential model [3]. We solved this reaction-diffusion equation with the forward Euler method, using the technique of operator splitting [6]. The time step was adaptively varied between 0.01 and 0.1 ms, and the space step was 0.015 cm. Details of the modeling of cardiac tissue are described in our previous study [10]. For each time step, the ODE part was solved once and the PDE part was solved four times for the 2D simulation and six times for the 3D simulation (Fig. 1).
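In outline, the host-side time stepping under operator splitting might look like the sketch below. This is not the authors’ code from Fig. 1; it reuses the illustrative ode_kernel and pde_kernel from the earlier sketch, choose_time_step() is a hypothetical placeholder for the adaptive step selection, and the dt/4 diffusion sub-step is our assumption, since the paper does not state how the four PDE calls subdivide the step.

```cuda
// Adaptive step selection (0.01-0.1 ms); placeholder, defined elsewhere.
float choose_time_step(const float *dV, int n);

void simulate(float *dV, float *dVnew, float *dGates,
              float t_end, float dx, int nx, int ny)
{
    int n       = nx * ny;
    int threads = 256;
    int blocks  = (n + threads - 1) / threads;

    float t = 0.0f;
    while (t < t_end) {
        float dt = choose_time_step(dV, n);

        // Reaction (ODE) part: solved once per time step.
        ode_kernel<<<blocks, threads>>>(dV, dGates, dt, n);
        cudaDeviceSynchronize();

        // Diffusion (PDE) part: four times per step in 2D (six in 3D).
        for (int k = 0; k < 4; ++k) {
            pde_kernel<<<blocks, threads>>>(dV, dVnew, dt / 4.0f, dx, nx, ny);
            cudaDeviceSynchronize();
            float *tmp = dV; dV = dVnew; dVnew = tmp;   // ping-pong buffers
        }
        t += dt;
    }
}
```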

Fig. 1 Sample GPU code to solve the reaction-diffusion equation in 2D tissue. At each time step, the ODE part is called once and the PDE part is called four times. The ODE kernel code and the PDE kernel code are in the Appendix

To test the GPU code, we induced spiral waves in 2D and 3D tissue using ‘cross-field’ stimulation, that is, two successive perpendicular rectilinear wave fronts. In each case, we simulated 1 s of real-world cardiac time (Fig. 2).
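One hypothetical way to apply such a cross-field (S1–S2) stimulus on the GPU is sketched below; the strip widths, timing, and stimulus voltage are illustrative only and are not taken from the paper.

```cuda
// Force a rectangular strip of the 2D tissue above threshold; the stimulus
// voltage (e.g., 0 mV, well above the resting potential) is illustrative.
__global__ void stimulate_strip(float *V, int nx, int ny,
                                int x0, int x1, int y0, int y1, float Vstim)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= nx * ny) return;
    int x = i % nx;
    int y = i / nx;
    if (x >= x0 && x < x1 && y >= y0 && y < y1)
        V[i] = Vstim;
}

// S1: plane wave launched from a thin strip along the left edge, at t = 0
//   stimulate_strip<<<blocks, threads>>>(dV, nx, ny, 0, 5, 0, ny, 0.0f);
// S2: perpendicular strip along the bottom edge, applied as the S1 wave back
//     crosses the tissue, which curls the broken front into a spiral
//   stimulate_strip<<<blocks, threads>>>(dV, nx, ny, 0, nx, 0, 5, 0.0f);
```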

Fig. 2 Action potential propagation in 2D tissue and in the anatomic heart model. a Action potential propagation in 2D tissue; the tissue was paced from the corner. b A spiral wave in 2D tissue, induced by cross-field stimulation. c Spiral wave breakup in 2D tissue. d Action potential propagation in the anatomic heart. e A spiral wave in the anatomic heart. f Spiral wave breakup in the anatomic heart. g Electrocardiogram from the anatomic heart simulation in f

For the 2D tissue simulations, the benchmark protocol involved pacing the tissue from the corner for 3 s of simulated time at a pacing cycle length of 150 ms. Tissue size was varied from 100 × 100 grid points (1.5 cm × 1.5 cm) to 800 × 800 grid points (12 cm × 12 cm). For the 3D simulations, the benchmark protocol consisted of pacing the whole heart from the apex for 3 s of simulated time at a pacing cycle length of 150 ms.

Finally, we investigated where the computational ‘bottlenecks’ occurred. We split the program into three parts: the ODE calculation, the PDE calculation, and the data transfer. To measure the data-transfer time, both the ODE and PDE calculations were skipped, and the elapsed time was attributed entirely to data transfer. Next, with only the ODE calculation skipped, we measured the combined time for the PDE calculation and the data transfer; the PDE time was then estimated by subtracting the data-transfer time from this combined time. The ODE time was obtained by subtracting the combined (PDE + data transfer) time from the time for the whole simulation.
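The measurement could be organized as in the following sketch, where run_simulation() and its two skip flags are hypothetical placeholders for the actual program; only the subtraction logic is the point.

```cuda
#include <sys/time.h>
#include <cstdio>

void run_simulation(bool do_ode, bool do_pde);       // the actual program (placeholder)

static double wall_seconds()
{
    timeval tv;
    gettimeofday(&tv, 0);
    return tv.tv_sec + 1e-6 * tv.tv_usec;
}

void profile_parts()
{
    double t0 = wall_seconds();
    run_simulation(false, false);                    // ODE and PDE skipped
    double t_transfer = wall_seconds() - t0;         // -> data transfer only

    t0 = wall_seconds();
    run_simulation(false, true);                     // ODE skipped
    double t_pde = (wall_seconds() - t0) - t_transfer;

    t0 = wall_seconds();
    run_simulation(true, true);                      // full simulation
    double t_ode = (wall_seconds() - t0) - t_pde - t_transfer;

    std::printf("ODE %.1f s, PDE %.1f s, transfer %.1f s\n",
                t_ode, t_pde, t_transfer);
}
```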

3 Results

When the tissue is homogeneous, parallel computation is very efficient. To compute 1 s of simulated time in 100 × 100 tissue, a single GPU took 8.2 s, whereas the CPU took 201 s. For larger tissue (800 × 800), the GPU took 283 s, while the CPU took 13,113 s. The relative speedup grows with tissue size because, as the tissue becomes larger, the boundary/non-boundary ratio becomes smaller and the parallel computation becomes more efficient. In these cases, a single GPU is 24–46 times faster than a single CPU (Fig. 3a).

Fig. 3 Comparison between GPU and CPU. a Time to compute 3 s of simulation time in 2D tissue; the X-axis is the tissue size, the left Y-axis is the computation time, and the right Y-axis is the acceleration (i.e., CPU computation time divided by GPU computation time). b Time to simulate the whole heart for 1 s of simulation time. c Breakdown of the computation time into the ODE part, the PDE part, and the data transfer for each simulation

To simulate the anatomic rabbit ventricular model [10] for 1 s, the HPC cluster with 32 CPU cores (8 nodes) took 45 min. In comparison, one GPU took about 72 min, while two and four GPUs took about 43 and 27 min, respectively, for the same simulation (Fig. 3b).

The bottleneck of the computation with CPUs is mainly in the ODE part. On the other hand, the bottleneck of the computation with GPUs is mainly in the PDE part (Fig. 3c).

4 Conclusions

We have demonstrated that GPUs are substantially faster than CPUs for simulating action potential propagation in cardiac tissue. A whole-heart simulation on a single GPU is only 1.6 times slower than on an HPC cluster, and two or four GPUs are even faster than the cluster, making the GPU a new tool for cardiac simulations. Utilizing GPUs imposes additional programming requirements beyond those of traditional parallel CPU implementations; however, as with parallel CPU implementations, the management of threads and memory must be carefully designed if maximum performance is to be achieved.