Review

Graph-Based Deep Learning for Medical Diagnosis and Analysis: Past, Present and Future

by David Ahmedt-Aristizabal 1,2,*, Mohammad Ali Armin 1, Simon Denman 2, Clinton Fookes 2 and Lars Petersson 1

1 Imaging and Computer Vision Group, CSIRO Data61, Canberra 2601, Australia
2 Signal Processing, Artificial Intelligence and Vision Technologies (SAIVT) Research Program, Queensland University of Technology, Brisbane 4000, Australia
* Author to whom correspondence should be addressed.
Sensors 2021, 21(14), 4758; https://doi.org/10.3390/s21144758
Submission received: 7 June 2021 / Revised: 5 July 2021 / Accepted: 7 July 2021 / Published: 12 July 2021
(This article belongs to the Section Intelligent Sensors)

Abstract:
With the advances of data-driven machine learning research, a wide variety of prediction problems have been tackled. It has become critical to explore how machine learning and specifically deep learning methods can be exploited to analyse healthcare data. A major limitation of existing methods has been the focus on grid-like data; however, the structure of physiological recordings is often irregular and unordered, which makes it difficult to conceptualise them as a matrix. As such, graph neural networks have attracted significant attention by exploiting implicit information that resides in a biological system, with interacting nodes connected by edges whose weights can be determined by either temporal associations or anatomical junctions. In this survey, we thoroughly review the different types of graph architectures and their applications in healthcare. We provide an overview of these methods in a systematic manner, organized by their domain of application including functional connectivity, anatomical structure, and electrical-based analysis. We also outline the limitations of existing techniques and discuss potential directions for future research.

1. Introduction

Medical diagnosis refers to the process by which one can determine which disease or condition explains a patient’s symptoms. The required information for diagnosis is obtained from a patient’s medical history, and various medical tests that capture the patient’s functional and anatomical structures through diagnostic imaging methods such as functional magnetic resonance imaging (fMRI), magnetic resonance imaging (MRI), computed tomography (CT), and other diagnostic tools including the electroencephalogram (EEG). However, because the diagnosis process is often time-consuming and prone to subjective interpretation and inter-observer variability, clinical experts have begun to benefit from computer-assisted interventions. Automation is of benefit in situations where there is limited access to healthcare services and physicians. Automation is also being pursued to increase the quality and decrease the cost of healthcare systems [1].
Deep learning offers an exciting avenue to address these demands. The success of deep learning in many fields is due in part to the availability of rapidly increasing computing resources and large experimental datasets, and in part to the ability of deep learning to extract representations from data structured as regular grids (i.e., images) through stacked convolutional operations. There are several review papers available that analyse the benefits of traditional machine learning and deep learning methods for the detection and segmentation of medical anomalies and anatomical structures, and computer-aided diagnosis [2,3]. Although convolutional neural networks (CNNs) have shown impressive performance in the medical field for imaging (MRI, CT) and non-imaging applications (fMRI, EEG), their conventional formulation is limited to data structured in an ordered, grid-like fashion. Several physical human processes generate data that are naturally embedded in a graph structure, as illustrated in Figure 1 (Top). Traditional CNNs analyse local areas based on fixed connectivity (determined by the convolutional kernel), leading to limited performance, difficulty in interpreting the functional and anatomical structures being modeled, and an inability to capture complex neighbourhood information. Therefore, machine learning models that can exploit graph structures are at an advantage, as they enable an effective representation of complex physical entities and processes, and irregular relationships.
Graph networks belong to an emerging area that has also made a tremendous impact across many technological domains. Much of the information coming from disciplines such as chemistry, biology, genetics, and healthcare is not well suited to vector-based representations, and instead requires complex data structures. Graphs inherently capture relationships between entities, and are thus potentially very useful for these applications to encode relational information between variables [4]. Hence, effort has been devoted to the generalization of graph neural networks (GNNs) into non-structural (unordered) and structural (ordered) scenarios. However, while the use of graph-based representations is becoming more common in the medical domain, such approaches are still scarce compared to conventional deep learning methods, and their potential to address many challenging medical problems is yet to be fully realised.
The adaptation of deep learning from images to graphs has resulted in a new cross-domain field of graph-based deep learning that seeks to learn informative representations of graphs in an end-to-end manner. Graph convolutional networks (GCNs) have extended the theory of signal processing on graphs [9] to enable the representation learning power of CNNs to be applied to irregular graph data. GCNs generalize the convolution operation to non-Euclidean graph data. The graph convolutional operation aims to generate representations for vertices by aggregating the features of a given vertex with the features of its neighbours. The relationship-aware representations generated by GCNs greatly enhance the discriminative power of CNN features, and the improved model interpretability can help clinicians to determine, for example, the parts of the brain that are most involved in one particular task. The popularity of the rapidly growing field of deep learning on GNNs is also reflected by the numerous recent surveys on graph representations and their applications. Existing reviews provide a comprehensive overview of deep learning for non-Euclidean data, graph deep learning frameworks and a taxonomy of existing techniques [4,10] or introduce general applications that cover biology and signal processing domains [11,12,13].
In this paper, we endeavour to provide a thorough and methodological review of multiple GNN models proposed for use in medical diagnosis and analysis. We seek to explain the fundamental reasons why GNNs are worth investigating for this domain, and highlight the emerging medical analytics challenges that GNNs are well placed to address. Although some papers have surveyed medical image analysis using deep learning techniques and have introduced the concept of GNNs for the assessment of neurological disorders [14], to the best of our knowledge, no systematic review exists that introduces and discusses the current applications of GNNs to unstructured medical data.

1.1. Why Graph-Based Deep Learning for Medical Diagnosis and Analysis?

Recent progress in deep learning has increased the potential of medical image analysis by enabling the discovery of morphological, textural, and temporal representations from images and signals solely from the data. GNNs have seen a surge in popularity due to their successes in modeling unstructured and structured relational data including brain signals (fMRI and EEG), and in the detection and segmentation of organs (MRI, CT) as represented in Figure 1 (Bottom). Below, we outline several application domains which are well suited to graph networks, and explain why graph neural networks are becoming more widely used within these domains.

1.1.1. Brain Activity Analysis

Brain signals are an example of a graph signal, and the graph representation can encode the complex structure of the brain to represent either physical or functional connectivity across different brain regions. At the structural level, the network is defined by the anatomical connections between regions of brain tissue. At the functional level, the graph nodes represent brain regions of interest (ROI), while edges capture the relationships between the regions and their activities, computed via an fMRI correlation matrix [15].
GNN models also offer advantages when considering the need to develop deep-learning models that allow a direct interpretation of non-Euclidean spaces. The explanations obtained by such models can help to identify and localize regions relevant to a model’s decisions for a given task. An example is how certain brain regions, defined as biomarkers, are related to a specific neurological disorder [16,17].
Graphs also provide a natural way to represent population data and model complex interactions and associations between subjects for disease analysis [18].

1.1.2. Brain Surface Representation

Some structures in medical images have a spherical topology (i.e., brain cortical or subcortical surfaces). These are often represented by triangular meshes with large inter- and intra-subject variations in vertex numbers and changes in local connectivity. Due to the absence of a consistent and regular neighbourhood definition, conventional CNNs cannot be directly applied to these surfaces [19]. GCNs, however, can be applied to graphs with a varying number of nodes and connectivity [20]. Spherical CNN architectures can render valid parametrizations in a spherical space without introducing spatial distortions on the sphere (spherical mapping) [21], and geometric features can be augmented by utilizing surface registration methods [22]. GCNs can also offer more flexibility to parcellate the cerebral cortex (surface segmentation) by providing better generalization on target-domain datasets where surface data are aligned differently, without the need for manual annotations or explicit alignment of these surfaces [23].

1.1.3. Segmentation and Labeling of Anatomical Structures

Segmentation of vessels and organs is a critical but challenging stage in the medical image processing pipeline due to anatomical complexity. Traditional deep learning segmentation approaches classify each pixel of an image into a class by extracting high-level semantic features. CNNs struggle because regions in images are rarely grid-like and require non-local information. Compared with these pixel-wise methods, a graph-based method learns and regresses the location of the vessels and organs directly, and allows the model to learn local spatial structures [24,25]. GCNs can also propagate and exchange local information across the whole image to learn the semantic relationships between objects.

1.2. Scope of Review

The application of graph neural networks to medical signal processing and analysis is still in its nascent stages. In this paper, we present a survey that captures the current efforts to apply GNNs for medical data understanding and diagnosis. The total number of applications considered in our survey is 92, with a chronology of publication as follows: 2017 (4), 2018 (7), 2019 (37), 2020 (40), and 2021 (4). The area of digital pathology (whole slide images, WSI) is omitted from this review due to the diverse applications of GCNs to this domain, which we feel merit their own separate review paper [26].

1.3. Contribution and Organisation

Compared to other recent reviews that cover the theoretical aspects of graph networks in multiple domains, our manuscript has novel contributions which are summarized as follows:
  • We identify a number of challenges facing traditional deep learning when applied to medical signal analysis, and highlight the contributions of graph neural networks to overcome these.
  • We introduce and discuss diverse graph frameworks proposed for medical diagnosis and their specific applications. We cover work for biomedical imaging applications using graph networks combined with deep learning techniques.
  • We summarise the current challenges encountered by graph-based deep learning and propose future directions in healthcare based on currently observed trends and limitations.
In Section 2, we briefly describe the most common graph-based deep learning models used in this domain, including GCNs and its variants, with temporal dependencies and attention structures.
In Section 3, we explain the use cases identified in the literature review. We organise publications according to the input data (functional connectivity, electrical-based, and anatomical structure) and cluster approaches based on specific applications (e.g., Alzheimer’s disease, organ segmentation, or brain data regression).
Finally, Section 4 highlights the limitations of current GNNs adopted for medical diagnosis and introduces graph-based deep learning techniques that can be utilised in this domain. We also provide some research directions and future possibilities for the use of GNNs in healthcare that have not been covered in the literature, such as for behavioural analysis.

2. Graph Neural Networks Background

In this section, we introduce several graph-based deep learning models, including GCNs and their variants with temporal dependencies and attention structures, which have been used as the foundation for medical applications. We aim to provide technical insights regarding the architectures. A deeper analysis of each architecture can be found in multiple survey papers in this domain [4,12,13].

2.1. Graph Representation

A graph can be represented as $G = (V, E, W)$, where $V$ represents the set of $N$ nodes ($|V| = N$), $E$ denotes the set of edges connecting these nodes, and $W$ is the adjacency matrix. The adjacency matrix describes the connections between any two nodes in $V$: the importance of the connection between the $i$-th and $j$-th nodes is measured by the entry of $W$ in the $i$-th row and $j$-th column, denoted $w_{ij}$. Commonly used methods to determine the entries $w_{ij}$ of $W$ include the Pearson correlation-based graph, the K-nearest neighbour (KNN) rule, and the distance-based graph [9]. Figure 2 demonstrates an example of a graph containing six vertices and the edges connecting the nodes of the graph, along with the graph adjacency matrix.
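To make this construction concrete, the following minimal NumPy sketch (our own illustration, not taken from any surveyed work) builds a Pearson correlation-based adjacency matrix from per-node time series and sparsifies it with the KNN rule; the function names and the choice of $k$ are illustrative assumptions.

```python
import numpy as np

def pearson_adjacency(X):
    """Dense adjacency from pairwise Pearson correlation.

    X: (N, T) array, one time series of length T per node.
    Returns W: (N, N) with w_ij = |corr(x_i, x_j)| and zero diagonal.
    """
    W = np.abs(np.corrcoef(X))
    np.fill_diagonal(W, 0.0)
    return W

def knn_sparsify(W, k=5):
    """Keep only each node's k strongest connections (KNN rule)."""
    A = np.zeros_like(W)
    for i in range(W.shape[0]):
        nearest = np.argsort(W[i])[-k:]   # indices of the k largest weights
        A[i, nearest] = W[i, nearest]
    return np.maximum(A, A.T)             # symmetrise the result

# Example: 6 nodes (e.g., brain regions), 100 time points each
rng = np.random.default_rng(0)
W = knn_sparsify(pearson_adjacency(rng.standard_normal((6, 100))), k=2)
```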

2.2. Graph Neural Network Architectures

Graph convolutional networks learn abstract feature representations for each node via message passing, in which nodes iteratively aggregate the feature vectors of their neighbourhood to compute a new feature vector at the next hidden layer in the network. Different GNN variants use different aggregators to gather information from each node’s neighbours, and use varied methods to update the hidden states of nodes.
GCNs can be categorised as: spectral-based [27,28] and spatial-based [29,30]. Spectral-based GCNs rely on the concept of spectral convolutional neural networks that build upon the graph Fourier transform and the normalized Laplacian matrix of the graph. Spatial-based GCNs define a graph convolution operation based on the spatial relationships that exist among the graph nodes.
Based on the original graph neural networks in [31], we explore the most representative GNN variants that have been proposed for several clinical applications.

2.2.1. ChebNet

For spectral-based GCNs, the convolution operation is defined in the Fourier domain by computing the eigendecomposition of the graph Laplacian [32]. The normalized graph Laplacian is defined as $L = I_N - D^{-1/2} A D^{-1/2} = U \Lambda U^T$ ($D$ is the degree matrix and $A$ is the adjacency matrix of the graph), where $U$ is the matrix of eigenvectors and $\Lambda$ is a diagonal matrix of its eigenvalues. The operation can be defined as the multiplication of a signal $x \in \mathbb{R}^N$ (a scalar for each node) with a filter $g_\theta = \mathrm{diag}(\theta)$, parameterized by $\theta \in \mathbb{R}^N$:

$$g_\theta \star x = U g_\theta(\Lambda) U^T x. \tag{1}$$
Defferrard et al. [27] proposed ChebNet, which approximates the spectral filters by truncated Chebyshev polynomials, avoiding the computation of the Fourier basis. A Chebyshev polynomial $T_m(x)$ of order $m$ evaluated at $\tilde{L}$ is used [27], and the operation is defined as

$$g_\theta \star x \approx \sum_{m=0}^{M-1} \theta_m T_m(\tilde{L})\, x, \tag{2}$$

where $\tilde{L} = 2L/\lambda_{\max} - I_N$ is the rescaled graph Laplacian and $\lambda_{\max}$ denotes the largest eigenvalue of $L$. The Chebyshev polynomials are defined recursively as $T_m(x) = 2x\,T_{m-1}(x) - T_{m-2}(x)$ with $T_0(x) = 1$ and $T_1(x) = x$. By introducing Chebyshev polynomials, ChebNet does not require calculating the eigenvectors of the Laplacian matrix, and this reduces the computational cost. A graph pooling layer in the GCN pools information from multiple vertices to one vertex, which reduces the graph size and expands the receptive field of the graph filters. The feature vectors from the last graph convolutional layer are concatenated into a single feature vector, which is fed to a fully connected layer to obtain classification results.
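As an illustration of Equation (2), the sketch below (ours) applies a Chebyshev filter using the recurrence above; scalar coefficients $\theta_m$ stand in for the learnable weight matrices used in practice, and $\lambda_{\max}$ is computed exactly here although real implementations usually approximate it.

```python
import numpy as np

def chebnet_filter(X, A, theta):
    """Apply a Chebyshev spectral filter of order M = len(theta), Equation (2).

    X: (N, C) node features; A: (N, N) adjacency; theta: list of M scalar
    coefficients (in practice these are learnable (C, F) weight matrices).
    """
    N = A.shape[0]
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    L = np.eye(N) - D_inv_sqrt @ A @ D_inv_sqrt        # normalized Laplacian
    lam_max = np.linalg.eigvalsh(L).max()               # largest eigenvalue
    L_tilde = 2.0 * L / lam_max - np.eye(N)             # rescaled Laplacian

    T_prev, T_curr = X, L_tilde @ X                     # T_0 x and T_1 x
    out = theta[0] * T_prev + (theta[1] * T_curr if len(theta) > 1 else 0.0)
    for m in range(2, len(theta)):
        # Chebyshev recurrence: T_m = 2 L_tilde T_{m-1} - T_{m-2}
        T_prev, T_curr = T_curr, 2.0 * L_tilde @ T_curr - T_prev
        out = out + theta[m] * T_curr
    return out
```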

2.2.2. Graph Convolutional Network

A GCN is a spectral GNN with mean-pooling aggregation. Kipf and Welling [28] presented the GCN using a localized first-order approximation of spectral convolutions on the graph. It uses a simple layer-wise propagation rule to encode the relationships of nodes from the graph structure into node features. By limiting the Chebyshev approximation to first order ($K = 1$) to alleviate the problem of overfitting to the local neighbourhood structure of graphs with a very wide node degree distribution [28], and further approximating $\lambda_{\max} \approx 2$, Equation (2) can be simplified to

$$g_\theta \star x \approx \theta_0 x + \theta_1 (L - I_N)\, x = \theta_0 x - \theta_1 D^{-1/2} A D^{-1/2} x. \tag{3}$$
Here, $\theta_0$ and $\theta_1$ are two unconstrained parameters. To restrain the number of parameters and avoid overfitting, the GCN further assumes $\theta = \theta_0 = -\theta_1$, leading to the following definition of a graph convolution:

$$g_\theta \star x \approx \theta \left(I_N + D^{-1/2} A D^{-1/2}\right) x. \tag{4}$$
Stacking this operation can cause numerical instabilities and exploding or vanishing gradients. Thus, Kipf and Welling [28] apply a renormalization trick, $\tilde{A} = A + I_N$ with $\tilde{D}_{ii} = \sum_j \tilde{A}_{ij}$, and generalize the definition to a signal $X \in \mathbb{R}^{N \times C}$ with $C$ input channels and $F$ filters for feature maps as follows:

$$Z = \tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2} X \Theta, \tag{5}$$

where $\Theta \in \mathbb{R}^{C \times F}$ is the matrix formed by the filter bank parameters, and $Z \in \mathbb{R}^{N \times F}$ is the signal matrix obtained by convolution.
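A minimal sketch of the propagation rule in Equation (5) follows; the ReLU appended at the end is the usual nonlinearity in [28] but is an addition to Equation (5) itself, and the function name is ours.

```python
import numpy as np

def gcn_layer(X, A, Theta):
    """One GCN propagation step, Equation (5).

    X: (N, C) input features; A: (N, N) adjacency; Theta: (C, F) weights.
    """
    N = A.shape[0]
    A_tilde = A + np.eye(N)                    # renormalization trick: add self-loops
    d = A_tilde.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt  # normalized adjacency
    # ReLU nonlinearity (an addition; Equation (5) itself is linear)
    return np.maximum(A_hat @ X @ Theta, 0.0)
```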

2.2.3. GraphSAGE

GraphSAGE is a spatial GCN which uses node embeddings with max-pooling aggregation. Hamilton et al. [30] offer an extension of GCNs for inductive unsupervised representation learning, with trainable aggregation functions instead of the simple convolutions applied to neighbourhoods in a GCN. The AGGREGATE operation aggregates the representations of nodes neighbouring a center node, while the COMBINE operation combines the aggregated neighbourhood representation with the center node representation to obtain the updated center node representation. The authors propose a batch-training algorithm for GCNs to save memory at the cost of time efficiency. The GraphSAGE framework generates embeddings by sampling and aggregating features from a node’s local neighbourhood,

$$h_{N_v}^{t} = \mathrm{AGGREGATE}_t\left(\left\{ h_u^{t-1},\; \forall u \in N_v \right\}\right), \qquad h_v^{t} = \sigma\left(W^{t} \cdot \left[ h_v^{t-1} \,\|\, h_{N_v}^{t} \right]\right), \tag{6}$$

where $N_v$ is the neighbourhood set of node $v$, $h_v^t$ is the hidden state of node $v$ at layer $t$, and $W^t$ is the weight matrix at layer $t$. Finally, $\sigma$ denotes the logistic sigmoid function and $\|$ denotes vector concatenation.
In [30], three aggregating functions are proposed: the element-wise mean, an LSTM, and max-pooling. The mean aggregator is an approximation of the convolutional operation from the transductive GCN framework [28]. An LSTM is adapted to operate on an unordered set by randomly permuting the node’s neighbours. In the pooling aggregator, each neighbour’s hidden state is fed through a fully-connected layer, and a max-pooling operation is then applied over the node’s neighbours. Unlike GCN’s aggregator, which assigns neighbour-specific, predefined weights based on node degree, GraphSAGE’s mean operator assigns the same weight to all neighbours of a given node.
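The following sketch (ours) implements Equation (6) with the mean and max-pooling aggregators; the per-neighbour fully-connected layer of the full pooling aggregator and the LSTM aggregator are omitted for brevity, and the code assumes every node has at least one neighbour.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def graphsage_layer(H, neighbours, W, aggregator="mean"):
    """One GraphSAGE step, Equation (6), with mean or max aggregation.

    H: (N, F) node states; neighbours: list of index arrays, one per node
    (each non-empty); W: (2F, F_out) weights applied to [self || aggregated].
    """
    agg = np.stack([
        H[nb].mean(axis=0) if aggregator == "mean" else H[nb].max(axis=0)
        for nb in neighbours
    ])
    # Concatenate each node's own state with its aggregated neighbourhood
    return sigmoid(np.concatenate([H, agg], axis=1) @ W)
```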

2.2.4. Graph Isomorphism Network

The graph isomorphism network (GIN) [33] is a spatial-GCN that aggregates neighbourhood information by summing the representations of neighbouring nodes. Isomorphism graph-based models are designed to interpret graphs with different nodes and edges. GIN’s aggregation and readout functions are injective and thus are designed to achieve maximum discriminative power [33].

2.2.5. Graph Networks with Attention Mechanisms

Attention mechanisms are established in neuroscience and can be divided into two main types: soft-attention and self-attention mechanisms.
Soft-attention mechanisms: Soft-attention mechanisms allow the model to learn the most relevant parts of the input sequence during training. Soft-attention mechanisms are end-to-end approaches that can be learned by gradient-based methods [34]. Attention also provides a tool for interpreting network results and discovering the underlying dependencies that have been learnt. The attention mechanism can be formulated as follows:
$$u_t = \tanh(W h_t + b), \qquad \alpha_t = \frac{\exp(u_t^T u_w)}{\sum_{j=1}^{n} \exp(u_j^T u_w)}, \qquad s = \sum_t \alpha_t h_t, \tag{7}$$

where $h_t$ is the output of each layer; $W$, $u_w$, and $b$ are trainable weights and biases. The importance of each element in $h_t$ is measured by estimating the similarity between $u_t$ and the context vector $u_w$, which is randomly initialized. $\alpha_t$ is the resulting softmax-normalized score. The scores are multiplied by the hidden states to calculate the weighted combination $s$ (the attention-based final output).
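A compact sketch of Equation (7) over a sequence of hidden states follows (ours); the max-subtraction inside the softmax is a standard numerical-stability trick, not part of the equation.

```python
import numpy as np

def soft_attention(H, W, b, u_w):
    """Soft attention over a sequence of hidden states, Equation (7).

    H: (T, d) hidden states; W: (d, d); b: (d,); u_w: (d,) context vector
    (randomly initialized and learnt during training).
    """
    U = np.tanh(H @ W + b)                  # u_t for every time step
    scores = U @ u_w                        # similarity with the context vector
    alpha = np.exp(scores - scores.max())   # numerically stable softmax
    alpha = alpha / alpha.sum()
    return alpha @ H                        # s: weighted sum of hidden states
```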
Self-attention mechanisms: Graph attention networks (GATs) [35] incorporate the attention mechanism into the propagation steps by modifying the convolution operation. In a traditional GCN, the weights typically depend on the degree of the neighbouring nodes, while in GATs the weights are computed by a self-attention mechanism based on node features (i.e., to learn neighbour-specific weights). Veličković et al. [35] constructed a graph attention network by stacking a single graph attention layer, $a$, which is a single-layer feedforward neural network parametrized by a weight vector $a \in \mathbb{R}^{2F'}$. The layer computes the normalized attention coefficients for a node pair $(i, j)$ by

$$\alpha_{ij} = \frac{\exp\left(\mathrm{LeakyReLU}\left(a^T \left[ W h_i \,\|\, W h_j \right]\right)\right)}{\sum_{k \in N_i} \exp\left(\mathrm{LeakyReLU}\left(a^T \left[ W h_i \,\|\, W h_k \right]\right)\right)}, \tag{8}$$
where $\|$ represents the concatenation operation. The attention layer takes as input a set of node features $h = \{h_1, h_2, \ldots, h_N\}$, $h_i \in \mathbb{R}^F$, where $N$ is the number of nodes of the input graph and $F$ the number of features for each node, and produces a new set of node features $h' = \{h'_1, h'_2, \ldots, h'_N\}$, $h'_i \in \mathbb{R}^{F'}$, as its output. To generate higher-level features, as an initial step, a shared linear transformation, parametrized by a weight matrix $W \in \mathbb{R}^{F' \times F}$, is applied to every node, and subsequently a masked attention mechanism is applied to every node, resulting in the following scores:

$$e_{ij} = a\left(W h_i, W h_j\right), \tag{9}$$

which indicate the importance of node $j$’s features to node $i$. The final output feature of each node can be obtained by applying a nonlinearity, $\sigma$:

$$h'_i = \sigma\left(\sum_{j \in N_i} \alpha_{ij} W h_j\right). \tag{10}$$
The layer also uses multi-head attention to stabilise the learning process. $K$ different attention heads compute mutually independent features in parallel, and their features are then concatenated, resulting in the following representation:

$$h'_i = \big\Vert_{k=1}^{K}\, \sigma\left(\sum_{j \in N_i} \alpha_{ij}^{k} W^{k} h_j\right), \tag{11}$$

or, by employing averaging and delaying the application of the final nonlinearity (usually a softmax or logistic sigmoid for classification problems),

$$h'_i = \sigma\left(\frac{1}{K} \sum_{k=1}^{K} \sum_{j \in N_i} \alpha_{ij}^{k} W^{k} h_j\right), \tag{12}$$

where $\alpha_{ij}^{k}$ is the normalized attention coefficient computed by the $k$-th attention mechanism.
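The sketch below (ours) implements a single attention head, Equations (8)–(10), for a dense adjacency mask; it exploits the fact that $a^T[Wh_i \| Wh_j]$ splits into two halves of $a$, and uses $\tanh$ as the final nonlinearity $\sigma$ purely for illustration.

```python
import numpy as np

def gat_layer(H, A, W, a, slope=0.2):
    """Single-head graph attention layer, Equations (8)-(10).

    H: (N, F) input features; A: (N, N) binary adjacency (masks attention;
    include self-loops if nodes should attend to themselves);
    W: (F, F_out) shared linear transform; a: (2 * F_out,) attention vector.
    """
    Wh = H @ W                                            # shared transform
    F_out = Wh.shape[1]
    # e_ij = LeakyReLU(a^T [Wh_i || Wh_j]) computed for every node pair
    e = (Wh @ a[:F_out])[:, None] + (Wh @ a[F_out:])[None, :]
    e = np.where(e > 0, e, slope * e)                     # LeakyReLU
    e = np.where(A > 0, e, -1e9)                          # masked attention
    alpha = np.exp(e - e.max(axis=1, keepdims=True))      # row-wise softmax
    alpha = alpha / alpha.sum(axis=1, keepdims=True)
    return np.tanh(alpha @ Wh)                            # nonlinearity sigma
```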
Other GNN variants that were proposed in the papers surveyed in this review can be summarized as:
  • Adaptive graph convolutional network [36].
  • Graph domain adaptation [23].
  • Isomorphism graph-based model [16].
  • Synergic GCN [37,38].
  • Simple graph convolution network [39,40].
  • Graph-based segmentation (e.g., 3D Unet-graph [24,41], Spherical Unet [19,22]).
  • Attention mechanisms for feature representation [42].
  • Weighted GATs [43].
  • Edge-weighted GATs [17,44].
  • Attention based ST-GCN [45,46].
  • Cross-modality with GAT-based embedding [47].

2.3. Graph Neural Networks with Temporal Dependency

GNNs have primarily been developed for static graphs that do not change over time. However, several real-world graphs are dynamic and evolve over time (e.g., brain activity recorded using fMRI). Variants of GNNs known as dynamic graph networks aim to learn hidden patterns from the spatial and temporal dependencies of a graph. These models can be divided into two main types:
  • RNN-based approaches: These methods capture spatio-temporal dependencies by using graph convolutions to filter inputs and hidden states before passing them to a recurrent unit.
  • CNN-based approaches: These approaches tackle spatio–temporal graphs in a non-recursive manner. They use temporal connections to extend static graph structures so that they can apply traditional GNNs on the extended graphs.

2.3.1. RNN-Based Approaches

The aim of these models is to learn node representations with recurrent neural architectures (RNNs). They assume a node in a graph constantly exchanges information/messages with its neighbours until a stable equilibrium is reached. In a deep learning model, RNNs introduce the notion of time by including recurrent edges that span adjacent time steps [48]. RNNs perform the same task for every element of a sequence, with the output being dependent on the previous computations; hence they are termed recurrent. LSTMs [49] were proposed to increase the flexibility of RNNs by employing an internal memory, termed the cell state, to address the vanishing gradient problem. Three logic gates are also introduced to adjust the cell state and produce the LSTM output. GRUs [50] are a variant of LSTMs which combine the forget and input gates to simplify the model.
DCRNN model: Diffusion convolutional recurrent neural networks (DCRNNs) [51] introduce a diffusion graph convolutional layer to capture spatial dependencies, and use a sequence-to-sequence architecture with GRUs to capture temporal dependencies. A DCRNN uses a graph diffusion convolution layer to process the inputs of a GRU such that the recurrent unit receives historical information from the last time step as well as neighbourhood information from the graph convolution. The advantage of a DCRNN is its ability to handle long-term dependencies thanks to its recurrent architecture.
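To illustrate the idea of filtering a recurrent unit's inputs and states with a graph convolution, here is a minimal sketch of one GRU step (ours); note that it substitutes a symmetric normalized adjacency for the DCRNN's bidirectional diffusion convolution, so it is a simplification of [51], not a faithful reimplementation.

```python
import numpy as np

def norm_adj(A):
    """Symmetric normalized adjacency with self-loops (a stand-in for the
    diffusion convolution used by the DCRNN)."""
    A = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A.sum(axis=1))
    return A * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def graph_gru_step(X, H, A_hat, Wz, Wr, Wh):
    """One GRU step whose input/state transforms are graph convolutions.

    X: (N, C) inputs; H: (N, F) previous hidden state; A_hat: (N, N);
    Wz, Wr, Wh: ((C + F), F) weights for the update, reset and candidate.
    """
    sigm = lambda v: 1.0 / (1.0 + np.exp(-v))
    XH = A_hat @ np.concatenate([X, H], axis=1)   # mix neighbourhood information
    z = sigm(XH @ Wz)                             # update gate
    r = sigm(XH @ Wr)                             # reset gate
    XrH = A_hat @ np.concatenate([X, r * H], axis=1)
    h_tilde = np.tanh(XrH @ Wh)                   # candidate state
    return z * H + (1.0 - z) * h_tilde
```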
GCRN model: The graph convolutional recurrent network (GCRN) [52] combines an LSTM network with ChebNet. A dynamic graph consists of time-varying connectivity among ROIs, and temporal information is handled by using LSTM units. Such a framework has been used in [53] for Alzheimer’s disease classification.

2.3.2. CNN-Based Approaches

Although RNN-based models are widely used for time series analysis, they still suffer from time-consuming iterations, complex gate mechanisms, and slow response to dynamic changes. CNN-based approaches operate with fast training, stable gradients, and low memory requirements [54]. These approaches interleave 1D-CNN layers with graph convolutional layers to learn temporal and spatial dependencies, respectively.
STGCN model: The spatio-temporal graph convolutional network proposed by Yu et al. [55] employed convolutional structures on the time axis to capture dynamic temporal behaviors. This model integrates a 1D convolutional layer with ChebNet or GCN layers.
Such adoption of CNNs to perform a convolution operation in the temporal dimension has been used for sleep state classification [45].
ST-GCN model: ST-GCNs are popular for solving problems that base predictions on graph-structured time series [56]. The main benefit of a temporal GCN is that it uses a feature extraction operation that is shared over time and space. The input to the ST-GCN is the joint coordinate vectors on the graph nodes. Multiple layers of spatio-temporal graph convolution operations process the input data and generate higher-level feature maps on the graph. The resultant classification is performed using a conventional dense layer and activation.
TGCN model: Traditional temporal convolutional neural networks (TCNNs) show that variations of convolutional neural networks can achieve impressive results for sequential data [57]. TCNNs use dilated causal convolutional layers where an output at time t is convolved only with elements from time t or earlier in the previous layer, i.e., inputs have no influence on output steps that precede them in time. In a dilated convolutional layer, a filter is sequentially applied to inputs by skipping input values with a pre-defined step (the dilation rate). Wu et al. [58] proposed a method for multi-resolution modeling of temporal dependencies; their temporal model is based on dilated convolutions. This approach exploits the fact that subsequent layers have dilated receptive fields. Temporal graph convolutional networks (TGCNs) take structural time series data as input and apply feature extraction operations that are shared over both time and space. TGCNs show promise in applications such as EEG electrode distributions, where several datasets of similar but not identical configurations need to be analyzed.
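A minimal sketch of the dilated causal convolution underlying TCNN/TGCN temporal feature extraction (ours): the filter is shared across all nodes (e.g., EEG electrodes), and the left-padding enforces that an output at time t never sees inputs later than t.

```python
import numpy as np

def dilated_causal_conv(X, w, dilation=1):
    """Dilated causal 1D convolution applied independently to every node.

    X: (N, T) one signal per node; w: (K,) filter shared across nodes.
    The output at time t depends only on inputs at time t or earlier.
    """
    N, T = X.shape
    K = w.shape[0]
    pad = (K - 1) * dilation                 # left-pad only: causality
    Xp = np.pad(X, ((0, 0), (pad, 0)))
    out = np.zeros((N, T))
    for k in range(K):
        # w[-1] aligns with the current time step, w[0] with the oldest
        out += w[k] * Xp[:, k * dilation : k * dilation + T]
    return out
```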
Other dynamic GNN variants adopted and introduced by research analysed in this review include:
  • Sequential GCN based on complex networks [59].
  • Temporal-adaptive GCN [60].

3. Case Studies of GNN for Medical Diagnosis and Analysis

Graph convolutional networks have been utilized in classification, prediction, segmentation and reconstruction tasks with non-structural (e.g., fMRI, EEG) and structural data (e.g., MRI, CT). The following sections review the specifics of how GNNs have been used with each of the medical signal types identified by our survey.
The case studies for medical diagnosis are organised according to the input data and the baseline graph framework adopted or proposed, with its corresponding application and dataset. Case studies have been divided into four main groups: functional connectivity analysis, electrical-based analysis, and anatomical structure analysis (classification/regression, and segmentation), which are detailed in Table 1, Table 2, Table 3 and Table 4, respectively. Rather than presenting an exhaustive literature review for each studied case, we discuss prominent highlights of how GNNs were used in each case.
It is important to highlight that there are several interesting works that aim to map functions to brain regions, to model the non-stationary nature of functional connectivity, and to analyse the brain’s responses to internal or external events using graph-based deep learning models. These approaches have been used for gender classification from brain connectivity or brain structure, emotion recognition, and brain motor imagery. Although the outcomes of these studies can potentially be used for clinical applications, they are not directly related to detecting or classifying a disease. Thus, their contributions are not covered in this manuscript.

3.1. Functional Connectivity Analysis

This section mainly covers applications of graph representation learning to brain functional connectivity, as summarized in Table 1; to the best of our knowledge, there are no applications involving other body functions in the reviewed literature.
Table 1. Summary of GCN approaches adopted for functional connectivity and their applications.

| Authors | Year | Modality | Application | Dataset |
|---|---|---|---|---|
| Li et al. [17] † | 2020 | t-fMRI | Classification: Autism disorder | ASD Biopoint Task (Yale Child Study Center [16]) (2 classes) |
| Li et al. [61] | 2020 | t-fMRI | Classification: Autism disorder | Biopoint [62] (2 classes) |
| Huang et al. [18] | 2020 | rs-fMRI | Classification: Autism disorder | ABIDE [63] (2 classes) |
| Rakhimberdina et al. [64] | 2020 | fMRI | Classification: Autism disorder | ABIDE [63] (2 classes) |
| Li et al. [65] | 2020 | t-fMRI | Classification: Autism disorder | Yale Child Study Center [16] (2 classes) |
| Jiang et al. [66] | 2020 | fMRI | Classification: Autism disorder | ABIDE [63] (2 classes) |
| Li et al. [16] | 2019 | t-fMRI | Classification: Autism disorder | Yale Child Study Center (private) (2 classes) |
| Kazi et al. [67] | 2019 | rs-fMRI | Classification: Autism disorder | ABIDE [63] (2 classes) |
| Yao et al. [68] | 2019 | rs-fMRI | Classification: Autism disorder | ABIDE [63] (2 classes) |
| Anirudh et al. [69] | 2019 | rs-fMRI | Classification: Autism disorder | ABIDE [63] (2 classes) |
| Rakhimberdina and Murata [40] | 2019 | fMRI | Classification: Autism disorder | ABIDE [63] (2 classes) |
| Ktena et al. [70] | 2018 | rs-fMRI | Classification: Autism disorder | ABIDE [63] (2 classes) |
| Parisot et al. [15] | 2018 | rs-fMRI | Classification: Autism disorder | ABIDE [63] (2 classes) |
| Ktena et al. [71] | 2017 | rs-fMRI | Classification: Autism disorder | ABIDE [63] (2 classes) |
| Parisot et al. [72] | 2017 | rs-fMRI | Classification: Autism disorder | ABIDE [63] (2 classes) |
| Rakhimberdina and Murata [40] | 2019 | fMRI | Classification: Schizophrenia | COBRE [73] (2 classes) |
| Rakhimberdina and Murata [40] | 2019 | rs-fMRI | Classification: Attention deficit disorder | ADHD-200 [74] (2 classes) |
| Yao et al. [68] | 2019 | rs-fMRI | Classification: Attention deficit disorder | ADHD-200 [74] (2 classes) |
| Yao et al. [60] 🟉 | 2020 | rs-fMRI | Classification: Major depressive disorder | MDD [75] (2 classes) |
| Yang et al. [44] † | 2019 | fMRI/sMRI | Classification: Bipolar disorder | BD (private) |
| Li et al. [61] | 2020 | rs-fMRI | Classification: Brain response stimuli | HCP 900 [76] (7 classes) |
| Zhang et al. [5] | 2019 | fMRI | Classification: Brain response stimuli | HCP S1200 [76] (21 classes) |
| Guo et al. [77] | 2017 | MEG | Classification: Brain response stimuli | Visual stimulus (private) (2 classes) |

🟉 GCN with temporal structures for medical diagnostic analysis. † GCN with attention structures for medical diagnostic analysis.

3.1.1. Autism Spectrum Disorder

Autism spectrum disorder (ASD) is a complex neurodevelopmental disorder characterized by recurring difficulties in social interaction, speech and nonverbal communication, and restricted/repetitive behaviours. The screening of ASD is challenging due to uncertainties associated with its symptoms [78]. Resting-state fMRI (rs-fMRI) and task fMRI (t-fMRI) are the main modalities used to classify the population into ASD or healthy control (HC) groups.
The rapid development of GNNs has attracted interest in using these architectures to analyse fMRI and non-imaging data for disease classification. Graph-based models can be classified into two groups based on the node definition, as illustrated in Figure 3: (a) Individual graphs: nodes are brain regions and edges are functional correlations between time series observations from those regions; each graph thus represents a single subject, and graph comparison metrics are computed to analyse these graphs (left panel of Figure 3). (b) Population graphs: each node represents a subject with corresponding brain-connectivity data, and edges are determined by the similarity between subjects’ phenotypic features (age, gender, handedness, etc.) (right panel of Figure 3).
Individual-based graph methods: Ktena et al. [71] proposed a GNN method to learn a similarity (distance) metric between irregular graphs, such as the functional connectivity graphs obtained from the Autism Brain Imaging Data Exchange (ABIDE) dataset [63], to classify individuals as ASD or HC.
The method of Ktena et al. [70] builds on their previous work [71] to learn, via supervised learning, a graph similarity metric in the spectral graph domain from brain connectivity networks. They applied their method to individual graphs constructed from the ABIDE database to classify subjects into ASD or HC. The graph construction is illustrated in Figure 4. They showed that their spectral graph matching method not only outperforms non-graph matching but is also superior to individual subject classification and manifold learning methods.
The graph similarity metric proposed by Ktena et al. [70], which uses a specific template for brain region of interest (ROI) parcellation, imposes a limitation: the analysis is restricted to a single spatial scale (i.e., a fixed graph). Yao et al. [68] dealt with this limitation by proposing a multi-scale triplet GCN. They constructed multi-scale functional connectivity patterns for each subject through multi-scale templates for coarse-to-fine ROI parcellation. A triplet GCN model was designed to learn multi-scale graph features of brain networks. Their application to fMRI data from the ABIDE dataset showed high performance in ASD vs. HC classification.
For GCN methods, all nodes are required to be present during training, which results in low performance on unseen nodes. Li et al. [16] proposed a GCN algorithm to discover ASD brain biomarkers from t-fMRI. Different from the semi-supervised spectral GCN algorithm [28] used in [72], this GCN classifier is isomorphism graph-based and can interpret graphs with different nodes and edges. In other words, the GCN is trained on the whole graph and tested on sub-graphs, such that the importance of sub-graphs and nodes can be determined. In two further works, Li et al. [17,61] also improved their individual graph-level analysis by proposing BrainGNN and a pooling regularized GNN model to investigate the brain regions related to a neurological disorder from t-fMRI data for ASD vs. HC classification.
In addition, the low signal-to-noise ratio of fMRI and its high dimensionality impose another limitation on using fMRI for graph-level classification and the detection of functional differences between ASD and HC groups. Li et al. [65] dealt with this challenge by modeling the whole-brain fMRI as a graph, which allowed them to preserve geometrical and temporal information and learn a better graph embedding. They implemented their method on a group of 75 ASD children and 43 age- and IQ-matched healthy controls collected at the Yale Child Study Center [16]. Their results indicated a more robust classification of ASD vs. HC.
Population-based graph methods: Population graphs have been shown to be effective for brain disorder classification. Parisot et al. [72] investigated the performance of a GCN for brain analysis in a population, where the authors built a population graph using both rs-fMRI and non-imaging data (acquisition information). They applied their model to the ABIDE dataset [63] to classify subjects as ASD or HC. Their semi-supervised method showed better performance in comparison to a standard linear classifier (which only considered individual features for classification). In an extension of this work, Parisot et al. [15] proposed a spectral GCN model which takes into account both the pairwise similarity between subjects (phenotypic information) and information obtained from subject-specific imaging features to classify subjects as ASD or HC in a population.
As illustrated in Figure 5, Rakhimberdina and Murata [40] applied a linear simple graph convolution (SGC) [39] for brain disorder classification. They construct the population graphs by using the Hamming distance between phenotypic features of the subjects as the weights of the edges of the graph. Their results on the ABIDE dataset [63] showed the high performance and efficiency of the linear SGC over the GCN-based model deployed by Parisot et al. [15] on the same dataset.
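A minimal sketch of such a population graph construction follows, under the assumption of categorical phenotypic features: the edge weight here is the fraction of matching features (one minus the normalized Hamming distance), optionally modulated by an imaging-feature similarity term in the spirit of Parisot et al. [15]. The function name is illustrative.

```python
import numpy as np

def population_graph(phenotypes, imaging_similarity=None):
    """Population graph with edge weights from phenotypic agreement.

    phenotypes: (S, P) array of categorical features (e.g., site, sex)
    for S subjects. Edge weight = fraction of matching features
    (1 - normalized Hamming distance).
    """
    S = phenotypes.shape[0]
    W = np.zeros((S, S))
    for i in range(S):
        for j in range(i + 1, S):
            match = np.mean(phenotypes[i] == phenotypes[j])
            W[i, j] = W[j, i] = match
    if imaging_similarity is not None:
        W *= imaging_similarity   # optional modulation, as in Parisot et al.
    return W
```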
As there is no standard method to construct graphs for a GNN, Anirudh et al. [69] proposed a bootstrapped version of GCNs that makes models less sensitive to the initialisation of the population graph construction. They generated random graphs from the initial population graph (from the ABIDE dataset [63]) to weakly train a GCN for ASD vs. HC classification, and fused the predictions as the final result. To avoid the spatial limitation of a single template and to learn multi-scale graph features of brain networks, Yao et al. [68] proposed a multi-scale triplet GCN model. These solutions, however, are problem specific, and choosing a particular graph definition over another remains a challenging problem. Rakhimberdina et al. [64] proposed a population graph-based multi-model ensemble method to deal with this problem. Their results on the ABIDE dataset [63] showed a 2.91% improvement in comparison to the best result reported for a non-graph solution [79].
The heterogeneity of the graph is also challenging. Kazi et al. [67] proposed Inception-GCN as a spectral domain architecture for deep learning on graphs for node-level disease classification. This inception graph model is capable of capturing intra- and inter-graph structural heterogeneity during convolutions. Inception-GCN improved the performance of node classification in comparison to the baseline GCN of Parisot et al. [72] using rs-fMRI data from ABIDE.
To preserve the topology information in the population network and the associated individual brain functional networks, Jiang et al. [66] proposed a hierarchical GCN framework that maps the brain network to a low-dimensional vector while preserving the topology information. Their method leveraged a correlation mechanism in the population network which captures more information and results in a more accurate brain network representation, and thus better classification of ASD on the ABIDE dataset [63] in comparison to Eigenpooling GCN [80] and other population GCN [72] methods.
Finally, as stated earlier, the uncertainties associated with ASD make its screening challenging [78]; thus, Huang et al. [18] proposed an edge-variational GCN (EV-GCN) with a learnable adaptive population graph core to incorporate multi-modal data for uncertainty-aware disease detection. Their model was tested on ASD/HC data collected at the Yale Child Study Center [16] and showed the efficacy of the proposed method for embedding ASD and HC brain graphs.

3.1.2. Schizophrenia

Automatic classification of schizophrenia (SZ) based on fMRI data has also attracted attention. SZ is a devastating mental disease of extraordinary complexity, characterized by behavioral symptoms such as hallucinations and disorganized speech. SZ shows local abnormalities in brain activity and in functional connectivity networks, which can have unusual or disrupted topological properties. Rakhimberdina and Murata [40] exploited the simple linear graph [39] model for SZ detection, achieving an accuracy of 80.55% for a binary classification task. The use of the linear model within the graph model clearly decreases computational time. However, the edge construction strategy could be further improved by incorporating techniques that learn the edge weights, such as self-attention.

3.1.3. Major Depressive Disorder

Major depressive disorder (MDD) is a mental disease characterised by a depressed mood, diminished interests and impaired cognitive function. Among various neuroimaging techniques, rs-fMRI can observe dysfunction in brain connectivity on BOLD signals, and has been used to discriminate between MDD patients and healthy controls. Yao et al. [60] exploited time-varying dynamic information with a temporal adaptive GCN on rs-fMRI data to learn the periodic brain status changes to detect MDD. The model learns a data-based graph topology and captures dynamic variations of the brain fMRI data, and outperforms traditional GCN [28] and GAT [35] models.

3.1.4. Bipolar Disorder

Bipolar disorder (BD), or manic depression, is a mental health condition that causes extreme mood swings. Functional and structural brain studies have identified quantitative differences between BD and healthy controls; thus, combining modalities may uncover hidden relationships. Yang et al. [44] proposed a graph-attention based method that integrates structural MRI and fMRI to detect bipolar disorder. The main challenges in multimodal data fusion are the dissimilarity of the data types being fused and the interpretation of the results. One of the advantages of attention mechanisms is that they allow for the use of variable-sized inputs when focusing on the most important parts of the data to make decisions, which can then be used to interpret the salient input features. The model showed superiority over other machine learning classifiers and alternative GCN formulations.

3.1.5. Brain Responses to Stimulus

Identifying the relationships between brain regions in response to specific cognitive stimuli has been an important area of neuroimaging research. An emerging approach is to study these brain dynamics using fMRI data. To identify these brain states, traditional methods rely on the acquisition of brain activity over time to accurately decode a brain state.
Zhang et al. [5] proposed a GCN for classifying human brain activity across 21 cognitive tasks by associating a given window of fMRI data with the task performed. The GCN takes a short series of fMRI volumes as input (10 s), propagates information among inter-connected brain regions, generates a high-level domain-specific graph representation, and predicts the cognitive state. This model outperforms a multi-class support vector machine classifier in identifying a variety of cognitive states in the HCP dataset [76]. However, the model only incorporates spatial graph convolutions, thus potentially losing the fine temporal information present in the BOLD signal [5].
Identifying the particular brain regions that relate to a specific neurological disorder or cognitive stimulus is also critical for neuroimaging research. GNNs have been widely applied as a graph analysis method. Nodes in the same brain graph have distinct locations and unique identities; thus, applying the same kernel over all nodes is problematic. Li et al. [61] adopted weighted graphs from fMRI and ROI-aware graph convolutional layers to infer which ROIs are important for the prediction of cognitive tasks. The model maps regional and cross-regional functional activation patterns for the classification of cognitive task decoding in the HCP 900 dataset [76]. The framework is also capable of learning the node grouping and extracting graph features jointly, providing the flexibility to choose between individual-level and group-level explanations.
Deep learning has also been considered a competitive approach for analysing high-dimensional spatio-temporal data such as MEG signals. These signals are captured with 306 sensors (electrodes) distributed across the scalp that record the cortical activation. For reliable analysis, it is critical to learn discriminative low-dimensional intrinsic features. Guo et al. [77] proposed a spectral GCN model that integrates brain connectivity information to predict visual tasks using MEG data. The authors introduced an autoencoder-based network that integrates graph information to extract meaningful representations in an unsupervised manner, and classify whether a subject visualises a face or an object. This work focused on learning a low-dimensional representation from the input of MEG signals (i.e., a dimensionality reduction technique).

3.2. Electrical-Based Analysis

This section mainly covers applications of graph representation learning to electrical activity, including the electroencephalogram (EEG), intracranial EEG (iEEG), electrocardiogram (ECG), and polysomnography (PSG), as summarized in Table 2.
Table 2. Summary of GCN approaches adopted for electrical-based analysis and their applications.

| Authors | Year | Modality | Application | Dataset |
|---|---|---|---|---|
| Jang et al. [81] | 2019 | EEG | Classification: Affective mental states | DEAP [82] (40 classes) |
| Jang et al. [83] | 2018 | EEG | Classification: Affective mental states | DEAP [82] (40 classes) |
| Mathur et al. [84] | 2020 | EEG | Classification: Seizure detection | University of Bonn [85] (2 classes) |
| Wang et al. [59] 🟉 | 2020 | EEG | Classification: Seizure detection | University of Bonn [85] (2 classes), SSW-EEG (private) (2 classes) |
| Covert et al. [86] 🟉 | 2019 | EEG | Classification: Seizure detection | Cleveland Clinic Foundation (private) (2 classes) |
| Lian et al. [42] † | 2020 | iEEG | Regression: Seizure prediction (preictal) | Freiburg iEEG (EPILEPSIAE) [87] |
| Wagh et al. [88] | 2020 | EEG | Classification: Abnormal EEG | TUH EEG corpus [89], MPI LEMON [90] (2 classes) |
| Wang et al. [43] † | 2020 | ECG | Classification: Heart abnormality | HFECGIC [91] (34 classes) |
| Sun et al. [92] | 2020 | EGM | Classification: Heart abnormality | EGM open-heart surgery [93] (2 classes) |
| Jia et al. [45] 🟉† | 2020 | PSG | Classification: Sleep staging | MASS-SS3 [94] (5 classes) |

🟉 GCN with temporal structures for medical diagnostic analysis. † GCN with attention structures for medical diagnostic analysis.

3.2.1. Affective Mental States

Brain signals provide comprehensive information regarding the mental state of a human subject. Jang et al. [83] proposed the first method to apply deep learning on graph signals to EEG-based visual stimulus identification. The model converts the EEG into graph signals with appropriate graph structures and signal features as input to GCNs to identify the visual stimulus watched by a human subject. Compared to fMRI signals, EEG analysis is limited to observing a smaller number of brain regions (i.e., electrodes) which may not allow for a sufficiently rich graph representation. Thus, the authors create a graph containing both intra-band and inter-band connectivity. This proposed approach is illustrated in Figure 6. Defining the graph connectivity structure for a given task is an ongoing problem and current models still have the limitation that appropriate graph structures need to be manually designed. To address this, Jang et al. [81] proposed an EEG classification model that can determine an appropriate multi-layer graph structure and signal features from a collection of raw EEG signals and classify them. In contrast to approaches that use a pre-defined connectivity structure, this method for learning the graph structure enhances classification accuracy.

3.2.2. Epilepsy

Epilepsy is one of the most prevalent neurological disorders, characterised by disturbances of the brain’s electrical activity and recurrent, unpredictable seizures. Machine learning has been applied to seizure prediction, seizure detection, and seizure classification through the analysis of EEG/iEEG signals. CNNs and RNNs have shown success in analysing these signals for epilepsy-related tasks, but they suffer from a loss of neighbourhood information. GCNs, on the other hand, represent the relationships between electrodes using edges, and can thus preserve rich connection information.
Seizure detection from time series refers to recognising ictal activity, i.e., determining the presence or absence of an ongoing seizure. Mathur et al. [84] presented a method for detecting ictal activity using a visibility graph on the EEG, employing a Gaussian kernel function to assign edge weights. A graph discrete Fourier transform is also applied to obtain features which are used in the classification phase. Some works have proven the relationship between epilepsy and EEG components at certain frequencies, and this frequency–domain representation can generate highly interpretable results. Wang et al. [59] introduced a sequential GCN that preserves the sequential information in 1D signals. The model is based on a complex network that represents a 1D signal as a graph [95], in which each data point corresponds to a node and each edge is computed by a connection rule. The authors first transform the time-domain signal using a fast Fourier transform to produce a sequence of frequency–domain features that are aligned in the time domain, from which they develop a graph representation. Then, a GCN is adopted to learn features from the input network to improve the classification performance. By combining the frequency–domain network representation with the GCN, the model can detect conventional seizures in the Bonn dataset [85], and a seizure type known as absence epilepsy in a private dataset. However, multi-channel EEG signals were not considered in the experimental setup. Covert et al. [86] proposed a temporal graph convolutional network (TGCN) which consists of feature extractors that are localized and shared over both time and space. TGCN is inherently invariant to when and where patterns occur. The authors investigate the benefits of TGCN’s interpretability in terms of assisting clinicians in determining when seizures occur and which areas of the brain are most involved. However, the model is limited in its ability to handle varying graph structures.
Seizure prediction aims to predict upcoming seizures or the pre-ictal brain state (i.e., before a seizure). The underlying relationship in the pre-ictal period can be diverse across patients, making it difficult to build a predefined graph that is effective for a large number of patients. To address this, instead of directly using a prior graph, Lian et al. [42] proposed to build a graph based on the influences of relationships. The authors introduced global-local GCNs that jointly learn the structure and connection weights to optimize the task-related learning of iEEG signals. The connections in nodes are updated with attention and gating mechanisms, but the model requires a large volume of data for training.

3.2.3. Abnormal EEG in Neurological Disorders

The application of machine learning techniques to automatically detect anomalies in medical data is particularly attractive considering the difficulty of identifying anomalies consistently and objectively. There exist numerous medical anomaly detection tasks, including identifying abnormal EEG recordings of patients with neurological disorders. When analysing an EEG recording, an assessment is made as to whether the recorded signal indicates abnormal or regular brain activity patterns.
Recent GCNs have addressed the challenges of learning the spatio-temporal relationships in EEG data. Wagh et al. [88] introduced a GCN that captures both spatial and functional connectivity in multi-channel EEG data to distinguish between the “normal” EEGs of patients with neurological diseases and the EEGs of healthy individuals. First, a graph-based representation with its corresponding node-level embedding is extracted from 10-second windows of EEG signals fed through a GCN model. Then, a graph-level embedding is computed using an averaging operation, the output of which is input into a fully connected network to obtain the output class. Finally, a maximum likelihood estimation based on the window-level predictions is adopted to determine the label of the entire EEG recording (i.e., subject-level prediction). Results on two large-scale scalp EEG databases, the TUH EEG corpus [89] and MPI LEMON [90], significantly outperform traditional machine learning models. The authors also evaluated the effect of depth on GCNs, finding that greater depth offers only a marginal improvement in performance. However, the data from patients and control participants were collected using different systems, which may itself help to distinguish the two classes, and a feature engineering phase was used, which limits the model’s ability to directly discover the optimal features from the data.
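As a sketch of the window-to-recording aggregation step, under an independence assumption across windows, a maximum-likelihood decision reduces to summing per-window log-probabilities; the exact estimator used in [88] may differ.

```python
import numpy as np

def recording_prediction(window_probs):
    """Aggregate per-window class probabilities into a recording-level label.

    window_probs: (n_windows, n_classes) softmax outputs of a window-level
    GCN. Assuming independent windows, maximum likelihood amounts to
    summing log-probabilities and taking the argmax.
    """
    log_likelihood = np.log(np.clip(window_probs, 1e-12, 1.0)).sum(axis=0)
    return int(np.argmax(log_likelihood))
```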

3.2.4. Heart Abnormalities

Electrocardiograms (ECGs) are widely used to identify cardiac abnormalities, and a variety of methods have been proposed for the classification of ECG signals. However, an ECG record may contain multiple concurrent abnormalities, and current deep learning methods may ignore the correlations between classes and look at each class independently. This can be addressed via graph-based representations.
The GAT architecture has matched or surpassed state-of-the-art results across graph learning benchmarks. Still, it is designed to only classify nodes within a single network, and it can only deal with binary graphs. Wang et al. [43] proposed a multi-label weighted graph attention network to classify 34 kinds of electrocardiogram abnormalities. In this model, ECG features are extracted from a CNN (1D ResNet). The features of each class are fed into an improved GAT by integrating a co-occurrence weight with masked attentional weights. The weighted GAT helps capture the relationships within the ECG abnormalities. Then, the features learnt by the CNN and GAT are concatenated to output the probability of each class.
The epicardial electrogram (EGM) is measured on the heart’s surface and has been used to analyse atrial fibrillation, a clinical arrhythmia associated with stroke and sudden death. Conventional signal processing methods are less suitable for joint space-time and frequency domain analysis. Sun et al. [92] represented the spatial relationships of epicardial electrograms through a graph to formulate a high-level model of atrial activity. The authors evaluated the spatio-temporal variation of EGM data with a graph-time spectral analysis framework and identified spectral differences between normal heart rhythms and atrial fibrillation in EGM signals recorded during open heart surgery [93].

3.2.5. Sleep Staging

Sleep stage classification, the process of segmenting a sleep period into epochs, is essential for the clinical assessment of sleep disorders including insomnia, circadian rhythm disorders, and sleep-related breathing and movement disorders [96], which may lead to serious health problems affecting quality of life. Sleep staging is conducted through the analysis of electro-graphic measurements of the brain, eye movement, chin muscles, and cardiac and respiratory activity, collected with polysomnography (PSG). The manual determination of sleep stages from PSG records is a complex, costly, and error-prone process that requires expertise. Although traditional CNN and RNN models can achieve high accuracy for automatic sleep stage classification, these models ignore the connections among brain regions, and capturing the transitions between sleep stages remains challenging. Sleep experts identify a sleep stage according to both its EEG patterns and the class labels of neighbouring epochs. To address these challenges, Jia et al. [45] adopted an adaptive graph connection representation with attention, ST-GCN [46], for automatic sleep stage classification and to capture sleep transition rules over time. First, the pairwise relationships between nodes (EEG channels) are constructed dynamically; then, an ST-GCN model with attention is adopted to extract both spatial and temporal features. Experimental results classifying five sleep stages on the MASS-SS3 PSG dataset [94] show the best performance compared to SVM, CNN and RNN baselines.
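The spatial-then-temporal factorisation at the heart of such spatio-temporal GCNs can be sketched as below: a graph convolution over EEG channels at every epoch, followed by a temporal convolution across neighbouring epochs to capture transition rules. Dimensions and the adjacency are illustrative, not those of Jia et al. [45].

```python
import torch
import torch.nn as nn

class STBlock(nn.Module):
    def __init__(self, in_dim, out_dim, kernel=3):
        super().__init__()
        self.spatial = nn.Linear(in_dim, out_dim)
        self.temporal = nn.Conv1d(out_dim, out_dim, kernel, padding=kernel // 2)

    def forward(self, x, adj):
        # x: (time, nodes, in_dim); adj: (nodes, nodes) normalised adjacency
        h = torch.relu(self.spatial(adj @ x))   # graph conv at every sleep epoch
        T, N, D = h.shape
        h = h.permute(1, 2, 0)                  # (nodes, feat, time)
        h = torch.relu(self.temporal(h))        # temporal conv across epochs
        return h.permute(2, 0, 1)               # back to (time, nodes, feat)

x = torch.randn(20, 19, 9)      # 20 sleep epochs, 19 channels, 9 features each
adj = torch.softmax(torch.randn(19, 19), dim=-1)
out = STBlock(9, 16)(x, adj)    # (20, 19, 16)
```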

3.3. Anatomical Structure Analysis (Classification and Prediction)

This section covers applications of graph representation learning to anatomical structure analysis for classification, with input data such as magnetic resonance imaging (MRI), T1-weighted imaging (T1WI), diffusion MRI (DMRI), computed tomography (CT), X-ray and ultrasound (US), as summarized in Table 3.
Table 3. Summary of GCN approaches adopted for anatomical structure analysis and their applications (Group 1).
| Authors | Year | Modality | Application | Dataset |
|---|---|---|---|---|
| Ma et al. [97] † | 2020 | MRI | Classification: Alzheimer’s disease | ADNI [98] (2 classes) |
| Huang et al. [99] | 2020 | MRI/fMRI | Classification: Alzheimer’s disease | ADNI [100] (3 classes) |
| Huang et al. [18] | 2020 | MRI | Classification: Alzheimer’s disease | ADNI [100] (3 classes), TADPOLE [101] (3 classes) |
| Yu et al. [102] | 2020 | MRI | Classification: Alzheimer’s disease/MCI | ADNI [100] (3 classes) |
| Gopinath et al. [20] | 2020 | MRI | Classification: Alzheimer’s disease | ADNI [100] (2 classes) |
| Zhao et al. [103] | 2019 | MRI | Classification: Alzheimer’s disease/MCI | ADNI [100] (2 classes) |
| Wee et al. [104] | 2019 | MRI | Classification: Alzheimer’s disease | ADNI [100] (2 classes), Asian cohort (private) (2 classes) |
| Kazi et al. [67] | 2019 | MRI | Classification: Alzheimer’s disease | TADPOLE [101] (3 classes) |
| Song et al. [105] | 2019 | MRI | Classification: Alzheimer’s disease | ADNI [100] (4 classes) |
| Gopinath et al. [36] | 2019 | MRI | Classification: Alzheimer’s disease | ADNI [100] (2 classes) |
| Guo et al. [106] | 2019 | PET | Classification: Alzheimer’s disease | ADNI 2 [107] (2/3 classes) |
| Parisot et al. [15] | 2018 | MRI | Classification: Alzheimer’s disease | ADNI [100] (3 classes) |
| Parisot et al. [72] | 2017 | MRI | Classification: Alzheimer’s disease | ADNI [100] (3 classes) |
| Xing et al. [53] 🟉 | 2019 | T1WI/fMRI | Classification: Alzheimer’s disease/EMCI | ADNI [98] (2 classes) |
| Zhang et al. [108] | 2018 | sMRI/DTI | Classification: Parkinson’s disease | PPMI [109] (2 classes) |
| McDaniel and Quinn [110] † | 2019 | sMRI/dMRI | Classification: Parkinson’s disease | PPMI [109] (2 classes) |
| Zhang et al. [47] † | 2020 | sMRI/dMRI | Classification: Parkinson’s disease | PPMI [109] (2 classes) |
| Yang et al. [37] | 2019 | MRI | Classification: Brain abnormality | Brain MRI images (private) (2 classes) |
| Wang et al. [111] | 2020 | CT | Classification: COVID-19 detection | Chest CT scans (private) (2 classes) |
| Yu et al. [112] | 2020 | CT | Classification: COVID-19 detection | Hospital of Huai’an City (private) (2 classes) |
| Wang et al. [113] | 2021 | CT | Classification: Tuberculosis | Chest CT scans (private) (2 classes) |
| Hou et al. [114] † | 2021 | X-ray | Classification: Chest pathologies | IU X-ray [115] (14 classes), MIMIC-CXR [116] (14 classes) |
| Zhang et al. [117] † | 2020 | X-ray | Classification: Chest pathologies | IU-RR [115] (20 classes) |
| Chen et al. [118] | 2020 | X-ray | Classification: Chest pathologies | ChestX-ray14 [119] (14 classes), CheXpert [120] (14 classes) |
| Zhang et al. [121] | 2021 | X-ray | Classification: Breast cancer | mini-MIAS (mammogram) [122] (6 classes) |
| Du et al. [123] | 2019 | X-ray | Classification: Breast cancer | INbreast (full field digital mammogram) [124] (2 classes) |
| Yin et al. [125] | 2019 | US | Classification: Kidney disease | Children’s Hospital of Philadelphia (private) (2 classes) |
| Liu et al. [126] | 2020 | MRI | Regression: Relative brain age | Preterm MRI (private) |
| Gopinath et al. [20] | 2020 | MRI | Regression: Relative brain age | ADNI [100] |
| Gopinath et al. [36] | 2019 | MRI | Regression: Relative brain age | ADNI [100] |
| Chen et al. [127] | 2020 | DMRI | Regression: Brain data | BCP [128] |
| Kim et al. [129] | 2019 | DMRI | Regression: Brain data | DMRI neonate (private) |
| Hong et al. [130] | 2019 | DMRI | Regression: Brain data | DMRI infant (private) |
| Hong et al. [7] | 2019 | DMRI | Regression: Brain data | HCP [131] |
| Hong et al. [132] | 2019 | DMRI | Regression: Brain data | HCP [131] |
| Cheng et al. [133] | 2020 | MRF | Regression: Brain data | 3D MRF (private) |
🟉 GCN with temporal structures for medical diagnostic analysis. † GCN with attention structures for medical diagnostic analysis.

3.3.1. Alzheimer’s Disease

Alzheimer’s disease (AD) is an irreversible brain disorder that destroys memory and cognitive ability. There is as yet no cure for AD, and monitoring its progression [Cognitively Normal (CN), Significant Memory Concern (SMC), Mild Cognitive Impairment (MCI) (including early MCI (EMCI) and late MCI (LMCI)), and AD] is essential to adjust the therapy plan for each stage.
Similar to Autism Spectrum Disorder (ASD), GCNs can be used to classify subjects as healthy or AD. Parisot et al. [15] constructed a population graph by integrating subject-specific imaging (MRI) data and pairwise interactions derived from non-imaging (phenotypic) data, then fed the sparse graph to a GCN to perform semi-supervised node classification. Their experiments on the ADNI dataset for AD classification (conversion from MCI to AD) showed high performance in comparison to a non-graph method [134]. In addition, compared to their prior work [72], they showed that a better graph structure (adding APOE4 gene data and eliminating age information) could increase the accuracy of binary AD classification on the ADNI dataset.
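A minimal sketch of this style of population graph construction is shown below: edge weights combine a similarity kernel over imaging feature vectors with the number of agreeing phenotypic measures, following the general recipe in [15]. The Gaussian kernel, the feature dimensions, and the two binary phenotypic measures are illustrative assumptions.

```python
import numpy as np

def population_adjacency(features, phenotypes, sigma=1.0):
    """features: (n_subjects, d) imaging feature vectors.
    phenotypes: (n_subjects, m) categorical non-imaging measures."""
    n = features.shape[0]
    W = np.zeros((n, n))
    for v in range(n):
        for w in range(v + 1, n):
            # similarity between imaging feature vectors
            dist = np.linalg.norm(features[v] - features[w])
            sim = np.exp(-dist**2 / (2 * sigma**2))
            # count agreeing phenotypic measures (e.g., same sex, same APOE4)
            gamma = np.sum(phenotypes[v] == phenotypes[w])
            W[v, w] = W[w, v] = sim * gamma
    return W

feats = np.random.randn(100, 30)                # e.g., volumes of brain ROIs
pheno = np.random.randint(0, 2, size=(100, 2))  # two binary phenotypic measures
adjacency = population_adjacency(feats, pheno)  # input graph for a GCN
```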
Huang et al. [18] applied their edge-variational GCN (EV-GCN) to the ADNI dataset for AD classification (the data were prepared in the same manner as Parisot et al. [72]). In addition, they applied their method to TADPOLE [101], a subset of ADNI, for classifying subjects into cognitively normal, MCI, and AD. For TADPOLE, the authors constructed a graph using segmentation features inferred from MRI and PET data, phenotypic data, APOE, and FDG-PET biomarkers. Their results on both datasets showed high performance in comparison to Parisot et al. [72] and Inception GCN [67].
Zhao et al. [103] developed a GCN-based method to predict MCI (EMCI vs. NC, LMCI vs. NC, and LMCI vs. EMCI) from rs-fMRI. They constructed the MCI graph using both imaging data extracted from rs-fMRI and non-imaging data, including gender and acquisition device information. They classified the nodes in the generated MCI graph using a GCN and a Cheby-GCN, compared the results with a ridge classifier, a random forest classifier, and a multilayer perceptron, and demonstrated the superior performance of the Cheby-GCN over those methods.
Xing et al. [53] proposed a model consisting of dynamic spectral graph convolution networks (DS-GCNs) to predict early mild cognitive impairment (EMCI), with two assistive networks for gender and age that provide guidance for the final EMCI prediction. They constructed graphs using T1-weighted and fMRI images from the ADNI dataset [98]. In addition to predicting age and gender to guide EMCI prediction, their model used an LSTM to extract temporal information relevant to the EMCI prediction.
Yu et al. [102] used a multi-scale enhanced GCN (MSE-GCN) applied to a population graph built by combining imaging data (rs-fMRI and diffusion tensor imaging (DTI)) and demographic relationships (e.g., gender and age) to predict EMCI. This resulted in better performance in comparison to the prior methods of Zhao et al. [103] and Xing et al. [53]. Huang et al. [99] processed multi-modal data, MRI and rs-fMRI, to identify EMCI. First, feature representation and multi-task feature selection are applied to each input. Then, a graph is developed using imaging and non-imaging (phenotypic measures of each subject) data. Finally, a GCN is used to perform the EMCI identification task on the ADNI dataset [100].
Song et al. [105] built a structural connectivity graph from DTI data in the ADNI imaging dataset and implemented a multi-class GCN classifier for the four-class classification of subjects on the AD spectrum. The receiver operating characteristic (ROC) curve was compared between the GCN and SVM classifiers for each class, demonstrating the advantage of the GCN over the SVM (which relies on a predefined set of input features) for AD classification.
For the subject-specific aggregation of cortical features (MRI images), Gopinath et al. [20,36] proposed an end-to-end learnable pooling strategy. This method is a two-stream network, one stream calculating latent features for each node of the graph, and the other predicting node clusters for each input graph. The learnable pooling approach can handle graphs with varying numbers of nodes and connectivity. The results of their binary classification on the ADNI dataset [98] for NC vs. AD, MCI vs. AD, and NC vs. MCI showed the value of leveraging geometrical information in the GCN.
Guo et al. [106] constructed a graph from the ROI of each subject’s PET images from the ADNI 2 dataset [107], and proposed a PETNet model based on GCNs for EMCI, LMCI, or NC prediction. The proposed method is computationally inexpensive and more flexible in comparison to voxel-level modeling.
Ma et al. [97] proposed an Attention-Guided Deep Graph Neural network (AGDGN) to derive both structural and temporal graph features from the ADNI dataset [98]. This dataset contains four classes; however, due to a shortage of training data, the authors combined CN and SMC to form the CN group, and MCI and AD to form the AD group, resulting in a two-class classification problem. They used an attention-guided random walk (AGRW) process to extract noise-robust graph embeddings. Their results indicated that the AD characteristics identified by the proposed model aligned with those reported in clinical studies.
To reduce the burden of creating a reliable population-specific classifier from scratch, the generalization of classifiers to other datasets or populations, especially those with a limited sample size, is critical. Wee et al. [104] employed a spectral graph CNN that incorporates cortical thickness and geometry from MRI scans to identify AD. To demonstrate generalisation and the feasibility of transferring classifiers learned from one population to another, the authors trained on a sizable Caucasian dataset from the ADNI cohort [100] and evaluated how well the classifier could predict the diagnosis of an Asian population. To transfer the spectral graph-CNN model, the model that performed best on the ADNI cohort’s testing set was fine-tuned on the training set of the Asian population. The performance of the fine-tuned model was then assessed using the testing set of the Asian cohort.

3.3.2. Parkinson’s Disease

Parkinson’s Disease (PD) is a neurological disorder characterized by motor and non-motor impairments. Motor deficits include bradykinesia, rigidity, postural instability, tremor, and dysarthria; non-motor deficits include depression, anxiety, sleep disorders, and slowing of thought. Neuroimaging research using structural, functional, and molecular modalities has also shed light on the underlying mechanisms of Parkinson’s disease, and many imaging-based biomarkers have been shown to be closely related to the progression of PD. Zhang et al. [108] developed a framework for analyzing neuroimages using GCNs to learn similarity metrics between subjects with PD and healthy controls (HC) using data from the PPMI dataset [109]. Structural brain MRIs are divided into a set of ROIs, where each region is treated as a node on an undirected and weighted brain geometry graph. The authors showed the effectiveness of GCNs in learning features from similar regions and proposed a multi-view structure to fuse different MRI acquisitions. However, this approach does not consider temporal dependency.
McDaniel and Quinn [110] addressed the issue of analyzing multi-modal MRI data together by implementing a GAT layer to perform whole-graph classification. Instead of making predictions based on pairwise examples, the model predicts the class of the neuroimaging data directly.
The features on each vertex must be pooled to generate a single feature vector per input in order to convert the task from classifying each node to classifying the entire graph. The self-attention mechanism in GAT is used to compute the importance of graph vertices in a neighbourhood, allowing a weighted sum of the vertices’ features during pooling. Combining diffusion and anatomical data from the PPMI dataset [109] with the proposed model outperforms baseline algorithms trained on features constructed from the diffusion data alone. The GAT attention layer also makes it possible to interpret the magnitude of each node’s attention weight as the relative importance of a brain area for discriminating PD participants.
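The attention-weighted readout described above can be sketched as follows: each node receives a scalar importance score that weights its contribution to the pooled graph feature. This follows the general idea rather than the exact layer of McDaniel and Quinn [110]; dimensions and the ROI count are illustrative.

```python
import torch
import torch.nn as nn

class AttentionReadout(nn.Module):
    def __init__(self, in_dim):
        super().__init__()
        self.gate = nn.Linear(in_dim, 1)   # scores node importance

    def forward(self, h):
        # h: (num_nodes, in_dim) node features after the GAT layers
        alpha = torch.softmax(self.gate(h), dim=0)   # (num_nodes, 1)
        return (alpha * h).sum(dim=0)                # weighted sum over all nodes

node_feats = torch.randn(84, 64)                 # e.g., 84 brain ROIs
graph_feat = AttentionReadout(64)(node_feats)    # single vector per subject
# alpha can be inspected per node to rank brain regions by importance
```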
Current brain network methods either ignore the intrinsic graph topology or are designed for a single modality. To address these challenges, Zhang et al. [47] proposed a graph representation to fuse functional (fMRI) and structural (MRI) brain networks. The cross-modality relationships and encodings are generated by an encoder–decoder process. The authors adopted the idea of the GAT model for dynamic adjustment of the weights. Here, three aggregation mechanisms (graph attention weights, the original edge weights, and binary weights) are dynamically combined through a multi-stage graph convolutional kernel. This model achieves the best prediction performance compared to CNN-based and graph-based approaches. The model is capable of localizing 10 key regions associated with PD classification via a saliency map (e.g., the bilateral hippocampus and basal ganglia, structures conventionally regarded as PD biomarkers).

3.3.3. Brain Abnormality

Correctly recognizing anomalous data is crucial, so a highly accurate abnormality detection model is needed. Yang et al. [37] proposed a synergic graph-based model for normal/abnormal classification of brain MRI images. The synergic deep learning method [38] can address the challenges faced by a GCN in distinguishing intra-class variation and inter-class similarity. To improve efficiency, the authors first extract the ROI of the image and use segmentation models as input to the model. The network consists of a dual GCN component (a pair of GCN models of identical construction) and a synergic training component. The synergic training component predicts whether a pair of input images belong to the same class and gives feedback if there is a synergic error.

3.3.4. Coronavirus 2 (SARS-CoV-2 or COVID-19)

Early diagnosis of coronavirus infection is important for both infected patients and the doctors providing treatment. Viral nucleic acid tests and CT screening are the most widely used techniques to detect the pneumonia caused by the virus and thus make a diagnosis. Although CNNs have demonstrated a powerful capability to extract and combine spatial features from CT images, they are hindered because the underlying relationships between individual elements are ignored. Thus, GCNs are receiving attention in the analysis of CT images of COVID-19 patients. Yu et al. [112] developed a framework that combines a graph representation with a CNN for COVID-19 detection. A CNN model is used for feature extraction, and graphs are constructed from the extracted features. Each feature is taken as one node of the graph, while edges between nodes are built according to the top k neighbours with the highest similarity. The distance between nodes is measured by the Euclidean distance, and the edges are encoded in the adjacency matrix. Classification performance on healthy and infected classes shows promising results, but the search space for the batch size and the number of neighbours needs further exploration.
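A minimal sketch of this graph construction is given below: each CNN feature vector becomes a node, connected to its k nearest neighbours under Euclidean distance. The value of k and the feature dimensions are illustrative, not the configuration reported in [112].

```python
import numpy as np

def knn_adjacency(features, k=5):
    """features: (n, d) CNN feature vectors, one node per feature."""
    n = features.shape[0]
    # pairwise Euclidean distances between all feature vectors
    dists = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)
    adj = np.zeros((n, n))
    for i in range(n):
        # smallest distances, excluding the node itself at index 0
        neighbours = np.argsort(dists[i])[1:k + 1]
        adj[i, neighbours] = 1.0
    return np.maximum(adj, adj.T)   # symmetrise the adjacency matrix

feats = np.random.randn(64, 256)    # 64 pooled CNN features from one CT scan
A = knn_adjacency(feats, k=5)       # adjacency matrix fed to the GCN
```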
Wang et al. [111] also proposed an improved CNN combined with a GCN for higher classification accuracy. The CNN yields an individual image-level representation, and the GCN focuses on a relation-aware representation. These representations are fused at the feature level for COVID-19 detection from CT images. Although the model outperforms traditional CNN architectures, the method is limited in handling other modalities such as chest X-rays, which are widely used to assist COVID-19 detection due to their availability, quick response, and cost-effectiveness.

3.3.5. Tuberculosis

Tuberculosis (TB) is an infectious disease that can affect different organs such as the abdomen and nervous system, but it typically infects the lungs, where it is known as pulmonary TB (PTB). The two main categories of PTB are primary pulmonary tuberculosis (PPT) and secondary pulmonary tuberculosis (SPT). Wang et al. [113] investigated a GCN model to recognize SPT, as many PTB cases turn out to be of the SPT type. They proposed a rank-based pooling neural network (RAPNN) to extract individual image-level features, then integrated a GCN into RAPNN to build a new model, GRAPNN, to identify SPT. The explainability of the proposed model was analyzed using Grad-CAM, and the results outperformed state-of-the-art methods, including CNN models.

3.3.6. Chest Pathologies

Chest X-ray imaging has been used to assist the clinical diagnosis and treatment of several thoracic diseases, where an individual image might be associated with multiple abnormalities, necessitating multi-label image classification. Several approaches have transformed the multi-label classification problem into multiple disjoint binary classification problems without acknowledging any label correlations. However, abnormalities may be closely linked, and label co-occurrence and interdependencies between abnormal patterns (i.e., strong correlations among pathologies) are important for diagnosis.
To address the limitations of current models, which lack a robust ability to model label co-occurrences and capture interdependencies between labels and regions, Chen et al. [118] introduced a label co-occurrence learning framework based on GCNs to find dependencies between pathologies in chest X-ray imaging. This framework consists of two modules: an image feature embedding module that learns high-level features from images, and a label co-occurrence learning module that classifies the different pathology categories. In the framework, illustrated in Figure 7, each pathology is represented by a semantic vector via word embeddings, and the graph representation is learned from the co-occurrence matrix of the training data. The resulting classifiers are combined with image-level features to adaptively revise prediction beliefs for each pathology on two large-scale chest X-ray datasets, ChestX-ray14 [119] and CheXpert [120]. Although this approach models the correlations among disease labels, the utilization of medical reports paired with radiology images was not considered.
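The label co-occurrence module can be sketched as below: a GCN propagates label word embeddings over a co-occurrence graph to produce one classifier per pathology, each applied to the pooled image feature via a dot product. This mirrors the generic label-graph recipe described above; the exact dimensions and normalisation in [118] may differ.

```python
import torch
import torch.nn as nn

class LabelGCN(nn.Module):
    def __init__(self, emb_dim=300, img_dim=2048, num_labels=14):
        super().__init__()
        self.gc1 = nn.Linear(emb_dim, 512)
        self.gc2 = nn.Linear(512, img_dim)

    def forward(self, label_emb, cooc, img_feat):
        # label_emb: (L, emb_dim) word embeddings of the pathology names
        # cooc: (L, L) normalised label co-occurrence matrix from training data
        # img_feat: (img_dim,) pooled CNN feature of the chest X-ray
        h = torch.relu(self.gc1(cooc @ label_emb))
        classifiers = self.gc2(cooc @ h)    # (L, img_dim), one classifier per label
        return classifiers @ img_feat       # (L,) per-pathology logits

emb = torch.randn(14, 300)                        # 14 pathology embeddings
cooc = torch.softmax(torch.randn(14, 14), dim=-1) # placeholder co-occurrence graph
img = torch.randn(2048)
logits = LabelGCN()(emb, cooc, img)
```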
Zhang et al. [117] combined attention mechanisms and GCNs to learn graph-embedded features for improved classification and report generation. In this approach, a CNN feature extractor and an attention mechanism are used to compute the initial node features. Then, a graph is built with prior knowledge of chest findings to learn discriminatory features and the relationships between them for classifying disease findings; each node in the graph corresponds to a finding category. Once the classification network is trained, a two-level decoder with recurrent units (LSTMs) is trained to generate reports. The decoder learns to attend to different findings on the graph and focuses on one concept in each sentence. The performance demonstrated on the IU-RR dataset [115] indicates that graphs with prior knowledge help to generate more accurate reports. Hou et al. [114] employed a transformer encoder as the feature-fusion model for both visual features and label embeddings (semantic features pre-trained on large free-text medical reports). These features are fed to a GCN that acts as a knowledge graph modeling the correlations among different thoracic diseases. The graph is constructed by a data-driven method from medical reports, with primary and auxiliary nodes corresponding to disease labels and other medical labels, respectively. However, its extension to other domains is limited because the graph is not built automatically.

3.3.7. Breast Cancer

For abnormal breast tissue detection, the aim is to learn not only the image-level representation automatically, but also a relation-aware representation to more accurately detect abnormal masses in mammography. Zhang et al. [121] fused a CNN pipeline with a GCN pipeline to attain superior performance in classifying six abnormality types in the mini-MIAS dataset [122]. First, a CNN extracts individual image-level features; then, a GCN estimates a relation-aware representation. These features are combined via a dot product and a linear projection with trainable weights. This framework is illustrated in Figure 8. Although the proposed model achieves high accuracy when analysing mammographic data, further optimization on larger datasets was not considered, and other mechanisms for combining the GCN and CNN should be assessed.
In clinical practice, experts review medical images by zooming into ROIs for close-up examination. Thus, Du et al. [123] modeled the zoom-in behaviour of radiologists with a hierarchical graph-based model to detect abnormal lesions in full-field digital mammogram (FFDM) images from the INbreast dataset [124]. A CNN pre-trained on lesion patches is used to extract features, and a GAT model classifies nodes to predict whether to zoom into the next level and whether a mammogram is benign or malignant. The zoom-in mechanism improves model interpretability. However, the INbreast dataset is relatively small, making the method difficult to assess, and a new loss is required to supervise the zoom-in mechanism.

3.3.8. Kidney Disease

In nephrology, ultrasound (US) data are widely used for diagnostic studies of the kidneys and urinary tract, and the anatomic measurement of the renal parenchymal area is correlated with kidney function. Machine learning studies have shown promising performance for the segmentation and classification of US data; however, kidney disease diagnosis remains challenging due to the heterogeneous appearance of multiple 2D US scans of the same kidney from different views. Multiple instance learning has been used to estimate instance-level classification probabilities and fuse them into a bag-level classification probability, but the correlation between instances has not been well explored. To improve on these methods, Yin et al. [125] introduced a graph-based methodology to detect children with congenital anomalies of the kidneys and urinary tract in 2D US images. A CNN is used to learn informative US image features at the instance level, and a GCN is used as a permutation-invariant operator to further optimize the instance-level CNN features by exploring potential correlations among different instances of the same bag. The authors also adopted attention-based multiple instance learning pooling to learn a bag-level classifier, using instance-level supervision to enhance the learning of instance features and bag-level classification.

3.3.9. Relative Brain Age

Predicted brain age is a meaningful index that characterises the current status of brain development and may be associated with future functional brain abilities. Measurements of morphological changes, including sulcal depth and cortical thickness, can be key features for brain age prediction. Traditional approaches applied to surface morphological features have not taken into account the topology of surfaces, which is defined by meshes; therefore, CNN-based methods may not be appropriate for the analysis of cortical surface data. Relative brain age is a metric computed as the predicted age minus the true age of the subject. Liu et al. [126] exploited the brain mesh topology as a sparse graph to predict the brain age of preterm neonates from MRIs, using vertex-wise cortical thickness and sulcal depth as input to a GCN. This model enables the convolutional filtering of input features through the surface topology in the context of spectral graph theory. The GCN predicted the ages of preterm neonates better than machine learning and deep learning methods that did not use surface topological knowledge. The authors also generated cortical sub-meshes representing brain regions to determine which regions estimate age more accurately and whether they are associated with future functional brain abilities. As discussed in the AD subsection, Gopinath et al. [20,36] also demonstrated their adaptive graph convolution pooling on a regression problem in which brain age is estimated from brain geometry with point-wise surface-based measurements. The model is trained using data labeled as cognitively normal from the ADNI dataset [100], and the graph model uses cortical thickness, sulcal depth, and spectral information to predict brain age.

3.3.10. Brain Data Prediction

Diffusion MRI (DMRI) provides unique insights into the developing brain owing to its sensitivity to brain tissue microstructure and white matter properties, which are useful for the diagnosis of brain disorders. However, DMRI suffers from long acquisition times and is susceptible to low signal-to-noise ratios, motion artifacts, and partial volume effects. Missing data are also a common problem in longitudinal studies due to unsuccessful scans and subject dropouts, and the high variability in diffusion wave-vector sampling (q-space) makes the longitudinal prediction of DMRI data a challenging task. Therefore, several methods have been developed for DMRI reconstruction.
To improve acquisition speed, Hong et al. [7] introduced a method for DMRI reconstruction from under-sampled slice data, where only a subset of equally-spaced slices is used to acquire a full diffusion-weighted (DW) image volume. A GCN learns the nonlinear mapping from the sub-sampled to the full DW image, and spatio-angular relationships are considered when constructing the graph. To improve perceptual quality, the GCN is employed as the generator in a generative adversarial network. The same authors [132] proposed a super-resolution reconstruction framework based on an orthogonal under-sampling scheme to increase the complementary information within the under-sampled DW volume. The set of wave-vectors is divided into three subsets of scan directions (axial, coronal, and sagittal), each fitted with an individual GCN. A refinement GCN generates the final DW volume by considering the correlation across scan directions, as illustrated in Figure 9. These graph-based methods outperform traditional interpolation methods and 3D UNet-based reconstruction methods.
Kim et al. [129] introduced a graph-based model for the longitudinal prediction of DMRI data by considering the relationship between sampling points in the spatial and angular domains, i.e., a graph-based representation of the spatio-angular space. The authors implemented a residual learning architecture with graph convolutions to capture longitudinal brain changes and predict missing DMRI data over time in a patch-wise manner. The proposed model showed improved performance in predicting missing DMRI data from neonate images, enabling longitudinal analysis. Hong et al. [130] also proposed a GCN-based method for predicting missing infant brain DMRI data. This model jointly exploits information from the spatial domain and the diffusion wave-vector domain for effective prediction. Generative adversarial networks (GANs) are also adopted to better model the nonlinear prediction mapping and improve performance. Here, the generator estimates the source image and the discriminator distinguishes the source image from the estimated one, where the generator is the GCN and the discriminator is built from consecutive graph convolutional layers. However, the model cannot predict missing DMRI data at arbitrary time points.
Although DMRI is a powerful tool for the characterization of tissue microstructure, several microstructure models require DMRI data densely sampled in q-space, which is defined by the number of acquired diffusion-weighted images. Traditional deep learning models learn the relationship between sparsely sampled q-space data and high-quality microstructure indices estimated from densely sampled q-space data, but they do not consider the structure of the q-space data. Chen et al. [127] adopted GCNs to estimate tissue microstructure from DMRI data represented as graphs. The graph encodes the geometric structure of the q-space sampling points, harnessing information from angular neighbours to improve estimation accuracy. Results on the Baby Connectome Project dataset [128] demonstrated high-quality intra-cellular volume fraction maps close to the gold standard.
Most quantitative MRI methods are comparatively slow and provide a single tissue property at a time, which limits their adoption in routine clinical settings. Magnetic resonance fingerprinting (MRF) is a rapid and efficient quantitative imaging method that has been used for the simultaneous quantification of multiple tissue properties in a single acquisition [135]. 2D MRF has been extended to 3D using stack-of-spirals acquisitions, but the high spatial resolution and volumetric coverage prolong the acquisition time. Cheng et al. [133] adopted a GCN to accelerate high-resolution 3D MRF acquisition by interpolating the under-sampled data along the slice-encoding direction. A further network generates the tissue property maps, and for efficient tissue quantification, a UNet is applied along the temporal domain.

3.4. Anatomical Structure Analysis (Segmentation)

Among the different medical image segmentation and labeling methods, graph-based approaches are showing promising results in clinical applications. A graph maps pixels or regions in the original image to nodes in the graph; the segmentation problem can then be transformed into a labeling problem that requires assigning the correct label to each node according to its properties [136]. GCNs can propagate and exchange local short-range information across the whole image to learn the semantic relationships between objects. We cover only applications with evidence of graph representation learning for anatomical structures, including the vasculature and organs, as summarized in Table 4.
Table 4. Summary of GCN approaches adopted for anatomical structure analysis and their applications (Group 2).
| Authors | Year | Modality | Application | Dataset |
|---|---|---|---|---|
| Wolterink et al. [137] | 2019 | CTA | Segmentation: Coronary artery | Coronary Artery Stenoses Detection [138] |
| Zhai et al. [139] | 2019 | CT | Segmentation: Pulmonary artery-vein | Sun Yat-sen University Hospital (private) |
| Noh et al. [24] | 2020 | FA/Fundus | Segmentation: Retinal vessels | Fundus and FA (private), RITE A/V [140] |
| Shin et al. [141] | 2019 | RGB/FA/XRA | Segmentation: Retinal vessels | DRIVE [142], STARE [143], CHASE_DB1 [144], HRF [145] |
| Chen et al. [146] | 2020 | MRA | Segmentation: Intracranial arteries | MRA [147], UNC [148] |
| Yao et al. [149] | 2020 | CTA | Segmentation: Head and neck vessels | Head and neck CTA (private) |
| Lyu et al. [150] | 2021 | MRI | Segmentation: Cerebral cortex | NORA-pediatric [151], HCP-adult [152] |
| Gopinath et al. [23] | 2020 | MRI | Segmentation: Cerebral cortex | MindBoggle [153] |
| Gopinath et al. [20] | 2020 | MRI | Segmentation: Cerebral cortex | MindBoggle [153] |
| Hao et al. [22] | 2020 | T1WI | Segmentation: Cerebral cortex | University of California Berkeley Brain Imaging Center (private) |
| He et al. [154] † | 2020 | MRI | Segmentation: Cerebral cortex | MindBoggle [153] |
| Gopinath et al. [155] | 2019 | MRI | Segmentation: Cerebral cortex | MindBoggle [153] |
| Wu et al. [21] | 2019 | MRI | Segmentation: Cerebral cortex | Neonatal brain surfaces (private) |
| Parvathaneni et al. [156] | 2019 | T1WI | Segmentation: Cerebral cortex | Cortical surface (private) |
| Zhao et al. [19] | 2019 | MRI | Segmentation: Cerebral cortex | Infant brain MRI (private) |
| Cucurull et al. [157] † | 2018 | MRI | Segmentation: Cerebral cortex | HPC mesh [76,158] |
| Selvan et al. [8] | 2020 | CT | Segmentation: Pulmonary airway | Danish Lung Cancer Screening trial [159] |
| Juarez et al. [41] | 2019 | CT | Segmentation: Pulmonary airway | Danish Lung Cancer Screening trial [159] |
| Selvan et al. [160] | 2018 | CT | Segmentation: Pulmonary airway | Danish Lung Cancer Screening trial [159] |
| Yan et al. [161] | 2019 | MRI | Segmentation: Brain tissue | BrainWeb18 [162], IBSR18 [163] |
| Meng et al. [164,165] † | 2020 | FA | Segmentation: Optic disc/cup | Refuge [166], Drishti-GS [167], ORIGA [168], RIGA [169], RIM-ONE [170] |
| Meng et al. [164,165] † | 2020 | US | Segmentation: Fetal head | HC18-challenge [171] |
| Soberanis-Mukul et al. [172,173] | 2020 | CT | Segmentation: Pancreas and spleen | NIH pancreas [174], MSD-spleen [175] |
| Tian et al. [25] | 2020 | MRI | Segmentation: Prostate cancer | PROMISE12 [176], ISBI2013 [177], in-house (private) |
| Chao et al. [178] | 2020 | CT/PET | Segmentation: Lymph node gross tumor | Esophageal radiotherapy (private) |
🟉 GCN with temporal structures for medical diagnostic analysis. † GCN with attention structures for medical diagnostic analysis.

3.4.1. Vasculature Segmentation

Coronary arteries: Quantitative examination of the coronary arteries is an important step in the diagnosis of cardiovascular diseases, stenosis grading, blood flow modeling, and surgical planning. Coronary CT angiography (CTA) images are used to determine the anatomical or functional severity of coronary artery stenosis (i.e., a narrowing of the artery). Methods for coronary artery segmentation are related to segmentation of the lumen (the interior of the vessel). Deep learning-based segmentation either predicts dense segmentation probability maps (voxel-based segmentation methods) or incorporates a shape prior, exploiting the fact that a vessel segment has a roughly tubular shape. The segmentation can thus be obtained by deforming the wall of this tube to match the visible lumen in the CTA image.
Graph convolutional networks have been investigated by Wolterink et al. [137] for coronary artery segmentation in CTA. The authors used a GCN to directly optimize the positions of tubular surface mesh vertices, treating vertices on the coronary lumen surface mesh as graph nodes. Predictions for vertices rely on both local image features and the representations of adjacent vertices on the surface. The authors demonstrated that, by considering information from neighbouring vertices, the GCN generates smooth surface meshes without post-processing.
Pulmonary arteries and veins: Separation of pulmonary arteries and veins is challenging due to their similar morphology and the complexity of their anatomical structures. Using chest CT, vasculopathy (disease affecting blood vessels) can be quantified automatically by detecting pulmonary vessels. Zhai et al. [139] proposed a method that links CNNs with GCNs and can be trained end-to-end. The model includes both local image features and graph connectivity features for pulmonary artery-vein separation. Instead of using entire graphs, the authors proposed a batch-based technique for CNN-GCN training and validation, in which the size of the adjacency matrix is reduced because the GCN nodes come from sub-sampled pixel or voxel grids.
Retinal vessels: Assessment of retinal vessels is needed to diagnose various retinal diseases, including hypertension and cerebral disorders. Fluorescein angiography (FA) and fundus images have been used for artery and vein classification and segmentation because arteries and veins are highlighted separately at different times as the fluorescent dye flows through the vessels. Noh et al. [24] combined the fundus image sequence and FA images as input for artery and vein classification. The proposed method comprises a CNN feature extractor for the input images and a hierarchical connectivity GNN based on Graph UNets [179] to incorporate higher-order connectivity into the classification. Shin et al. [141] also incorporated a GCN into a unified CNN architecture for 2D vessel segmentation on retinal image datasets. A CNN was trained to extract local appearance features and vessel probabilities, and a GCN was trained to predict the presence of a vessel based on the global connectivity of vessel structures. The vessel segmentation is generated using the relationships between neighbouring vessel pixels, based on the local appearance of vessels rather than the overall vessel structure. The method achieved competitive results, but the classifier cannot be trained end-to-end.
Intracranial arteries: Characterization of the intracranial arteries (ICA), including labeling each artery segment with its anatomical name, is beneficial for clinical evaluation and research. Many natural and disease-related (e.g., stenosis) variations in the ICA make automated labeling challenging. Chen et al. [146] proposed a GNN model with hierarchical refinement to label arteries in magnetic resonance angiography (MRA) data by classifying the types of nodes and edges in an attributed relational graph. GNNs based on the message passing framework [180] take a graph with edge and node features as input and return a graph with updated node and edge features.
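One step of such message passing with edge features can be sketched as below, in the general spirit of the framework of [180] rather than the exact update rules of Chen et al. [146]: messages are computed from neighbour node and edge features, summed per target node, and used to update node states.

```python
import torch
import torch.nn as nn

class MessagePassingLayer(nn.Module):
    def __init__(self, node_dim, edge_dim):
        super().__init__()
        self.msg = nn.Linear(2 * node_dim + edge_dim, node_dim)
        self.upd = nn.GRUCell(node_dim, node_dim)

    def forward(self, x, edge_index, edge_attr):
        # x: (N, node_dim); edge_index: (2, E) source/target node indices
        # edge_attr: (E, edge_dim) features per edge (e.g., segment geometry)
        src, dst = edge_index
        m = torch.relu(self.msg(torch.cat([x[src], x[dst], edge_attr], dim=-1)))
        agg = torch.zeros_like(x).index_add_(0, dst, m)   # sum incoming messages
        return self.upd(agg, x)                           # GRU-style node update

x = torch.randn(10, 16)                      # 10 artery-segment nodes
edge_index = torch.randint(0, 10, (2, 30))   # 30 directed edges
edge_attr = torch.randn(30, 4)
out = MessagePassingLayer(16, 4)(x, edge_index, edge_attr)
```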
Head and neck vessels: Vessel segmentation and anatomical labeling are important for vascular disease analysis. The direct use of CNNs for the segmentation of vessels in 3D images encounters great challenges: head and neck vessels are long, tortuous, tubular-like vascular structures of different sizes and shapes, making it difficult to automatically and accurately segment and label them to expedite vessel quantification. Point cloud representations of head and neck vessels enable the quantification of spatial relationships among vascular points. Yao et al. [149] proposed a GCN-based point cloud learning framework to label head and neck vessels and improve CNN-based vessel segmentation on CTA images. To refine the vessel segmentation, a point cloud network is first applied to the points formed by the initial vessel voxels. Then, a GCN is applied to the point cloud, leveraging anatomical shapes and vascular structures to label the vessels into 13 major segments.

3.4.2. Organ Segmentation

Cerebral cortex: The cerebral cortex is the outermost layer of the brain and its most prominent visible feature. Different regions of the cortex are involved in complex cognitive processes. Reconstructions of the cortical surface captured with sMRI are used to analyse healthy brain organization as well as abnormalities in neurological and neuropsychiatric conditions. Separating the cerebral cortex into anatomically distinct regions based on structure or function is known as parcellation. Because cortical surfaces are irregular data represented as graph or mesh structures, the mesh segmentation problem is poorly suited to traditional CNNs and has instead been addressed with graph-based approaches.
Cucurull et al. [157] investigated the usefulness of graph networks in which contextual information can be exploited for cortical mesh segmentation using Human Connectome Project data [76,158] (i.e., functional and structural features from cortical surface patches are used for segmentation). The model receives a mesh as input, produces one output label for each node of the mesh, and parcellates the cerebral cortex into three parcels using a graph attention-based model (GAT) [35]. However, the brain meshes are constrained to a particular graph structure, which ignores the complex geometry of the surface and forces all meshes to share the same mesh geometry. Furthermore, the authors conducted cortical parcellation only on selected regions due to memory constraints.
Gopinath et al. [155] leveraged recent advances in spectral graph matching to transfer surface data across aligned spectral domains and to learn node-wise predictions, demonstrating full cortical parcellation of adult brains with GCNs on the MindBoggle dataset [153]. The authors also extended this work with a method that learns an intrinsic aggregation of graph nodes based on graph spectral embeddings for cortical region size regression [20].
Despite offering more flexibility for analysing unordered data, GCNs are also domain-dependent and cannot generalize to new domains (datasets) without explicit re-training. Spectral GCNs cannot be used to compare multiple graphs directly and need an explicit alignment of graph eigenbases as an additional pre-processing step. Thus, Gopinath et al. [23] proposed an adversarial graph domain adaptation method for surface segmentation. This approach focuses on generalizing parcellation across multiple brain surface domains by eliminating the dependency on domain-specific alignment. Two networks are trained in an adversarial manner: a fully-convolutional GCN segmentator and a GCN domain discriminator. These networks operate on the spectral components of surface graphs, as illustrated in Figure 10. The authors also demonstrated that the model can be useful for semi-supervised surface segmentation, thereby alleviating the need for large numbers of labeled surfaces.
Zhao et al. [19] proposed a convolution filter on the sphere, termed Direct Neighbor, which is used to develop surface convolution, pooling, and transposed convolution in spherical space. The authors extended the UNet architecture to spherical surface domains, as illustrated in Figure 11. The spherical UNet is efficient in learning useful features for predicting cortical surface parcellation and cortical attribute map development. Although the method does not rely on spherical registration, it still needs to map cortical surfaces onto a sphere; spherical mapping is susceptible to topological noise, and cortical surfaces must be topologically correct before mapping. Therefore, Wu et al. [21] proposed to parcellate the cerebral cortex on the original cortical surface manifold without spherical mapping, taking advantage of the high learning capacity of GCNs (i.e., the model is free of spherical mapping and registration). The GCN receives intrinsic patches from the original cortical surface manifold that are mapped using an intrinsic local coordinate system. The extracted intrinsic patches are then combined with the trained models to predict parcellation labels.
Spectral graph matching has been used to transfer surface data across aligned spectral domains, enabling spectral GCNs to learn across multiple surfaces. However, this involves the explicit computation of a transformation map for each brain towards one reference template. He et al. [154] introduced a spectral graph transformer (SGT) network to learn this transformation function across multiple brain surfaces directly in the spectral domain, mapping input spectral coordinates to a reference set. The spectral decomposition of a brain graph is randomly sub-sampled as an input point cloud to the SGT network, which learns the transformation parameters that align the eigenvectors of multiple brains. The learnt transformation matrix is multiplied by the original spectral coordinates and fed to the GCN for parcellation.
While Laplacian-based graph convolutions are more efficient than spherical convolutions, they are not exactly equivariant. Graph-based spherical CNNs strike an interesting balance, with a controllable trade-off between cost and equivariance (which is linked to performance) [181]. Parvathaneni et al. [156] adopted a deep spherical UNet [182] to encode a relatively large surface mesh. Using a spherical surface registration process, the authors computed deformation fields to produce deformed geometric features that best match ground-truth parcel boundaries. The same authors also implemented a spherical UNet for cortical sulci labeling from relatively few samples in a developmental cohort [22]. To enhance the capability of the spherical UNet with limited samples, the authors augmented the geometric features from the training data with deformed features guided by the intermediate deformation fields. In later work [150], the authors proposed context-aware training that co-registers every possible pair of training samples for the automated labeling of sulci in the lateral prefrontal cortex in pediatric and adult cohorts.
Pulmonary airway: The segmentation of tree-like structures such as the airways from chest CT images is a complex task, with branches of varying sizes and orientations. Quantifying morphological changes in the chest can indicate the presence and stage of related diseases (e.g., bronchial stenosis). Unlike spheroid-like organs such as the liver and kidney, tree-like airways are divergent, thin, and tenuous. GNNs have been investigated as a way to integrate neighbourhood information for mapping airways in the lungs [8,41]. Juarez et al. [41] explored the application of GCNs to improve the segmentation of tubular structures like airways. The authors designed a UNet-GNN architecture by replacing the convolutional layers at the deepest level of a 3D-UNet with a GCN module. The GNN module uses a graph structure obtained from the dense feature maps produced by the contracting path of the UNet. The GNN learns variations of the input feature maps based on the graph topology and outputs a new graph with the same nodes as the input graph, along with a vector of learnt features for each node. These output feature maps are fed to the up-sampling path of the UNet, as illustrated in Figure 12. By introducing a GCN, the method is able to learn and combine information from a larger region of the CT chest scan, and it is evaluated on the Danish Lung Cancer Screening trial dataset [159].
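A minimal sketch of the bottleneck substitution described above is given below: the coarsest encoder feature map is flattened into a graph whose nodes are voxels, processed by a graph convolution, and reshaped for the decoder. The shapes and the placeholder adjacency are illustrative assumptions, not the graph topology used in [41].

```python
import torch
import torch.nn as nn

class BottleneckGCN(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.lin = nn.Linear(channels, channels)

    def forward(self, feat, adj):
        # feat: (C, D, H, W) deepest encoder feature map of the 3D-UNet
        C = feat.shape[0]
        nodes = feat.reshape(C, -1).t()           # (num_voxels, C) node features
        nodes = torch.relu(self.lin(adj @ nodes)) # graph conv over voxel nodes
        return nodes.t().reshape(feat.shape)      # dense map for the decoder

feat = torch.randn(32, 4, 4, 4)                  # bottleneck features
n = 4 * 4 * 4
adj = torch.eye(n) + torch.rand(n, n).round()    # placeholder graph topology
adj = adj / adj.sum(dim=-1, keepdim=True)        # row-normalise
out = BottleneckGCN(32)(feat, adj)               # fed to the up-sampling path
```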
Selvan et al. [8] also used this volumetric dataset to explore the extraction of tree structures, with a focus on airway extraction formulated as a graph refinement task, extending the authors’ own prior work [160]. The input image data are first processed to create a graph-like representation, which consists of nodes containing information derived from local image neighbourhoods. Then, a GCN predicts the refined subgraph corresponding to the structure of interest in a supervised setting, where edge probabilities are predicted from learnt edge embeddings. However, the proposed work treats graph structure learning as an expensive approximation of a combinatorial optimization problem.
Brain tissues: In brain MRI analysis, image segmentation is used for analyzing brain changes, measuring the brain’s anatomical structures, delineating pathological regions, and for surgical planning and image-guided interventions. In MRIs of low contrast and resolution, partial volume effects appear, whereby individual voxels contain multiple tissue types, making brain tissue segmentation challenging. Voxel-wise MRI segmentation approaches neglect the spatial information within the data. As brain MRIs consist of approximately piecewise constant regions, they are well suited to supervoxel generation, which has been increasingly used for high-dimensional 3D brain MRI volumes. Yan et al. [161] proposed a segmentation model based on GCNs. First, supervoxels are generated from the brain MRI volume; then, a graph is built from these supervoxels, with the k-nearest neighbour algorithm used to connect the nodes. Finally, a GCN is adopted to classify supervoxels into different tissue types, such as cerebrospinal fluid, grey matter, and white matter. This framework is illustrated in Figure 13.
Optic disc/cup and fetal head: The size of the optic disc and optic cup in color fundus images is of great importance for the diagnosis of glaucoma, an irreversible eye disease. Meng et al. [164] developed a multi-level aggregation network that regresses the coordinates of instance boundaries instead of performing pixel-wise dense prediction. This model combines a CNN with an attention refine module and a GCN. The attention module acts as a filter between the CNN encoder and the GCN decoder to extract more effective semantic and spatial features. Compared to previous work by the same authors [165], this model also extracts feature correlations among different layers of the GCN. Meng et al. [164] also demonstrated the effectiveness of the network in segmenting the fetal head in ultrasound images. Fetal head circumference in ultrasound images is a critical indicator for prenatal diagnosis and can be used to estimate gestational age and to monitor the growth of the fetus [171]. In this application, the feature map and vertex map sizes differ because of the different input sizes and numbers of instance contours in the HC18-Challenge dataset [171].
Pancreas and spleen: Organ segmentation in CT volumes is an important pre-processing phase for assisted intervention and diagnosis. However, the limitations of expert annotation and the inter-patient variability of anatomical structures may lead to errors in the model prediction. Incorporating a post-processing refinement phase is a traditional approach to improving segmentation results, and additional knowledge about the accuracy of the prediction can be helpful in this process. Related to this idea, CNN uncertainty estimation has been used as an attention mechanism to find potentially misclassified regions for segmentation refinement [183]. Soberanis-Mukul et al. [172,173] formulated pancreas and spleen segmentation refinement over CT data as a semi-supervised graph learning problem. First, a Monte Carlo dropout process is applied to a CNN (2D UNet) to extract the model’s expectation and uncertainty, which are used to divide the CNN output into high- and low-confidence points (i.e., to find incorrectly estimated elements); this process yields a binary mask indicating voxels with high uncertainty. Then, these confidence predictions are used to train a GCN in a semi-supervised way with partially-labeled nodes to refine (reclassify) the output of the CNN. The authors investigated various connectivity and weighting mechanisms to construct the graph. A sparse representation is established that takes into account local and long-range relations between high- and low-uncertainty elements, with Gaussian kernels defining the edge weights based on the intensity and 3D position associated with each node. Results on the NIH pancreas [174] and MSD-spleen [175] datasets show better performance than traditional CNN prediction with conditional random field refinement.
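The Monte Carlo dropout step can be sketched as below: dropout is kept active at inference, and the mean and variance over stochastic forward passes give the expectation and uncertainty maps used to select nodes for graph-based refinement. The tiny segmentation network here is a stand-in, not the 2D UNet of [172].

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                    nn.Dropout2d(0.5),
                    nn.Conv2d(8, 1, 1), nn.Sigmoid())

def mc_dropout(net, image, passes=20):
    net.train()   # keep dropout stochastic at test time
    with torch.no_grad():
        preds = torch.stack([net(image) for _ in range(passes)])
    return preds.mean(0), preds.var(0)   # expectation and uncertainty maps

image = torch.randn(1, 1, 64, 64)        # one CT slice
expectation, uncertainty = mc_dropout(net, image)
# binary mask of candidate nodes to reclassify with the GCN
high_uncertainty = uncertainty > uncertainty.mean()
```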
Prostate cancer: MRI is being increasingly used for prostate cancer diagnosis and treatment planning. Accurate segmentation of the prostate has several applications in the management of this disease. However, it is difficult to develop a fully automatic prostate segmentation method that can address various issues, such as variations of shape and appearance patterns in basal and apical regions. Tian et al. [25] proposed an interactive GCN-based prostate segmentation method for MRI. The method is similar to Curve-GCN [184] and adopts a GCN to obtain the coordinates of the contour vertices by regression. The graph module takes the output feature from the CNN encoder applied to the cropped image as its input. Then, the coordinates of a fixed number of vertices from the initial contour are adjusted to fit the target. The interactive GCN model improves the accuracy by correcting the points on the prostate contour with user interactions. Finally, neighbouring points/vertices are connected with spline curves to form the prostate contour. The model outperforms several state-of-the-art segmentation methods on the PROMISE12 dataset [176].
Lymph node gross tumor: Gross tumor volume (GTV) delineation is a critical step in cancer radiotherapy planning. In cancer treatment, all metastasis-suspicious lymph nodes (LNs) must also be treated; these are referred to as the lymph node gross tumor volume. Identifying small and scattered metastatic LNs is especially challenging in non-contrast RTCT. Chao et al. [178] combined two networks, a 3D-CNN and a GNN, to model instance-wise appearance and inter-lymph-node relationships, respectively. Figure 14 depicts this framework. The GNN also models partial priors, computed as the 3D distances and angles of each GTV with respect to the primary tumor. PET imaging is included as an additional input to the CNN to provide complementary information. The model delineates the location of esophageal cancer on an esophageal radiotherapy dataset, outperforming traditional CNN models.

4. Research Challenges and Future Directions

Deep learning techniques such as CNN and RNN-based models have demonstrated success in supporting a wide variety of prediction problems in healthcare. However, they are inefficient when dealing with non-Euclidean data representations and when modelling global contextual information. Graph representation learning provides a research avenue to deal with these limitations, by representing such an irregular domain in a meaningful manner, and encoding entity level interactions. As demonstrated throughout this review, graph-based deep learning has attracted significant attention for the analysis of medical data through graph representations of functional connectivity, anatomical structures and electrical activity. However, there are challenges associated with their adoption in this domain that merit further discussion. Advances addressing these challenges would permit GNNs to be extended to a broader variety of domains and applications in circumstances where traditional 2D grid representations are limited. We discuss five major challenges that need to be addressed to unlock the full power of graph deep learning: (1) Graph representation; (2) Dynamicity and temporal graphs; (3) Training paradigms and complexity of graph models; (4) Generalization of graph models and deployment; (5) Explainability and interpretability.
At the end of this section, we suggest a new application domain, human behaviour analysis in medicine, to which GCNs have not yet been applied but where they have great potential to improve patient outcomes.

4.1. Graph Representation

Graph neural networks have been used to directly model graph representations of electrical physiological data and medical images, including fMRI, EEG, MRI and CT data. Defining the graph connectivity structure for a given task is an ongoing problem, and in the majority of the proposals discussed in this survey, the graph structure is manually designed [67,72]. As there is no standard method for constructing graphs for a GNN model, some authors have used, for example, a bootstrapped version of GCNs that makes models less sensitive to the initialisation of the graph structure [69].
Models that infer graph topology from data would be particularly useful when representing diverse types of medical signals with several possible nodes and edges. For ASD analysis, Rakhimberdina et al. [64] analysed different sets of configurations to build a set of graphs and selected the best performing graph. Jang et al. [81] proposed a model that automatically extracts a multi-layer graph structure and feature representation directly from raw EEG data for the analysis of affective mental states. However, several requirements must still be addressed to improve the graph generation process:
Dynamic weights and node connectivity: The adjacency matrix should be dynamically learned instead of using predetermined connectivity, with the weight matrices learning the latent graph structure during training (see the sketch following this list). Such approaches have been proposed for MDD [60], ASD [36] and emotion recognition [6,185]. Furthermore, the model should learn both local and global spatial information; however, in the majority of surveyed works, each node is connected only to its spatially closest neighbours, which severely limits information exchange between distant nodes.
Edge attributes: Edge embeddings in graphs are a poorly studied area with only a few existing approaches. Learning is principally conducted on the vertices, with the edge attributes supplementing learning as auxiliary information. Edge-weighted models have been studied for ASD [17] and BD [44] analysis.
Embedding knowledge: Medical domain knowledge can be exploited to solve specific problems by creating networks that seek to mimic the way medical doctors analyse medical data [186]. Graph based mapping with label representations (word embeddings) that guide the information propagation among nodes has been explored [187]. As a result, graph-based analysis motivates researchers to investigate the incorporation of task-specific prior knowledge (e.g., disease label embeddings) in the construction of graph representations such as in the case of chest pathology analysis [114,117].
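Following the first requirement above, the sketch below shows a layer whose adjacency matrix is a freely learnable parameter optimised jointly with the model. The softmax-over-ReLU parameterisation is one common choice and is not claimed to match any specific cited model:

```python
import torch
import torch.nn as nn

class LearnableAdjGCNLayer(nn.Module):
    """Sketch of a graph convolution whose adjacency is a free parameter
    learned jointly with the model, rather than fixed a priori (in the
    spirit of the dynamical-graph models cited above [6,36,60,185])."""
    def __init__(self, num_nodes, in_dim, out_dim):
        super().__init__()
        self.adj_logits = nn.Parameter(torch.randn(num_nodes, num_nodes))
        self.W = nn.Linear(in_dim, out_dim)

    def forward(self, x):
        # x: (batch, num_nodes, in_dim)
        A = torch.softmax(torch.relu(self.adj_logits), dim=-1)  # learned, row-normalised
        return torch.relu(self.W(A @ x))
```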
Building graph generation models using neural networks has also attracted increasing attention as a means to model graphs with complicated topologies and constrained structural properties (e.g., GraphGAN [188]). By modeling graph generation as a sequential process, a model can capture complex dependencies between generated edges. Scalable auto-regressive frameworks that deserve exploration within the medical domain include GraphRNN [189] and GRAN [190].
The above discussion has exemplified the challenges of estimating a graph structure with the desired characteristics from data. While there is work in this field, it is ripe for further exploration. Automated graph generation, where a model infers the structural content directly from data, is also under-explored in the clinical domain.

4.2. Dynamicity and Temporal Graphs

Many real-world medical applications are dynamic in nature. In a graph context, this means that a graph’s nodes, edges, and features can change over time. Thus, static embeddings work poorly in temporal scenarios. Several methods that analyse rs-fMRI or EEG data discard the temporal dynamics of brain activity, or overlook the functional dependencies between different brain regions in a network. These studies implicitly assume that the brain functional connectivity network is temporally stationary during the entire scanning period. To address this limitation, some works have adopted generic temporal graph frameworks (RNN-based or CNN-based approaches). Such spatio-temporal GCNs that exploit time-varying dynamic information have been demonstrated to outperform traditional GCNs for AD [53] and MDD [60] classification.
Although the dynamicity of graphs can be partly addressed by ST-GCNs, few clinical applications consider how to perform graph convolutions when dynamic spatial relations are present (i.e., when nodes, connections or attributes are altered). The majority of spatio-temporal methods in this survey use a predefined graph structure, which assumes that the graph has fixed relationships among nodes. Graph WaveNet [58] proposes a self-adaptive adjacency matrix that learns a latent static graph structure automatically from data, allowing a complex CNN-based ST-GNN to perform well without being given an adjacency matrix. Nevertheless, learning latent dynamic spatial dependencies may further improve model accuracy. One method that deserves attention in this direction is ASTGCN [46], which includes spatial and temporal attention functions to learn latent dynamic spatial and temporal dependencies.
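For illustration, the self-adaptive adjacency of Graph WaveNet [58] factorises the latent graph into two learnable node-embedding tables whose product, after ReLU and a row-wise softmax, yields the adjacency. A minimal sketch follows; the embedding size is an arbitrary choice:

```python
import torch
import torch.nn as nn

class SelfAdaptiveAdjacency(nn.Module):
    """Self-adaptive adjacency in the spirit of Graph WaveNet [58]:
    two learnable node-embedding tables whose product, after ReLU and
    row-wise softmax, yields a latent graph learned end-to-end."""
    def __init__(self, num_nodes, emb_dim=10):
        super().__init__()
        self.E1 = nn.Parameter(torch.randn(num_nodes, emb_dim))
        self.E2 = nn.Parameter(torch.randn(num_nodes, emb_dim))

    def forward(self):
        # (N, N) adjacency; sparsified implicitly by the ReLU.
        return torch.softmax(torch.relu(self.E1 @ self.E2.T), dim=1)
```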
In this challenge, we consider temporal graphs and the need for dynamic graphs that can be applied to data where the dependencies and underlying structure change over time. This challenge is tightly coupled to the previous challenge of learning graph representations, but adds the complication of learning how connections change over time. Apart from the work proposed for sleep stage classification [45], research on adaptive graphs for spatio-temporal analysis is limited.

4.3. Training Paradigms and Complexity of Graph Models

Training GNNs remains difficult due to their high memory consumption and inference latency, yet the adoption of efficient training approaches is uncommon in the applications surveyed. Various graph sampling approaches have been proposed to alleviate the cost of training GNNs, such as GraphSage [30], which proposes a batch-training algorithm to save memory. Other potential methods for improving speed and optimization include PinSage [191], fast learning with GCN (FastGCN) [192], stochastic GCNs (StoGCN) [193], Cluster-GCN [194] and layer-wise GCN (L-GCN) [195]. Further investigation should be carried out to introduce such strategies into the medical domain.
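To illustrate the sampling idea, the sketch below shows one neighbour-sampling step in the spirit of GraphSage-style batch training [30]; the function name, fanout and hop count are illustrative assumptions:

```python
import random

def sample_neighborhood(adj_list, seed_nodes, fanout=5, hops=2):
    """Illustrative neighbour sampling: instead of aggregating over all
    neighbours, sample a fixed fanout per hop so the memory needed per
    mini-batch is bounded regardless of the full graph's size.

    adj_list: dict mapping node id -> list of neighbour ids."""
    frontier, visited = list(seed_nodes), set(seed_nodes)
    for _ in range(hops):
        nxt = []
        for n in frontier:
            neigh = adj_list[n]
            k = min(fanout, len(neigh))
            nxt.extend(random.sample(neigh, k))
        frontier = nxt
        visited.update(frontier)
    return visited  # node set inducing the mini-batch computation graph
```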
To learn rich representations, the majority of approaches discussed in this survey require task-dependent labels, which is a critical challenge for medical applications that rely on supervised training. As such, it would be extremely beneficial if the training mechanism could extract information from unlabeled samples. Although scarce or missing annotations are a general problem not specific to the graph domain, only a few works have adopted viable solutions such as weakly supervised and semi-supervised learning. Weakly-supervised and semi-supervised algorithms with graph models have been widely explored in computational histopathology [26], but further research is required to investigate such methods for electrical activity data and anatomical structures. Only a few works have used semi-supervised learning for the segmentation of the cerebral cortex [23] and organs [172].
Self-supervised learning is another research direction with significant potential. When no class labels are available in the graph, graph embeddings can be learnt using an end-to-end method in an entirely unsupervised manner. Although self-supervision in graphs [196] and graph convolutional adversarial networks for unsupervised domain adaptation [197] have been investigated to improve the performance of models, we observe that their adoption has not yet emerged in the medical domain works surveyed. Therefore, further investigation can be carried out to assess the viability of such techniques. One method is to use an autoencoder framework, in which the encoder uses graph convolutional layers to embed the graph in a latent representation, which is then reconstructed using a decoder [4]. Recent works on contrastive learning by optimizing mutual information between node and graph representations [198] have achieved state-of-the-art results on both node classification [199] and graph classification tasks [200].
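A minimal sketch of the autoencoder framework mentioned above [4] is given below, with a one-layer GCN encoder and an inner-product decoder that reconstructs the adjacency matrix; the single-layer encoder and layer sizes are simplifications:

```python
import torch
import torch.nn as nn

class GraphAutoencoder(nn.Module):
    """Sketch of an unsupervised graph autoencoder: a GCN encoder embeds
    nodes into a latent space, and an inner-product decoder reconstructs
    the adjacency matrix, so no class labels are needed."""
    def __init__(self, in_dim, latent_dim):
        super().__init__()
        self.W = nn.Linear(in_dim, latent_dim)

    def forward(self, A_hat, X):
        # A_hat: normalised adjacency with self-loops; X: node features.
        Z = torch.relu(self.W(A_hat @ X))      # one-layer GCN encoder
        A_rec = torch.sigmoid(Z @ Z.T)         # inner-product decoder
        return Z, A_rec

# Training would minimise a reconstruction loss, e.g.
# loss = F.binary_cross_entropy(A_rec, A_observed)
```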
One aspect of graph-based deep learning not yet discussed for medical applications is reinforcement learning (RL), which learns from experiences by interacting with the environment. RL can address the limitations of supervised learning with robust and intuitive algorithms trainable on small datasets [201]. There are opportunities to employ RL in multi-task and multi-agent learning paradigms where graph convolutions adapt to the dynamics of the underlying graph of the multi-agent environment [202], or to apply RL for graph classification using structural attention to actively select informative regions in the graph [203].
Another limitation we observe in present research is that GCNs inherit considerable computational complexity from their deep learning lineage, which can be burdensome when scaling and deploying GNNs. Some experiments have shown that the performance of GCNs drops dramatically as the number of graph convolutional layers increases [204], raising the question of whether going deeper is still a good strategy for learning from graph data [205]. Therefore, the choice of GNN architecture should be treated as a hyperparameter in the proposed clinical application.
Efficient and simple architectures have also been proposed to reduce the complexity of GCNs. One example is the simple graph convolution (SGC) network [39], which reduces complexity by collapsing multiple weight matrices into a single linear transformation and eliminating the nonlinearities between GCN layers. This model was adopted for the evaluation of ASD and ADHD [40]. Other approaches that merit consideration are the simple scalable inception GNN (SIGN) [206] and the efficient graph convolution (EGC) [207].
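To illustrate why SGC [39] is cheap, the propagated features can be precomputed once, after which the classifier reduces to a single linear map; the sketch below assumes a dense adjacency and illustrative dimensions:

```python
import torch
import torch.nn as nn

def sgc_features(A, X, k=2):
    """Precompute the SGC propagation: normalise the adjacency (with
    self-loops) once, then propagate features k times with no
    nonlinearity in between."""
    A = A + torch.eye(A.size(0))                       # add self-loops
    d_inv_sqrt = A.sum(1).pow(-0.5)
    S = d_inv_sqrt.unsqueeze(1) * A * d_inv_sqrt.unsqueeze(0)  # D^-1/2 A D^-1/2
    for _ in range(k):
        X = S @ X
    return X

# The whole "network" then collapses to logistic regression on the
# precomputed features (dimensions here are illustrative):
classifier = nn.Linear(in_features=64, out_features=2)
```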
There has been growing interest in the literature in graph embedding problems, but these approaches do not usually scale well to real-world graphs. Recent advances in distributed and batch training for graph neural networks look promising, but they require hours of CPU training even for small and medium sized graphs. The use of accelerators such as GPUs to process graphs is largely under-explored [208]. To address this, Akyildiz et al. [209] introduced GOSH, which utilizes graph partitioning and coarsening, a process in which a graph is compressed into smaller graphs, to provide fast embedding computation on a single GPU with minimal constraints.
How to effectively compute GNNs in order to realise their full potential will be a key research topic in the coming years. Several hardware accelerators have been developed to cope with the high density of GNNs and their alternating computing requirements, but there is no clear solution applicable to multiple GNN variants [210]. On the software side, current deep learning frameworks, including extensions of popular libraries such as TensorFlow and PyTorch, have limitations when implementing dynamic computation graphs along with specialized tensor operations [210]. Thus, there is a need to further develop libraries such as DGL [211], which can handle the sparsity of GNN operations efficiently, as well as complex tensor operations in CUDA with GPU acceleration.
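As a small usage sketch of DGL [211] with the PyTorch backend, the following builds a toy three-node graph and applies a single sparse-aware graph convolution; the graph and feature sizes are arbitrary:

```python
import dgl
import torch
from dgl.nn import GraphConv  # PyTorch backend

# Toy directed 3-cycle; self-loops added so every node has in-degree > 0.
src = torch.tensor([0, 1, 2])
dst = torch.tensor([1, 2, 0])
g = dgl.add_self_loop(dgl.graph((src, dst)))

feat = torch.randn(3, 8)                   # 8-dim node features
conv = GraphConv(in_feats=8, out_feats=4)  # sparse message passing
h = conv(g, feat)                          # (3, 4) updated node embeddings
```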

4.4. Generalization of Graph Models and Deployment

Several graph methods suffer from inter-site heterogeneity caused by different scanning parameters and protocols, and by differing subject populations across sites. It is difficult to build accurate and robust learning models with heterogeneous data, and due to patient privacy and clinical data management requirements, truly centralized open-source medical big data corpora for deep learning are rare. Medical applications are thus hindered by non-generalizability that limits deployment to specific institutions. To alleviate this heterogeneity, simultaneously learning adaptive classifiers and transferable features across multiple sites and subjects offers a promising direction.
Transfer learning provides a potential solution by transferring networks that were well trained on large-scale datasets (related to the disease being analysed) to a small-sample dataset. Such generalization across datasets was demonstrated by capturing essential dementia-associated patterns from different datasets [104].
Another interesting research direction is domain adaptation, in which the source and target domains have the same feature space but different distributions; it can also deal with multiple domains and even multiple heterogeneous tasks [212]. The generalizability of trained classifiers in subject-independent settings is hampered by the considerable variation in physiological data across subjects, and some works have tackled this challenge by introducing domain adversarial training [213]. Graph adversarial methods adopt adversarial training techniques to enhance the generalization of graph-based models. Such domain adaptation methods have been introduced for cerebral cortex segmentation [23], brain data prediction [7,130] and emotion recognition [214].
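The core ingredient of domain adversarial training [213] is a gradient reversal layer: identity on the forward pass, negated gradient on the backward pass, so the feature extractor is pushed towards site-invariant representations while a domain classifier tries to tell sites apart. A minimal PyTorch sketch follows; the lambd scaling and the usage line are illustrative:

```python
import torch

class GradReverse(torch.autograd.Function):
    """Gradient reversal layer: forward is the identity; backward
    negates (and scales) the gradient flowing to the feature extractor."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

# Usage sketch (names hypothetical):
# domain_logits = domain_head(grad_reverse(gnn_features))
```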
Meta-learning, often described as “learning to learn” and a sub-field of transfer learning, has been used for task-generalization problems such as few-shot learning. Few-shot learning (FSL) aims to automatically and efficiently solve new tasks with few labeled samples, based on knowledge obtained from previous experience. These models are emerging in the medical domain [215] for decoding brain signals [216], and a few approaches have explored GNNs for few-shot learning [217]. Another interesting variant of transfer learning is zero-shot learning (ZSL), which aims to predict the correct class without being exposed to any instances of that class during training. Although zero-shot learning is flourishing in computer vision [218], it is seldom used for biomedical signal analysis, though it has recently been applied to recognize unknown EEG signals [219]. Recently, GCNs have shown considerable promise for zero-shot learning: when data are scarce, these models are highly sample efficient because related concepts in the graph structure share statistical strength, allowing generalization to new classes [220]. Knowledge graphs can also be used as extra information to guide zero-shot recognition [220], making these models a notable future research prospect.
In this challenge, we have analysed the factors that affect the ability of GCNs to generalize, including to unknown tasks and domains, and the research directions this offers. A number of interesting paths exist, including the development of meta-models that improve knowledge generalization, enabling the more rapid deployment of applications.

4.5. Explainability and Interpretability

A lack of transparency is identified as one of the main barriers to AI adoption in clinical practice. Physicians are reluctant to trust a machine learning model’s predictions because of a lack of evidence and the difficulty of interpreting the reasons for a decision, particularly in disease diagnosis. A step towards trustworthy AI is the development of explainable AI [221], which seeks to create insights into how and why AI models produce their predictions. A natural question that arises is whether the decision-making process in deep learning models can be made interpretable.
Several prominent explainability methods for deep models provide input-dependent explanations using visual explanations and salient regions. Existing methods include gradient-based approaches such as guided backpropagation, class activation maps (CAM), and the generalized versions Grad-CAM and Grad-CAM++ [222]. While these methods have been explored in a number of areas, they were not designed to address interpretability in a clinical context, which brings additional requirements such as incorporating the physician’s interpretation (i.e., for an explanation to be meaningful, the model should “explain” its output in a way that physicians can understand). Interpretability for graph-based deep learning is significantly more challenging than for CNN or RNN-based models because graph nodes and edges are often heavily interconnected. The main challenge in redesigning and applying existing CNN explainers to GNNs is that they fail to incorporate relational information, which results in ill-defined visual heatmaps of important regions [223].
Model-based and post-hoc interpretability are the two most common types of interpretation approaches. The former constrains the model so that it readily provides useful details about the uncovered relationships (such as sparsity, modularity, etc.); the latter attempts to extract information about the relationships the model has learned [224,225]. Post-hoc methods are typically used to analyze individual input–output pairs, limiting their explainability to the individual level. Several explanation methods have been presented in the literature, including layer-wise relevance propagation (LRP) [225], excitation backpropagation [224], graph pruning (GNNExplainer) [223], gradient-based saliency (GraphGrad-CAM [226] and GraphGrad-CAM++ [227]), and layer-wise relevance propagation for graphs (GraphLRP) [228].
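As an illustration of how a gradient-based explainer in the spirit of GraphGrad-CAM [226] scores nodes, the sketch below computes per-node importance from a single layer’s activations; the single-layer simplification and the final normalisation are display choices, not the cited method verbatim:

```python
import torch

def graph_grad_cam(node_feats, class_score):
    """Post-hoc node-importance sketch: channel weights are the
    class-score gradients averaged over nodes; node heat is the ReLU
    of the weighted feature sum.

    node_feats: (N, K) activations from the last graph-conv layer,
    built with requires_grad=True; class_score: scalar class logit."""
    grads, = torch.autograd.grad(class_score, node_feats, retain_graph=True)
    alpha = grads.mean(dim=0)                            # (K,) channel weights
    heat = torch.relu((node_feats * alpha).sum(dim=1))   # (N,) node importance
    return heat / (heat.max() + 1e-8)                    # normalise for display
```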
While there is much interesting research within the field of interpretability for GNNs, the field is immature and only a few of the papers surveyed explore these methods. Post-hoc explanation techniques have been used to visualize attention weights or to recognize subgraphs relevant to a classification. Such techniques have been useful, for example, to identify the brain regions most involved when a seizure occurs [59,86], and key brain regions associated with biomarkers for PD [47] or ASD [16,17,61] prediction. The activation maps and gradient sensitivity of graph models have also been used to interpret salient input features at both the group and individual levels for the analysis of BD [44] and ASD [61]. Both individual-level and group-level explanations are critical in medical research: individual-level biomarkers are desirable for planning targeted care in precision medicine, while group-level biomarkers are essential for understanding disease-specific characteristic patterns. Although attention weights on edges can be used to measure edge importance, they can only explain GAT models and do not explain node features, unlike, for example, GNNExplainer [223].
We believe further extensive investigation of traditional gradient-based (GraphGrad-CAM) and perturbation-based (GNNExplainer) methods for instance-level interpretation in the medical domain will allow rapid explainability at the node, edge, and node-feature levels. Other methods that could be utilised within this research direction are PGM-Explainer [229] and SubgraphX [230]. Studies examining the effect of explanations on clinical end-user decisions show generally positive results. Thus, further studies should also investigate how to integrate the clinical workflow into the model, and how a clinical expert could refine a model’s decision via a human-in-the-loop process. One such example introduced an interactive GCN-based prostate segmentation method [25] in which the annotator can select incorrect control points and correct them interactively.
The implementation of deep-learning-based models raises complex clinical and ethical challenges due to the difficulty of understanding the logic involved in these models. Interpretability is essential as it can support informed decision-making during diagnosis and treatment planning; however, GCNs are complex models and interpreting their outcomes remains challenging. Interpretability techniques have gained importance in recent years, yet aside from studies in computational pathology [26], which are outside the scope of this review, the interpretability of graph neural networks in a clinical context has not been sufficiently addressed. Considering the spread of graph-based processing across medical applications, graph explainability and its quantitative evaluation, with a focus on usability by clinicians, are crucial.

4.6. Future Prospects of Graph Neural Networks for Patient Behavioural Analysis

Medical applications have benefited from rapid progress in the field of computer vision. To date, the majority of studies have concerned themselves with analysing data that result from diagnostic procedures, and using these data to predict the presence of a disease; patient behaviour assessment has consequently received less attention. While several in-clinic systems using CNN and RNN-based models have been introduced to enable comprehensive data analysis through accurate and granular quantification of a patient’s movements [231,232,233], these methods are not yet sufficiently accurate for widespread clinical use. We argue that graph neural networks have great potential in these application areas.
CNN models excel at modeling local relations and have been used to address image classification, regression and segmentation; GCNs, in contrast, can account for global relations by going beyond the local pixel neighbourhoods used by convolutions. Graph embeddings have appeared in other computer vision tasks where relations between objects can be efficiently described by graphs, or for the analysis of graph-structured image data [4,12]. The adoption of graph-based deep learning models has also been explored for the analysis of human actions. Below, we highlight research directions based on graph models to support the highly relevant clinical domains of behaviour monitoring and motor and mental disorder assessment.
  • Facial analysis: Clinical experts rely on certain facial changes and symptoms to assist medical diagnosis, and computer vision has been introduced to offer automatic and objective assessment of facial features. Interesting results have been obtained by incorporating graph-based models for facial expression recognition [234], action unit detection [235] and micro-expression recognition [236].
    Potential applications: Postoperative pain management, monitoring vascular pulse, facial paralysis assessment, and several neurological and psychiatric disorders including seizure semiology, ADHD, autism, bipolarity and schizophrenia.
  • Human pose localization: Since human pose estimation is related to graph structure, it is important to design appropriate models to estimate joints that are ambiguous or occluded. GCNs can process skeleton data in a flexible way to improve the skeleton structure’s expressive power. GCNs have been used to refine 2D human pose localization [237], 3D human pose estimation [238], and multi-person pose estimation [239].
    Potential applications: In-bed pose estimation to track pressure injuries from surgery and illness recovery, and other sleep disorders such as apnea, pressure ulcers, and carpal tunnel syndrome.
  • Pose-based action recognition: Movement assessment and monitoring is a powerful tool during clinical observation, where uncontrolled motions can aggravate wounds and injuries, or aid the diagnosis of motor and mental disorders. These motions are represented as continuous time series of the kinematics of head, limb and trunk movements. Given a time series of human joint locations, GCNs have been widely used to estimate human action patterns [56,240,241,242].
    Potential applications: Motor disorders (Epilepsy, Parkinson’s, Alzheimer’s, stroke, tremor, Huntington and neurodevelopmental disorders); mental disorders (Dementia, schizophrenia, major depressive, bipolar and autism spectrum); and other situations including breathing disorders, inpatient fall prediction, and health conditions such as agitation, depression, delirium, and unusual activity.
Graph representations for skeleton-based action recognition have gained prominence over the last couple of years. However, apart from initial studies flagging abnormal behaviour in dementia [243], assessing Parkinsonian leg agility and gait [244,245], and recognising human emotion from detected skeletons [246], graph neural networks for in-bed pose estimation and patient behaviour analysis remain poorly investigated compared to other computer science fields.

5. Conclusions

Functional, anatomical and electrical data provide essential information on many diseases’ etiology, onset and progression, as well as on treatment efficacy. Our survey provides a comprehensive review of research on graph neural networks and their application to medical domains, including functional connectivity, electrical and anatomical analysis. Digital pathology has not been the main focus of this survey, and we have only sparsely mentioned the applications of GCNs in this domain; considering the comprehensive application of deep learning to digital pathology and whole slide images (WSI), readers are referred to a complementary survey that thoroughly covers the potential applications of GCNs to WSI [26].
As we have shown in this review, the growing mass of literature in this space and the rapid development of new tools and methods suggest that we are on the verge of a paradigm shift. Furthermore, considering the remarkable ability of GCNs to deal with unordered and irregular data such as brain signals, together with their simplicity and scalability, graph-based deep learning will progressively take a more prominent role and complement traditional machine learning approaches.
Recent advances in the adoption of graph-based deep learning models for the classification, regression and segmentation of medical data show great promise. However, we have outlined several challenges related to their adoption, including graph representation and estimation, graph complexity, dynamicity, interpretability and generalization. These and many other challenges open a vast number of research directions, the solutions to which will benefit the field and lead to many applications in the medical domain. This constitutes a clear challenge to the neuroengineering scientific community, and it is hoped that the community will increase its efforts to address these emerging challenges. Although automated systems will never replace the power of individual clinical expertise, by providing more quantitative evidence and appropriate decision support they can improve medical decisions and, ultimately, the standard of care provided to patients.

Author Contributions

Conceptualization, D.A.-A., C.F., M.A.A.; Methodology, D.A.-A.; Analysis, D.A.-A., M.A.A., S.D., C.F.; Investigation, D.A.-A., M.A.A.; Writing—Original Draft Preparation, D.A.-A., M.A.A.; Writing—Review and Editing, all authors; Supervision, L.P.; All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Imaging and Computer Vision group at CSIRO Data61 Canberra, Australia.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Sutton, R.T.; Pincock, D.; Baumgart, D.C.; Sadowski, D.C.; Fedorak, R.N.; Kroeker, K.I. An overview of clinical decision support systems: Benefits, risks, and strategies for success. NPJ Digit. Med. 2020, 3, 1–10. [Google Scholar] [CrossRef] [Green Version]
  2. Litjens, G.; Kooi, T.; Bejnordi, B.E.; Setio, A.A.A.; Ciompi, F.; Ghafoorian, M.; Van Der Laak, J.A.; Van Ginneken, B.; Sánchez, C.I. A survey on deep learning in medical image analysis. Med. Image Anal. 2017, 42, 60–88. [Google Scholar] [CrossRef] [Green Version]
  3. Shen, D.; Wu, G.; Suk, H.I. Deep learning in medical image analysis. Annu. Rev. Biomed. Eng. 2017, 19, 221–248. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  4. Wu, Z.; Pan, S.; Chen, F.; Long, G.; Zhang, C.; Philip, S.Y. A comprehensive survey on graph neural networks. IEEE Trans. Neural Netw. Learn. Syst. 2020, 32, 4–24. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  5. Zhang, Y.; Bellec, P. Functional Annotation of Human Cognitive States using Graph Convolution Networks. In Proceedings of the 2019 Neural Information Processing Systems (NeurIPS), Vancouver, BC, Canada, 8–14 December 2019. [Google Scholar]
  6. Song, T.; Zheng, W.; Song, P.; Cui, Z. EEG emotion recognition using dynamical graph convolutional neural networks. IEEE Trans. Affect. Comput. 2018, 11, 532–541. [Google Scholar] [CrossRef] [Green Version]
  7. Hong, Y.; Chen, G.; Yap, P.T.; Shen, D. Multifold acceleration of diffusion MRI via deep learning reconstruction from slice-undersampled data. In Proceedings of the 26th International Conference Information Processing in Medical Imaging, Hong Kong, China, 2–7 June 2019; pp. 530–541. [Google Scholar]
  8. Selvan, R.; Kipf, T.; Welling, M.; Juarez, A.G.U.; Pedersen, J.H.; Petersen, J.; de Bruijne, M. Graph refinement based airway extraction using mean-field networks and graph neural networks. Med. Image Anal. 2020, 64, 101751. [Google Scholar] [CrossRef]
  9. Shuman, D.I.; Narang, S.K.; Frossard, P.; Ortega, A.; Vandergheynst, P. The emerging field of signal processing on graphs: Extending high-dimensional data analysis to networks and other irregular domains. IEEE Signal Process. Mag. 2013, 30, 83–98. [Google Scholar] [CrossRef] [Green Version]
  10. Bronstein, M.M.; Bruna, J.; LeCun, Y.; Szlam, A.; Vandergheynst, P. Geometric deep learning: Going beyond euclidean data. IEEE Signal Process. Mag. 2017, 34, 18–42. [Google Scholar] [CrossRef] [Green Version]
  11. Georgousis, S.; Kenning, M.; Xie, X. Graph Deep Learning: State of the Art and Challenges. IEEE Access 2021, 9, 22106–22140. [Google Scholar] [CrossRef]
  12. Zhang, Z.; Cui, P.; Zhu, W. Deep learning on graphs: A survey. IEEE Trans. Knowl. Data Eng. 2020. [Google Scholar] [CrossRef] [Green Version]
  13. Zhou, J.; Cui, G.; Hu, S.; Zhang, Z.; Yang, C.; Liu, Z.; Wang, L.; Li, C.; Sun, M. Graph neural networks: A review of methods and applications. AI Open 2020, 1, 57–81. [Google Scholar] [CrossRef]
  14. Zhang, L.; Wang, M.; Liu, M.; Zhang, D. A Survey on Deep Learning for Neuroimaging-Based Brain Disorder Analysis. Front. Neurosci. 2020, 14, 779. [Google Scholar] [CrossRef]
  15. Parisot, S.; Ktena, S.I.; Ferrante, E.; Lee, M.; Guerrero, R.; Glocker, B.; Rueckert, D. Disease prediction using graph convolutional networks: Application to autism spectrum disorder and Alzheimer’s disease. Med. Image Anal. 2018, 48, 117–130. [Google Scholar] [CrossRef] [Green Version]
  16. Li, X.; Dvornek, N.C.; Zhou, Y.; Zhuang, J.; Ventola, P.; Duncan, J.S. Graph neural network for interpreting task-fmri biomarkers. In Proceedings of the Medical Image Computing and Computer Assisted Intervention, Shenzhen, China, 13–17 October 2019; pp. 485–493. [Google Scholar]
  17. Li, X.; Zhou, Y.; Dvornek, N.C.; Zhang, M.; Zhuang, J.; Ventola, P.; Duncan, J.S. Pooling regularized graph neural network for fmri biomarker analysis. In Proceedings of the 23rd Medical Image Computing and Computer Assisted Intervention, Lima, Peru, 4–8 October 2020; pp. 625–635. [Google Scholar]
  18. Huang, Y.; Chung, A.C. Edge-Variational Graph Convolutional Networks for Uncertainty-Aware Disease Prediction. In Proceedings of the 23rd Medical Image Computing and Computer Assisted Intervention, Lima, Peru, 4–8 October 2020; pp. 562–572. [Google Scholar]
  19. Zhao, F.; Xia, S.; Wu, Z.; Duan, D.; Wang, L.; Lin, W.; Gilmore, J.H.; Shen, D.; Li, G. Spherical U-Net on cortical surfaces: Methods and applications. In Proceedings of the 26th International Conference Information Processing in Medical Imaging, Hong Kong, China, 2–7 June 2019; pp. 855–866. [Google Scholar]
  20. Gopinath, K.; Desrosiers, C.; Lombaert, H. Learnable Pooling in Graph Convolution Networks for Brain Surface Analysis. IEEE Trans. Pattern Anal. Mach. Intell 2020. [Google Scholar] [CrossRef]
  21. Wu, Z.; Zhao, F.; Xia, J.; Wang, L.; Lin, W.; Gilmore, J.H.; Li, G.; Shen, D. Intrinsic patch-based cortical anatomical parcellation using graph convolutional neural network on surface manifold. In Proceedings of the Medical Image Computing and Computer Assisted Intervention, Shenzhen, China, 13–17 October 2019; pp. 492–500. [Google Scholar]
  22. Hao, L.; Bao, S.; Tang, Y.; Gao, R.; Parvathaneni, P.; Miller, J.A.; Voorhies, W.; Yao, J.; Bunge, S.A.; Weiner, K.S.; et al. Automatic Labeling of Cortical Sulci Using Spherical Convolutional Neural Networks in a Developmental Cohort. In Proceedings of the 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI), Iowa City, IA, USA, 3–7 April 2020; pp. 412–415. [Google Scholar]
  23. Gopinath, K.; Desrosiers, C.; Lombaert, H. Graph Domain Adaptation for Alignment-Invariant Brain Surface Segmentation. In UNSURE and GRAIL in Conjunction with MICCAI; Springer: Lima, Peru, 2020; pp. 152–163. [Google Scholar]
  24. Noh, K.J.; Park, S.J.; Lee, S. Combining Fundus Images and Fluorescein Angiography for Artery/Vein Classification Using the Hierarchical Vessel Graph Network. In Proceedings of the 23rd Medical Image Computing and Computer Assisted Intervention, Lima, Peru, 4–8 October 2020; pp. 595–605. [Google Scholar]
  25. Tian, Z.; Li, X.; Zheng, Y.; Chen, Z.; Shi, Z.; Liu, L.; Fei, B. Graph-convolutional-network-based interactive prostate segmentation in MR images. Med. Phys. 2020, 47, 4164–4176. [Google Scholar] [CrossRef] [PubMed]
  26. Ahmedt-Aristizabal, D.; Armin, M.A.; Denman, S.; Fookes, C.; Petersson, L. A Survey on Graph-Based Deep Learning for Computational Histopathology. arXiv 2021, arXiv:2107.00272. [Google Scholar]
  27. Defferrard, M.; Bresson, X.; Vandergheynst, P. Convolutional neural networks on graphs with fast localized spectral filtering. In Proceedings of the Neural Information Processing Systems (NeurIPS), Barcelona, Spain, 5–10 December 2016; pp. 3844–3852. [Google Scholar]
  28. Kipf, T.N.; Welling, M. Semi-supervised classification with graph convolutional networks. In Proceedings of the International Conference on Learning (ICLR), Toulon, France, 24–26 April 2017. [Google Scholar]
  29. Niepert, M.; Ahmed, M.; Kutzkov, K. Learning convolutional neural networks for graphs. In Proceedings of the 33rd International Conference on Machine Learning, New York, NY, USA, 19–24 June 2016; pp. 2014–2023. [Google Scholar]
  30. Hamilton, W.; Ying, Z.; Leskovec, J. Inductive representation learning on large graphs. In Proceedings of the Advances in Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; pp. 1024–1034. [Google Scholar]
  31. Scarselli, F.; Gori, M.; Tsoi, A.C.; Hagenbuchner, M.; Monfardini, G. The graph neural network model. IEEE Trans. Neural Netw. 2008, 20, 61–80. [Google Scholar] [CrossRef] [Green Version]
  32. Bruna, J.; Zaremba, W.; Szlam, A.; LeCun, Y. Spectral networks and locally connected networks on graphs. arXiv 2013, arXiv:1312.6203. [Google Scholar]
  33. Xu, K.; Hu, W.; Leskovec, J.; Jegelka, S. How powerful are graph neural networks? In Proceedings of the International Conference on Learning (ICLR), New Orleans, LA, USA, 6–9 May 2019. [Google Scholar]
  34. Yang, Z.; Yang, D.; Dyer, C.; He, X.; Smola, A.; Hovy, E. Hierarchical attention networks for document classification. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, San Diego, CA, USA, 12–17 June 2016; pp. 1480–1489. [Google Scholar]
  35. Veličković, P.; Cucurull, G.; Casanova, A.; Romero, A.; Lio, P.; Bengio, Y. Graph attention networks. In Proceedings of the International Conference on Learning (ICLR), Vancouver, BC, Canada, 30 April–3 May 2018. [Google Scholar]
  36. Gopinath, K.; Desrosiers, C.; Lombaert, H. Adaptive graph convolution pooling for brain surface analysis. In Proceedings of the 26th International Conference Information Processing in Medical Imaging, Hong Kong, China, 2–7 June 2019; pp. 86–98. [Google Scholar]
  37. Yang, B.; Pan, H.; Yu, J.; Han, K.; Wang, Y. Classification of Medical Images with Synergic Graph Convolutional Networks. In Proceedings of the 2019 IEEE 35th International Conference on Data Engineering Workshops, Macao, China, 8–12 April 2019; pp. 253–258. [Google Scholar]
  38. Zhang, J.; Xia, Y.; Wu, Q.; Xie, Y. Classification of medical images and illustrations in the biomedical literature using synergic deep learning. arXiv 2017, arXiv:1706.09092. [Google Scholar]
  39. Wu, F.; Souza, A.; Zhang, T.; Fifty, C.; Yu, T.; Weinberger, K. Simplifying Graph Convolutional Networks. In Proceedings of the 36th International Conference on Machine Learning, Long Beach, CA, USA, 9–15 June 2019; pp. 6861–6871. [Google Scholar]
  40. Rakhimberdina, Z.; Murata, T. Linear Graph Convolutional Model for Diagnosing Brain Disorders. In Proceedings of the International Conference on Complex Networks and Their Applications, Taragona, Spain, 10–12 December 2019; pp. 815–826. [Google Scholar]
  41. Juarez, A.G.U.; Selvan, R.; Saghir, Z.; de Bruijne, M. A joint 3D UNet-graph neural network-based method for airway segmentation from chest CTs. In Proceedings of the International Workshop on Machine Learning in Medical Imaging, Shenzhen, China, 13 October 2019; pp. 583–591. [Google Scholar]
  42. Lian, Q.; Qi, Y.; Pan, G.; Wang, Y. Learning graph in graph convolutional neural networks for robust seizure prediction. J. Neural Eng. 2020, 17, 035004. [Google Scholar] [CrossRef]
  43. Wang, H.; Zhao, W.; Li, Z.; Jia, D.; Yan, C.; Hu, J.; Fang, J.; Yang, M. A Weighted Graph Attention Network Based Method for Multi-label Classification of Electrocardiogram Abnormalities. In Proceedings of the 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Montreal, QC, Canada, 20–24 July 2020; pp. 418–421. [Google Scholar]
  44. Yang, H.; Li, X.; Wu, Y.; Li, S.; Lu, S.; Duncan, J.S.; Gee, J.C.; Gu, S. Interpretable multimodality embedding of cerebral cortex using attention graph network for identifying bipolar disorder. In Proceedings of the Medical Image Computing and Computer Assisted Intervention, Shenzhen, China, 13–17 October 2019; pp. 799–807. [Google Scholar]
  45. Jia, Z.; Lin, Y.; Wang, J.; Zhou, R.; Ning, X.; He, Y.; Zhao, Y. Graphsleepnet: Adaptive spatial-temporal graph convolutional networks for sleep stage classification. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), Yokohama, Japan, 11–17 July 2020; pp. 1324–1330. [Google Scholar]
  46. Guo, S.; Lin, Y.; Feng, N.; Song, C.; Wan, H. Attention based spatial-temporal graph convolutional networks for traffic flow forecasting. In Proceedings of the AAAI Conference on Artificial Intelligence, Honolulu, HI, USA, 27 January–1 February 2019; Volume 33, pp. 922–929. [Google Scholar]
  47. Zhang, W.; Zhan, L.; Thompson, P.; Wang, Y. Deep Representation Learning for Multimodal Brain Networks. In Proceedings of the 23rd Medical Image Computing and Computer Assisted Intervention, Lima, Peru, 4–8 October 2020; pp. 613–624. [Google Scholar]
  48. Lipton, Z.C.; Berkowitz, J.; Elkan, C. A critical review of recurrent neural networks for sequence learning. arXiv 2015, arXiv:1506.00019. [Google Scholar]
  49. Greff, K.; Srivastava, R.K.; Koutník, J.; Steunebrink, B.R.; Schmidhuber, J. LSTM: A search space odyssey. IEEE Trans. Neural Netw. Learn. Syst. 2016, 28, 2222–2232. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  50. Cho, K.; Van Merriënboer, B.; Gulcehre, C.; Bahdanau, D.; Bougares, F.; Schwenk, H.; Bengio, Y. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv 2014, arXiv:1406.1078. [Google Scholar]
  51. Li, Y.; Yu, R.; Shahabi, C.; Liu, Y. Diffusion Convolutional Recurrent Neural Network: Data-Driven Traffic Forecasting. In Proceedings of the International Conference on Learning (ICLR), Vancouver, BC, Canada, 30 April–3 May 2018. [Google Scholar]
  52. Seo, Y.; Defferrard, M.; Vandergheynst, P.; Bresson, X. Structured sequence modeling with graph convolutional recurrent networks. In Proceedings of the 25th International Conference Neural Information Processing, Siem Reap, Cambodia, 13–16 December 2018; pp. 362–373. [Google Scholar]
  53. Xing, X.; Li, Q.; Wei, H.; Zhang, M.; Zhan, Y.; Zhou, X.S.; Xue, Z.; Shi, F. Dynamic spectral graph convolution networks with assistant task training for early mci diagnosis. In Proceedings of the Medical Image Computing and Computer Assisted Intervention, Shenzhen, China, 13–17 October 2019; pp. 639–646. [Google Scholar]
  54. Gehring, J.; Auli, M.; Grangier, D.; Yarats, D.; Dauphin, Y.N. Convolutional sequence to sequence learning. In Proceedings of the 34th International Conference on Machine Learning, Sydney, Australia, 6–11 August 2017; pp. 1243–1252. [Google Scholar]
  55. Yu, B.; Yin, H.; Zhu, Z. Spatio-temporal graph convolutional networks: A deep learning framework for traffic forecasting. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), Stockholm, Sweden, 13–19 July 2018. [Google Scholar]
  56. Yan, S.; Xiong, Y.; Lin, D. Spatial temporal graph convolutional networks for skeleton-based action recognition. In Proceedings of the AAAI Conference on Artificial Intelligence, New Orleans, LA, USA, 2–7 February 2018. [Google Scholar]
  57. Bai, S.; Kolter, J.Z.; Koltun, V. An empirical evaluation of generic convolutional and recurrent networks for sequence modeling. arXiv 2018, arXiv:1803.01271. [Google Scholar]
  58. Wu, Z.; Pan, S.; Long, G.; Jiang, J.; Zhang, C. Graph wavenet for deep spatial-temporal graph modeling. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), Macao, China, 10–16 August 2019. [Google Scholar]
  59. Wang, J.; Liang, S.; He, D.; Wang, Y.; Wu, Y.; Zhang, Y. A Sequential Graph Convolutional Network with Frequency-domain Complex Network of EEG Signals for Epilepsy Detection. In Proceedings of the 2020 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), Seoul, Korea, 16–19 December 2020; pp. 785–792. [Google Scholar]
  60. Yao, D.; Sui, J.; Yang, E.; Yap, P.T.; Shen, D.; Liu, M. Temporal-Adaptive Graph Convolutional Network for Automated Identification of Major Depressive Disorder Using Resting-State fMRI. In Proceedings of the 11th International Workshop on Machine Learning in Medical Imaging, Lima, Peru, 4 October 2020; pp. 1–10. [Google Scholar]
  61. Li, X.; Duncan, J. BrainGNN: Interpretable Brain Graph Neural Network for fMRI Analysis. bioRxiv 2020. [Google Scholar] [CrossRef]
  62. Venkataraman, A.; Yang, D.Y.J.; Pelphrey, K.A.; Duncan, J.S. Bayesian community detection in the space of group-level functional differences. IEEE Trans. Med. Imaging 2016, 35, 1866–1882. [Google Scholar] [CrossRef] [Green Version]
  63. Di Martino, A.; Yan, C.G.; Li, Q.; Denio, E.; Castellanos, F.X.; Alaerts, K.; Anderson, J.S.; Assaf, M.; Bookheimer, S.Y.; Dapretto, M.; et al. The autism brain imaging data exchange: Towards a large-scale evaluation of the intrinsic brain architecture in autism. Mol. Psychiatry 2014, 19, 659–667. [Google Scholar] [CrossRef]
  64. Rakhimberdina, Z.; Liu, X.; Murata, T. Population Graph-Based Multi-Model Ensemble Method for Diagnosing Autism Spectrum Disorder. Sensors 2020, 20, 6001. [Google Scholar] [CrossRef]
  65. Li, X.; Dvornek, N.C.; Zhuang, J.; Ventola, P.; Duncan, J. Graph embedding using infomax for ASD classification and brain functional difference detection. In Proceedings of the Medical Imaging 2020: Biomedical Applications in Molecular, Structural, and Functional Imaging, Houston, TX, USA, 18–20 February 2020; Volume 11317, p. 1131702. [Google Scholar]
  66. Jiang, H.; Cao, P.; Xu, M.; Yang, J.; Zaiane, O. Hi-GCN: A hierarchical graph convolution network for graph embedding learning of brain network and brain disorders prediction. Comput. Biol. Med. 2020, 127, 104096. [Google Scholar] [CrossRef] [PubMed]
  67. Kazi, A.; Shekarforoush, S.; Krishna, S.A.; Burwinkel, H.; Vivar, G.; Kortüm, K.; Ahmadi, S.A.; Albarqouni, S.; Navab, N. InceptionGCN: Receptive field aware graph convolutional network for disease prediction. In Proceedings of the 26th International Conference Information Processing in Medical Imaging, Hong Kong, China, 2–7 June 2019; pp. 73–85. [Google Scholar]
  68. Yao, D.; Liu, M.; Wang, M.; Lian, C.; Wei, J.; Sun, L.; Sui, J.; Shen, D. Triplet Graph Convolutional Network for Multi-scale Analysis of Functional Connectivity Using Functional MRI. In Proceedings of the International Workshop Graph Learning in Medical Imaging, Shenzhen, China, 17 October 2019; pp. 70–78. [Google Scholar]
  69. Anirudh, R.; Thiagarajan, J.J. Bootstrapping graph convolutional neural networks for autism spectrum disorder classification. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, Brighton, UK, 12–17 May 2019; pp. 3197–3201. [Google Scholar]
  70. Ktena, S.I.; Parisot, S.; Ferrante, E.; Rajchl, M.; Lee, M.; Glocker, B.; Rueckert, D. Metric learning with spectral graph convolutions on brain connectivity networks. NeuroImage 2018, 169, 431–442. [Google Scholar] [CrossRef]
  71. Ktena, S.; Parisot, S.; Ferrante, E.; Rajchl, M.; Lee, M.; Glocker, B.; Rueckert, D. Distance metric learning using graph convolutional networks: Application to functional brain networks. In Proceedings of the Medical Image Computing and Computer Assisted Intervention, Quebec City, QC, Canada, 11–13 September 2017; pp. 469–477. [Google Scholar]
  72. Parisot, S.; Ktena, S.I.; Ferrante, E.; Lee, M.; Moreno, R.G.; Glocker, B.; Rueckert, D. Spectral graph convolutions for population-based disease prediction. In Proceedings of the Medical Image Computing and Computer Assisted Intervention, Quebec City, QC, Canada, 11–13 September 2017; pp. 177–185. [Google Scholar]
  73. Bullmore, E.; Sporns, O. Complex brain networks: Graph theoretical analysis of structural and functional systems. Nat. Rev. Neurosci. 2009, 10, 186–198. [Google Scholar] [CrossRef]
  74. Bullmore, E.; Sporns, O. The economy of brain network organization. Nat. Rev. Neurosci. 2012, 13, 336–349. [Google Scholar] [CrossRef] [PubMed]
  75. Yan, C.G.; Chen, X.; Li, L.; Castellanos, F.X.; Bai, T.J.; Bo, Q.J.; Cao, J.; Chen, G.M.; Chen, N.X.; Chen, W.; et al. Reduced default mode network functional connectivity in patients with recurrent major depressive disorder. Proc. Natl. Acad. Sci. USA 2019, 116, 9078–9083. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  76. Van Essen, D.C.; Smith, S.M.; Barch, D.M.; Behrens, T.E.; Yacoub, E.; Ugurbil, K.; Wu-Minn HCP Consortium. The WU-Minn human connectome project: An overview. Neuroimage 2013, 80, 62–79. [Google Scholar] [CrossRef] [Green Version]
  77. Guo, Y.; Nejati, H.; Cheung, N.M. Deep neural networks on graph signals for brain imaging analysis. In Proceedings of the IEEE International Conference on Image Processing, Beijing, China, 17–20 September 2017; pp. 3295–3299. [Google Scholar]
  78. Mastrovito, D.; Hanson, C.; Hanson, S.J. Differences in atypical resting-state effective connectivity distinguish autism from schizophrenia. NeuroImage 2018, 18, 367–376. [Google Scholar] [CrossRef]
  79. Sherkatghanad, Z.; Akhondzadeh, M.; Salari, S.; Zomorodi-Moghadam, M.; Abdar, M.; Acharya, U.R.; Khosrowabadi, R.; Salari, V. Automated detection of autism spectrum disorder using a convolutional neural network. Front. Neurosci. 2020, 13, 1325. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  80. Ma, Y.; Wang, S.; Aggarwal, C.C.; Tang, J. Graph convolutional networks with eigenpooling. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, Anchorage, AK, USA, 4–8 August 2019; pp. 723–731. [Google Scholar]
  81. Jang, S.; Moon, S.E.; Lee, J.S. Brain Signal Classification via Learning Connectivity Structure. arXiv 2019, arXiv:1905.11678. [Google Scholar]
  82. Koelstra, S.; Muhl, C.; Soleymani, M.; Lee, J.S.; Yazdani, A.; Ebrahimi, T.; Pun, T.; Nijholt, A.; Patras, I. Deap: A database for emotion analysis; using physiological signals. IEEE Trans. Affect. Comput. 2011, 3, 18–31. [Google Scholar] [CrossRef] [Green Version]
  83. Jang, S.; Moon, S.E.; Lee, J.S. EEG-based video identification using graph signal modeling and graph convolutional neural network. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, Calgary, AB, Canada, 15–20 April 2018; pp. 3066–3070. [Google Scholar]
  84. Mathur, P.; Chakka, V.K. Graph Signal Processing of EEG signals for Detection of Epilepsy. In Proceedings of the International Conference on Signal Processing and Integrated Networks, Noida, India, 27–28 February 2020; pp. 839–843. [Google Scholar]
  85. Andrzejak, R.G.; Lehnertz, K.; Mormann, F.; Rieke, C.; David, P.; Elger, C.E. Indications of nonlinear deterministic and finite-dimensional structures in time series of brain electrical activity: Dependence on recording region and brain state. Phys. Rev. E 2001, 64, 061907. [Google Scholar] [CrossRef] [Green Version]
  86. Covert, I.C.; Krishnan, B.; Najm, I.; Zhan, J.; Shore, M.; Hixson, J.; Po, M.J. Temporal Graph Convolutional Networks for Automatic Seizure Detection. In Proceedings of the Machine Learning for Healthcare, Arbor, MI, USA, 8–10 August 2019; pp. 160–180. [Google Scholar]
  87. Ihle, M.; Feldwisch-Drentrup, H.; Teixeira, C.A.; Witon, A.; Schelter, B.; Timmer, J.; Schulze-Bonhage, A. EPILEPSIAE–A European epilepsy database. Comput. Methods Programs Biomed. 2012, 106, 127–138. [Google Scholar] [CrossRef]
  88. Wagh, N.; Varatharajah, Y. EEG-GCNN: Augmenting Electroencephalogram-based Neurological Disease Diagnosis using a Domain-guided Graph Convolutional Neural Network. In Proceedings of the Machine Learning for Healthcare, New York, NY, USA, 7–8 August 2020; pp. 367–378. [Google Scholar]
  89. Obeid, I.; Picone, J. The temple university hospital EEG data corpus. Front. Neurosci. 2016, 10, 196. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  90. Babayan, A.; Erbey, M.; Kumral, D.; Reinelt, J.D.; Reiter, A.M.; Röbbig, J.; Schaare, H.L.; Uhlig, M.; Anwander, A.; Bazin, P.L.; et al. A mind-brain-body dataset of MRI, EEG, cognition, emotion, and peripheral physiology in young and old adults. Sci. Data 2019, 6, 180308. [Google Scholar] [CrossRef] [PubMed]
  91. TIANCHI. Hefei Hi-Tech Cup ECG Intelligent Competition. 2019. Available online: https://tianchi.aliyun.com/competition/entrance/231754/information (accessed on 27 August 2020).
  92. Sun, M.; Isufi, E.; de Groot, N.M.; Hendriks, R.C. Graph-time spectral analysis for atrial fibrillation. Biomed. Signal Process. Control 2020, 59, 101915. [Google Scholar] [CrossRef] [Green Version]
  93. Yaksh, A.; van der Does, L.J.; Kik, C.; Knops, P.; Oei, F.B.; van de Woestijne, P.C.; Bekkers, J.A.; Bogers, A.J.; Allessie, M.A.; de Groot, N.M. A novel intra-operative, high-resolution atrial mapping approach. J. Interv. Card. Electrophysiol. 2015, 44, 221–225. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  94. O’Reilly, C.; Gosselin, N.; Carrier, J.; Nielsen, T. Montreal Archive of Sleep Studies: An open-access resource for instrument benchmarking and exploratory research. J. Sleep Res. 2014, 23, 628–635. [Google Scholar] [CrossRef] [PubMed]
  95. Zou, Y.; Donner, R.V.; Marwan, N.; Donges, J.F.; Kurths, J. Complex network approaches to nonlinear time series analysis. Phys. Rep. 2019, 787, 1–97. [Google Scholar] [CrossRef]
  96. Panossian, L.A.; Avidan, A.Y. Review of sleep disorders. Med. Clin. N. Am. 2009, 93, 407–425. [Google Scholar] [CrossRef]
  97. Ma, J.; Zhu, X.; Yang, D.; Chen, J.; Wu, G. Attention-Guided Deep Graph Neural Network for Longitudinal Alzheimer’s Disease Analysis. In Proceedings of the 23rd Medical Image Computing and Computer Assisted Intervention, Lima, Peru, 4–8 October 2020; pp. 387–396. [Google Scholar]
  98. Jack Jr, C.R.; Bernstein, M.A.; Fox, N.C.; Thompson, P.; Alexander, G.; Harvey, D.; Borowski, B.; Britson, P.J.L.; Whitwell, J.; Ward, C.; et al. The Alzheimer’s disease neuroimaging initiative (ADNI): MRI methods. J. Magn. Reson. Imaging 2008, 27, 685–691. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  99. Liu, J.; Tan, G.; Lan, W.; Wang, J. Identification of early mild cognitive impairment using multi-modal data and graph convolutional networks. BMC Bioinform. 2020, 21, 1–12. [Google Scholar] [CrossRef] [PubMed]
  100. Petersen, R.C.; Aisen, P.; Beckett, L.A.; Donohue, M.; Gamst, A.; Harvey, D.J.; Jack, C.; Jagust, W.; Shaw, L.; Toga, A.; et al. Alzheimer’s disease neuroimaging initiative (ADNI): Clinical characterization. Neurology 2010, 74, 201–209. [Google Scholar] [CrossRef] [Green Version]
  101. Marinescu, R.V.; Oxtoby, N.P.; Young, A.L.; Bron, E.E.; Toga, A.W.; Weiner, M.W.; Barkhof, F.; Fox, N.C.; Klein, S.; Alexander, D.C.; et al. Tadpole challenge: Prediction of longitudinal evolution in Alzheimer’s disease. arXiv 2018, arXiv:1805.03909. [Google Scholar]
  102. Yu, S.; Wang, S.; Xiao, X.; Cao, J.; Yue, G.; Liu, D.; Wang, T.; Xu, Y.; Lei, B. Multi-scale Enhanced Graph Convolutional Network for Early Mild Cognitive Impairment Detection. In Proceedings of the 23rd Medical Image Computing and Computer Assisted Intervention, Lima, Peru, 4–8 October 2020; pp. 228–237. [Google Scholar]
  103. Zhao, X.; Zhou, F.; Ou-Yang, L.; Wang, T.; Lei, B. Graph Convolutional Network Analysis for Mild Cognitive Impairment Prediction. In Proceedings of the IEEE International Symposium on Biomedical Imaging, Venice, Italy, 8–11 April 2019; pp. 1598–1601. [Google Scholar]
  104. Wee, C.Y.; Liu, C.; Lee, A.; Poh, J.S.; Ji, H.; Qiu, A.; Alzheimer’s Disease Neuroimaging Initiative. Cortical graph neural network for AD and MCI diagnosis and transfer learning across populations. NeuroImage 2019, 23, 101929. [Google Scholar] [CrossRef] [PubMed]
  105. Song, T.A.; Chowdhury, S.R.; Yang, F.; Jacobs, H.; El Fakhri, G.; Li, Q.; Johnson, K.; Dutta, J. Graph Convolutional Neural Networks For Alzheimer’s Disease Classification. In Proceedings of the IEEE International Symposium on Biomedical Imaging, Venice, Italy, 8–11 April 2019; pp. 414–417. [Google Scholar]
  106. Guo, J.; Qiu, W.; Li, X.; Zhao, X.; Guo, N.; Li, Q. Predicting Alzheimer’s Disease by Hierarchical Graph Convolution from Positron Emission Tomography Imaging. In Proceedings of the IEEE International Conference on Big Data, Los Angeles, CA, USA, 9–12 December 2019; pp. 5359–5363. [Google Scholar]
  107. Beckett, L.A.; Donohue, M.C.; Wang, C.; Aisen, P.; Harvey, D.J.; Saito, N.; Initiative, A.D.N. The Alzheimer’s Disease Neuroimaging Initiative phase 2: Increasing the length, breadth, and depth of our understanding. Alzheimer’s Dement. 2015, 11, 823–831. [Google Scholar] [CrossRef] [Green Version]
  108. Zhang, X.; He, L.; Chen, K.; Luo, Y.; Zhou, J.; Wang, F. Multi-view graph convolutional network and its applications on neuroimage analysis for parkinson’s disease. arXiv 2018, arXiv:1805.08801. [Google Scholar]
  109. Marek, K.; Jennings, D.; Lasch, S.; Siderowf, A.; Tanner, C.; Simuni, T.; Coffey, C.; Kieburtz, K.; Flagg, E.; Chowdhury, S.; et al. The parkinson progression marker initiative (PPMI). Prog. Neurobiol. 2011, 95, 629–635. [Google Scholar] [CrossRef]
  110. McDaniel, C.; Quinn, S. Developing a Graph Convolution-Based Analysis Pipeline for Multi-Modal Neuroimage Data: An Application to Parkinson’s Disease. In Proceedings of the Python in Science Conference, Austin, TX, USA, 8–14 July 2019; pp. 42–49. [Google Scholar]
  111. Wang, S.H.; Govindaraj, V.V.; Górriz, J.M.; Zhang, X.; Zhang, Y.D. Covid-19 classification by FGCNet with deep feature fusion from graph convolutional network and convolutional neural network. Inf. Fusion 2020, 67, 208–229. [Google Scholar] [CrossRef]
  112. Yu, X.; Lu, S.; Guo, L.; Wang, S.H.; Zhang, Y.D. ResGNet-C: A graph convolutional neural network for detection of COVID-19. Neurocomputing 2020, 452, 592–605. [Google Scholar] [CrossRef] [PubMed]
  113. Wang, S.H.; Govindaraj, V.; Gorriz, J.M.; Zhang, X.; Zhang, Y.D. Explainable diagnosis of secondary pulmonary tuberculosis by graph rank-based average pooling neural network. J. Ambient Intell. Humaniz. Comput. 2021, 1–14. [Google Scholar]
  114. Hou, D.; Zhao, Z.; Hu, S. Multi-Label Learning with Visual-Semantic Embedded Knowledge Graph for Diagnosis of Radiology Imaging. IEEE Access 2021, 9, 15720–15730. [Google Scholar] [CrossRef]
  115. Demner-Fushman, D.; Kohli, M.D.; Rosenman, M.B.; Shooshan, S.E.; Rodriguez, L.; Antani, S.; Thoma, G.R.; McDonald, C.J. Preparing a collection of radiology examinations for distribution and retrieval. J. Am. Med. Inform. Assoc. 2016, 23, 304–310. [Google Scholar] [CrossRef]
  116. Johnson, A.E.; Pollard, T.J.; Greenbaum, N.R.; Lungren, M.P.; Deng, C.Y.; Peng, Y.; Lu, Z.; Mark, R.G.; Berkowitz, S.J.; Horng, S. MIMIC-CXR-JPG, a large publicly available database of labeled chest radiographs. arXiv 2019, arXiv:1901.07042. [Google Scholar]
  117. Zhang, Y.; Wang, X.; Xu, Z.; Yu, Q.; Yuille, A.; Xu, D. When radiology report generation meets knowledge graph. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; Volume 34, pp. 12910–12917. [Google Scholar]
  118. Chen, B.; Li, J.; Lu, G.; Yu, H.; Zhang, D. Label Co-occurrence Learning with Graph Convolutional Networks for Multi-label Chest X-ray Image Classification. IEEE J. Biomed. Health Inform. 2020, 24, 2292–2302. [Google Scholar] [CrossRef] [PubMed]
  119. Wang, X.; Peng, Y.; Lu, L.; Lu, Z.; Bagheri, M.; Summers, R.M. Chestx-ray8: Hospital-scale chest x-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 2097–2106. [Google Scholar]
  120. Irvin, J.; Rajpurkar, P.; Ko, M.; Yu, Y.; Ciurea-Ilcus, S.; Chute, C.; Marklund, H.; Haghgoo, B.; Ball, R.; Shpanskaya, K.; et al. Chexpert: A large chest radiograph dataset with uncertainty labels and expert comparison. In Proceedings of the AAAI Conference on Artificial Intelligence, Honolulu, HI, USA, 27 January–1 February 2019; Volume 33, pp. 590–597. [Google Scholar]
  121. Zhang, Y.D.; Satapathy, S.C.; Guttery, D.S.; Górriz, J.M.; Wang, S.H. Improved Breast Cancer Classification Through Combining Graph Convolutional Network and Convolutional Neural Network. Inf. Process. Manag. 2021, 58, 102439. [Google Scholar] [CrossRef]
  122. Suckling, J. The Mammographic Image Analysis Society digital mammogram database. Digital Mammo 1994, 375–386. [Google Scholar]
  123. Du, H.; Feng, J.; Feng, M. Zoom in to where it matters: A hierarchical graph based model for mammogram analysis. arXiv 2019, arXiv:1912.07517. [Google Scholar]
  124. Moreira, I.C.; Amaral, I.; Domingues, I.; Cardoso, A.; Cardoso, M.J.; Cardoso, J.S. INbreast: Toward a full-field digital mammographic database. Acad. Radiol. 2012, 19, 236–248. [Google Scholar] [CrossRef] [Green Version]
  125. Yin, S.; Peng, Q.; Li, H.; Zhang, Z.; You, X.; Liu, H.; Fischer, K.; Furth, S.L.; Tasian, G.E.; Fan, Y. Multi-instance Deep Learning with Graph Convolutional Neural Networks for Diagnosis of Kidney Diseases Using Ultrasound Imaging. In UNSURE and CLIP in Conjunction with MICCAI; Springer: Shenzhen, China, 2019; pp. 146–154. [Google Scholar]
  126. Liu, M.; Duffy, B.A.; Sun, Z.; Toga, A.W.; Barkovich, A.J.; Xu, D.; Kim, H. Deep Learning of Cortical Surface Features Using Graph-Convolution Predicts Neonatal Brain Age and Neurodevelopmental Outcome. In Proceedings of the 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI), Iowa City, IA, USA, 3–7 April 2020; pp. 1335–1338. [Google Scholar]
  127. Chen, G.; Hong, Y.; Zhang, Y.; Kim, J.; Huynh, K.M.; Ma, J.; Lin, W.; Shen, D.; Yap, P.T.; Consortium, U.B.C.P.; et al. Estimating Tissue Microstructure with Undersampled Diffusion Data via Graph Convolutional Neural Networks. In Proceedings of the 23rd Medical Image Computing and Computer Assisted Intervention, Lima, Peru, 4–8 October 2020; pp. 280–290. [Google Scholar]
  128. Howell, B.R.; Styner, M.A.; Gao, W.; Yap, P.T.; Wang, L.; Baluyot, K.; Yacoub, E.; Chen, G.; Potts, T.; Salzwedel, A.; et al. The UNC/UMN baby connectome project (BCP): An overview of the study design and protocol development. NeuroImage 2019, 185, 891–905. [Google Scholar] [CrossRef]
  129. Kim, J.; Hong, Y.; Chen, G.; Lin, W.; Yap, P.T.; Shen, D. Graph-based deep learning for prediction of longitudinal infant diffusion MRI data. In Proceedings of the Medical Image Computing and Computer Assisted Intervention, Shenzhen, China, 13–17 October 2019; pp. 133–141. [Google Scholar]
  130. Hong, Y.; Kim, J.; Chen, G.; Lin, W.; Yap, P.T.; Shen, D. Longitudinal Prediction of Infant Diffusion MRI Data via Graph Convolutional Adversarial Networks. IEEE Trans. Med. Imaging 2019, 38, 2717–2725. [Google Scholar] [CrossRef]
  131. Sotiropoulos, S.N.; Jbabdi, S.; Xu, J.; Andersson, J.L.; Moeller, S.; Auerbach, E.J.; Glasser, M.F.; Hernandez, M.; Sapiro, G.; Jenkinson, M.; et al. Advances in diffusion MRI acquisition and processing in the Human Connectome Project. Neuroimage 2013, 80, 125–143. [Google Scholar] [CrossRef] [Green Version]
  132. Hong, Y.; Chen, G.; Yap, P.T.; Shen, D. Reconstructing high-quality diffusion MRI data from orthogonal slice-undersampled data using graph convolutional neural networks. In Proceedings of the Medical Image Computing and Computer Assisted Intervention, Shenzhen, China, 13–17 October 2019; pp. 529–537. [Google Scholar]
  133. Cheng, F.; Chen, Y.; Zong, X.; Lin, W.; Shen, D.; Yap, P.T. Acceleration of High-Resolution 3D MR Fingerprinting via a Graph Convolutional Network. In Proceedings of the 23rd Medical Image Computing and Computer Assisted Intervention, Lima, Peru, 4–8 October 2020; pp. 158–166. [Google Scholar]
  134. Tong, T.; Gao, Q.; Guerrero, R.; Ledig, C.; Chen, L.; Rueckert, D.; Alzheimer’s Disease Neuroimaging Initiative. A novel grading biomarker for the prediction of conversion from mild cognitive impairment to Alzheimer’s disease. IEEE Trans. Biomed. Eng. 2016, 64, 155–165. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  135. Ma, D.; Gulani, V.; Seiberlich, N.; Liu, K.; Sunshine, J.L.; Duerk, J.L.; Griswold, M.A. Magnetic resonance fingerprinting. Nature 2013, 495, 187–192. [Google Scholar] [CrossRef] [Green Version]
  136. Chen, X.; Pan, L. A survey of graph cuts/graph search based medical image segmentation. IEEE Rev. Biomed. Eng. 2018, 11, 112–124. [Google Scholar] [CrossRef]
  137. Wolterink, J.M.; Leiner, T.; Išgum, I. Graph convolutional networks for coronary artery segmentation in cardiac CT angiography. In Proceedings of the International Workshop Graph Learning in Medical Imaging, Shenzhen, China, 17 October 2019; pp. 62–69. [Google Scholar]
  138. Kirişli, H.; Schaap, M.; Metz, C.; Dharampal, A.; Meijboom, W.B.; Papadopoulou, S.L.; Dedic, A.; Nieman, K.; de Graaf, M.A.; Meijs, M.; et al. Standardized evaluation framework for evaluating coronary artery stenosis detection, stenosis quantification and lumen segmentation algorithms in computed tomography angiography. Med. Image Anal. 2013, 17, 859–876. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  139. Zhai, Z.; Staring, M.; Zhou, X.; Xie, Q.; Xiao, X.; Bakker, M.E.; Kroft, L.J.; Lelieveldt, B.P.; Boon, G.J.; Klok, F.A.; et al. Linking Convolutional Neural Networks with Graph Convolutional Networks: Application in Pulmonary Artery-Vein Separation. In Proceedings of the International Workshop Graph Learning in Medical Imaging, Shenzhen, China, 17 October 2019; pp. 36–43. [Google Scholar]
  140. Hu, Q.; Abràmoff, M.D.; Garvin, M.K. Automated separation of binary overlapping trees in low-contrast color retinal images. In Proceedings of the Medical Image Computing and Computer Assisted Intervention, Nagoya, Japan, 22–26 September 2013; pp. 436–443. [Google Scholar]
  141. Shin, S.Y.; Lee, S.; Yun, I.D.; Lee, K.M. Deep vessel segmentation by learning graphical connectivity. Med. Image Anal. 2019, 58, 101556. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  142. Staal, J.; Abràmoff, M.D.; Niemeijer, M.; Viergever, M.A.; Van Ginneken, B. Ridge-based vessel segmentation in color images of the retina. IEEE Trans. Med. Imaging 2004, 23, 501–509. [Google Scholar] [CrossRef] [PubMed]
  143. Hoover, A.; Kouznetsova, V.; Goldbaum, M. Locating blood vessels in retinal images by piecewise threshold probing of a matched filter response. IEEE Trans. Med. Imaging 2000, 19, 203–210. [Google Scholar] [CrossRef] [Green Version]
  144. Fraz, M.M.; Remagnino, P.; Hoppe, A.; Uyyanonvara, B.; Rudnicka, A.R.; Owen, C.G.; Barman, S.A. An ensemble classification-based approach applied to retinal blood vessel segmentation. IEEE Trans. Biomed. Eng. 2012, 59, 2538–2548. [Google Scholar] [CrossRef]
  145. Budai, A.; Bock, R.; Maier, A.; Hornegger, J.; Michelson, G. Robust vessel segmentation in fundus images. Int. J. Biomed. Imaging 2013, 2013, 154860. [Google Scholar] [CrossRef] [Green Version]
  146. Chen, L.; Hatsukami, T.; Hwang, J.N.; Yuan, C. Automated Intracranial Artery Labeling Using a Graph Neural Network and Hierarchical Refinement. In Proceedings of the 23rd Medical Image Computing and Computer Assisted Intervention, Lima, Peru, 4–8 October 2020; pp. 76–85. [Google Scholar]
  147. Chen, L.; Sun, J.; Hippe, D.S.; Balu, N.; Yuan, Q.; Yuan, I.; Zhao, X.; Li, R.; He, L.; Hatsukami, T.S.; et al. Quantitative assessment of the intracranial vasculature in an older adult population using iCafe. Neurobiol. Aging 2019, 79, 59–65. [Google Scholar] [CrossRef]
  148. Bullitt, E.; Zeng, D.; Gerig, G.; Aylward, S.; Joshi, S.; Smith, J.K.; Lin, W.; Ewend, M.G. Vessel tortuosity and brain tumor malignancy: A blinded study1. Acad. Radiol. 2005, 12, 1232–1240. [Google Scholar] [CrossRef] [Green Version]
  149. Yao, L.; Jiang, P.; Xue, Z.; Zhan, Y.; Wu, D.; Zhang, L.; Wang, Q.; Shi, F.; Shen, D. Graph Convolutional Network Based Point Cloud for Head and Neck Vessel Labeling. In Proceedings of the 11th International Workshop on Machine Learning in Medical Imaging, Lima, Peru, 4 October 2020; pp. 474–483. [Google Scholar]
  150. Lyu, I.; Bao, S.; Hao, L.; Yao, J.; Miller, J.A.; Voorhies, W.; Taylor, W.D.; Bunge, S.A.; Weiner, K.S.; Landman, B.A. Labeling Lateral Prefrontal Sulci using Spherical Data Augmentation and Context-aware Training. NeuroImage 2021, 229, 117758. [Google Scholar] [CrossRef]
  151. Wendelken, C.; Ferrer, E.; Ghetti, S.; Bailey, S.K.; Cutting, L.; Bunge, S.A. Frontoparietal structural connectivity in childhood predicts development of functional connectivity and reasoning ability: A large-scale longitudinal investigation. J. Neurosci. 2017, 37, 8549–8558. [Google Scholar] [CrossRef] [Green Version]
  152. Van Essen, D.C.; Ugurbil, K.; Auerbach, E.; Barch, D.; Behrens, T.E.; Bucholz, R.; Chang, A.; Chen, L.; Corbetta, M.; Curtiss, S.W.; et al. The Human Connectome Project: A data acquisition perspective. Neuroimage 2012, 62, 2222–2231. [Google Scholar] [CrossRef] [Green Version]
  153. Klein, A.; Ghosh, S.S.; Bao, F.S.; Giard, J.; Häme, Y.; Stavsky, E.; Lee, N.; Rossa, B.; Reuter, M.; Chaibub Neto, E.; et al. Mindboggling morphometry of human brains. PLoS Comput. Biol. 2017, 13, e1005350. [Google Scholar] [CrossRef]
  154. He, R.; Gopinath, K.; Desrosiers, C.; Lombaert, H. Spectral graph transformer networks for brain surface parcellation. In Proceedings of the 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI), Iowa City, IA, USA, 3–7 April 2020; pp. 372–376. [Google Scholar]
  155. Gopinath, K.; Desrosiers, C.; Lombaert, H. Graph convolutions on spectral embeddings for cortical surface parcellation. Med. Image Anal. 2019, 54, 297–305. [Google Scholar] [CrossRef] [PubMed]
  156. Parvathaneni, P.; Bao, S.; Nath, V.; Woodward, N.D.; Claassen, D.O.; Cascio, C.J.; Zald, D.H.; Huo, Y.; Landman, B.A.; Lyu, I. Cortical Surface Parcellation using Spherical Convolutional Neural Networks. In Proceedings of the Medical Image Computing and Computer Assisted Intervention, Shenzhen, China, 13–17 October 2019; pp. 501–509. [Google Scholar]
  157. Cucurull, G.; Wagstyl, K.; Casanova, A.; Veličković, P.; Jakobsen, E.; Drozdzal, M.; Romero, A.; Evans, A.; Bengio, Y. Convolutional neural networks for mesh-based parcellation of the cerebral cortex. In Proceedings of the Medical Imaging with Deep Learning, Amsterdam, The Netherlands, 4–6 July 2018. [Google Scholar]
  158. Jakobsen, E.; Böttger, J.; Bellec, P.; Geyer, S.; Rübsamen, R.; Petrides, M.; Margulies, D.S. Subdivision of Broca’s region based on individual-level functional connectivity. Eur. J. Neurosci. 2016, 43, 561–571. [Google Scholar] [CrossRef] [PubMed]
  159. Pedersen, J.H.; Ashraf, H.; Dirksen, A.; Bach, K.; Hansen, H.; Toennesen, P.; Thorsen, H.; Brodersen, J.; Skov, B.G.; Døssing, M.; et al. The Danish randomized lung cancer CT screening trial—overall design and results of the prevalence round. J. Thorac. Oncol. 2009, 4, 608–614. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  160. Selvan, R.; Kipf, T.; Welling, M.; Pedersen, J.H.; Petersen, J.; de Bruijne, M. Extraction of airways using graph neural networks. In Proceedings of the Medical Imaging with Deep Learning, Amsterdam, The Netherlands, 4–6 July 2018. [Google Scholar]
  161. Yan, Z.; Youyong, K.; Jiasong, W.; Coatrieux, G.; Huazhong, S. Brain Tissue Segmentation based on Graph Convolutional Networks. In Proceedings of the IEEE International Conference on Image Processing, Taipei, Taiwan, 22–25 September 2019; pp. 1470–1474. [Google Scholar]
  162. Kwan, R.S.; Evans, A.C.; Pike, G.B. MRI simulation-based evaluation of image-processing and classification methods. IEEE Trans. Med. Imaging 1999, 18, 1085–1097. [Google Scholar] [CrossRef]
  163. Cocosco, C.A.; Kollokian, V.; Kwan, R.K.S.; Pike, G.B.; Evans, A.C. Brainweb: Online interface to a 3D MRI simulated brain database. NeuroImage 1997, 5, 425. [Google Scholar]
  164. Meng, Y.; Meng, W.; Gao, D.; Zhao, Y.; Yang, X.; Huang, X.; Zheng, Y. Regression of Instance Boundary by Aggregated CNN and GCN. In Proceedings of the European Conference on Computer Vision, online, 23–28 August 2020; pp. 190–207. [Google Scholar]
  165. Meng, Y.; Wei, M.; Gao, D.; Zhao, Y.; Yang, X.; Huang, X.; Zheng, Y. CNN-GCN aggregation enabled boundary regression for biomedical image segmentation. In Proceedings of the 23rd Medical Image Computing and Computer Assisted Intervention, Lima, Peru, 4–8 October 2020; pp. 352–362. [Google Scholar]
  166. Orlando, J.I.; Fu, H.; Breda, J.B.; van Keer, K.; Bathula, D.R.; Diaz-Pinto, A.; Fang, R.; Heng, P.A.; Kim, J.; Lee, J.; et al. Refuge challenge: A unified framework for evaluating automated methods for glaucoma assessment from fundus photographs. Med. Image Anal. 2020, 59, 101570. [Google Scholar] [CrossRef]
  167. Sivaswamy, J.; Krishnadas, S.; Joshi, G.D.; Jain, M.; Tabish, A.U.S. Drishti-gs: Retinal image dataset for optic nerve head (onh) segmentation. In Proceedings of the IEEE International Symposium on Biomedical Imaging, Beijing, China, 29 April–2 May 2014; pp. 53–56. [Google Scholar]
  168. Zhang, Z.; Yin, F.S.; Liu, J.; Wong, W.K.; Tan, N.M.; Lee, B.H.; Cheng, J.; Wong, T.Y. Origa-light: An online retinal fundus image database for glaucoma analysis and research. In Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Buenos Aires, Argentina, 1–4 September 2010; pp. 3065–3068. [Google Scholar]
  169. Almazroa, A.; Alodhayb, S.; Osman, E.; Ramadan, E.; Hummadi, M.; Dlaim, M.; Alkatee, M.; Raahemifar, K.; Lakshminarayanan, V. Retinal fundus images for glaucoma analysis: The RIGA dataset. In Proceedings of the Medical Imaging 2018: Imaging Informatics for Healthcare, Research, and Applications, Houston, TX, USA, 13–15 February 2018; Volume 10579, p. 105790B. [Google Scholar]
  170. Fumero, F.; Alayón, S.; Sanchez, J.L.; Sigut, J.; Gonzalez-Hernandez, M. RIM-ONE: An open retinal image database for optic nerve evaluation. In Proceedings of the 2011 24th International Symposium on Computer-Based Medical Systems (CBMS), Bristol, UK, 27–30 June 2011; pp. 1–6. [Google Scholar]
  171. van den Heuvel, T.L.; de Bruijn, D.; de Korte, C.L.; Ginneken, B.v. Automated measurement of fetal head circumference using 2D ultrasound images. PLoS ONE 2018, 13, e0200412. [Google Scholar] [CrossRef]
  172. Soberanis-Mukul, R.D.; Navab, N.; Albarqouni, S. An Uncertainty-Driven GCN Refinement Strategy for Organ Segmentation. arXiv 2020, arXiv:2012.03352. [Google Scholar]
  173. Soberanis-Mukul, R.D.; Navab, N.; Albarqouni, S. Uncertainty-based graph convolutional networks for organ segmentation refinement. arXiv 2020, arXiv:1906.02191. [Google Scholar]
  174. Clark, K.; Vendt, B.; Smith, K.; Freymann, J.; Kirby, J.; Koppel, P.; Moore, S.; Phillips, S.; Maffitt, D.; Pringle, M.; et al. The Cancer Imaging Archive (TCIA): Maintaining and operating a public information repository. J. Digit. Imaging 2013, 26, 1045–1057. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  175. Simpson, A.L.; Antonelli, M.; Bakas, S.; Bilello, M.; Farahani, K.; Van Ginneken, B.; Kopp-Schneider, A.; Landman, B.A.; Litjens, G.; Menze, B.; et al. A large annotated medical image dataset for the development and evaluation of segmentation algorithms. arXiv 2019, arXiv:1902.09063. [Google Scholar]
  176. Litjens, G.; Toth, R.; van de Ven, W.; Hoeks, C.; Kerkstra, S.; van Ginneken, B.; Vincent, G.; Guillard, G.; Birbeck, N.; Zhang, J.; et al. Evaluation of prostate segmentation algorithms for MRI: The PROMISE12 challenge. Med. Image Anal. 2014, 18, 359–373. [Google Scholar] [CrossRef] [Green Version]
  177. Bloch, N.; Madabhushi, A.; Huisman, H.; Freymann, J.; Kirby, J.; Grauer, M.; Enquobahrie, A.; Jaffe, C.; Clarke, L.; Farahani, K. NCI-ISBI 2013 challenge: Automated segmentation of prostate structures. Cancer Imaging Arch. 2015, 370. [Google Scholar] [CrossRef]
  178. Chao, C.H.; Zhu, Z.; Guo, D.; Yan, K.; Ho, T.Y.; Cai, J.; Harrison, A.P.; Ye, X.; Xiao, J.; Yuille, A.; et al. Lymph Node Gross Tumor Volume Detection in Oncology Imaging via Relationship Learning Using Graph Neural Network. In Proceedings of the 23rd Medical Image Computing and Computer Assisted Intervention, Lima, Peru, 4–8 October 2020; pp. 772–782. [Google Scholar]
  179. Gao, H.; Ji, S. Graph u-nets. In Proceedings of the 36th International Conference on Machine Learning, Long Beach, CA, USA, 9–15 June 2019; pp. 2083–2092. [Google Scholar]
  180. Battaglia, P.W.; Hamrick, J.B.; Bapst, V.; Sanchez-Gonzalez, A.; Zambaldi, V.; Malinowski, M.; Tacchetti, A.; Raposo, D.; Santoro, A.; Faulkner, R.; et al. Relational inductive biases, deep learning, and graph networks. arXiv 2018, arXiv:1806.01261. [Google Scholar]
  181. Defferrard, M.; Milani, M.; Gusset, F.; Perraudin, N. DeepSphere: A graph-based spherical CNN. In Proceedings of the International Conference on Learning Representations (ICLR), Addis Ababa, Ethiopia, 26–30 April 2020. [Google Scholar]
  182. Jiang, C.; Huang, J.; Kashinath, K.; Marcus, P.; Niessner, M. Spherical CNNs on unstructured grids. In Proceedings of the International Conference on Learning Representations (ICLR), New Orleans, LA, USA, 6–9 May 2019. [Google Scholar]
  183. Dias, P.A.; Medeiros, H. Semantic segmentation refinement by monte carlo region growing of high confidence detections. In Proceedings of the Asian Conference on Computer Vision, Perth, Australia, 2–6 December 2018; pp. 131–146. [Google Scholar]
  184. Ling, H.; Gao, J.; Kar, A.; Chen, W.; Fidler, S. Fast interactive object annotation with curve-gcn. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 16–20 June 2019; pp. 5257–5266. [Google Scholar]
  185. Li, X.; Qian, B.; Wei, J.; Li, A.; Liu, X.; Zheng, Q. Classify EEG and Reveal Latent Graph Structure with Spatio-Temporal Graph Convolutional Neural Network. In Proceedings of the IEEE International Conference on Data Mining, Beijing, China, 8–11 November 2019; pp. 389–398. [Google Scholar]
  186. Xie, X.; Niu, J.; Liu, X.; Chen, Z.; Tang, S. A Survey on Domain Knowledge Powered Deep Learning for Medical Image Analysis. arXiv 2020, arXiv:2004.12150. [Google Scholar]
  187. Chen, Z.M.; Wei, X.S.; Wang, P.; Guo, Y. Multi-label image recognition with graph convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 16–20 June 2019; pp. 5177–5186. [Google Scholar]
  188. Wang, H.; Wang, J.; Wang, J.; Zhao, M.; Zhang, W.; Zhang, F.; Xie, X.; Guo, M. Graphgan: Graph representation learning with generative adversarial nets. In Proceedings of the AAAI Conference on Artificial Intelligence, New Orleans, LA, USA, 2–7 February 2018; Volume 32. [Google Scholar]
  189. You, J.; Ying, R.; Ren, X.; Hamilton, W.; Leskovec, J. Graphrnn: Generating realistic graphs with deep auto-regressive models. In Proceedings of the 35th International Conference on Machine Learning, Stockholm, Sweden, 10–15 July 2018; pp. 5708–5717. [Google Scholar]
  190. Liao, R.; Li, Y.; Song, Y.; Wang, S.; Nash, C.; Hamilton, W.L.; Duvenaud, D.; Urtasun, R.; Zemel, R.S. Efficient graph generation with graph recurrent attention networks. arXiv 2019, arXiv:1910.00760. [Google Scholar]
  191. Ying, R.; He, R.; Chen, K.; Eksombatchai, P.; Hamilton, W.L.; Leskovec, J. Graph convolutional neural networks for web-scale recommender systems. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, London, UK, 19–23 August 2018; pp. 974–983. [Google Scholar]
  192. Chen, J.; Ma, T.; Xiao, C. FastGCN: Fast Learning with Graph Convolutional Networks via Importance Sampling. In Proceedings of the International Conference on Learning Representations (ICLR), Vancouver, BC, Canada, 30 April–3 May 2018. [Google Scholar]
  193. Chen, J.; Zhu, J.; Song, L. Stochastic training of graph convolutional networks with variance reduction. arXiv 2017, arXiv:1710.10568. [Google Scholar]
  194. Chiang, W.L.; Liu, X.; Si, S.; Li, Y.; Bengio, S.; Hsieh, C.J. Cluster-gcn: An efficient algorithm for training deep and large graph convolutional networks. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, Anchorage, AK, USA, 4–8 August 2019; pp. 257–266. [Google Scholar]
  195. You, Y.; Chen, T.; Wang, Z.; Shen, Y. L2-gcn: Layer-wise and learned efficient training of graph convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 2127–2135. [Google Scholar]
  196. You, Y.; Chen, T.; Wang, Z.; Shen, Y. When does self-supervision help graph convolutional networks? In Proceedings of the 37th International Conference on Machine Learning, Vienna, Austria, 13–18 July 2020; pp. 10871–10880. [Google Scholar]
  197. Ma, X.; Zhang, T.; Xu, C. Gcan: Graph convolutional adversarial network for unsupervised domain adaptation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 16–20 June 2019; pp. 8266–8276. [Google Scholar]
  198. Hassani, K.; Khasahmadi, A.H. Contrastive multi-view representation learning on graphs. In Proceedings of the 37th International Conference on Machine Learning, Vienna, Austria, 13–18 July 2020; pp. 4116–4126. [Google Scholar]
  199. Velickovic, P.; Fedus, W.; Hamilton, W.L.; Liò, P.; Bengio, Y.; Hjelm, R.D. Deep Graph Infomax. In Proceedings of the International Conference on Learning Representations (ICLR), New Orleans, LA, USA, 6–9 May 2019. [Google Scholar]
  200. Sun, F.Y.; Hoffmann, J.; Verma, V.; Tang, J. InfoGraph: Unsupervised and semi-supervised graph-level representation learning via mutual information maximization. In Proceedings of the International Conference on Learning Representations (ICLR), Addis Ababa, Ethiopia, 26–30 April 2020. [Google Scholar]
  201. Coronato, A.; Naeem, M.; De Pietro, G.; Paragliola, G. Reinforcement learning for intelligent healthcare applications: A survey. Artif. Intell. Med. 2020, 109, 101964. [Google Scholar] [CrossRef]
  202. Jiang, J.; Dun, C.; Huang, T.; Lu, Z. Graph convolutional reinforcement learning. In Proceedings of the International Conference on Learning Representations (ICLR), Addis Ababa, Ethiopia, 26–30 April 2020. [Google Scholar]
  203. Lee, J.B.; Rossi, R.; Kong, X. Graph classification using structural attention. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, London, UK, 19–23 August 2018; pp. 1666–1674. [Google Scholar]
  204. Li, Q.; Han, Z.; Wu, X.M. Deeper insights into graph convolutional networks for semi-supervised learning. In Proceedings of the AAAI Conference on Artificial Intelligence, New Orleans, LA, USA, 2–7 February 2018; Volume 32. [Google Scholar]
  205. Li, G.; Muller, M.; Thabet, A.; Ghanem, B. Deepgcns: Can gcns go as deep as cnns? In Proceedings of the IEEE International Conference on Computer Vision, Seoul, Korea, 27 October–2 November 2019; pp. 9267–9276. [Google Scholar]
  206. Rossi, E.; Frasca, F.; Chamberlain, B.; Eynard, D.; Bronstein, M.; Monti, F. Sign: Scalable inception graph neural networks. arXiv 2020, arXiv:2004.11198. [Google Scholar]
  207. Tailor, S.A.; Opolka, F.L.; Liò, P.; Lane, N.D. Adaptive Filters and Aggregator Fusion for Efficient Graph Convolutions. arXiv 2021, arXiv:2104.01481. [Google Scholar]
  208. Zhu, Z.; Xu, S.; Tang, J.; Qu, M. Graphvite: A high-performance cpu-gpu hybrid system for node embedding. In Proceedings of the World Wide Web Conference, San Francisco, CA, USA, 13–17 May 2019; pp. 2494–2504. [Google Scholar]
  209. Akyildiz, T.A.; Aljundi, A.A.; Kaya, K. Gosh: Embedding big graphs on small hardware. In Proceedings of the International Conference on Parallel Processing, London, UK, 19–20 August 2020; pp. 1–11. [Google Scholar]
  210. Abadal, S.; Jain, A.; Guirado, R.; López-Alonso, J.; Alarcón, E. Computing Graph Neural Networks: A Survey from Algorithms to Accelerators. arXiv 2020, arXiv:2010.00130. [Google Scholar]
  211. Wang, M.; Zheng, D.; Ye, Z.; Gan, Q.; Li, M.; Song, X.; Zhou, J.; Ma, C.; Yu, L.; Gai, Y.; et al. Deep graph library: A graph-centric, highly-performant package for graph neural networks. arXiv 2019, arXiv:1909.01315. [Google Scholar]
  212. Dissanayake, T.; Fernando, T.; Denman, S.; Ghaemmaghami, H.; Sridharan, S.; Fookes, C. Domain Generalization in Biosignal Classification. IEEE Trans. Biomed. Eng. 2020, 68, 1978–1989. [Google Scholar] [CrossRef]
  213. Li, Y.; Zheng, W.; Zong, Y.; Cui, Z.; Zhang, T.; Zhou, X. A bi-hemisphere domain adversarial neural network model for EEG emotion recognition. IEEE Trans. Affect. Comput. 2018, 12, 494–504. [Google Scholar] [CrossRef]
  214. Zhong, P.; Wang, D.; Miao, C. EEG-Based Emotion Recognition Using Regularized Graph Neural Networks. arXiv 2020, arXiv:1907.07835. [Google Scholar] [CrossRef]
  215. Mahajan, K.; Sharma, M.; Vig, L. Meta-dermdiagnosis: Few-shot skin disease identification using meta-learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 730–731. [Google Scholar]
  216. Bontonou, M.; Farrugia, N.; Gripon, V. Few-shot Learning for Decoding Brain Signals. arXiv 2020, arXiv:2010.12500. [Google Scholar]
  217. Kim, J.; Kim, T.; Kim, S.; Yoo, C.D. Edge-labeling graph neural network for few-shot learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 16–20 June 2019; pp. 11–20. [Google Scholar]
  218. Caceres, C.A.; Roos, M.J.; Rupp, K.M.; Milsap, G.; Crone, N.E.; Wolmetz, M.E.; Ratto, C.R. Feature selection methods for zero-shot learning of neural activity. Front. Neuroinform. 2017, 11, 41. [Google Scholar] [CrossRef] [PubMed]
  219. Duan, L.; Li, J.; Ji, H.; Pang, Z.; Zheng, X.; Lu, R.; Li, M.; Zhuang, J. Zero-Shot Learning for EEG Classification in Motor Imagery-Based BCI System. IEEE Trans. Neural Syst. Rehabil. Eng. 2020, 28, 2411–2419. [Google Scholar] [CrossRef]
  220. Kampffmeyer, M.; Chen, Y.; Liang, X.; Wang, H.; Zhang, Y.; Xing, E.P. Rethinking knowledge graph propagation for zero-shot learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 16–20 June 2019; pp. 11487–11496. [Google Scholar]
  221. Markus, A.F.; Kors, J.A.; Rijnbeek, P.R. The role of explainability in creating trustworthy artificial intelligence for health care: A comprehensive survey of the terminology, design choices, and evaluation strategies. J. Biomed. Inform. 2020, 113, 103655. [Google Scholar] [CrossRef]
  222. Du, M.; Liu, N.; Hu, X. Techniques for interpretable machine learning. Commun. ACM 2019, 63, 68–77. [Google Scholar] [CrossRef] [Green Version]
  223. Ying, R.; Bourgeois, D.; You, J.; Zitnik, M.; Leskovec, J. Gnnexplainer: Generating explanations for graph neural networks. In Proceedings of the Advances in Neural Information Processing Systems, Vancouver, BC, Canada, 8–14 December 2019. [Google Scholar]
  224. Pope, P.E.; Kolouri, S.; Rostami, M.; Martin, C.E.; Hoffmann, H. Explainability methods for graph convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 16–20 June 2019; pp. 10772–10781. [Google Scholar]
  225. Baldassarre, F.; Azizpour, H. Explainability techniques for graph convolutional networks. In Proceedings of the 36th International Conference on Machine Learning, Long Beach, CA, USA, 9–15 June 2019. [Google Scholar]
  226. Selvaraju, R.R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; Batra, D. Grad-cam: Visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 618–626. [Google Scholar]
  227. Chattopadhay, A.; Sarkar, A.; Howlader, P.; Balasubramanian, V.N. Grad-cam++: Generalized gradient-based visual explanations for deep convolutional networks. In Proceedings of the Winter Conference on Applications of Computer Vision, Lake Tahoe, NV, USA, 12–15 March 2018; pp. 839–847. [Google Scholar]
  228. Schwarzenberg, R.; Hübner, M.; Harbecke, D.; Alt, C.; Hennig, L. Layerwise Relevance Visualization in Convolutional Text Graph Classifiers. arXiv 2019, arXiv:1909.10911. [Google Scholar]
  229. Vu, M.N.; Thai, M.T. Pgm-explainer: Probabilistic graphical model explanations for graph neural networks. In Proceedings of the Advances in Neural Information Processing Systems, online, 6–12 December 2020. [Google Scholar]
  230. Yuan, H.; Yu, H.; Wang, J.; Li, K.; Ji, S. On explainability of graph neural networks via subgraph explorations. arXiv 2021, arXiv:2102.05152. [Google Scholar]
  231. Liu, S.; Ostadabbas, S. Seeing Under the Cover: A Physics Guided Learning Approach for In-bed Pose Estimation. In Proceedings of the Medical Image Computing and Computer Assisted Intervention, Shenzhen, China, 13–17 October 2019; pp. 236–245. [Google Scholar]
  232. Martinez, M.; Ahmedt-Aristizabal, D.; Väth, T.; Fookes, C.; Benz, A.; Stiefelhagen, R. A Vision-based System for Breathing Disorder Identification: A Deep Learning Perspective. In Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Berlin, Germany, 23–27 July 2019; pp. 6529–6532. [Google Scholar]
  233. Ahmedt-Aristizabal, D.; Denman, S.; Nguyen, K.; Sridharan, S.; Dionisio, S.; Fookes, C. Understanding Patients’ Behavior: Vision-Based Analysis of Seizure Disorders. IEEE J. Biomed. Health Inform. 2019, 23, 2583–2591. [Google Scholar] [CrossRef]
  234. Zhou, J.; Zhang, X.; Liu, Y.; Lan, X. Facial Expression Recognition Using Spatial-Temporal Semantic Graph Network. In Proceedings of the IEEE International Conference on Image Processing, online, 25–28 October 2020; pp. 1961–1965. [Google Scholar]
  235. Liu, Z.; Dong, J.; Zhang, C.; Wang, L.; Dang, J. Relation modeling with graph convolutional networks for facial action unit detection. In Proceedings of the International MultiMedia Modeling Conference, Daejeon, Korea, 5–8 January 2020; pp. 489–501. [Google Scholar]
  236. Lo, L.; Xie, H.X.; Shuai, H.H.; Cheng, W.H. MER-GCN: Micro-Expression Recognition Based on Relation Modeling with Graph Convolutional Networks. In Proceedings of the International Conference on Multimedia Information Processing, Shenzhen, China, 9–11 April 2020; pp. 79–84. [Google Scholar]
  237. Wang, J.; Long, X.; Gao, Y.; Ding, E.; Wen, S. Graph-pcnn: Two stage human pose estimation with graph pose refinement. In Proceedings of the European Conference on Computer Vision, online, 23–28 August 2020; pp. 492–508. [Google Scholar]
  238. Zhao, L.; Peng, X.; Tian, Y.; Kapadia, M.; Metaxas, D.N. Semantic graph convolutional networks for 3d human pose regression. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 16–20 June 2019; pp. 3425–3435. [Google Scholar]
  239. Jin, S.; Liu, W.; Xie, E.; Wang, W.; Qian, C.; Ouyang, W.; Luo, P. Differentiable hierarchical graph grouping for multi-person pose estimation. In Proceedings of the European Conference on Computer Vision, online, 23–28 August 2020; pp. 718–734. [Google Scholar]
  240. Si, C.; Jing, Y.; Wang, W.; Wang, L.; Tan, T. Skeleton-based action recognition with spatial reasoning and temporal stack learning. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 103–118. [Google Scholar]
  241. Shi, L.; Zhang, Y.; Cheng, J.; Lu, H. Two-stream adaptive graph convolutional networks for skeleton-based action recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 16–20 June 2019; pp. 12026–12035. [Google Scholar]
  242. Li, M.; Chen, S.; Chen, X.; Zhang, Y.; Wang, Y.; Tian, Q. Actional-structural graph convolutional networks for skeleton-based action recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 16–20 June 2019; pp. 3595–3603. [Google Scholar]
  243. Arifoglu, D.; Charif, H.N.; Bouchachia, A. Detecting indicators of cognitive impairment via Graph Convolutional Networks. Eng. Appl. Artif. Intell. 2020, 89, 103401. [Google Scholar] [CrossRef]
  244. Guo, R.; Shao, X.; Zhang, C.; Qian, X. Sparse Adaptive Graph Convolutional Network for Leg Agility Assessment in Parkinson’s disease. IEEE Trans. Neural Syst. Rehabil. Eng. 2020. [Google Scholar] [CrossRef]
  245. Guo, R.; Shao, X.; Zhang, C.; Qian, X. Multi-scale Sparse Graph Convolutional Network for the Assessment of Parkinsonian Gait. IEEE Trans. Multimed. 2021, 28, 2837–2848. [Google Scholar]
  246. Tsai, M.F.; Chen, C.H. Spatial Temporal Variation Graph Convolutional Networks (STV-GCN) for Skeleton-Based Emotional Action Recognition. IEEE Access 2021, 9, 13870–13877. [Google Scholar] [CrossRef]
Figure 1. Traditional 2D grid representation and graph-based representation (the neighbours of a node are unordered and variable in size). (A,B) Brain graph of fMRI and EEG data for brain responses and emotion analysis, respectively; (C) DMRI sampling represented by a graph (DMRI brain reconstruction); (D) Graph-like representation for organ segmentation (CT, pulmonary airway). Image adapted from [4,5,6,7,8].
Figure 2. Example of a directed graph (Left) and the corresponding adjacency matrix (Right). Image adapted from [6].
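For readers unfamiliar with the encoding in Figure 2, the following minimal sketch (Python/NumPy; the edge list is hypothetical) builds the adjacency matrix of a small directed graph, with A[i, j] = 1 whenever an edge runs from node i to node j:

```python
import numpy as np

# Hypothetical directed graph with 4 nodes; each pair (i, j) is an edge i -> j.
edges = [(0, 1), (1, 2), (2, 0), (2, 3)]
num_nodes = 4

# Build the (non-symmetric) adjacency matrix: A[i, j] = 1 iff edge i -> j exists.
A = np.zeros((num_nodes, num_nodes), dtype=int)
for i, j in edges:
    A[i, j] = 1

print(A)
# [[0 1 0 0]
#  [0 0 1 0]
#  [1 0 0 1]
#  [0 0 0 0]]
```

Because the graph is directed, A is not symmetric; for undirected graphs the matrix would satisfy A = A^T.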
Figure 3. Proposed graph-based approaches for modeling with rs-fMRI data. Image taken from [40].
Figure 4. Estimation of a single-subject connectivity matrix and labelled graph representation. Pearson’s correlation coefficient is used to obtain a functional connectivity matrix from the raw fMRI time series. Image taken from [70].
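As a minimal sketch of the construction in Figure 4, assuming the fMRI signal has already been parcellated into ROI time series (the array shapes and the 0.3 threshold below are illustrative assumptions, not values taken from [70]):

```python
import numpy as np

# Hypothetical input: fMRI time series for 90 ROIs, 200 time points each.
rng = np.random.default_rng(0)
time_series = rng.standard_normal((90, 200))

# Pearson correlation between every pair of ROI time series yields
# the 90 x 90 functional connectivity matrix (rows are variables).
connectivity = np.corrcoef(time_series)

# One common way to derive a graph: keep only strong correlations as
# edges and remove self-connections (the threshold is illustrative).
adjacency = (np.abs(connectivity) > 0.3).astype(float)
np.fill_diagonal(adjacency, 0.0)
```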
Figure 5. Proposed population graph-based approaches for subject classification. Image taken from [40].
Figure 6. Features are extracted from EEG signals to construct a graph-based architecture and classify mental states. Image adapted from [83].
Figure 7. A GCN-based label co-occurrence learning framework to explore potential abnormalities with the guidance of semantic information, including pathology co-occurrence and interdependency. Image adapted from [118].
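The co-occurrence guidance illustrated in Figure 7 is commonly encoded as a conditional probability matrix estimated from training annotations; the sketch below follows that general recipe with a hypothetical multi-label matrix, rather than reproducing the exact construction of [118]:

```python
import numpy as np

# Hypothetical training annotations: 6 images x 4 pathology labels (1 = present).
labels = np.array([[1, 1, 0, 0],
                   [1, 0, 1, 0],
                   [0, 1, 0, 1],
                   [1, 1, 0, 0],
                   [0, 0, 1, 1],
                   [1, 0, 0, 0]], dtype=float)

# counts[i, j] = number of images in which labels i and j both occur.
counts = labels.T @ labels

# Conditional probability P(label j | label i), a typical choice of
# edge weight between label nodes in the GCN.
occurrences = np.diag(counts)
cooccurrence = counts / occurrences[:, None]
```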
Figure 8. Illustration of the framework that combines CNN and GCN features. Bottom row shows the CNN pipeline to extract image-based features while the top row illustrates the GCN pipeline to learn the interactions. Image adapted from [121].
Figure 9. Individual GCNs model the axial, coronal and sagittal scan directions. A refinement GCN is used to generate the proposed super-resolution reconstruction. Image adapted from [132].
Figure 10. Adversarial graph domain adaptation for segmentation. A cortical brain graph is mapped to a spectral domain. The source and target domain are aligned to a reference template. A GCN segmentator learns to predict a generic cortical parcel label for each domain. Finally, the discriminator classifies the segmentator predictions. Image adapted from [23].
Figure 11. Spherical UNet architecture. The output surface is a cortical parcellation map or a cortical attribute map, and the blue boxes reflect feature maps in spherical space. Note that all spherical surfaces in this figure have the same real size. Image adapted from [19].
Figure 12. Schematic of a UNet-GNN and illustration of irregular node connectivity for a given voxel in the initial graph. Image adapted from [41].
Figure 13. Supervoxels are generated from the brain MRI volume. A graph is constructed from these supervoxels using k-nearest neighbours (KNN). A GCN is employed to classify supervoxels into different tissue types. Image adapted from [161].
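A minimal sketch of the KNN graph-construction step in Figure 13, assuming each supervoxel has already been summarised by a feature vector (the feature dimensionality and k = 8 are illustrative assumptions; scikit-learn's kneighbors_graph is one convenient way to obtain the sparse adjacency):

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

# Hypothetical features: 500 supervoxels, each described by a
# 16-dimensional descriptor (e.g., mean intensity and texture statistics).
rng = np.random.default_rng(0)
features = rng.standard_normal((500, 16))

# Connect each supervoxel to its k nearest neighbours in feature space;
# the result is a sparse adjacency matrix suitable as GCN input.
adjacency = kneighbors_graph(features, n_neighbors=8, mode="connectivity")

# Symmetrise so that edges are undirected.
adjacency = adjacency.maximum(adjacency.T)
```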
Figure 14. To construct the node representation, the model extracts CNN appearance features and spatial priors for each candidate. Each GTV candidate corresponds to a node in the graph and the GCN is used to exchange information. Image adapted from [178].