1 Introduction

Detecting objects at vastly different scales from images is a fundamental challenge in computer vision [1]. One traditional way to address this issue is to build feature pyramids directly upon image pyramids. Despite its inefficiency, this kind of approach has been applied to object detection and many other tasks along with hand-engineered features [7, 12].

We focus on detecting objects with deep ConvNets in this paper. Aside from being capable of representing higher-level semantics, ConvNets are also robust to variance in scale, thus making it possible to detect multi-scale objects from features computed on a single-scale input [16, 38]. However, recent works suggest that taking pyramidal representations into account can further boost detection performance [15, 19, 29]. This is due to their principal advantage of producing multi-scale feature representations in which all levels are semantically strong, including the high-resolution features.

There are several typical works exploring feature pyramid representations for object detection. The Single Shot Detector (SSD) [33] is one of the first attempts at using such a technique in ConvNets. Given one input image, SSD combines the predictions from multiple feature layers with different resolutions to naturally handle objects of various sizes. However, SSD fails to capture deep semantics in shallow-layer feature maps, since the bottom-up pathway in SSD learns strong features only for the deep layers but not for the shallow ones. This is the key bottleneck of SSD for detecting small instances.

To overcome this disadvantage of SSD and make the networks more robust to object scales, recent works (e.g., FPN [29], DSSD [14], RON [25] and TDM [43]) propose to combine low-resolution, semantically strong features with high-resolution, semantically weak features via lateral connections in a top-down pathway. In contrast to the bottom-up fashion of SSD, the lateral connections pass semantic information down to the shallow layers one by one, thus enhancing the detection ability of shallow-layer features. This technique has been successfully used in object detection [14, 30], segmentation [18], pose estimation [5, 46], etc.

Ideally, the pyramid features in ConvNets should: (1) reuse multi-scale features from different layers of a single network, and (2) improve features with strong semantics at all scales. FPN [29] satisfies these conditions via lateral connections. Nevertheless, FPN, as demonstrated by our analysis in Sect. 3, is actually equivalent to a linear combination of the feature hierarchy. Yet a linear combination of features is too simple to capture the highly non-linear patterns of more complicated, practical cases. Several works try to develop more suitable connection schemes [24, 45, 47], or to add more operations before combination [27].

The basic motivation of this paper is to enable the network to learn information of interest for each pyramid level in a more flexible way, given a ConvNet’s feature hierarchy. To achieve this goal, we explicitly reformulate the feature pyramid construction process as feature reconfiguration functions in a highly non-linear yet efficient way. To be specific, our pyramid construction employs a global attention to emphasize global information of the full image, followed by a local reconfiguration to model the local patch within the receptive field. The resulting pyramid representation is capable of spreading strong semantics to all scales. Compared with previous studies including SSD and FPN-like models, our pyramid construction is more advantageous in two aspects: (1) the global-local reconfigurations are non-linear transformations, thus offering more expressive power; (2) the pyramidal processing for all scales is performed simultaneously and is hence more efficient than layer-by-layer transformation (e.g., in lateral connections).

In our experiments, we compare different feature pyramid strategies within the SSD architecture, and demonstrate that the proposed method is more competitive in terms of accuracy and efficiency. The main contributions of this paper are summarized as follows:

  • We propose the global attention and local reconfiguration for building feature pyramids to enhance multi-scale representations with semantically strong information;

  • We compare and analyze popular feature pyramid methodologies within the standard SSD framework, and demonstrate that the proposed reconfiguration is more effective;

  • The proposed method achieves state-of-the-art results on standard object detection benchmarks (i.e., PASCAL VOC 2007, PASCAL VOC 2012 and MS COCO) without sacrificing real-time processing speed.

2 Related Work

Hand-Engineered Feature Pyramids: Prior to the widespread adoption of deep convolutional networks, hand-crafted features such as HOG [44] and SIFT [34] were popular for feature extraction. To make them scale-invariant, these features are computed over image pyramids [9, 13]. Several attempts have been made to compute image pyramids efficiently [4, 7, 8]. Sliding-window methods over multi-scale feature pyramids are usually applied in object detection [10, 13].

Deep Object Detectors: Benefiting from the success of deep ConvNets, modern object detectors such as R-CNN [17] and OverFeat [40] led to dramatic improvements in object detection. Particularly, OverFeat adopts a strategy similar to early face detectors by applying a ConvNet as a sliding-window detector on an image pyramid; R-CNN employs a region proposal-based strategy and classifies each scale-normalized proposal with a ConvNet. SPP-Net [19] and Fast R-CNN [16] speed up the R-CNN approach with RoI-Pooling, which allows the classification layers to reuse the CNN feature maps. Subsequently, Faster R-CNN [38] and R-FCN [6] replace the region proposal step with lightweight networks to deliver a complete end-to-end system. More recently, Redmon et al. [36, 37] propose a method named YOLO to predict bounding boxes and associated class probabilities in a single step.

Deep Feature Pyramids: To make detection more reliable, researchers usually adopt multi-scale representations by inputting images at multiple resolutions during training and testing [3, 19, 20]. Clearly, image pyramid methods are very time-consuming, as they require computing the features on each image scale independently, so the ConvNet features cannot be reused. Recently, a number of approaches improve detection performance by combining predictions from different layers of a single ConvNet. For instance, HyperNet [26] and ION [3] combine features from multiple layers before making detections. To detect objects of various sizes, SSD [33] spreads default boxes of different scales over multiple layers of different resolutions within a single ConvNet. So far, SSD has been a preferred choice for object detection given its speed-vs-accuracy trade-off [23]. More recently, the lateral connection (or reverse connection) has become popular in object detection [14, 25, 29]. The main purpose of the lateral connection is to enrich the semantic information of shallow layers via a top-down pathway. In contrast to such layer-by-layer connections, this paper develops a flexible framework to integrate the semantic knowledge of multiple layers in a global-local scheme.

3 Method

In this section, we first revisit the SSD detector, then consider the recent improvements based on lateral connections. Finally, we present our feature pyramid reconfiguration methodology (Fig. 1).

Fig. 1. Different feature pyramid construction frameworks. Left: SSD uses the pyramidal feature hierarchy computed by a ConvNet as if it were a featurized image pyramid. Middle: some object segmentation works produce the final detection feature maps by directly combining features from multiple layers. Right: FPN-like frameworks strengthen shallow layers via a top-down pathway and lateral connections.

ConvNet Feature Hierarchy: Object detection models based on ConvNets usually adopt a backbone network (such as VGG-16 or ResNets). Consider a single image \(x_0\) that is passed through a convolutional network. The network comprises L layers, each of which is implemented by a non-linear transformation \(\mathcal {F}_l(\cdot )\), where l indexes the layer. \(\mathcal {F}_l(\cdot )\) is a combination of transforms such as convolution, pooling, ReLU, etc. We denote the output of the \(l^{th}\) layer as \(x_l\). The complete set of backbone outputs is expressed as \(X_{net} = \{x_1, x_2, \ldots , x_L\}\).
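To make this concrete, the following PyTorch sketch (our illustration; the chosen layer indices are assumptions, not the paper's configuration) collects such a feature hierarchy from a VGG-16 backbone:

```python
# A minimal sketch of collecting a backbone feature hierarchy {x_l};
# the picked layer indices are hypothetical, for illustration only.
import torch
import torchvision

backbone = torchvision.models.vgg16(weights=None).features
pick = {15, 22, 29}  # hypothetical indices of layers whose outputs we keep

def feature_hierarchy(x0):
    outputs = []
    x = x0
    for l, layer in enumerate(backbone):
        x = layer(x)            # x_l = F_l(x_{l-1})
        if l in pick:
            outputs.append(x)
    return outputs

x0 = torch.randn(1, 3, 300, 300)  # an SSD300-style input
X_net = feature_hierarchy(x0)
print([tuple(x.shape) for x in X_net])
```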

Without using the feature hierarchy, object detectors such as Faster R-CNN [38] use one deep, semantic layer such as \(x_L\) to perform object detection. In SSD [33], the set of prediction feature maps can be expressed as

$$\begin{aligned} X_{pred} = \{x_{P}, x_{P+1}, \ldots , x_L\}, \end{aligned}$$
(1)

where \(P \gg 1\). Here, the deep feature map \(x_L\) learns high-level semantic abstractions. As l decreases from L toward P, \(x_{l}\) becomes shallower and thus carries more low-level features. SSD uses the deeper layers to detect large instances, and the shallow, high-resolution layers to detect small ones. The limited semantic information of the high-resolution maps harms their representational capacity for object recognition, and misses the opportunity to reuse deeper semantic information when detecting small instances, which we show is the key bottleneck to boosting performance.

Lateral Connection: To enrich the semantic information of shallow layers, one way is to add features from the deeper layers. Taking FPN [29] as an example, we get

$$\begin{aligned} x_{L}^{'}&=x_{L},\nonumber \\ x_{L-1}^{'}&=\alpha _{L-1}\cdot x_{L-1}+\beta _{L-1}\cdot x_{L}, \nonumber \\ x_{L-2}^{'}&=\alpha _{L-2}\cdot x_{L-2}+\beta _{L-2}\cdot x_{L-1}^{'}\\ &=\alpha _{L-2}\cdot x_{L-2}+\beta _{L-2}\alpha _{L-1}\cdot x_{L-1}+\beta _{L-2}\beta _{L-1}\cdot x_{L},\nonumber \end{aligned}$$
(2)

where \(\alpha \) and \(\beta \) are weights. Without loss of generality, for level l,

$$\begin{aligned} x_{l}^{'}= \sum _{k=l}^{L} w_k\cdot x_{k}, \end{aligned}$$
(3)

where \(w_k\) is the final weight for the \(k^{th}\) layer output, obtained after similar polynomial expansion. Finally, the features used for detection are expressed as:

$$\begin{aligned} X_{pred}^{'} = \{x_{P}^{'}, x_{P+1}^{'}, \ldots , x_L^{'}\}. \end{aligned}$$
(4)

From Eq. 3 we see that the final feature \(x_l^{'}\) is equivalent to a linear combination of \(x_l, x_{l+1}, \ldots , x_L\). Such a linear combination with the deeper feature hierarchy is one way to improve the information of a specific shallow layer, and a linear model can achieve a good level of abstraction when the samples of the latent concepts are linearly separable. However, the feature hierarchy for detection often lives on a non-linear manifold, so the representations that capture these concepts are generally highly non-linear functions of the input [22, 28, 32]. Its representational power, as we show next, is not enough for the complex task of object detection.
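As a sanity check, the following toy snippet (ours, with arbitrary scalar weights standing in for the lateral and top-down operations) unrolls the recursion of Eq. 2 and verifies that it reduces to the linear combination of Eq. 3:

```python
# A toy numeric check that the top-down recursion of Eq. 2 unrolls
# into the linear combination of Eq. 3; all weights are illustrative.
import torch

torch.manual_seed(0)
L = 4
xs = [torch.randn(8) for _ in range(L)]   # x_1..x_L (same shape for simplicity)
alpha = [0.7, 0.6, 0.5]                   # lateral weights (illustrative)
beta = [0.3, 0.4, 0.5]                    # top-down weights (illustrative)

# Recursive form: x'_L = x_L, and x'_l = alpha_l * x_l + beta_l * x'_{l+1}
x_prime = [None] * L
x_prime[L - 1] = xs[L - 1]
for l in range(L - 2, -1, -1):
    x_prime[l] = alpha[l] * xs[l] + beta[l] * x_prime[l + 1]

# Unrolled form: x'_l = sum_k w_k * x_k with products of alphas and betas
w = [alpha[0],
     beta[0] * alpha[1],
     beta[0] * beta[1] * alpha[2],
     beta[0] * beta[1] * beta[2]]
x0_unrolled = sum(wk * xk for wk, xk in zip(w, xs))
assert torch.allclose(x_prime[0], x0_unrolled)  # the combination is linear
```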

3.1 Deep Feature Reconfiguration

Given the deep feature hierarchy \(X = [x_{P}, x_{P+1}, \ldots , x_L]\) of a ConvNet, the key problem for the object detection framework is to generate suitable features for each level of the detector. In this paper, the feature generating process at the \(l^{th}\) level is viewed as a non-linear transformation of the given feature hierarchy (Fig. 2):

$$\begin{aligned} x_{l}^{'} = \mathcal {H}_{l}(X) \end{aligned}$$
(5)

where X is the feature hierarchy considered for multi-scale detection. For ease of implementation, we concatenate the multiple inputs of \(\mathcal {H}_{l}(\cdot )\) in Eq. 5 into a single tensor before the following transformations.
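Concretely, this can be done by resizing every map in X to a common reference resolution and concatenating along the channel axis, as in the sketch below (the interpolation mode, reference size, and example channel counts are our assumptions; Sect. 4 specifies the actual reference layers):

```python
# A sketch of merging the hierarchy X = [x_P, ..., x_L] into one tensor:
# each map is interpolated to a reference resolution, then concatenated.
import torch
import torch.nn.functional as F

def merge_hierarchy(features, ref_size):
    resized = [F.interpolate(x, size=ref_size, mode='bilinear',
                             align_corners=False) for x in features]
    return torch.cat(resized, dim=1)   # shape (N, sum of C_l, H_ref, W_ref)

feats = [torch.randn(1, 512, 38, 38),
         torch.randn(1, 1024, 19, 19),
         torch.randn(1, 512, 10, 10)]  # illustrative SSD300-like maps
X = merge_hierarchy(feats, ref_size=(19, 19))
print(X.shape)                          # torch.Size([1, 2048, 19, 19])
```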

Given no priors about the distributions of the latent concepts in the feature hierarchy, it is desirable to use a universal function approximator for feature extraction at each scale. The function should also preserve spatial consistency, since the detector will activate at the corresponding locations. The final features for each level are thus non-linear transformations of the feature hierarchy, in which the learnable parameters are shared across spatial locations.

Fig. 2. Top: overview of the proposed feature pyramid building networks. We first combine multiple feature maps, then generate the features for a specific level, and finally detect objects at multiple scales. Bottom: a building block illustrating the global attention and local reconfiguration.

In this paper, we formulate the feature transformation process \(\mathcal {H}_{l}(\cdot )\) as a global attention and a local reconfiguration problem. Both are implemented by light-weight networks, so they can be embedded into the ConvNet and learned end-to-end. The global and local operations are also complementary to each other, since they deal with the feature hierarchy at different scales.

Global Attention for Feature Hierarchy. Given the feature hierarchy, the aim of the global part is to emphasize informative features and suppress less useful ones globally for a specific scale. In this paper, we apply the Squeeze-and-Excitation block [22] as the basic module. One Squeeze-and-Excitation block consists of two steps, squeeze and excitation. For the \(l^{th}\) level, the squeeze stage is formulated as a global pooling operation on each channel of X, which has \(W\times H\times C\) dimensions:

$$\begin{aligned} z^{c}_l = \frac{1}{W \times H} \sum _{i=1}^{W}\sum _{j=1}^{H} x^c_{l}(i, j) \end{aligned}$$
(6)

where \(x^c_{l}(i, j)\) specifies the element at the \(c^{th}\) channel, \(i^{th}\) column and \(j^{th}\) row. Since there are C channels in the feature X, Eq. 6 generates C output elements, denoted as \(\mathbf z _l\).

The excitation stage consists of two fully-connected layers followed by a sigmoid activation, taking \(\mathbf z _l\) as input:

$$\begin{aligned} \mathbf s _l = \sigma (W_l^{2}\,\delta (W_l^{1}\mathbf z _l)) \end{aligned}$$
(7)

where \(\delta \) refers to the ReLU function, \(\sigma \) is the sigmoid activation, \(W_l^{1}\in \mathbb {R}^{\frac{C}{r}\times C}\) and \(W_l^{2}\in \mathbb {R}^{C\times \frac{C}{r}}\). The reduction ratio r is set to 16 for dimensionality reduction. The final output of the block is obtained by rescaling the input X with the activations:

$$\begin{aligned} \tilde{x}^c_l = s^c_l \otimes x^c \end{aligned}$$
(8)

and then \(\tilde{X}_l = [\tilde{x}^P_l, \tilde{x}^{P+1}_l, \ldots , \tilde{x}^L_l]\), where \(\otimes \) denotes channel-wise multiplication. We refer the reader to the SENet paper [22] for more details.

The original SE block was developed to explicitly model interdependencies between channels, and has shown great success in object recognition [2]. In contrast, we apply it to emphasize channel-level hierarchy features and suppress less useful ones. By dynamically adapting to the input hierarchy, the SE block helps boost feature discriminability and select more useful information globally.
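A minimal PyTorch sketch of this per-level global attention, following the SE design of Eqs. 6-8 (the class and variable names are ours), could look as follows:

```python
# A sketch of the per-level global attention (Eqs. 6-8), modeled on the
# SE block [22]; names and the surrounding wiring are our assumptions.
import torch
import torch.nn as nn

class GlobalAttention(nn.Module):
    def __init__(self, channels, r=16):
        super().__init__()
        self.fc1 = nn.Linear(channels, channels // r)  # W^1_l
        self.fc2 = nn.Linear(channels // r, channels)  # W^2_l

    def forward(self, X):
        n, c, h, w = X.shape
        z = X.mean(dim=(2, 3))                                 # squeeze (Eq. 6)
        s = torch.sigmoid(self.fc2(torch.relu(self.fc1(z))))  # excitation (Eq. 7)
        return X * s.view(n, c, 1, 1)                          # rescaling (Eq. 8)

X = torch.randn(1, 2048, 19, 19)   # the merged hierarchy from the earlier sketch
X_tilde = GlobalAttention(2048)(X)
```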

Local Reconfiguration. The local reconfiguration network maps a patch of the feature hierarchy to an output feature patch, and is shared among all local receptive fields. The output feature maps are obtained by sliding the operation over the input. In this work, we design a residual learning block as the instantiation of this micro network, which is a universal function approximator and is trainable by back-propagation (Fig. 3).

Fig. 3. A building block illustrating the local reconfiguration for level l.

Formally, one local reconfiguration is defined as:

$$\begin{aligned} x_{l}^{'} = R(\tilde{X}_l)+W_l x_l \end{aligned}$$
(9)

where \(W_l\) is a linear projection that matches the dimensions, and \(R(\cdot )\) represents the residual mapping that improves the semantics to be learned.

Discussion. A direct way to generate feature pyramids is to use only the term \(R(\cdot )\) in Eq. 9. However, as demonstrated in [20], it is easier to optimize the residual mapping than to optimize the desired underlying mapping. Our experiments in Sect. 4.1 also support this hypothesis.

We note some differences between our residual learning module and the one proposed in ResNets [20]. Our hypothesis is that the semantic information is distributed across the feature hierarchy, and the residual learning block can select additional information through optimization, whereas the purpose of residual learning in [20] is to gain accuracy by increasing network depth. Another difference is that the input of our residual learning is the feature hierarchy, while in [20] the input is a single level of convolutional output.

The form of the residual function \(R(\cdot )\) is also flexible. In this paper, we use a function with three layers (Fig. 3), though more layers are possible. The element-wise addition is performed on the two feature maps, channel by channel. Because all levels of the pyramid share the detection operations, we fix the feature dimension (number of channels, denoted as d) in all the feature maps. We set \(d = 256\) in this paper, so all layers used for prediction have 256-channel outputs.
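Under these constraints, one possible instantiation of Eq. 9 is sketched below (the kernel sizes and layer ordering are our choices; the paper only fixes the three-layer form of \(R(\cdot )\) and d = 256):

```python
# A sketch of the local reconfiguration of Eq. 9: a three-layer residual
# mapping R(.) over the attended hierarchy plus a 1x1 projection W_l of
# x_l; kernel sizes are hypothetical.
import torch
import torch.nn as nn

class LocalReconfiguration(nn.Module):
    def __init__(self, hierarchy_channels, level_channels, d=256):
        super().__init__()
        self.residual = nn.Sequential(              # R(.): three layers
            nn.Conv2d(hierarchy_channels, d, 1), nn.ReLU(inplace=True),
            nn.Conv2d(d, d, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(d, d, 1))
        self.project = nn.Conv2d(level_channels, d, 1)  # W_l

    def forward(self, X_tilde, x_l):
        return self.residual(X_tilde) + self.project(x_l)  # Eq. 9

recon = LocalReconfiguration(hierarchy_channels=2048, level_channels=1024)
x_l = torch.randn(1, 1024, 19, 19)
x_l_prime = recon(X_tilde, x_l)   # 256-channel map used for prediction
```

Note that in this sketch \(\tilde{X}_l\) and \(x_l\) must share the same spatial resolution; in practice the merged hierarchy would first be resized to each level's resolution before the addition.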

4 Experiments

We conduct experiments on three widely used benchmarks: the PASCAL VOC 2007, PASCAL VOC 2012 [11] and MS COCO [31] datasets. All network backbones are pretrained on the ImageNet1k classification set [39] and fine-tuned on the detection data. We use the publicly available pre-trained VGG-16 and ResNet models. Our experiments are based on re-implementations of SSD [33], Faster R-CNN [38] and Feature Pyramid Networks [29] using PyTorch [35]. For the SSD framework, all layers in \({\varvec{X}}\) are resized to the spatial size of layer conv8_2 in VGG and conv6_x in ResNet-101 to keep consistency with DSSD. For the Faster R-CNN pipeline, the resized spatial size is the same as that of the conv4_3 layer in both the VGG and ResNet-101 backbones.

4.1 PASCAL VOC 2007

Implementation Details. All models are trained on the VOC 2007 and VOC 2012 trainval sets, and tested on the VOC 2007 test set. For one-stage SSD, we set the learning rate to \(10^{-3}\) for the first 160 epochs, and decay it to \(10^{-4}\) and \(10^{-5}\) for 40 epochs each. We use the default batch size of 32 for training, and VGG-16 as the backbone network for all ablation experiments on the PASCAL VOC dataset. For the two-stage Faster R-CNN experiments, we follow the training strategies introduced in [38]. We also report results of these models with ResNet backbones.
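For reference, the stated schedule corresponds to the sketch below (the use of SGD with momentum and weight decay is our assumption; only the learning rates, epoch counts, and batch size are given above):

```python
# A sketch of the stated schedule: lr 1e-3 for 160 epochs, then 1e-4
# and 1e-5 for 40 epochs each; optimizer hyperparameters are assumed.
import torch

params = [torch.nn.Parameter(torch.zeros(1))]  # stands in for model parameters
optimizer = torch.optim.SGD(params, lr=1e-3, momentum=0.9, weight_decay=5e-4)
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[160, 200], gamma=0.1)  # 240 epochs in total

for epoch in range(240):
    # ... one training epoch over VOC 07+12 trainval with batch size 32 ...
    optimizer.step()    # placeholder for the actual update loop
    scheduler.step()
```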

Baselines. For fair comparison with the original SSD and its feature pyramid variations, we build two baselines: the original SSD and SSD with lateral connections. In Table 1, the original SSD scores 77.5%, the same as reported in [33]. Adding lateral connections to SSD improves the result to 78.5% (SSD+lateral). With the global and local reconfiguration strategy proposed above, the result improves to 79.6%, which is 1.1% better than SSD with lateral connections. Next, we discuss the ablation studies in more detail.

Table 1. Effectiveness of various designs with SSD300.

How Important Is Global Attention? In Table 1, the fourth row shows the results of our model without the global attention: we remove the global attention part and directly add the local transformation to the feature hierarchy. Without global attention, the result drops to 79.0% mAP (−0.6%). The global attention makes the network focus on features with suitable semantics and helps detect instances with large variations.

Comparison with the Lateral Connections. Adding the global and local reconfiguration to SSD improves the result to 79.6%, which is 2.1% better than SSD and 1.1% better than SSD with lateral connections. This is because there are large semantic gaps between different levels of the bottom-up pyramid, and the global and local reconfigurations help the detectors select more suitable feature maps; this issue cannot be remedied by lateral connections alone. We note that with only the local reconfiguration, the result is already better than that of the lateral connections (+0.5%).

Only Use the Term \(R(\cdot )\). One way to generate the final feature pyramids is to use only the term \(R(\cdot )\) in Eq. 9. Compared with the residual learning block, the result drops by 0.4%. The residual learning block keeps the gradients of the objective function from flowing directly into the backbone network, thus giving more opportunity to better model the feature hierarchy.

Use All of the Feature Hierarchy or Just Deeper Layers? In Eq. 3, the lateral connection only considers feature maps that are at the same depth or deeper than the corresponding level. To better compare our method with the lateral connection, we conduct an experiment that also considers only the deeper layers; other settings are the same as in the previous baselines. We find that using only the deeper features drops accuracy by a small margin (−0.2%). We believe the difference arises because, when using the full feature hierarchy, the deeper layers also have more opportunity to re-organize their features and more potential to boost results; similar conclusions are drawn in the recent PANet work [32].

Accuracy vs. Speed. We present the inference speed of the different models in the third column of Table 1. The speed is evaluated with batch size 1 on a machine with an NVIDIA Titan X, CUDA 8.0 and cuDNN v5. Our model achieves a 2.7% accuracy gain while running at 39.5 fps. Compared with the lateral-connection based SSD, our model shows both higher accuracy and faster speed. In the lateral-connection based model, the pyramid layers are generated serially, so the last constructed layer considered for detection (\(x_{P}^{'}\) in Eq. 4) becomes the speed bottleneck. In our design, all final pyramid maps are generated simultaneously, which is more efficient.

Under the Faster R-CNN Pipeline. To validate the generalization of the proposed feature reconfiguration method, we conduct experiments under the two-stage Faster R-CNN pipeline. In Table 2, Faster R-CNN with ResNet-101 gets an mAP of 78.9%. Feature Pyramid Networks with lateral connections improve the result to 79.8% (+0.9%). When replacing the lateral connections with our global-local transformation, we get a score of 80.6% (+1.8%). This result indicates that our global-and-local reconfiguration is also effective in two-stage object detection frameworks and can improve their performance.

Table 2. Effectiveness of various designs within Faster R-CNN.

Comparison with Other State-of-the-Art Methods. Table 3 shows our results on the VOC2007 test set based on SSD [33]. Our model with \(300\times 300\) input achieves 79.6% mAP, which is much better than the baseline SSD300 (77.5%) and on par with SSD512. Enlarging the input image to \(512\times 512\) improves the result to 81.1%. Notably, our model is much better than other methods that incorporate context information, such as MRCNN [10] and ION [3]. When replacing the VGG-16 backbone with ResNet-101, our model with \(512\times 512\) input scores 82.4% without bells and whistles, which is much better than the one-stage DSSD [14] and the two-stage R-FCN [6].

Table 3. PASCAL VOC 2007 test detection results. All models are trained with 07 + 12 (07 trainval + 12 trainval). The entries with the best APs for each object category are bold-faced.

To understand the performance of our method in more detail, we use the detection analysis tool from [21]. Figure 4 shows that our model can detect various object categories with high quality. The recall is higher than 90%, and is much higher under the ‘weak’ criterion (0.1 jaccard overlap).

Fig. 4. Visualization of the performance of our model with VGG-16 and \(300\times 300\) input resolution on animals, vehicles, and furniture from the VOC2007 test set. The plots show the cumulative fraction of detections that are correct (Cor) or false positives due to poor localization (Loc), confusion with similar categories (Sim), confusion with other categories (Oth), or confusion with background (BG). The solid red line reflects the change of recall under the ‘strong’ criterion (0.5 jaccard overlap) as the number of detections increases; the dashed red line uses the ‘weak’ criterion (0.1 jaccard overlap).

4.2 PASCAL VOC 2012

For the VOC2012 task, we follow the settings of VOC2007, with a few differences described here. We use the 07++12 protocol, consisting of VOC2007 trainval, VOC2007 test, and VOC2012 trainval, for training, and VOC2012 test for testing. We observe the same performance trend as on the VOC 2007 test set. The results, shown in Table 4, demonstrate the effectiveness of our models. Compared with SSD [33] and other variants, the proposed network is significantly better (+2.7% with \(300\times 300\) input).

Table 4. PASCAL VOC 2012 test detection results. All models are trained with 07++12 (07 trainval+test + 12 trainval). The entries with the best APs for each object category are bold-faced.

Compared with DSSD with a ResNet-101 backbone, our model achieves similar results with a VGG-16 backbone. The recently proposed RUN [27] improves the results of SSD with skip connections and unified prediction, adding several residual blocks to improve the non-linear capability before prediction. Compared with RUN, our model is more direct and achieves better detection performance. Our final result using ResNet-101 scores 81.1%, which is much better than the state-of-the-art methods.

4.3 MS COCO

To further validate the proposed framework on a larger and more challenging dataset, we conduct experiments on MS COCO [31] and report results from the test-dev evaluation server. The evaluation metric of MS COCO differs from that of PASCAL VOC: the mAP averaged over IoU thresholds from 0.5 to 0.95 (written as 0.5:0.95) measures the overall performance of a method. We use the 80k training images and 40k validation images [31] to train our model, and validate the performance on the test-dev set, which contains 20k images. For the ResNet-101 based models, we set the batch size to 32 and 20 for the \(320\times 320\) and \(512\times 512\) models respectively, due to memory constraints (Table 5).

Table 5. MS COCO test-dev2015 detection results.
Table 6. MS COCO test-dev2015 detection results on small (\(AP_s\)), medium (\(AP_m\)) and large (\(AP_l\)) objects.

With the standard COCO evaluation metric, SSD300 scores 25.1% AP, and our model improves it to 28.4% AP (+3.3%), which is on par with DSSD with a ResNet-101 backbone (28.0%). When changing the backbone to ResNet-101, our model gets 31.3% AP, which is much better than DSSD321 (+3.3%). The accuracy of our model can be further improved to 34.6% by using a larger input size of \(512\times 512\), which is also better than the recently proposed RetinaNet [30], which adds lateral connections and a focal loss for better object detection.

Table 6 reports the multi-scale object detection results of our method under the SSD framework with a ResNet-101 backbone. Our method achieves better detection accuracy than SSD and DSSD for objects of all scales (Fig. 5).

Fig. 5. Qualitative detection examples on the VOC 2007 test set with the SSD300 (77.5% mAP) and Ours-300 (79.6% mAP) models. For each pair, the left image shows the result of SSD and the right shows ours. We display detections with scores higher than 0.6; each color corresponds to an object category in that image. (Color figure online)

5 Conclusions

A key issue in building feature pyramid representations within a ConvNet is how to reconfigure and reuse the feature hierarchy. This paper deals with this problem through global-and-local transformations. This representation allows us to explicitly model the feature reconfiguration process for specific object scales. We conduct extensive experiments comparing our method with other feature pyramid variants. Our study suggests that, despite the strong representations of deep ConvNets, there is still room and potential to build better pyramids that further address multi-scale problems.