Article

Hybrid FPGA–CPU-Based Architecture for Object Recognition in Visual Servoing of Arm Prosthesis

1 Laboratoire Bordelais de Recherche en Informatique, University of Bordeaux, CEDEX, 33405 Talence, France
2 Faculty of Information Technology and Bionics, Pázmány Péter Catholic University, 1083 Budapest, Hungary
3 Institut de Neurosciences Cognitives et Intégratives d’Aquitaine, University of Bordeaux, CEDEX, 33076 Bordeaux, France
* Author to whom correspondence should be addressed.
J. Imaging 2022, 8(2), 44; https://doi.org/10.3390/jimaging8020044
Submission received: 12 January 2022 / Revised: 3 February 2022 / Accepted: 8 February 2022 / Published: 12 February 2022
(This article belongs to the Special Issue Image Processing Using FPGAs 2021)

Abstract

The present paper proposes an implementation of a hybrid hardware–software system for the visual servoing of prosthetic arms. We focus on the most critical part of the system, the vision analysis. The prosthetic system comprises a glasses-worn eye tracker and a video camera, and the task is to recognize the object to grasp. The lightweight architecture for gaze-driven object recognition has to be implemented as a wearable device with low power consumption (less than 5.6 W). The algorithmic chain comprises gaze fixation estimation and filtering, generation of candidates, and recognition, with two backbone convolutional neural networks (CNNs). The time-consuming parts of the system, such as the SIFT (Scale-Invariant Feature Transform) detector and the backbone CNN feature extractor, are implemented in FPGA, and a new reduction layer is introduced in the object-recognition CNN to reduce the computational burden. The proposed implementation is compatible with the real-time control of the prosthetic arm.

1. Introduction and State-of-the-Art

One of the problems assistive robotics addresses is the production of upper limb prostheses for amputees. Despite great progress in upper limb bionic prostheses, allowing for object-of-interest reaching and grasping, the key remaining issues relate to their control by the operator. To overcome the limitations of traditional control solely based on the electromyographic (EMG) activity of the remaining muscles, promising alternatives consider hybrid systems combining noninvasive motion capture and vision control [1,2]. They include camera vision modules that allow for recognition of the subject’s intention to grasp an object and assist visual control of prosthetic arms for object reaching and grasping [3].
The computer vision algorithms implemented in these systems comprise the latest object recognition approaches, such as deep neural network (DNN) classifiers and regressors [4]. In our previous work [5], we proposed an FPGA-implemented SIFT detector for matching views in a multi-camera visual prosthesis servoing system. Although the visual servoing of robotic arms has been a highly researched subject [6], the application to arm neuroprostheses implies supplementary constraints. The whole control device has to be lightweight and worn by the subject. Hence, it is necessary first to minimize the equipment and second to propose efficient lightweight solutions for the analysis of the visual scene captured by the camera worn by the subject.
Real-time performance is also a mandatory requirement for our target application [2,7]. As the fastest visuomotor response to a perturbation takes about 90 ms [8], and feedback delays of 100 ms or more are known to deteriorate the performance of online feedback control [9], computation time should remain as low as possible, and below 100 ms.
In this work, we propose a hybrid hardware/software (HW/SW) architecture for the analysis of a visual scene for the visual servoing of a neuroprosthetic arm using a glass-worn camera. The visual task here is to recognize the object the subject intends to grasp and localize it in the egocentric visual scene.

1.1. State-of-the-Art Hybrid Solutions in Robotic Vision

As the core block for object recognition in our system is a convolutional neural network (CNN), we further present a brief state-of-the-art review of lightweight CNNs for object detection.

1.2. State-of-the-Art Lightweight CNNs for Object Detection

In recent years, the most popular algorithms for object detection in computer vision have been deep convolutional neural networks, such as faster regions with CNN (Faster R-CNN) [10], you only look once (YOLO) [11], and the single shot detector (SSD) [12]. These detectors are built on deep residual networks (Resnet) [13], very deep convolutional networks (VGGnet) [14], Alexnet [15], MobileNet [16], and GoogleNet [17].
Resnet [13] was proposed by He et al. and uses residual blocks, which are illustrated in Figure 1.
Denoting the desired underlying mapping as $H(x)$, the stacked nonlinear layers fit the residual mapping

$$F(x) := H(x) - x$$

so that the original mapping is recast as $F(x) + x$. It is easier to optimize the residual mapping than the original mapping. $F(x) + x$ can be realized by feedforward neural networks with shortcut connections, as illustrated in Figure 1. Shortcut connections can skip one or more layers. In Resnet [13], the shortcut connections’ outputs are simply added to the outputs of the stacked layers.
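For illustration, a residual block of the kind shown in Figure 1 can be sketched in PyTorch as follows (the layer sizes and the use of batch normalization are illustrative assumptions, not the exact configuration of Resnet [13]):

import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Minimal residual block: output = ReLU(F(x) + x), where F is two stacked 3x3 convolutions."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        residual = self.bn2(self.conv2(torch.relu(self.bn1(self.conv1(x)))))  # F(x)
        return torch.relu(residual + x)  # the shortcut connection adds the input back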
The computational cost of Resnet [13] is high, which makes real-time implementation difficult. However, there are methods that can accelerate the computation.
VGGNet [14] is a simple deep convolutional neural network, where deep refers to the number of layers. The VGG-16 consists of 13 convolutional layers and 3 fully connected layers. The convolutional layers are simple because they use only 3 × 3 filters and pooling layers. This architecture has become popular in image classification problems.
Faster R-CNN [10] was proposed by Ren et al. This architecture has gained popularity among object detection algorithms. Faster R-CNN [10] is composed of the following four parts:
  • feature extraction module, which can be a VGGnet [14], MobileNet [16], or Resnet [13];
  • region proposal module to generate the bounding boxes around the object;
  • classification layer to detect the class of the object—for example, cat, dog, etc.;
  • regression layer to make the prediction more precise.
The computational speed of the network depends on the feature extraction module and the size of the region proposal module.
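These four parts can be inspected directly in the reference torchvision implementation; the short sketch below (an illustration, not the network used in this work) prints the corresponding modules:

import torchvision

# Build a reference Faster R-CNN with a ResNet-50 FPN backbone (random weights)
# and list the parts enumerated above.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn()
print(type(model.backbone).__name__)   # feature extraction module
print(type(model.rpn).__name__)        # region proposal module
print(type(model.roi_heads).__name__)  # classification and box regression heads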
Both SSD [12] and YOLO [11] are single-stage detectors. They are significantly faster than two-stage (region-based) detectors, such as Faster R-CNN [10]. However, when the objects exhibit limited variability, both interclass and intraclass, Faster R-CNN [10] is a well-suited network. In our problem, we are interested in naturally cluttered home environments, such as kitchens, where the subject intends to grasp an object. The vision analysis system we propose has to be designed to recognize objects to grasp in video, similar to the grasping-in-the-wild (GITW) dataset [18]. This dataset was recorded in natural environments by several healthy volunteers, and we made it publicly available on the CNRS NAKALA platform. The objects here, seen from the glasses-mounted camera, are quite small: their surface represents merely 10% of the whole video frame. Hence, Faster R-CNN [10] is a better choice than SSD [12] and YOLO [11], since it achieves a higher mean average precision (mAP) on small objects, as reported by Huang et al. [19].
The original Faster R-CNN [10] uses VGGnet [14] as a feature extractor. However, the mAP is higher when Resnet [13] is used as a backbone [20]. When the object is small, the mAP with a Resnet [13] backbone is higher than with a MobileNet [16] backbone, as reported in [19].
There are several possible ways to accelerate an algorithm [21]. In our case, FPGA was chosen in the interest of developing a lightweight and portable device [22].
Neural network inference can be very efficiently accelerated on field-programmable gate arrays (FPGA). The most important frameworks and development environments are Vitis AI [23], Apache TVM Versatile Tensor Accelerator (VTA) [24], Brevitas [25], and FINN [26].
Due to their large computing and memory bandwidth requirements, deep learning neural networks are trained on high-performance workstations, computing clusters, or GPUs using floating-point numbers. The memory access pattern of the inference step of a trained network is different, offering more data reuse and requiring a smaller memory bandwidth. This makes FPGAs a versatile platform for acceleration. Computing with floating-point numbers is a resource-intensive process for an FPGA in terms of digital signal processing (DSP) slices and logic resource usage. The memory bandwidth required to load 32-bit floating-point state values and weights can still be high compared with the capabilities of low-power FPGA devices. Additionally, a significant amount of memory is required for buffering state values and partial results in the on-chip memory of the FPGA. One possible solution consists of using the industry-standard bfloat16 floating-point representation, which can improve the inference speed on an FPGA. Observations show [26] that the values of weights, state values, and partial results during the computation usually fall in a relatively small range, and the 8-bit exponent range of the bfloat type is practically never used. If the range of the values during the computation is known in advance, then fixed-point numbers can be used. One of the major application areas of FPGAs is signal processing; therefore, the DSP slices are designed for fast, fixed-point multiply–accumulate (MAC) or multiply–add (MADD) operations, which can be utilized during neural network inference.
Converting a neural network model trained with floating-point numbers to a fixed-point FPGA-based implementation usually requires an additional step called quantization. Here, a small training set is used to determine the fixed-point weights and optimize the position of the radix point in each stage of the computation. The common bit width for quantization is 16 or 8 bits, where the accuracy of the network is slightly reduced. In some cases, even a binary representation is possible [26], eliminating all multiplications from the computation, which makes FPGA implementation very efficient while the accuracy is decreased slightly.
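As a simplified, library-agnostic illustration of this step, the following sketch quantizes a weight tensor to 8-bit signed fixed point by choosing the position of the radix point from the largest weight magnitude (real flows, such as Vitis AI [23], calibrate each layer on a small dataset and also quantize activations):

import numpy as np

def quantize_fixed_point(weights, bits=8):
    """Pick the number of fractional bits so that the largest weight magnitude
    still fits in the signed integer range, then round to that grid."""
    max_abs = float(np.max(np.abs(weights)))
    frac_bits = bits - 1 - int(np.ceil(np.log2(max_abs + 1e-12)))  # radix point position
    scale = 2.0 ** frac_bits
    qmin, qmax = -2 ** (bits - 1), 2 ** (bits - 1) - 1
    q = np.clip(np.round(weights * scale), qmin, qmax).astype(np.int32)
    return q, frac_bits

def dequantize(q, frac_bits):
    """Approximate floating-point values recovered from the fixed-point code."""
    return q.astype(np.float32) / (2.0 ** frac_bits)

w = (np.random.randn(64, 3, 3, 3) * 0.1).astype(np.float32)
q, frac = quantize_fixed_point(w, bits=8)
print("max quantization error:", np.abs(dequantize(q, frac) - w).max())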
For latency-sensitive applications, this fixed-point model can be implemented on a streaming architecture, such as FINN [26], where layers of the network are connected directly on the FPGA. Using this structure, loading and storing state values can be avoided. In an ideal case, when the number of weights is small enough, they can be stored in the on-chip memories, further reducing the memory bandwidth requirements of the system. This also results in lower dissipated power due to the high energy requirement of off-chip data movement. Another approach used in Vitis AI [23] and Apache TVM VTA [24] is to divide the computation into a series of matrix–matrix multiplications and create a customized ISA (instruction set architecture) to execute these operations efficiently. The resulting system might have higher memory bandwidth requirements and longer latency, but can be easily reprogrammed to infer a different network during different steps of an image processing application.
Apache TVM VTA [24] is an open, generic, and customizable deep learning accelerator with a complete TVM-based compiler stack. It is an end-to-end hardware–software deep learning system stack that combines TVM and VTA, containing the hardware design, drivers, a just-in-time (JIT) runtime, and an optimizing compiler stack based on TVM.
The main advantages of quantization are the reduced complexity of the circuit, the efficient use of dedicated hardware resources, reduced on-chip memory requirements, reduced off-chip memory bandwidth, and smaller power dissipation. Thus, for a lightweight body-worn device, Vitis AI [23] is a good choice, because it can accelerate the network with minimal accuracy loss.
The remainder of the paper is organized as follows. In Section 2, we present the system overview for object detection in egocentric camera view, previously developed in [4], which we further adapt. In Section 3, we propose a hybridization of the solution for the FPGA–CPU board to be incorporated into a body-worn device for prosthetic control. In Section 4, we present our results, measuring the execution time, while comparing it on different platforms. Section 5 concludes our work and outlines its perspectives.

2. System Overview for Object Detection

In this section, we present a system overview for object detection in egocentric video, explain each module, and propose our adaptation of a gaze-driven CNN for object recognition to meet the real-time constraints of our hybrid solution.

2.1. System Overview

The vision analysis part, which is the most critical in the whole chain of prosthesis servoing, is presented in Figure 2. The underlying hypothesis for the functioning of vision-guided neuroprostheses is that the upper limb amputee wearing the neuroprosthesis first looks at the object they wish to grasp. The subject wears a Tobii glasses device, which acquires the egocentric visual scene and records the gaze fixations of the subject in its coordinate system; see the left-most block in Figure 2. The recorded gaze fixations allow for roughly localizing the object of interest in video frames. Nevertheless, visual saccades to distractors in the scene, microsaccades, and the initial scene exploration before the subject finds the object make these measurements noisy. Hence, two blocks of the system, gaze point alignment and gaze point noise reduction, serve to estimate the position of the gaze fixation on the object in the current ego-video frame. The gaze point alignment module estimates and compensates for the ego-motion between the past frames and the current frame; for more details, see Section 2.2. The goal of the gaze point noise reduction module is to reduce the noise in the current frame. This noise can come from head motion, or from the user being momentarily distracted and looking at another object; for more details, see Section 2.3. Then, the video frame is cropped around the estimated gaze point to limit the area of the object search. Finally, object proposal bounding boxes (BBs) at different scales are generated around the point for object localization. The gaze point-centred image and the set of BB coordinates are then submitted to the gaze-driven CNN; see the right-most block in Figure 2. The gaze-driven CNN is pre-trained on the taxonomy of objects to detect. It outputs the best score for the object class and the best-scored bounding box. When the object is localized in a video frame, its 3D position for prosthesis servoing can be estimated from the eye tracker depth measures of gaze fixation and the coordinates of the centre of the best-scored bounding box.
The resolution of the Tobii first-person view camera is full HD (1920 × 1080 px), with a frame rate of 25 frames per second (fps). The real-time requirement for the system in our case means that each processing step of the localization of the object of interest in the current frame of the glasses-mounted camera has to take less than 40 ms (the video acquisition period), and the latency of the whole system should be lower than 100 ms to leave room for the mechanical servoing of the prosthetic arm [7]. In this work, we do not consider depth estimation, which is a simple regression from eye tracker gaze fixation measures; our focus is on object detection. In the following passages, we present each system block in detail.

2.2. Gaze Point Alignment

The Tobii glass camera and eye tracker system output the coordinates of gaze fixations in each video frame of the first-person integrated camera.
Even if the subject is looking at the same object to grasp during object reaching, the projected gaze points will vary between two consecutive frames because of body and ocular movements. Furthermore, saccades provoked by distractors can make the gaze deviate from the object. Hence, the first step consists of the estimation of a gaze fixation in the current (reference) video frame using all the past recorded gaze fixations. It is necessary to estimate and compensate the ego-motion between the past frames and the current frame to collect all gaze points in the same reference frame. We show an illustration of such a collection in Figure 3: the lighter a gaze fixation point is, the more distant it is from the current timestamp.
Motion compensation from the past frames to the current frame is realized by a sequential homography transformation computed between consecutive frames.
Consider a video sequence of N frames and a list of gaze points, $g_n = \{(g_{x_n}, g_{y_n}),\ n = 1, \dots, N\}$. The system operates as follows: for each pair of consecutive frames, it extracts the characteristic keypoints and local features. In our case, the keypoint extractor is the scale invariant feature transform (SIFT) [27]. A fast library for approximate nearest neighbours (FLANN)-based matcher [28] is used to find the good matches between the SIFT descriptors of the two frames.
The final step is to estimate the homography transformation matrices, $H_n$, $n = 1, \dots, N$, with N the index of the current frame, from the good matches. Then, the gaze fixations can be projected from all past frames into the current frame by a composition of the homographies $H_n$. In this projection, we use a sliding window of $\Delta t = 10$ frames, which corresponds to a 400 ms time interval, in line with the scene apprehension time of the subjects in our experiments. Therefore, for the current frame N, the collected gaze points are $\hat{g}_{N,n}$, $n = N - \Delta t, \dots, N$.
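The per-frame-pair processing can be condensed into the following OpenCV sketch (the mask around the gaze point and the composition of homographies over the sliding window are omitted for brevity; parameter values are illustrative):

import cv2
import numpy as np

def project_gaze_point(prev_gray, curr_gray, gaze_xy):
    """Project a gaze point from the previous frame into the current frame using
    SIFT keypoints, FLANN matching, and a RANSAC-estimated homography."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(prev_gray, None)
    kp2, des2 = sift.detectAndCompute(curr_gray, None)

    flann = cv2.FlannBasedMatcher({"algorithm": 1, "trees": 5}, {"checks": 50})
    matches = flann.knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.7 * n.distance]  # Lowe's ratio test

    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    pt = np.float32([[gaze_xy]])                    # shape (1, 1, 2)
    return cv2.perspectiveTransform(pt, H)[0, 0]    # projected (x, y) in the current frame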

2.3. Noise Reduction

The goal of this module is to reduce the noise of the gaze fixations projected into the current frame.
The list of the aligned gaze fixations, g ^ N , n , n = N Δ t , , N , is the input of the kernel density estimator (KDE) with Gaussian kernel [29], which predicts the most probable location of the gaze fixation in the current frame. The KDE estimates the values as described in the following equation:
$$\rho_K(y) = \sum_{i=1}^{L_N} K\left(y - \hat{g}_{N,n_i};\ h\right)$$
where the kernel, $K(x; h)$, is a positive function controlled by the bandwidth parameter h. In our case, the bandwidth h of the Gaussian kernel was set to 1, its default value, and $L_N$ is the number of gaze points projected into the current frame N. The maximum of the estimated density surface is considered as the predictor of the gaze fixation point in the current frame. The search for the maximum is realized inside a bounding box that encompasses all projected gaze fixations $\hat{g}_{N,n}$, $n = N - \Delta t, \dots, N$, using a full-search method with pixel accuracy. An example of an estimated gaze point in a frame is presented in Figure 4; see the bright disk of the largest diameter.
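A minimal sketch of this step with the scikit-learn estimator cited in [29] is given below (the grid search over the bounding box mirrors the full search described above; the bandwidth is the default value of 1):

import numpy as np
from sklearn.neighbors import KernelDensity

def estimate_gaze_point(projected_points, step=1.0):
    """Return the most probable gaze location as the maximum of a Gaussian KDE,
    searched with pixel accuracy over the bounding box of the projected points."""
    pts = np.asarray(projected_points, dtype=float)          # shape (L_N, 2)
    kde = KernelDensity(kernel="gaussian", bandwidth=1.0).fit(pts)

    (x0, y0), (x1, y1) = pts.min(axis=0), pts.max(axis=0)
    xs = np.arange(x0, x1 + 1, step)
    ys = np.arange(y0, y1 + 1, step)
    grid = np.array([(x, y) for y in ys for x in xs])         # full-search candidates

    scores = kde.score_samples(grid)                          # log-density at each candidate
    return tuple(grid[np.argmax(scores)])

print(estimate_gaze_point([(100, 120), (103, 118), (101, 121), (150, 90)]))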

2.4. Gaze-Driven Object Recognition CNN

This module recognizes the object location and type (e.g., bowl, pan, etc.) in a first-person video frame. A limited number of bounding boxes of different scales is generated around the estimated gaze fixation point to localize the object. The module’s input is thus the estimated gaze fixation point $\hat{g}_N$, the frame cropped around the estimated gaze fixation, and the possible bounding boxes of the object generated around $\hat{g}_N$; see the second block in Figure 2.
In the current work, 9 bounding boxes (BBs) are generated with different scale and shape factors. The size of the cropped frame is 300 × 300 px [4]. For the BB sizes, we considered widths and heights between 67 and 223 px, in accordance with the frame resolution and the typical object sizes in egocentric visual scenes.
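A small sketch of this bounding-box generation is given below; the three widths and heights (67, 145, and 223 px) are illustrative choices within the stated range, not the exact factors used in the implementation:

import itertools
import numpy as np

def generate_bounding_boxes(gaze_xy, sizes=(67, 145, 223), crop=300):
    """Generate 9 candidate boxes (3 widths x 3 heights) centred on the estimated
    gaze point and clipped to the 300 x 300 cropped frame."""
    gx, gy = gaze_xy
    boxes = []
    for w, h in itertools.product(sizes, sizes):
        x1, y1 = np.clip(gx - w / 2, 0, crop), np.clip(gy - h / 2, 0, crop)
        x2, y2 = np.clip(gx + w / 2, 0, crop), np.clip(gy + h / 2, 0, crop)
        boxes.append((x1, y1, x2, y2))
    return np.array(boxes)          # shape (9, 4), one (x1, y1, x2, y2) row per box

print(generate_bounding_boxes((150, 150)).shape)   # (9, 4)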
Recognition of the object is carried out by a CNN classifier applied to each of the generated bounding boxes. The BB with the maximum score is thus considered as the object location.
Figure 5 shows the structure of the gaze-driven CNN. The backbone consists of the first four layers of a Resnet50; see the left-most block in Figure 5. These layers serve as feature extractors for the input image. The input of the backbone is a cropped video frame of size 300 px × 300 px × 3. The output is a 1024 × 19 × 19 feature tensor.
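Such a truncated backbone can be obtained from a standard torchvision Resnet50, as in the sketch below (the exact cut used in this work is assumed to be after the third residual stage, which is what yields the 1024 × 19 × 19 tensor for a 300 × 300 input):

import torch
import torchvision

# Keep the Resnet50 stem and its first three residual stages; drop layer4, avgpool, and fc.
resnet = torchvision.models.resnet50()
backbone = torch.nn.Sequential(*list(resnet.children())[:-3])

x = torch.randn(1, 3, 300, 300)            # a cropped 300 x 300 RGB frame
with torch.no_grad():
    features = backbone(x)
print(features.shape)                      # torch.Size([1, 1024, 19, 19])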
Not all feature channels are equally important for object classification when using the backbone. To select the most important ones, and to reduce the computational burden of the remaining part of the network, we introduce a reduction layer (RL). It reduces the number of channels in the input tensor to a given channel number CH (in our case, CH ∈ {32, 64, 96, 128, 256, 512, 1024}).
The input of the RL is the backbone output tensor of dimension 1024 × 19 × 19. The RL applies a 2D convolution [30] over an input signal composed of several input planes. Assuming that the input has shape $(M, C_{in}, H, W)$ and the output has shape $(M, C_{out}, H_{out}, W_{out})$, the RL can be precisely described as follows:

$$\mathrm{out}(M_i, C_{out_j}) = \mathrm{bias}(C_{out_j}) + \sum_{k=0}^{C_{in}-1} \mathrm{weight}(C_{out_j}, k) \star \mathrm{input}(M_i, k)$$

where $\star$ is the valid 2D cross-correlation operator, M is the batch size, C denotes the number of channels, H is the height of the input planes in pixels, and W is the width in pixels.
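In PyTorch terms, the RL is a single Conv2d from 1024 to CH channels; a minimal sketch is given below, assuming a 1 × 1 kernel so that the 19 × 19 spatial resolution is preserved:

import torch
import torch.nn as nn

class ReductionLayer(nn.Module):
    """Reduce the 1024 backbone feature channels to CH channels with a 2D convolution."""
    def __init__(self, in_channels=1024, out_channels=128):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=1)

    def forward(self, x):
        return self.conv(x)

features = torch.randn(1, 1024, 19, 19)        # backbone output
reduced = ReductionLayer(1024, 128)(features)
print(reduced.shape)                           # torch.Size([1, 128, 19, 19])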
The bounding boxes generated around the estimated gaze fixation point and the feature tensor with the reduced number of channels (CH × 19 × 19) are the inputs of the Faster R-CNN module [31] (ROI heads). The module predicts the object type and location as a 17 × 9 tensor, as we have 9 BBs (see Figure 6) and work with a 17-class taxonomy comprising 16 object classes and a rejection class, as in [4]. This tensor contains the probability of each bounding box for each class.
$$\mathrm{output}_{ROI\,heads} = \begin{pmatrix} P_{11} & P_{12} & P_{13} & \cdots & P_{1B} \\ P_{21} & P_{22} & P_{23} & \cdots & P_{2B} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ P_{C1} & P_{C2} & P_{C3} & \cdots & P_{CB} \end{pmatrix}$$
Equation (2) is the output tensor of the ROI heads (Faster R-CNN [31]), where the rows correspond to the C categories and the columns to the B bounding boxes.
The class scores of bounding boxes are aggregated, as in [4], by multiple instance learning [32] (MIL). The input of the MIL aggregation is the output tensor of the Faster R-CNN [31]. The module predicts the class of the frame. The frame-level score ( y ^ ( f , c ) ) is calculated as shown in Equation (3).
$$\hat{y}(f,c) = \frac{1}{\gamma} \log\left(\sum_{b=1}^{BB_f} e^{\gamma\, y(b,c)}\right)$$
Here, f is the frame, c is the class, b indexes the $BB_f$ bounding boxes of the frame, and $y(b,c)$ is the score of the bounding box. $\gamma$ is an open parameter.
The MIL aggregation produces the vector of frame-level scores for the object categories. This vector is finally transformed into the vector of object probabilities using a simple softmax operator: $p(f,c) = \mathrm{softmax}(\hat{y}(f,c))$.
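The aggregation of Equation (3), followed by the softmax, reduces in code to a log-sum-exp over the bounding boxes; a minimal PyTorch sketch is given below (the value of γ is an assumption):

import torch

def mil_aggregate(box_scores, gamma=1.0):
    """MIL aggregation of per-bounding-box class scores (Equation (3)).
    box_scores has shape (C, B): C classes, B bounding boxes of the frame."""
    frame_scores = torch.logsumexp(gamma * box_scores, dim=1) / gamma   # y_hat(f, c), shape (C,)
    return torch.softmax(frame_scores, dim=0)                           # p(f, c)

roi_output = torch.randn(17, 9)     # 17 classes x 9 bounding boxes, as in this work
probs = mil_aggregate(roi_output)
print(probs.sum())                  # tensor(1.)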

3. System Hybridization

To propose a hybridization of the system compatible with real-time performance, we have conducted thorough time measurements on different CPUs and processors to identify the most time-critical modules. The bottleneck is the scale invariant feature transform (SIFT) detector, which is required in our system for the geometric alignment of gaze points; see Figure 2. The main steps of SIFT are the following: scale-space extrema detection, keypoint localization, orientation assignment, and descriptor generation. For hardware acceleration, we have chosen the Xilinx UltraScale+ ZCU102 [33] FPGA board, as it supports parallel execution and its energy consumption is very low. In our previous work [5], we proposed an SIFT detector on FPGA. It uses a non-maximum suppression method to filter out keypoints that are too close together, instead of the Taylor expansion of the keypoint localization step. We reuse this implementation in the present work.
The other complex module is the CNN for object recognition. Nevertheless, CNN is pre-trained offline for a given set of object categories. The spatial regularity of the CNN inference makes it ideal for FPGA implementation, and hundreds of papers have been published in this area in recent years. The proposed solutions can be divided into two classes: streaming architectures and parametrizable blocks.
The structure of the streaming architectures closely follows the data flow of the given network by connecting templated processing blocks in a pipeline. Input and output of the blocks are data streams (FIFO interfaces) and each operation in the network—e.g., convolution, pooling, nonlinear response, etc.—has a dedicated block for FPGA implementation [34].
The usual template parameters in the case of a convolution block are the number of input and output layers and the size of the convolution window. The input image is fed into the system in a row-wise order, which makes it possible to connect the network directly to a camera input. The latency of the resulting system is low because the convolution blocks can start processing as soon as the first rows required for the computation are available.
The main drawback of the streaming architecture is that all the weights for the computation must be stored on-chip, which is not possible for large networks. In addition, the computation load of the layers is very different. Therefore, different design optimization strategies must be used for each layer, which makes the design process complicated.
Another approach is to use a compiler to break down the entire CNN computation into a series of tensor operations and create parametrizable hardware blocks to execute them efficiently [24,35]. The fundamental building block of these architectures is a matrix–matrix multiplication block, which is usually extended by an additional functional unit to efficiently carry out other operations, such as max pooling and nonlinear transformations. The matrix–matrix multiplication is usually carried out by a systolic array of multiply–accumulate (MAC) units. A critical part of the system is the compiler, which is also responsible for the optimal scheduling of the tensor operations. The input image, network weights, and partial results are stored in off-chip memory, so the network size is not limited by the size of the FPGA device. On the other hand, the latency of the CNN computation is higher in this case, because the entire image frame must be captured and stored in memory before processing starts. The performance of the system might also be limited by the available off-chip memory bandwidth.
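The reduction of a convolution to a matrix–matrix product, which such accelerators execute on their systolic MAC array, can be illustrated with a simple im2col sketch (a conceptual example, not the accelerator's actual code path):

import numpy as np

def conv2d_as_gemm(x, w):
    """Express a valid 2D cross-correlation as one matrix-matrix multiplication (im2col).
    x: input of shape (C_in, H, W); w: weights of shape (C_out, C_in, k, k)."""
    c_in, h, w_in = x.shape
    c_out, _, k, _ = w.shape
    h_out, w_out = h - k + 1, w_in - k + 1

    # im2col: every k x k patch of the input becomes one column of the data matrix
    cols = np.empty((c_in * k * k, h_out * w_out))
    for i in range(h_out):
        for j in range(w_out):
            cols[:, i * w_out + j] = x[:, i:i + k, j:j + k].ravel()

    gemm = w.reshape(c_out, -1) @ cols          # a single large matrix-matrix product
    return gemm.reshape(c_out, h_out, w_out)

x = np.random.randn(3, 8, 8)
w = np.random.randn(4, 3, 3, 3)
print(conv2d_as_gemm(x, w).shape)               # (4, 6, 6)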
Taking into account the real-time constraints and the power dissipation, we implement a hybrid solution both for the preliminary processing steps feeding the gaze-driven CNN and for the CNN itself. Referring to Figure 2, the hybridization of the preliminary steps is given in Table 1.
As for the gaze-driven CNN implementation, in accordance with the time measurements for real-time compatibility and with the simplification of the R-CNN input by the channel-number reduction we proposed (see Section 2.4), only the ResNet backbone is implemented on FPGA, as depicted in Figure 5. The details of all modules, from the input of the CNN to the final aggregation of decisions by MIL, are given in Table 2 below.
The reference software implementation of the system was executed on a four-core Intel i5 7300HQ [36] laptop CPU running at 2.5 GHz. This software system is also compiled for the four-core ARM Cortex A53 [37] processor system (PS) of the Xilinx Zynq UltraScale+ XCZU9EG device on the ZCU102 development board. Based on these measurements, the system was partitioned between the PS and the programmable logic (PL) parts of the device. Specialized accelerator circuits were designed for the modules of the proposed system, which cannot be executed fast enough on the ARM Cortex A53 processors. A traditional register-transfer-level (RTL)-based design of a digital circuit is time consuming; therefore, the Xilinx Vitis HLS system was used to create the FPGA-based circuits from a high-level C/C++ description.
We give our measures justifying these choices and the overall results in the next section.

4. Results

In this section, we discuss the measured computing time of the different steps of the proposed algorithm.

4.1. Dataset

The GITW [18] dataset contains egocentric videos recorded by a camera on the eye tracker glasses. It includes the gaze points indicating where the person was looking at each moment. The videos were recorded in the wild, in real kitchens, by different subjects, and each video shows a subject grasping a kitchen object.
The acquisition device was Tobii Glasses 2 (eye tracker) with an egocentric scene camera. The Tobii Glasses video resolution is HD (1280 px × 720 px), and the video frame rate is 25 fps. There are 16 different kitchen objects in the videos (the classes listed in Table 4), such as bowl, plate, wash liquid, vinegar bottle, milk bottle, oil bottle, glass, lid, saucepan, frying pan, and mug. Different subjects recorded the dataset in five different kitchens. The videos are short, around 10 s long. The GITW [18] dataset contains 404 videos overall and is freely available for research.
We carried out the time measurements on a subset of the GITW dataset containing fifteen videos of “grasping a bowl” actions, recorded by four different subjects. The kitchen environments are of different complexity, from a scene with just a few objects, such as the BowlPlace1 videos, to a highly cluttered scene, such as BowlPlace4. The bowl class has a strong intra-class variance: different colours and materials, including a transparent bowl. The lighting conditions and the visibility also differ. Moreover, we sometimes obtained strong blurring due to the motion of the body-worn camera.

4.2. Geometric Alignment Measurements

For the completeness of the time measures of the whole system, we present here the results of our previous work [22]. The time measures of the geometric alignment module are given in Table 3. The OpenCV [38] library, version 4.5.5, was used in this experiment. The geometric alignment consists of an SIFT [27] keypoint extractor, an FLANN matcher [28], and a homography estimator. In the first part of Table 3, we give measures on the embedded ZCU102 platform. The left-most column of Table 3 contains the name of the video file. The SIFT points were detected in a mask centred on the estimated gaze fixation point in each frame. The radius of the mask was chosen to encompass approximately 100 points. The second column contains the mean mask radius with its standard deviation. For the geometric alignment by homography, we detected keypoints in two video frames: the current and the previous reference frame. In the next columns, we give time figures on the ARM A53 processors for keypoint (KP) computation on one frame, the matching time, and the homography computation time.
In the second part of Table 3, the second column contains the number of SIFT points detected within the corresponding mask radius, again presented as the mean and standard deviation over the whole video. The time figures there are given for the general-purpose Intel processor.
The matcher, the homography estimator, and the gaze projection on the ZCU102 are fast enough for real-time processing, as illustrated in Table 3. The worst case was 0.024 s for the FLANN matcher [28], which still corresponds to a rate of about 40 fps. This speed is enough for controlling a robotic arm.
However, the SIFT keypoint extractor was slower than the required processing time. While the worst case on the Intel i5 7300HQ CPU took 0.072 s, which is around 13.8 fps, on the ARM A53 it took 0.866 s, which is around 1.15 fps. For real-time processing, a rate of at least 10 fps is required.

4.3. Kernel Density Estimation

Table 4 compares the estimated KDE computation time on the Intel i5-7300HQ and on the Xilinx ZCU102 ARM Cortex A53. The second column contains the number of gaze points available for a frame gaze point estimation. The Intel i5-7300HQ [36] computes the KDE at 80 fps on average, and the ARM A53 [37] computes it at 7.9 fps on average. In some critical cases, when the scattering of the subject's gaze fixations is too strong, the computation time exceeds the real-time budget, reaching up to 3.8 s per frame; see the “Lid” sequence. Evidently, in such cases of highly cluttered scenes and problematic ocular movements, our system shows its limits.
The problem is caused by outlier gaze fixation points, which fall far away from the majority, increasing the KDE search area. The solution might be to use a simple clustering algorithm to find the outlier gaze fixation points and discard them. Since only the last 10 gaze fixation points are used, we think this clustering can be carried out in a short time.
However, if the projected gaze fixations in the current frame are sufficiently close (in the radius of 10 pixels approximately, which is the “normal case”), the ARM A53 [37] can compute the KDE in real-time.

4.4. Bounding Box Generation Time Measurements

The bounding box generation is fast on the Intel i5 7300HQ CPU. On average, one frame is processed in 0.424 ± 0.020 ms, which corresponds to more than 2300 fps. The embedded ARM A53 processor is also fast enough to generate the bounding boxes in real-time: the average computation time was 2.659 ± 0.027 ms, which is around 376 fps.

4.5. Gaze-Driven Object-Recognition CNN Time Measurements

Here, all measurements were taken with PyTorch 1.6 [39].
The measurements in Table 5 show that the most time-consuming part of the CNN is the Resnet50 backbone. In every case, the backbone needs about 0.09 s per frame on the Intel i5 7300 CPU, which corresponds to roughly 11 fps. On the ARM A53 processor (see Table 6), this time, given in the second column, is even higher: about 1.8 s, i.e., roughly 0.55 fps. This is below the required computational speed. A higher channel number increases the computational complexity of the reduction layer and of the region of interest (ROI) heads, as shown in Table 5 and Table 6. Nevertheless, with a reasonable number of channels after the reduction, not exceeding 128, these blocks run in real-time, with 82 fps for the channel reduction and 25 fps for the ROI heads.
The slowest part of the system was, thus, the backbone; therefore, it was implemented in FPGA. The accelerated Resnet50 CNN on ZCU102 can process an image in 0.02686 s, which is 37.23 fps. This is high enough for real-time processing.
The measurements in Table 6 show the results of the ARM A53 CPU.

4.6. Gaze-Driven Faster RCNN Accuracy

As Table 7 and Figure 7 show, the current architecture performs sufficiently well on our real-world data. Reducing the number of channels to 128 does not degrade the classification accuracy much compared with the initial 1024 feature channels of the backbone, as we can see from Table 7. The average accuracy and loss are computed per class of objects.
Table 8 shows a comparison between our method and state-of-the-art object recognition methods. The state-of-the-art detectors, lightweight YOLO V3 [20] and SSD Mobilnet V2 [12], are trained on the COCO and VOC datasets. Our target is a specific and very cluttered kitchen environment; for this reason, we do not think that these object detectors are suitable in our case. From the computational time point of view [40], implemented on the same architecture, they are somewhat faster: 13.2 fps for YOLO V3 [20] and 78.8 fps for SSD Mobilnet V2 [12]. In our work, we take advantage of the availability of gaze fixations in real-time, which drive the object localization. However, the current implementation of KDE on the CPU makes the system slower: we reach 12.64 fps for object recognition and localization. The bottleneck is the KDE estimation, which we are now improving. Nevertheless, our current computation times are compatible with real-time prosthesis control.

4.7. Time Measurement of the Whole System

Table 9 gives the average computational time of the system in milliseconds. The first column contains the module name, the second column the Intel i5 7300HQ [36] CPU results, the third column the ARM A53 [37] embedded CPU results, and the fourth column the hybrid (ZCU102 [33] FPGA and ARM A53 [37]) results.
The total computation time is 182.782 ms on the Intel i5 7300HQ, which corresponds to 5.471 fps. The ARM A53 [37] embedded CPU is the slowest, needing 2868.066 ms per frame, i.e., 0.349 fps. The hybrid embedded solution processes a frame in 236.507 ms, i.e., 4.228 fps. The hybrid embedded solution is thus nearly as fast as the Intel i5 7300HQ [36], while its power consumption is 5.6 W, far below the 45 W of the Intel i5 7300HQ [36] CPU.
The measurements show that the current experimental setup with the whole chain of modules is not yet suitable for real-time processing. However, by pipelining the modules, at the cost of some additional delay, the real-time processing speed is achievable.

5. Conclusions and Perspectives

In this paper, we have proposed a hybrid implementation of a visual analysis part for visual servoing of a prosthetic arm. The system was partitioned between the FPGA fabric and the ARM Cortex A53 processors of the Xilinx ZCU102 development board, based on the computing performance measurements of the building blocks. As a reference, the computing time of each image processing step was also measured on a laptop microprocessor and its power dissipation was estimated.
The measurements show that the gaze point alignment steps are fast enough on the ARM Cortex A53 [37] embedded CPU, except for the SIFT [27] point extraction step. Therefore, the SIFT [27] detection module is implemented on the programmable logic part of the Xilinx ZCU102 [33] FPGA board.
In some cases, we found that the variance of the computing time of the KDE in our current setup is very high and slows down processing. In these scenes, most of the gaze points are located over the object to grasp, except for one or two that are scattered around the image due to saccadic eye movements. To overcome this problem, we plan to apply outlier filtering by clustering before the KDE computation.
The gaze-driven CNN is built on 4 different modules: Resnet50 [13], reduction layer, Faster R-CNN [10], and multiple instance learning (MIL) aggregation. Resnet50 [13] was accelerated on FPGA because the measured computational speed on the ARM Cortex A53 processor was only 0.55 fps, which was improved to 37.23 fps. The Faster R-CNN is also slow, providing only 3.5 fps when the number of input channels is 1024. We thus proposed a new reduction layer between the Resnet50 [13] and the Faster R-CNN [10] to reduce the number of input channels for the latter block. The frame rate can be increased to 25 fps when the number of input channels for the Faster R-CNN is reduced to 128 by the reduction layer. The experiments show that the accuracy using only 128 channels is still high enough for the bounding box computation.
The experimental setup, with the whole chain of modules, is not yet suitable for real-time processing (236.507 ms on average, or approximately 4 fps). However, this computing time can be improved by pipelining the system and processing different frames at each stage, because each block can finish processing an image within 40 ms. The drawback of pipelining is increased latency. The latency of our current system is around 250 ms, which is higher than the latency allowed for the control of the robotic arm (∼100 ms) and is mainly caused by the KDE block. In the future, the KDE search algorithm will be optimized.
The power consumption and processing speed for the different architectures show that the embedded system, accelerated with FPGA, is a feasible solution for creating a wearable device.

Author Contributions

Conceptualization, A.F., Z.N., J.B.-P., P.S., A.d.R. and J.-P.D.; methodology, A.F., Z.N., J.B.-P., P.S., A.d.R. and J.-P.D.; software, A.F.; validation, A.F., Z.N., J.B.-P., P.S., A.d.R. and J.-P.D.; formal analysis, A.F., Z.N., J.B.-P. and P.S.; investigation, A.F.; writing—review and editing, A.F., Z.N., J.B.-P. and P.S.; visualization, A.F.; supervision, Z.N., J.B.-P., P.S. and A.d.R.; funding acquisition, P.S., J.B.-P. and J.-P.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Thematic Excellence Programme 2019 grant number TUDFO/ 51757-1/2019-ITM and LABRI UMR CNRS 5800 grant.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Publicly available datasets were analyzed in this study. This data can be found here: Grasping-in-the-wild (GITW) dataset at NAKALA CNRS server https://www.labri.fr/projet/AIV/graspinginthewild.php, accessed on 30 December 2021.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
BB: bounding box
Resnet: deep residual networks
CNN: convolutional neural network
SIFT: scale invariant feature transform
KDE: kernel density estimation
MIL: multiple instance learning
FPGA: field-programmable gate array
FPS: frames per second
FLANN: fast library for approximate nearest neighbors
Faster R-CNN: faster regions with CNN
KP: keypoints
GITW: grasping-in-the-wild dataset
DNN: deep neural network
EMG: electromyography
YOLO: you only look once
SSD: single shot detector
VGGnet: very deep convolutional networks
DSP: digital signal processing
MAC: multiply–accumulate
MADD: multiply–add
RL: reduction layer

References

  1. Kanishka Madusanka, D.G.; Gopura, R.A.R.C.; Amarasinghe, Y.W.R.; Mann, G.K.I. Hybrid Vision Based Reach-to-Grasp Task Planning Method for Trans-Humeral Prostheses. IEEE Access 2017, 5, 16149–16161.
  2. Mick, S.; Segas, E.; Dure, L.; Halgand, C.; Benois-Pineau, J.; Loeb, G.E.; Cattaert, D.; de Rugy, A. Shoulder kinematics plus contextual target information enable control of multiple distal joints of a simulated prosthetic arm and hand. J. Neuroeng. Rehabil. 2021, 18, 3.
  3. Han, M.; Günay, S.Y.; Schirner, G.; Padır, T.; Erdoğmuş, D. HANDS: A multimodal dataset for modeling toward human grasp intent inference in prosthetic hands. Intell. Serv. Robot. 2020, 13, 179–185.
  4. González-Díaz, I.; Benois-Pineau, J.; Domenger, J.P.; Cattaert, D.; de Rugy, A. Perceptually-guided deep neural networks for ego-action prediction: Object grasping. Pattern Recognit. 2019, 88, 223–235.
  5. Fejér, A.; Nagy, Z.; Benois-Pineau, J.; Szolgay, P.; de Rugy, A.; Domenger, J.P. Implementation of Scale Invariant Feature Transform detector on FPGA for low-power wearable devices for prostheses control. Int. J. Circ. Theor. Appl. 2021, 49, 2255–2273. Available online: https://onlinelibrary.wiley.com/doi/pdf/10.1002/cta.3025 (accessed on 30 December 2021).
  6. Hussein, M.T. A review on vision-based control of flexible manipulators. Adv. Robot. 2015, 29, 1575–1585.
  7. Mick, S.; Lapeyre, M.; Rouanet, P.; Halgand, C.; Benois-Pineau, J.; Paclet, F.; Cattaert, D.; Oudeyer, P.Y.; de Rugy, A. Reachy, a 3D-Printed Human-Like Robotic Arm as a Testbed for Human-Robot Control Strategies. Front. Neurorobotics 2019, 13, 65.
  8. Scott, S.H. A functional taxonomy of bottom-up sensory feedback processing for motor actions. Trends Neurosci. 2016, 39, 512–526.
  9. Miall, R.C.; Jackson, J.K. Adaptation to visual feedback delays in manual tracking: Evidence against the Smith Predictor model of human visually guided action. Exp. Brain Res. 2006, 172, 77–84.
  10. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 1137–1149.
  11. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. arXiv 2016, arXiv:1506.02640.
  12. Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.Y.; Berg, A.C. SSD: Single Shot MultiBox Detector. In European Conference on Computer Vision; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2016; pp. 21–37.
  13. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
  14. Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv 2015, arXiv:1409.1556.
  15. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet Classification with Deep Convolutional Neural Networks. Commun. ACM 2017, 60, 84–90.
  16. Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications. arXiv 2017, arXiv:1704.04861.
  17. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 1–9.
  18. Grasping in the Wild. Available online: https://www.labri.fr/projet/AIV/dossierSiteRoBioVis/GraspingInTheWildV2.htm (accessed on 30 December 2021).
  19. Huang, J.; Rathod, V.; Sun, C.; Zhu, M.; Korattikara, A.; Fathi, A.; Fischer, I.; Wojna, Z.; Song, Y.; Guadarrama, S.; et al. Speed/accuracy trade-offs for modern convolutional object detectors. arXiv 2017, arXiv:1611.10012.
  20. Redmon, J.; Farhadi, A. YOLO9000: Better, Faster, Stronger. arXiv 2016, arXiv:1612.08242.
  21. Fejér, A.; Nagy, Z.; Benois-Pineau, J.; Szolgay, P.; de Rugy, A.; Domenger, J.P. FPGA-based SIFT implementation for wearable computing. In Proceedings of the 2019 IEEE 22nd International Symposium on Design and Diagnostics of Electronic Circuits Systems (DDECS), Cluj-Napoca, Romania, 24–26 April 2019; pp. 1–4.
  22. Fejér, A.; Nagy, Z.; Benois-Pineau, J.; Szolgay, P.; de Rugy, A.; Domenger, J.P. Array computing based system for visual servoing of neuroprosthesis of upper limbs. In Proceedings of the 2021 17th International Workshop on Cellular Nanoscale Networks and their Applications (CNNA), Catania, Italy, 29 September–1 October 2021; pp. 1–5.
  23. Kathail, V. Xilinx Vitis Unified Software Platform; FPGA ’20; Association for Computing Machinery: New York, NY, USA, 2020; pp. 173–174.
  24. Moreau, T.; Chen, T.; Vega, L.; Roesch, J.; Yan, E.; Zheng, L.; Fromm, J.; Jiang, Z.; Ceze, L.; Guestrin, C.; et al. A Hardware–Software Blueprint for Flexible Deep Learning Specialization. IEEE Micro 2019, 39, 8–16.
  25. Pappalardo, A. Xilinx/Brevitas 2021. Available online: https://zenodo.org/record/5779154#.YgNP6fgRVPY (accessed on 30 December 2021).
  26. Umuroglu, Y.; Fraser, N.J.; Gambardella, G.; Blott, M.; Leong, P.; Jahre, M.; Vissers, K. FINN: A Framework for Fast, Scalable Binarized Neural Network Inference. In Proceedings of the 2017 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays, Monterey, CA, USA, 22–24 February 2017.
  27. Lowe, D.G. Distinctive Image Features from Scale-Invariant Keypoints. Int. J. Comput. Vis. 2004, 60, 91–110.
  28. OpenCV. A FLANN-Based Matcher Tutorial. Available online: https://docs.opencv.org/3.4/d5/d6f/tutorial_feature_flann_matcher.html (accessed on 30 December 2021).
  29. Scikit-Learn. 2.8. Density Estimation. Available online: https://scikit-learn.org/stable/modules/density.html (accessed on 30 December 2021).
  30. PyTorch Conv2d. Available online: https://pytorch.org/docs/stable/generated/torch.nn.Conv2d.html (accessed on 30 December 2021).
  31. Girshick, R. Fast R-CNN. arXiv 2015, arXiv:1504.08083.
  32. Amores, J. Multiple instance classification: Review, taxonomy and comparative study. Artif. Intell. 2013, 201, 81–105.
  33. UG1182: ZCU102 Evaluation Board—User Guide. Available online: https://www.xilinx.com/support/documentation/boards_and_kits/zcu102/ug1182-zcu102-eval-bd.pdf (accessed on 30 December 2021).
  34. Blott, M.; Preußer, T.B.; Fraser, N.J.; Gambardella, G.; O’Brien, K.; Umuroglu, Y.; Leeser, M.; Vissers, K. FINN-R: An End-to-End Deep-Learning Framework for Fast Exploration of Quantized Neural Networks. ACM Trans. Reconfigurable Technol. Syst. 2018, 11, 1–23.
  35. Xilinx. PG338—Zynq DPU v3.3 IP Product Guide (v3.3). Available online: https://www.xilinx.com/support/documentation/ip_documentation/dpu/v3_3/pg338-dpu.pdf (accessed on 30 December 2021).
  36. Intel i5 7300HQ. Available online: https://ark.intel.com/content/www/us/en/ark/products/97456/intel-core-i57300hq-processor-6m-cache-up-to-3-50-ghz.html (accessed on 30 December 2021).
  37. Zynq UltraScale+ MPSoC Data Sheet: Overview. Available online: https://www.xilinx.com/support/documentation/data_sheets/ds891-zynq-ultrascale-plus-overview.pdf (accessed on 30 December 2021).
  38. Bradski, G. The OpenCV Library. Dr. Dobb’s J. Softw. Tools 2000, 25, 120–123.
  39. Paszke, A.; Gross, S.; Massa, F.; Lerer, A.; Bradbury, J.; Chanan, G.; Killeen, T.; Lin, Z.; Gimelshein, N.; Antiga, L.; et al. PyTorch: An Imperative Style, High-Performance Deep Learning Library. arXiv 2019, arXiv:1912.01703.
  40. UG1431 (v1.4): ZCU102 Evaluation Kit. Available online: https://www.xilinx.com/html_docs/vitis_ai/1_4/ctl1565723644372.html (accessed on 30 December 2021).
Figure 1. Example of the residual block in Resnet.
Figure 2. The visually guided prosthetic arm system.
Figure 3. Example of gaze point alignment for the BowlPlace4Subject2 sequence. The points are the gaze points.
Figure 4. Example of KDE gaze point estimation for the BowlPlace4Subject2 sequence. The points are the gaze points, and the white point is the estimated gaze point.
Figure 5. Gaze-driven, object-recognition CNN, where CH is the number of output channels of the reduction layer.
Figure 6. Example of bounding boxes generated for the BowlPlace1Subject1 sequence. The bounding boxes are generated around the red bowl.
Figure 7. Training accuracy (in blue) and loss (in red) during 30 epochs. The top-left panel corresponds to a reduction layer with 32 channels, and the panel next to it to 64 channels. The vertical axis is the training loss or the accuracy, depending on the curve. The horizontal axis is the epoch number.
Table 1. Hybridization of the preliminary steps in the pipeline, which contains two main blocks, the gaze-point alignment block and the gaze-point noise reduction block, with their submodules.

Module | CPU | FPGA
Gaze-Point Alignment Block
SIFT Detection [5] | - | X
SIFT Matching | X | -
Homography estimation | X | -
Gaze-point projection | X | -
Gaze-Point Noise Reduction Block
KDE estimation | X | -
Table 2. Hybridization of the gaze-driven CNN.

Module | CPU | FPGA
Resnet50 | - | X
Reduction layer | X | -
Faster R-CNN | X | -
MIL aggregation | X | -
Table 3. Comparison between the Intel i5 7300HQ and the Xilinx ZCU102 ARM Cortex A53.

Xilinx ZCU102 ARM Cortex A53
Video File Name | Mask Radius | SIFT KP Extractions (ms) | Matcher (ms) | Homography (ms) | Gaze Projection (ms)
BowlPlace1Subject1 | 119 ± 25 | 875.504 ± 12.123 | 23.471 ± 5.203 | 2.200 ± 0.540 | 0.089 ± 0.004
BowlPlace1Subject2 | 106 ± 16 | 875.282 ± 9.504 | 20.036 ± 3.704 | 1.900 ± 0.398 | 0.088 ± 0.001
BowlPlace1Subject3 | 153 ± 50 | 873.072 ± 7.283 | 17.626 ± 3.276 | 2.539 ± 0.621 | 0.089 ± 0.001
BowlPlace1Subject4 | 120 ± 25 | 873.545 ± 9.062 | 22.244 ± 5.938 | 2.160 ± 0.464 | 0.092 ± 0.009
BowlPlace4Subject1 | 158 ± 55 | 855.947 ± 6.583 | 16.011 ± 3.053 | 2.883 ± 1.188 | 0.088 ± 0.001
BowlPlace4Subject2 | 117 ± 24 | 861.933 ± 5.821 | 16.276 ± 2.623 | 1.997 ± 0.449 | 0.089 ± 0.004
BowlPlace4Subject3 | 108 ± 19 | 867.649 ± 8.894 | 15.679 ± 4.620 | 2.136 ± 0.350 | 0.089 ± 0.005
BowlPlace4Subject4 | 147 ± 49 | 857.271 ± 9.468 | 16.762 ± 4.186 | 2.240 ± 0.516 | 0.088 ± 0.001
BowlPlace5Subject1 | 120 ± 33 | 861.481 ± 8.012 | 17.875 ± 2.176 | 2.018 ± 0.505 | 0.088 ± 0.001
BowlPlace5Subject2 | 133 ± 42 | 858.547 ± 6.232 | 17.944 ± 3.024 | 2.354 ± 0.880 | 0.088 ± 0.001
BowlPlace5Subject3 | 126 ± 33 | 859.774 ± 6.384 | 15.742 ± 2.836 | 2.007 ± 0.524 | 0.087 ± 0.001
BowlPlace6Subject1 | 120 ± 25 | 867.344 ± 10.950 | 19.026 ± 3.862 | 1.965 ± 0.306 | 0.088 ± 0.001
BowlPlace6Subject2 | 129 ± 35 | 862.750 ± 9.731 | 19.737 ± 4.973 | 3.681 ± 3.456 | 0.090 ± 0.008
BowlPlace6Subject3 | 127 ± 31 | 864.429 ± 6.931 | 17.555 ± 3.806 | 2.588 ± 0.823 | 0.087 ± 0.001
BowlPlace6Subject4 | 112 ± 22 | 867.962 ± 9.579 | 17.368 ± 4.725 | 2.710 ± 0.649 | 0.089 ± 0.004

Intel i5 7300HQ
Video File Name | Number of Extracted KP | SIFT KP Extractions (ms) | Matcher (ms) | Homography (ms) | Gaze Projection (ms)
BowlPlace1Subject1 | 151 ± 67 | 74.205 ± 5.611 | 3.891 ± 0.853 | 0.259 ± 0.051 | 0.015 ± 10⁻⁴
BowlPlace1Subject2 | 156 ± 37 | 75.062 ± 5.640 | 3.304 ± 0.579 | 0.228 ± 0.040 | 0.014 ± 10⁻⁴
BowlPlace1Subject3 | 86 ± 50 | 72.217 ± 2.572 | 3.011 ± 0.476 | 0.282 ± 0.055 | 0.014 ± 10⁻⁴
BowlPlace1Subject4 | 138 ± 69 | 72.979 ± 2.853 | 3.717 ± 0.940 | 0.252 ± 0.044 | 0.015 ± 0.002
BowlPlace4Subject1 | 94 ± 50 | 70.068 ± 2.405 | 2.747 ± 0.565 | 0.313 ± 0.113 | 0.014 ± 10⁻⁴
BowlPlace4Subject2 | 121 ± 28 | 72.280 ± 3.538 | 2.778 ± 0.407 | 0.233 ± 0.040 | 0.015 ± 10⁻⁴
BowlPlace4Subject3 | 126 ± 39 | 73.402 ± 3.406 | 2.678 ± 0.728 | 0.256 ± 0.047 | 0.014 ± 10⁻⁴
BowlPlace4Subject4 | 95 ± 50 | 70.394 ± 2.349 | 2.872 ± 0.695 | 0.259 ± 0.051 | 0.014 ± 10⁻⁴
BowlPlace5Subject1 | 129 ± 39 | 71.990 ± 2.691 | 3.027 ± 0.369 | 0.244 ± 0.050 | 0.015 ± 10⁻⁴
BowlPlace5Subject2 | 120 ± 56 | 71.587 ± 2.526 | 3.077 ± 0.573 | 0.272 ± 0.087 | 0.014 ± 10⁻⁴
BowlPlace5Subject3 | 108 ± 36 | 71.359 ± 2.500 | 2.684 ± 0.448 | 0.234 ± 0.049 | 0.015 ± 0.001
BowlPlace6Subject1 | 132 ± 48 | 72.150 ± 2.891 | 3.213 ± 0.645 | 0.237 ± 0.031 | 0.015 ± 10⁻⁴
BowlPlace6Subject2 | 129 ± 59 | 71.790 ± 3.934 | 3.348 ± 0.823 | 0.390 ± 0.316 | 0.015 ± 10⁻⁴
BowlPlace6Subject3 | 114 ± 47 | 72.042 ± 2.883 | 2.976 ± 0.617 | 0.287 ± 0.076 | 0.015 ± 0.001
BowlPlace6Subject4 | 138 ± 44 | 74.585 ± 4.431 | 3.089 ± 0.849 | 0.303 ± 0.075 | 0.015 ± 10⁻⁴
Table 4. Comparison of the processing time of the kernel density estimation module between the Intel i5 7300HQ and the Xilinx ZCU102 ARM Cortex A53.

Video File Name | Gaze Points | ARM Cortex A53 Time (ms) | ARM Cortex A53 Max Time (ms) | Intel i5 7300HQ Time (ms) | Intel i5 7300HQ Max Time (ms)
Bowl | 22 ± 8 | 49.27 ± 82.83 | 307.34 | 4.94 ± 7.68 | 27.90
CanOfCocaCola | 26 ± 11 | 75.54 ± 95.89 | 395.08 | 7.46 ± 8.80 | 36.70
FryingPan | 24 ± 9 | 59.09 ± 50.06 | 206.76 | 5.86 ± 4.51 | 18.98
Glass | 29 ± 10 | 148.22 ± 265.60 | 943.19 | 14.89 ± 26.23 | 92.21
Jam | 27 ± 12 | 132.75 ± 319.01 | 1365.65 | 13.34 ± 31.39 | 134.68
Lid | 29 ± 16 | 247.21 ± 718.32 | 3835.30 | 23.92 ± 70.97 | 379.64
MilkBottle | 28 ± 10 | 114.95 ± 148.60 | 647.86 | 11.20 ± 13.99 | 61.92
Mug | 28 ± 11 | 109.88 ± 218.40 | 1087.39 | 11.03 ± 21.26 | 106.63
OilBottle | 30 ± 12 | 235.15 ± 477.79 | 2117.26 | 22.86 ± 46.23 | 205.83
Plate | 32 ± 14 | 203.39 ± 406.91 | 1837.70 | 19.59 ± 39.46 | 178.97
Rice | 29 ± 13 | 90.34 ± 95.16 | 372.93 | 8.64 ± 8.92 | 35.80
SaucePan | 25 ± 12 | 139.07 ± 261.08 | 1286.11 | 13.68 ± 25.82 | 126.92
Sponge | 24 ± 10 | 50.05 ± 49.79 | 207.89 | 5.10 ± 4.76 | 20.46
Sugar | 27 ± 14 | 146.60 ± 271.58 | 1165.44 | 14.46 ± 26.70 | 117.57
VinegarBottle | 28 ± 13 | 122.32 ± 178.37 | 683.56 | 12.23 ± 17.71 | 70.01
WashLiquid | 28 ± 12 | 102.93 ± 183.02 | 880.47 | 10.42 ± 18.45 | 89.25
Table 5. Measurements of the gaze-driven, object-recognition CNN on the Intel i5 7300 CPU. The first column contains the remaining number of channels after the reduction layer. Each column shows the elapsed time during the computation in milliseconds.

Number of Channels | Backbone (ms) | Reduction Layer (ms) | ROI Heads (ms) | Aggregation (ms)
32 | 90.000 ± 0.250 | 0.336 ± 10⁻⁴ | 1.107 ± 10⁻⁴ | 0.137 ± 10⁻⁶
64 | 97.307 ± 1.613 | 0.531 ± 0.002 | 2.262 ± 0.004 | 0.138 ± 10⁻⁶
96 | 87.441 ± 0.508 | 0.557 ± 0.003 | 2.956 ± 0.003 | 0.241 ± 10⁻⁴
128 | 89.952 ± 2.568 | 0.646 ± 0.001 | 3.356 ± 0.001 | 0.142 ± 10⁻⁶
256 | 85.287 ± 0.375 | 0.908 ± 10⁻⁴ | 6.592 ± 0.002 | 0.150 ± 10⁻⁵
512 | 94.505 ± 2.100 | 2.485 ± 0.002 | 12.276 ± 0.002 | 0.159 ± 10⁻⁶
1024 | 95.515 ± 7.285 | 3.204 ± 0.007 | 23.718 ± 0.010 | 0.164 ± 10⁻⁶
Table 6. Measurements of the gaze-driven, object-recognition CNN on the ARM A53 CPU. The first column contains the remaining number of channels after the reduction layer. Each column shows the elapsed time during the computation in milliseconds.

Number of Channels | Backbone (ms) | Reduction Layer (ms) | ROI Heads (ms) | Aggregation (ms)
32 | 1863.300 ± 11.433 | 6.949 ± 0.001 | 13.843 ± 0.002 | 0.643 ± 0.001
64 | 1768.616 ± 15.615 | 8.156 ± 0.001 | 21.859 ± 0.006 | 0.708 ± 10⁻⁴
96 | 1787.737 ± 15.903 | 10.178 ± 0.001 | 30.705 ± 0.001 | 0.758 ± 10⁻⁶
128 | 1800.327 ± 17.915 | 12.140 ± 0.001 | 39.371 ± 0.002 | 0.727 ± 10⁻⁵
256 | 1797.798 ± 16.372 | 22.061 ± 0.011 | 73.750 ± 0.002 | 0.714 ± 10⁻⁴
512 | 1733.458 ± 14.429 | 33.723 ± 0.001 | 142.231 ± 0.001 | 0.752 ± 10⁻⁶
1024 | 1761.748 ± 16.305 | 63.319 ± 0.001 | 285.121 ± 0.002 | 0.714 ± 10⁻⁶
Table 7. The results of the training and testing after 30 epochs.

Number of Channels | 32 | 64 | 96 | 128 | 256 | 512 | 1024
avg loss on training set | 7.235 | 6.318 | 6.642 | 4.778 | 3.920 | 3.115 | 2.623
avg acc on training set | 0.815 | 0.877 | 0.827 | 0.963 | 0.988 | 1.000 | 1.000
avg acc on test set | 0.793 ± 0.261 | 0.926 ± 0.120 | 0.853 ± 0.161 | 0.952 ± 0.083 | 1.000 | 1.000 | 1.000
avg ap on test set | 0.978 ± 0.043 | 0.985 ± 0.030 | 0.964 ± 0.041 | 0.995 ± 0.012 | 1.000 | 1.000 | 1.000
Table 8. Comparison of different object recognition CNNs. All the measurements were taken by Vitis AI 1.4. The gaze-driven, object-recognition CNN used 128 channels in the reduction layer.

Name | Gaze-Driven, Object-Recognition CNN | SSD Mobilnet V2 | YOLO V3
Dataset | GITW | COCO | VOC
Framework | Pytorch | Tensorflow | Tensorflow
Input size | 300 × 300 | 300 × 300 | 416 × 416
Running device | ZCU 102 + ARM A53 | ZCU 102 | ZCU 102
FPS | 12.64 | 78.8 | 13.2
Table 9. The average computational time measurement of the whole system on different hardware. The Resnet50 number of channels is 128.

Computational Time (ms)
Module Name | Intel i5 7300HQ CPU | ARM A53 | FPGA + ARM A53
SIFT [27] | 72.407 ± 3.349 | 865.499 ± 8.437 | 7.407 [5]
FLANN matcher | 3.094 ± 0.638 | 18.223 ± 3.867 | 18.223 ± 3.867
Homography estimation | 0.270 ± 0.075 | 2.359 ± 0.778 | 2.359 ± 0.778
Gaze point projection | 0.015 ± 10⁻⁴ | 0.089 ± 0.003 | 0.089 ± 0.003
KDE estimation | 12.477 ± 23.306 | 126.672 ± 238.900 | 126.672 ± 238.900
Bounding Box generation | 0.424 ± 0.020 | 2.659 ± 0.027 | 2.659 ± 0.027
Resnet50 [13] | 89.952 ± 2.568 | 1800.327 ± 17.915 | 26.860
Reduction Layer | 0.645 ± 0.001 | 12.140 ± 0.001 | 12.140 ± 0.001
Faster R-CNN [10] | 3.356 ± 0.001 | 39.371 ± 0.002 | 39.371 ± 0.002
MIL Aggregation | 0.142 ± 10⁻⁶ | 0.727 ± 10⁻⁶ | 0.727 ± 10⁻⁶
Total time (ms) | 182.782 ± 29.957 | 2868.066 ± 269.930 | 236.507 ± 243.578
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
