Abstract

Corner detection is a common method for obtaining image features, and the detection effect directly influences the performance of matching and tracking. A FAST-Harris fusion corner detection algorithm is proposed to overcome the shortcomings of the Harris algorithm, such as low detection efficiency and low positioning accuracy, and a corner detection fusion model is established. First, the target image is padded, and then the FAST algorithm, with the number of contrast points reduced by 25%, is used to capture corners quickly and roughly, yielding a candidate corner set. The candidate corners are then screened one by one by calculating the Harris response function with the Scharr operator to capture corners accurately. Finally, the real corners are obtained by using SAD for nonmaximum suppression. The positioning error, error detection rate, robustness, and running time of the corner detection are measured on the PyCharm platform. Compared with Harris, the error detection rate and localization error of the algorithm are reduced by 16.89% and 42.04%, respectively. Compared with 8 popular corner detection algorithms, the error detection rate and localization error of the algorithm in this paper are the lowest, at 24.60% and 1.42 pixels. Its robustness under lossy JPEG compression is the best, and its running time is 17.37% shorter than that of the Harris algorithm. The method in this paper can be used in scenarios such as autonomous driving and image search services.

1. Introduction

Corner detection is a method used to obtain image features in computer vision. It is widely used in motion estimation [1], image matching [2–6], image visual processing [7], visual tracking [8–10], 3D scene reconstruction [11–15], etc., and is mainly divided into edge-based corner detection algorithms [16] and gray-based corner detection algorithms [17]. Edge-based corner detection places high demands on image segmentation and edge detection, and the algorithms are complex with cumbersome steps. Gray-based corner detection calculates the curvature according to the gradient change of the local grayscale of the image and does not need to segment the image or detect edges in advance.

The Harris algorithm [18] was proposed on the basis of the gray-based corner detection algorithm of Moravec [19]. It uses a Taylor series expansion to extend the four moving directions to arbitrary directions, so that the Harris algorithm has better robustness and detection accuracy, but the corner detection rate decreases. In [20], Sobel edge detection is used to extract alternative corners, and during nonmaximum suppression the rectangular template is changed to a circular template to improve the detection speed. The computational performance and repeatability of [21] are better than those of [20, 22–25]. The Shi-Tomasi improvement of the Harris algorithm directly calculates the eigenvalues of the matrix and compares the smaller eigenvalue with a threshold; points whose smaller eigenvalue exceeds the threshold are strong feature points, which effectively improves the detection accuracy [26]. In [27] and [28], a linear combination coefficient and a subpixel Harris algorithm are introduced, respectively; only the corners near the enhanced edges are kept and the parameters of the rectangular template are normalized to improve the accuracy of corner detection. Lowe proposed the Scale-Invariant Feature Transform (SIFT) algorithm [29] with rotation invariance, scale invariance, and light-intensity invariance. Based on [29], Bay et al. proposed the Speeded-Up Robust Features (SURF) algorithm [30] to improve the speed of the algorithm while preserving invariance to scale and affine transformations. In AKAZE [31], Fast Explicit Diffusion (FED) is added to the pyramid framework to quickly construct a nonlinear scale space. The subpixel-accuracy corner detection algorithm [32] obtains coordinate values of subpixel accuracy by iteratively minimizing an error function.
The Features from Accelerated Segment Test (FAST) algorithm [33] is the most efficient grayscale corner detection algorithm, but it detects many redundant corner points and misses corners at the image edges. To address these shortcomings, Rublee et al. proposed the Oriented FAST and Rotated BRIEF (ORB) algorithm [34], which has rotation invariance, fast calculation speed, and noise resistance. Researchers usually combine the FAST algorithm with traditional corner detection algorithms [35–37] or improve the FAST algorithm with a back-propagation neural network [38]. Finally, nonmaximum suppression is used to screen the corners [39] to solve the problem of detecting redundant corners.

From the above analysis, the Harris algorithm has high detection accuracy but a large amount of computation, so its corner detection efficiency is low; the FAST algorithm has fast corner detection speed but detects many redundant corners. To achieve high positioning accuracy, detection accuracy, and robustness, this paper proposes a corner detection algorithm that combines FAST and Harris (F–H). The improved FAST algorithm is used to quickly capture the corner positions, and then the improved Harris algorithm is used to finely screen the corner points; the mathematical model is established by gradually narrowing the corner capture range. First, the target image is padded with constants to solve the problem of missed detection at the image edges. Then, the corner points are quickly and roughly captured using a 7 × 7 rectangular window, exploiting the high detection efficiency of the rough-capture FAST algorithm, and the candidate corner set is filtered with a 3 × 3 rectangular window, exploiting the high detection accuracy of the Harris algorithm. Finally, Sum of Absolute Differences (SAD) is used for nonmaximum suppression to reduce redundant corners and obtain the real corners. The simulation experiments are completed on the PyCharm-python3.8 platform. The results show that the proposed algorithm outperforms eight advanced algorithms in corner error detection rate and positioning error and has an obvious advantage in detection speed compared with the Harris algorithm.

The contributions of the paper are summarized briefly as follows: a fusion corner detection model of FAST and Harris is established. First, the number of comparisons between the center point and the discrete points on the circumference is reduced by 25%, and a 7 × 7 rectangular window is used to capture the corner points roughly to obtain the candidate corner set. Then, the Scharr operator is used in place of the Sobel operator in the Harris algorithm to calculate the gradient values in the x and y directions, and a 3 × 3 matrix window is used to screen the candidate corners a second time, extracting even small boundaries in the image. Finally, the final real corners are determined by nonmaximum suppression.

The specific arrangement of the paper is as follows: Section 2 introduces the FAST algorithm and the improved FAST algorithm. Section 3 introduces the Harris algorithm and the improved Harris algorithm. Section 4 presents the mathematical model of the corner detection combining FAST and Harris in detail. In Section 5, simulation experiments are carried out on the PyCharm platform, and the error detection rate, positioning error, repeatability, and execution time of 9 corner detection methods are compared. Finally, the conclusion is given in Section 6.

2. Improved FAST Algorithm for Rough Capture

2.1. FAST Algorithm Mathematical Model

The classical FAST algorithm [33] is shown in Figure 1: a Bresenham circle is drawn by taking the target detection point P (x, y) as the center and three pixels as the radius. Corners are obtained by judging the difference between the pixel value of P (x, y) and the pixel values of the discrete points P1–P16 on the Bresenham circle. A large proportion of the corners detected by the FAST algorithm are error corners or redundant corners.

Hypothesis 1. Let the pixel values of P1–P16 be Ii (i = 1, 2, 3, ..., 16) and let Ip be the pixel value of P (x, y); the circle pixels that stand out from the center satisfy

|Ii − Ip| > T, (1)

where N is the number of pixels satisfying (1) and T is the threshold, whose empirical value is usually 50 pixels. The condition N > 12 is adopted, 12 being the best value obtained through experimental comparison in [33].
When the pixel value of point P (x, y) satisfies (2), P (x, y) is judged to be a corner point:

N > 12. (2)

The corner detection rate of the classic FAST algorithm is very high, but the corner response function of FAST is given by (2). When N meets the condition, many corners are found and many feature points are connected together, resulting in redundant corners, which seriously affects the detection performance of FAST.
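As a concrete illustration, the criterion of (1) and (2) can be sketched in a few lines of Python. The circle offsets are the standard radius-3 Bresenham points; the counting form (any 13 or more points, rather than a contiguous arc) follows the description above, and the function name is ours:

```python
import numpy as np

# Offsets of the 16 discrete points P1..P16 on the radius-3 Bresenham circle,
# starting at the top point and proceeding clockwise.
CIRCLE = [(0, -3), (1, -3), (2, -2), (3, -1), (3, 0), (3, 1), (2, 2), (1, 3),
          (0, 3), (-1, 3), (-2, 2), (-3, 1), (-3, 0), (-3, -1), (-2, -2), (-1, -3)]

def fast_is_corner(img, x, y, t=50, n_min=12):
    # Count circle pixels whose absolute difference from the centre exceeds t,
    # as in (1); P(x, y) is judged a corner when the count N exceeds n_min (2).
    p = int(img[y, x])
    n = sum(abs(int(img[y + dy, x + dx]) - p) > t for dx, dy in CIRCLE)
    return n > n_min
```

An isolated bright pixel, for example, differs from all 16 circle points and is reported as a corner, whereas a point in a flat region is not.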

2.2. Mathematical Model of Rough Capture FAST Algorithm

The algorithm proposed in this paper reduces the number of points compared with the target detection point P (x, y) in (1) to improve the corner detection speed of FAST; the number of contrast points between the center point and the discrete points on the circumference is decreased by 25%. As shown in Figure 1, only the differences between P (x, y) and the four points P1 (x, y − 3), P5 (x + 3, y), P9 (x, y + 3), and P13 (x − 3, y) on the Bresenham circle need to be evaluated, which is equivalent to judging the four axis endpoints of a 7 × 7 matrix window centered on P (x, y). Equation (3) is the rough-capture FAST corner response function, and P (x, y) is a candidate corner if it is satisfied:

|Ii − Ip| > T, i ∈ {1, 5, 9, 13}, (3)

where Ii (i = 1, 5, 9, 13) represents the pixel value of a pixel on the circumference and Ip is the pixel value of the center point P (x, y). The threshold is set to T = 50. Compared with the classical FAST algorithm, the rough-capture FAST algorithm removes 12 contrast points for each P (x, y) and speeds up detection, but it yields more candidate corners and the selection of candidate corners is not optimal. The improved Harris algorithm is therefore adopted to obtain accurate corner points, and SAD-based nonmaximum suppression is finally used to obtain the optimal corners.
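The rough-capture test above can be sketched as follows; how many of the four points must differ for P (x, y) to become a candidate is not pinned down here, so the `k_min` parameter (at least three by default) is our assumption about (3):

```python
import numpy as np

def fast_pretest(img, x, y, t=50, k_min=3):
    # Compare the centre only with P1, P5, P9, P13, the four axis endpoints of
    # a 7x7 window centred on P(x, y); k_min is an assumed choice for how many
    # of the four points must differ by more than t.
    p = int(img[y, x])
    ring = [img[y - 3, x], img[y, x + 3], img[y + 3, x], img[y, x - 3]]
    return sum(abs(int(v) - p) > t for v in ring) >= k_min
```

This test costs 4 comparisons instead of 16, which is the source of the speed-up claimed above.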

2.3. Target Image Padding

As proposed in [36], when the candidate corners are captured roughly by the FAST algorithm as shown in Figure 2(a), the 7 × 7 matrix window slides over the target image, and three rows and three columns of pixels are lost at each edge of the target image. In Figure 2, the blue matrix box is the target image, while the red dotted matrix box is the padded target image.

It is necessary to pad the image so that every point P (x, y) in the image is covered, as shown in Figure 2(b); if the input and output sizes are to be equal, the padding size must satisfy (4):

⌊(n + 2q − f)/s⌋ + 1 = n. (4)

In (4), n × n is the input size of the image; f is the size of the matrix window; q is the size of the padding; s is the step length; ⌊·⌋ denotes rounding down to the nearest integer.

Hypothesis 2. The image size is rows × cols. Padding the image in the horizontal direction with the parameters n = rows, f = 7, and s = 1 and substituting them into (4) gives

q = (f − 1)/2 = 3. (5)

Replacing rows by cols in (5) gives the number of padded columns. According to (5), the padding size depends on the window size of the sliding matrix. Therefore, the edge of the target image is padded with three rows and three columns to ensure that each pixel in the image is detected, as shown in Figure 2(c).
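Under the stated parameters (f = 7, s = 1), the padding size and the resulting output size can be checked numerically; the helper name is ours:

```python
import numpy as np

def same_padding(f):
    # Padding size from (4) with stride s = 1: q = (f - 1) / 2, so a 7x7
    # window needs q = 3 rows/columns of padding on each side.
    return (f - 1) // 2

q = same_padding(7)
rows, cols = 10, 12
padded = np.pad(np.zeros((rows, cols)), q, mode="constant")
# Sliding a 7x7 window with stride 1 over the padded image yields an output
# the same size as the original image, so no edge pixels are lost.
out_rows = (padded.shape[0] - 7) + 1
out_cols = (padded.shape[1] - 7) + 1
```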
Figure 3 shows the process diagram of rough capture FAST algorithm.

Step 1. Pad the original image as shown in Figure 3(a).

Step 2. As shown in Figure 3(b), move the 7 × 7 matrix window from left to right and top to bottom with a step distance of one pixel, and determine the corner points according to (3).

Step 3. Get the candidate corner position as shown in Figure 3(c).
The padding methods for the rough-capture FAST algorithm include constant padding, symmetry-axis padding, and boundary-pixel-value padding. For the same corner points, the running time of the constant padding method is 33.31% and 3.59% shorter than that of the other two methods according to the simulation, so constant padding is adopted in this paper.
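A minimal sketch of the three padding schemes, using NumPy's `np.pad` modes as stand-ins (with OpenCV, `cv2.copyMakeBorder` with BORDER_CONSTANT, BORDER_REFLECT, and BORDER_REPLICATE would be the equivalents):

```python
import numpy as np

img = np.arange(16, dtype=np.uint8).reshape(4, 4)
q = 3  # padding size for a 7x7 window, from (5)

constant = np.pad(img, q, mode="constant", constant_values=0)  # constant padding
symmetric = np.pad(img, q, mode="symmetric")                   # symmetry-axis padding
replicate = np.pad(img, q, mode="edge")                        # boundary-pixel-value padding
```

Constant padding simply writes a fixed value into the border, which is why it is the cheapest of the three.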
Figure 4 shows the format of the pseudocode for Algorithm 1 rough capture FAST algorithm.

3. Improved Harris Algorithm for Accurate Capture

The rough-capture FAST algorithm above is used to obtain the candidate corner set after padding the target image with a constant. The candidate corner set includes many redundant corners, and some corner points are clustered together, so it is necessary to screen the corners further. The classical Harris algorithm has good robustness, and using the Scharr operator [40] instead of the Sobel operator allows the real corners to be screened more carefully from the candidate corner set.

3.1. Mathematical Model of Harris Algorithm

In the classic Harris algorithm [18], the image is observed by sliding a small window. The window moves in arbitrary directions, and the points whose gray level changes obviously are taken as corners. Based on the Taylor series expression, the gray-level change of P (x, y) as the window moves by (u, v) is

E(u, v) = Σx,y w(x, y)[I(x + u, y + v) − I(x, y)]², (6)

where u and v are the offsets in the horizontal and vertical directions; w(x, y) represents the point-centered window weight; I represents the gray value.

Equation (6) is expanded by a first-order Taylor expansion and can be approximated as (7) if the image I (x, y) is translated by ∆x in the horizontal direction and ∆y in the vertical direction.

Equation (8) is obtained by second-order Taylor expansion.

In (9), H is composed of the window function and the horizontal and vertical gradients:

H = Σx,y w(x, y) [Ix², IxIy; IxIy, Iy²]. (9)

Equation (10) gives the diagonalized real symmetric matrix, where A is the rotation factor and the variation components in two orthogonal directions, namely λ1 and λ2 (the eigenvalues), are extracted.

In (11), R represents the Harris corner response function and α is a constant (generally 0.04–0.06), set as α = 0.05 in this paper:

R = det H − α(trace H)² = λ1λ2 − α(λ1 + λ2)², (11)

where det H = λ1λ2 and trace H = λ1 + λ2.
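The response function of (11) can be checked numerically on small matrices; `harris_response` is a hypothetical helper, and α = 0.05 follows the paper:

```python
import numpy as np

def harris_response(H, alpha=0.05):
    # R = det(H) - alpha * trace(H)^2, i.e. lam1*lam2 - alpha*(lam1 + lam2)^2.
    return np.linalg.det(H) - alpha * np.trace(H) ** 2

# Both eigenvalues large (gradients strong in two directions): corner, R > 0.
corner_r = harris_response(np.diag([100.0, 100.0]))
# One dominant eigenvalue: edge rather than corner, R < 0.
edge_r = harris_response(np.diag([100.0, 1.0]))
```

The sign of R is what separates corners (both λ1 and λ2 large) from edges (one eigenvalue dominant) and flat regions (both small).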

3.2. Mathematical Model of Accurate Capture Harris Algorithm

The Scharr operator is used to calculate the gradient values of the H matrix in the x and y directions instead of Sobel operator to improve the detection accuracy of Harris algorithm. The Scharr operator can extract the tiny boundaries effectively and then filter the H matrix with Gaussian filter to eliminate the isolated points and convex points in the image.

The Gaussian filtering function is used as the window function w(x, y) in (10), expressed as

w(x, y) = (1/(2πσ²)) exp(−(x² + y²)/(2σ²)). (12)

Table 1 shows the parameters of the Gaussian filter. The template on the left is a 3 × 3 matrix, the window function calculated by (12) with σ = 1.5. The template on the right is obtained by normalizing the left one; this normalized filter is used to eliminate isolated points and raised points among the candidate corners.
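The Table 1 template can be reproduced by sampling the Gaussian of (12) on a 3 × 3 grid and normalizing; σ = 1.5 follows the text, and the function name is ours:

```python
import numpy as np

def gaussian_window(size=3, sigma=1.5):
    # Sample exp(-(x^2 + y^2) / (2 sigma^2)) on a size x size grid centred at
    # the origin, then normalise so the weights sum to 1, as for the
    # right-hand template in Table 1. The 1/(2 pi sigma^2) factor cancels
    # in the normalisation.
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return g / g.sum()
```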

E (u, v) is related to Ix and Iy, and the H function in (10) can be simplified as

In (13), Ix is computed with the 3 × 3 Scharr gradient operator [−3 0 3; −10 0 10; −3 0 3] [40] to obtain the horizontal gradient, and Iy with the 3 × 3 Scharr gradient operator [−3 −10 −3; 0 0 0; 3 10 3] [40] to obtain the vertical gradient.
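A direct (unoptimized) sketch of the two Scharr correlations over the valid image region; `cv2.Scharr` applies the same kernels, and the function name here is ours:

```python
import numpy as np

# The two 3x3 Scharr kernels from [40]; Gy is the transpose of Gx.
SCHARR_X = np.array([[-3, 0, 3], [-10, 0, 10], [-3, 0, 3]], dtype=float)
SCHARR_Y = SCHARR_X.T

def scharr_gradients(img):
    # Cross-correlate each 3x3 patch with the kernels (valid region only);
    # cv2.Scharr gives the same values up to border handling.
    img = img.astype(float)
    h, w = img.shape
    ix = np.zeros((h - 2, w - 2))
    iy = np.zeros((h - 2, w - 2))
    for r in range(h - 2):
        for c in range(w - 2):
            patch = img[r:r + 3, c:c + 3]
            ix[r, c] = (patch * SCHARR_X).sum()
            iy[r, c] = (patch * SCHARR_Y).sum()
    return ix, iy
```

On a vertical step edge, for example, Ix responds strongly while Iy stays zero, which is how the two kernels separate the gradient directions.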

In (14), R is the Harris corner response function and α is a constant, set to 0.05 in this paper:

R = det H0 − α(trace H0)², (14)

where H0 is the new matrix obtained after Gaussian filtering with the Table 1 template.

Finally, whether P (x, y) is a real corner point or not can be judged according to (15).

The flow chart of the accurate-capture Harris corner detection algorithm is shown in Figure 5. Steps 1–4 are repeated until the candidate corner set is empty:
(1) Step 1: take one of the candidate corners P (x, y) obtained by the rough-capture FAST algorithm.
(2) Step 2: calculate the x-axis and y-axis gradient values of P (x, y) using the 3 × 3 Scharr operator.
(3) Step 3: obtain a new matrix H0 by Gaussian filtering of the x and y gradient values.
(4) Step 4: determine whether P (x, y) is a corner or not according to the corner criterion R.

Figure 6 shows the format of the pseudocode for Algorithm 2 accurate capture Harris algorithm.

4. FAST-Harris Fusion Corner Detection Algorithm

The corner detection algorithm combining FAST with Harris is proposed in this paper. According to the improved FAST algorithm in Section 2.2, candidate corners are captured with a 7 × 7 matrix window to obtain the candidate corner set. Then the improved Harris algorithm in Section 3.2 rescreens the candidate corners with a 3 × 3 matrix window. Finally, the real corners are determined by nonmaximum suppression.

4.1. FAST-Harris Algorithm Mathematical Model

The mathematical model of the FAST-Harris algorithm is divided into two parts. In the first part, the candidate corner set is obtained from the pixels in the image that satisfy (16). In the second part, the candidate corner points are screened according to (17), and the real corners are finally obtained.

In (16) and (17), Ii (i = 1, 5, 9, 13) denotes the pixel value of a point on the circumference of radius 3 pixels; Ip is the pixel value of the center point P (xp + q, yp + q), where q is the padding size of the target image; T is the threshold; P (xp + q, yp + q) is the candidate corner; Pc (xp + q, yp + q) is the corner after Gaussian filtering; Ix and Iy are the horizontal and vertical first-order gradient values of P (xp + q, yp + q).

4.2. Nonmaximum Suppression

The corner set is obtained by the FAST-Harris fusion corner detection algorithm above. In order to reduce the redundant corners, SAD is used for nonmaximum suppression.

Calculate the SAD value between each candidate corner Pc (xp + q, yp + q) and the 16 pixel values on its Bresenham circle as the score of the candidate corner, expressed in (19):

V = Σi=1..16 |Ii − Ip|. (19)

In (19), Ii is the pixel value of a discrete point on the Bresenham circle and Ip is the pixel value of the candidate corner.
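The candidate-corner score described above can be sketched as follows; `sad_score` is a hypothetical helper, and the circle offsets are the standard radius-3 Bresenham points:

```python
import numpy as np

# Offsets of the 16 discrete points on the radius-3 Bresenham circle.
CIRCLE = [(0, -3), (1, -3), (2, -2), (3, -1), (3, 0), (3, 1), (2, 2), (1, 3),
          (0, 3), (-1, 3), (-2, 2), (-3, 1), (-3, 0), (-3, -1), (-2, -2), (-1, -3)]

def sad_score(img, x, y):
    # Score of a candidate corner: sum of absolute differences between the
    # centre pixel and the 16 circle pixels.
    p = int(img[y, x])
    return sum(abs(int(img[y + dy, x + dx]) - p) for dx, dy in CIRCLE)
```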

Two candidate corners Pc (xi, yi) and Pc (xj, yj) are adjacent if the Euclidean distance between them, expressed in (20), is less than four pixels:

d = √((xi − xj)² + (yi − yj)²). (20)

Finally, the scores of two adjacent corners are compared, and the corner with the lower score is discarded. Nonmaximum suppression finds the local maximum, reduces the redundant corners, and then determines the final real corners.
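The suppression step can be sketched as a greedy pass over score-sorted corners; the greedy ordering is an implementation choice of ours, not stated in the paper:

```python
import math

def nms(corners, scores, radius=4.0):
    # Visit corners in descending SAD-score order and keep one only if no
    # already-kept corner lies within `radius` (Euclidean distance (20)),
    # thereby discarding the lower-scored member of each adjacent pair.
    order = sorted(range(len(corners)), key=lambda i: -scores[i])
    kept = []
    for i in order:
        x, y = corners[i]
        if all(math.hypot(x - kx, y - ky) >= radius for kx, ky in kept):
            kept.append((x, y))
    return kept
```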

4.3. Steps of FAST-Harris Algorithm Corner Detection

The flow chart of the corner detection algorithm proposed in this paper is shown in Figure 7. First, the edge of the target image is padded; then corners are roughly captured by the FAST algorithm (green process box) and accurately captured by the Harris algorithm (yellow process box). Finally, nonmaximum suppression (gray process box) is used to obtain the final corners.

5. Analysis of Experimental Results

This paper selects PyCharm-python3.8 as the experimental development platform. Because the stability of the initial program runs is not high, all experimental results in this paper are collected after the program has run 20 times. The algorithm parameters take the empirical values T = 50 pixels and q = 3, the target image padding method is constant padding, and all parameters are kept consistent across the experiments. The proposed method is compared with eight other corner detectors: Harris [18], Shi-Tomasi [26], and Subpixel level [32], which represent improved positioning accuracy; SIFT [29], SURF [30], and AKAZE [31], which represent improved robustness; and FAST [33] and ORB [34], which represent detection speed. After running the program 50 times, the performance in terms of error detection rate, positioning error, robustness, and running time is evaluated.

5.1. Error Detection Rate and Positioning Error Performance Evaluation of Corner Detection

In this section, the error detection rate and localization error are used to evaluate the nine methods (Harris [18], FAST [33], ORB [34], SURF [30], SIFT [29], Shi-Tomasi [26], Subpixel level [32], AKAZE [31], and F–H).

5.1.1. Error Detection Rate

In the image corner detection results, the error detection rate Er is the ratio of the number of error corners F to the total number of detected corners. Let the total number of corner points obtained by a corner detection algorithm be S, the number of correctly matched corner points be R, and the number of error corner points be F (including missed and false corners); the real corner points are marked manually. Er can be expressed as

Er = F/S. (21)

In (21), the manually marked corner set is matched against the position coordinates of the S detected corners: a detected corner within 4 pixels of a real corner is kept in the R corner set; a real corner whose nearest detection lies more than 4 but less than 8 pixels away is counted as a missed corner; otherwise, it is an error corner.
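A simplified sketch of the rate: here an error corner is any detection with no real corner within 4 pixels, and the 4-to-8-pixel missed-corner band of the full matching rule is omitted for brevity; the function name is ours:

```python
import math

def error_detection_rate(detected, real, match_r=4.0):
    # Er = F / S: a detected corner counts toward F when no manually marked
    # real corner lies within match_r pixels of it (simplified matching rule).
    f = sum(1 for (x, y) in detected
            if all(math.hypot(x - rx, y - ry) > match_r for rx, ry in real))
    return f / len(detected)
```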

Figure 8 shows the three test images, 'Block image' [41], 'Checkerboard image' [42], and 'House image' [41], with the corresponding manually marked corners (green circles). The test images contain 59, 100, and 61 corners, respectively. The manually marked corner positions are jointly marked by 10 experts; when eight experts mark the same corner position, it is taken as a real corner position [16].

The corner detection results of the proposed algorithm and the other 8 algorithms on the three target images are shown in Figures 9–11. As shown in Table 2, the numbers of error detection corners of Harris [18], Shi-Tomasi [26], Subpixel level [32], SIFT [29], SURF [30], AKAZE [31], FAST [33], ORB [34], and F–H on the 'Block image' [41] are 29, 25, 28, 99, 42, 63, 46, 38, and 4, respectively. The numbers of misdetection points on the 'Checkerboard image' [42] are 12, 12, 10, 276, 37, 130, 107, 48, and 4, respectively. The numbers of misdetection points on the 'House image' [41] are 45, 32, 30, 56, 36, 52, 51, 36, and 25, respectively.

As shown in Figure 12, compared with the Harris [18] algorithm and with the Shi-Tomasi [26] and Subpixel level [32] algorithms, which represent positioning accuracy, the average error detection rate of the proposed algorithm is reduced by 16.89%, 33.08%, and 16.87%, respectively. Compared with the other five algorithms, the algorithm in this paper also has the best error detection rate, with an average of 24.60%.

5.1.2. Positioning Error

The positioning error [43] is an important index for evaluating the accuracy of a corner detection algorithm. Assuming that the manually marked corner set {(x̂i, ŷi), i = 1, 2, ..., m} and the detected corner set S = {(xi, yi), i = 1, 2, ..., m} are matched successfully, the localization error can be defined as

Le = (1/m) Σi=1..m √((xi − x̂i)² + (yi − ŷi)²). (22)
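Assuming the localization error is the mean Euclidean distance over the matched pairs (the averaging form is our reading of (22)), a sketch:

```python
import math

def localization_error(pairs):
    # pairs: list of ((x, y), (x_hat, y_hat)) matched detected/real corners.
    # Mean Euclidean distance between the members of each pair, in pixels.
    return sum(math.hypot(x - gx, y - gy)
               for (x, y), (gx, gy) in pairs) / len(pairs)
```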

As shown in Figure 13, the positioning error of the F–H algorithm is the lowest on the three images. As shown in Table 3, the average positioning errors of the 9 algorithms over the three images are 2.45, 1.92, 1.94, 2.72, 2.63, 1.91, 2.01, 1.93, and 1.42 pixels, respectively. Compared with Harris [18], Shi-Tomasi [26], and Subpixel level [32], which represent positioning accuracy, the average positioning error is reduced by 42.04%, 26.04%, and 26.80%, respectively. Compared with the other five algorithms, the algorithm in this paper has the smallest average positioning error.

5.2. Robustness Evaluation under Image Transformation

In this section, repeatability is used to evaluate the robustness of the 9 methods (Harris [18], Shi-Tomasi [26], Subpixel level [32], SIFT [29], SURF [30], AKAZE [31], FAST [33], ORB [34], and F–H). Repeatability under image transformation [43] means that corners can still be detected after rotation, scaling, noise, and lossy JPEG compression; a method with high repeatability is a robust corner detection method. Let n1 and n2 be the numbers of corner points detected in the two images and R the number of corner points that appear in both images at the same time; (23) gives the repeatability under image transformation.

Because different corner detection methods locate corners with different accuracies, a corner is considered to appear repeatedly as long as it is detected within 4 pixels of the target position. The higher the repeatability under image transformation, the better the robustness of the detection method.
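Assuming (23) takes the common form of R divided by the mean number of detections in the two images (the exact expression is not recoverable from the text), a sketch:

```python
def repeatability(n1, n2, r):
    # Assumed form of (23): repeated corners R over the mean corner count
    # of the two images, (n1 + n2) / 2; ranges from 0 to 1.
    return r / ((n1 + n2) / 2.0)
```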

As shown in Figure 8, robustness experiments are performed on “Block image” [41], “Checkerboard image” [42], “House image” [41] images. Each image undergoes 4 different transformations, as shown in Figure 14, and a total of 108 test images are obtained.

Rotation angle: the original image is rotated within the range [−π, π] at intervals of π/4 to obtain the test images.

Uniform scale factor: the horizontal and vertical resolution of the image is scaled within [−1, 1] at intervals of 0.2 to obtain the test images.

Add Gaussian noise: zero-mean Gaussian noise is added to simulate sensor noise caused by poor lighting and high temperature; the standard deviation varies over [0, 3.5] at intervals of 0.5 to obtain the test images.

JPEG quality factor: the 'cv2.IMWRITE_JPEG_QUALITY, C' option is used to keep the image size fixed. The image quality C varies within the range [0, 90], and a test image is obtained at intervals of 10.

Figure 15 shows the line charts of the robustness experiments of the 9 detection algorithms on the 'Block image' [41], 'Checkerboard image' [42], and 'House image' [41]. Table 4 gives the quantized values of the experimental results. Compared with the SIFT [29], SURF [30], and AKAZE [31] algorithms, which represent improved robustness, the robustness of the proposed algorithm is optimal. Among the 9 algorithms, the proposed algorithm ranks second in uniform scaling and rotation and third in Gaussian noise, but it performs best in lossy JPEG compression.

To further verify the robustness of the algorithm, as shown in Figure 16, 10 images of different scenes were selected in the COCO2017 dataset [44] to verify the average repeatability of the corner detection algorithm. Each image was subjected to 4 different transformations, resulting in a total of 360 test images.

Figure 17 shows the line graphs of the robustness of the 9 detection algorithms on the 10 groups of images, and Table 5 gives the quantized values. From Table 5, compared with SIFT [29], SURF [30], and AKAZE [31], which represent improved robustness, the proposed algorithm is the best in uniform scaling and lossy JPEG compression. Among all 9 algorithms, the proposed algorithm is the best in lossy JPEG compression.

5.3. Performance Evaluation of Corner Detection Running Time

This section uses running time to evaluate the performance of the nine methods (Harris [18], Shi-Tomasi [26], Subpixel level [32], SIFT [29], SURF [30], AKAZE [31], FAST [33], ORB [34], and F–H). As shown in Figure 18, eight different scenes, 'Construction', 'Train', 'Wall', 'Airplane', 'Fruit', 'Bridge', 'Car', and 'Instrument', were selected from the ImageNet dataset [45], together with the 'Block image' [41], 'Checkerboard image' [42], and 'House image' [41] shown in Figure 8, to test the running time of the corner detection algorithms. The experiment is implemented in PyCharm-python3.8 on a 2.60 GHz i7-9750H CPU with 8 GB of memory. Each image runs 50 times, the longest and shortest running times are removed, and the remaining times are averaged. Table 6 compares the average running times. It can be seen that the running time of the proposed method is less than that of the other algorithms; in particular, it is shortened by 17.37% compared with the Harris algorithm.

6. Conclusion

In this paper, aiming at the low detection efficiency of the Harris algorithm and the low detection accuracy of the FAST algorithm, a corner detection method integrating FAST and Harris is proposed, and the corresponding corner detection model is established. First, to avoid missed detection at the image edges, the image is padded, and the corner detection speed of the FAST algorithm is improved by reducing the discrete detection points on the Bresenham circle. The candidate corner set is then obtained by the rough-capture FAST algorithm with a 7 × 7 matrix window; second, the candidate corners are accurately screened by the accurate-capture Harris algorithm with a 3 × 3 matrix window. Finally, SAD is used for nonmaximum suppression to reduce corner redundancy.

Compared with Harris, Shi-Tomasi, Subpixel level, SIFT, SURF, AKAZE, FAST, and ORB, the proposed algorithm has the best performance in error detection rate and localization error. Compared with the Harris and FAST algorithms, the average error rate is reduced by 16.88% and 46.71%, respectively, and the average positioning error is 1.42 pixels. Compared with the 8 algorithms, the proposed method has the best average repeatability under lossy JPEG compression, namely 0.56. Compared with the Harris algorithm, the running time is reduced by 17.37%.

The simulation data show that the corner detection model integrating the improved FAST and Harris achieves better accuracy and robustness. The method can be applied to image visual processing, motion estimation, image matching, visual tracking, 3D scene reconstruction, and so on, especially in scenes requiring high quality and speed [46, 47].

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

Thanks are due to the Science and Technology Development Plan of Jilin Province for help in identifying collaborators for this work. This study was supported by the Science and Technology Development Plan of Jilin Province (20200401090GX).