Review

Image Mosaicing Applied on UAVs Survey

by Jean K. Gómez-Reyes 1, Juan P. Benítez-Rangel 1,*, Luis A. Morales-Hernández 1, Emmanuel Resendiz-Ochoa 1 and Karla A. Camarillo-Gomez 2

1 Facultad de Ingeniería, Campus San Juan del Río, Universidad Autónoma de Querétaro, San Juan del Río 76807, QE, Mexico
2 Mechanical Engineering Department, Tecnológico Nacional de México-Instituto Tecnológico de Celaya, Celaya 38010, GJ, Mexico
* Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(5), 2729; https://doi.org/10.3390/app12052729
Submission received: 20 January 2022 / Revised: 11 February 2022 / Accepted: 28 February 2022 / Published: 7 March 2022
(This article belongs to the Special Issue Recent Advances in Unmanned Aerial Vehicles)

Abstract:
The use of UAV (unmanned aerial vehicle) technology has enabled advances in robotics, control processes, and application development. Such is the case of image processing, in which aerial photographs taken by these aircraft make it possible to perform surveillance and monitoring tasks. One example is the generation of panoramic images by stitching images together without losing resolution. Applications include photogrammetry and mapping, where the main problems to be solved are image alignment and ghosting, and different stitching techniques can be applied to address them. These methodologies can be categorized into direct methods and feature-based methods. This paper aims to give an overview of the mosaicing techniques most frequently applied to UAVs, providing an introduction for those interested in developing in this area. For this purpose, a summary of the most applied techniques and their applications is given, showing the trend of the research field and the contribution of different countries over time.

1. Introduction

The aerial mosaic has different applications, such as surveillance mapping and tracking [1,2,3,4,5], search and rescue [6,7], 3D scene reconstruction [8,9], inspection in heritage and archeological applications [10,11,12], and vegetation and forest surveillance [13,14,15]. For these applications, aerial panorama generation stitches multiple images into a single image based upon overlapped regions [16,17]. Different approaches have been developed for the stitching process, for example, direct (pixel-based) methods [18,19,20] and feature-based methods [21], or mosaicing based on registration and mosaicing based on blending [22]. In aerial panoramas, image acquisition can be performed by satellites or UAV systems. Satellite technology provides a higher coverage area than other systems such as UAVs [23], and satellite image acquisition is faster than that of UAVs [24]. However, there are important factors to evaluate before choosing satellite image acquisition. First, if an analysis of a specific section is required, it is necessary to check whether any satellite covers the specific coordinates or has recent information on the area of interest. Additionally, if the area is small, zooming in on it yields a lower resolution than that of a medium-size UAV camera. A further consideration is weather: UAVs depend on the state of the clouds and are subject to weather inclemency, such as storms, in a way that satellites are not. For satellite aerial panoramas, the mosaicing methods are based on cross-correlation, Fourier-based, phase-correlation, and area-based approaches [25,26,27,28,29].
In the case of UAVs, aerial image panoramas are mainly based on feature-based methods [1,3], due to their flexibility to fly in a specific area, which allows them to focus on the selected region, obtaining images with greater accuracy and sufficient distinctive features [1,3,30,31,32,33]. Another advantage of UAVs is that they can carry different types of sensors, such as fish-eye cameras, thermographic cameras, LIDAR sensors, and proximity sensors. The aforementioned features make them ideal for surveillance and monitoring tasks. Each system has its own advantages depending on the application. However, this work will focus on aerial images obtained through UAVs due to their high resolution, precision, ease, and flexibility [34]. Table 1 summarizes the comparison of the characteristics of satellites and UAVs [35].

2. Panorama Generation

The basis for image stitching is to relate two images using a geometry model that associates the motion from one image with another; the motion that best fits this relation is the projective transformation, also called the homography matrix [36], an eight-parameter alignment model that preserves straight lines [37,38]. For feature-based methods, the most acknowledged approaches are global single transformation and local hybrid transformation [39]. The sequence followed by these techniques, shown in Figure 1, generates a mosaic.
The first stage is image acquisition. This can be achieved by using one camera for translational or rotational acquisition, as shown in Figure 2. The task can be performed in different ways: by using a moving camera, by using more than one camera [40,41,42] fixed on a frame to acquire multiple images at once from different angles, or by using a video camera sequence [43]. To establish the relations between the images, it is important to obtain the camera parameters, such as the focal length, that are used in the perspective and projection algorithms [44,45].
The second stage is feature registration, where different features are detected and matched. These features can be points, lines, or combinations of both [46]. The third stage is transformation estimation. Once features are established, a register of both images is created from the features detected. Some cases lead to a mismatch between the key points; for this, different algorithms, such as KD-trees, k-nearest neighbor (KNN) pattern classification, and Hamming distance [2,47,48,49], search for the features closest to the query location. The estimated transformation then maps each point of one image into the other:
$$\tilde{x}' = H\tilde{x}$$
where $\tilde{x}$ is $x$ in homogeneous coordinates and $H \in \mathbb{R}^{3 \times 3}$ defines the homography [49].
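As an illustrative sketch (not taken from any cited work; all names are hypothetical), applying a homography to a single point amounts to lifting it to homogeneous coordinates, multiplying by the 3 × 3 matrix H, and dividing by the third coordinate:

```python
def apply_homography(H, point):
    """Map a 2-D point through a 3x3 homography using homogeneous coordinates."""
    x, y = point
    # Lift to homogeneous coordinates: (x, y) -> (x, y, 1).
    xh = [x, y, 1.0]
    # Multiply by H (row-by-row dot products).
    xp = [sum(H[r][c] * xh[c] for c in range(3)) for r in range(3)]
    # Dehomogenize: divide by the third coordinate.
    w = xp[2]
    return (xp[0] / w, xp[1] / w)

# A pure translation by (5, -2) expressed as a homography.
H_translate = [[1, 0, 5],
               [0, 1, -2],
               [0, 0, 1]]
print(apply_homography(H_translate, (10, 10)))  # (15.0, 8.0)
```

The division by the third coordinate is what distinguishes a projective warp from an affine one: for a general H, the scale factor varies across the image.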
Different techniques are proposed to calculate the homography. In practice, robust statistical techniques are employed on a large number of matching points or lines after normalizing the data; these techniques reduce the adverse effects of noise by using the sum of squared differences method or an iterative mathematical model, such as RANSAC (random sample consensus) [50], PROSAC (progressive sample consensus) [51], or direct linear transformation (DLT), to relate the features and reduce the matching points. For feature-based methods, the most used techniques are DLT and RANSAC for their performance and robustness [52]. RANSAC starts with the smallest data set possible and proceeds to enlarge this set with consistent data points [53]. The goal is to determine a set of inliers from the presented correspondences so that the homography can be estimated optimally from these inliers [52]. The fourth stage is the warping or stitching phase, where the images are overlapped to be stitched together as one. After the matching, the nonoverlapped regions exhibit reprojection errors. To solve this problem, bundle adjustment is used. Bundle adjustment is the problem of refining a visual reconstruction to produce jointly optimal 3D structure and viewing parameter (camera pose and/or calibration) estimates. Optimal means that the parameter estimates are found by minimizing a cost function that quantifies the model fitting error, and jointly means that the solution is simultaneously optimal with respect to both structure and camera variations [54]. This optimization problem is usually formulated as a nonlinear least squares problem, where the error is the squared $L_2$ norm of the difference between the observed feature location and the projection of the corresponding 3D point on the image plane of the camera [55].
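To make the RANSAC idea concrete, the following sketch applies the same hypothesize-and-verify loop to a simpler model: fitting a line through points contaminated with outliers. Homography estimation replaces the two-point line fit with a four-correspondence DLT step; the thresholds and names here are illustrative only.

```python
import random

def ransac_line(points, iters=200, tol=0.5, seed=0):
    """Minimal RANSAC sketch: fit y = m*x + b, keeping the model with most inliers."""
    rng = random.Random(seed)
    best_inliers, best_model = [], None
    for _ in range(iters):
        # 1. Sample the smallest set that determines the model (2 points for a line).
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # degenerate sample: vertical line, skip
        m = (y2 - y1) / (x2 - x1)
        b = y1 - m * x1
        # 2. Count the points consistent with this hypothesis (the inliers).
        inliers = [(x, y) for (x, y) in points if abs(y - (m * x + b)) < tol]
        # 3. Keep the hypothesis supported by the most inliers.
        if len(inliers) > len(best_inliers):
            best_inliers, best_model = inliers, (m, b)
    return best_model, best_inliers

# 10 points on y = 2x + 1 plus 2 gross outliers.
pts = [(x, 2 * x + 1) for x in range(10)] + [(3, 40), (7, -15)]
model, inliers = ransac_line(pts)
```

Because the model is estimated from a minimal sample and only verified against the rest, a few gross outliers cannot drag the fit, which is exactly why RANSAC is preferred over plain least squares for feature correspondences.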
Image composition is the last stage: since the illumination and brightness of the stitched images may not be continuous, different algorithms can be applied to postprocess the image and blend the mosaic images into one. A method based on gain compensation and multiband blending is proposed in [33]. Gain compensation adjusts the intensity of the mosaic by computing the local mean brightness of the image. Nevertheless, simply adjusting the gain to give all regions the same medium intensity will tend to reduce the intensity in regions with high brightness and increase it in dark or low-intensity regions [56]. Multiband image blending is proposed in [57], and it is one of the most popular approaches for image fusion due to its easy implementation and its advantage of being insensitive to misalignment. The basic idea of this process is to decompose the original image into a pyramidal representation and blend the images at each level [58,59]. Another approach is presented in [60], which uses a variant of the Gaussian function as the weighting function and proposes an improved implementation of the weighted mean method to eliminate the edges.
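As a minimal illustration of the weighted-mean idea behind these blending schemes (a simple linear feather over a 1-D scanline rather than multiband blending; all names are illustrative), two overlapping rows can be composited as:

```python
def feather_blend(row_a, row_b, overlap):
    """Linearly blend the trailing `overlap` pixels of row_a with the
    leading `overlap` pixels of row_b (a simple weighted-mean composite)."""
    left = row_a[:-overlap]
    right = row_b[overlap:]
    blended = []
    for i in range(overlap):
        alpha = (i + 1) / (overlap + 1)  # weight ramps from row_a toward row_b
        a = row_a[len(row_a) - overlap + i]
        b = row_b[i]
        blended.append((1 - alpha) * a + alpha * b)
    return left + blended + right

# Two scanlines whose 2-pixel overlap differs in brightness.
print(feather_blend([10, 10, 30, 30], [60, 60, 90, 90], overlap=2))
# [10, 10, 40.0, 50.0, 90, 90]
```

Multiband blending applies this same weighted averaging independently at each level of a pyramid, so low frequencies blend over a wide region while high frequencies blend over a narrow one, hiding the seam without ghosting fine detail.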

3. Stitching Methods

Feature-based methods are algorithms that extract distinctive features or descriptors from an image; the most commonly used features are points, lines, edges, corners, pixels, colors, histograms, or geometric entities [61]. These extracted features are compared and matched by their characteristics. These methods have a significant advantage over direct pixel-by-pixel methods, in which the relation is determined by directly minimizing pixel-to-pixel dissimilarities [21]. Feature-based methods can be divided into two categories: the global single transformation, where the main processes are feature detection and registration to perform the global projective transformation, and the local hybrid transformation.

3.1. Feature-Based: Global Single Transformation

Feature descriptors must be distinctive and found throughout the image so that the points of coincidence in both images can be distinguished. A high number of descriptors is also needed so that, even under geometric changes, the images can be related efficiently. Among the most used feature algorithms are the Harris Corner Detector [62], FAST [63], ORB [64], BRIEF [65], BRISK [66], SIFT [67], and SURF [32].

3.1.1. Harris Corner

The Harris Corner Detector [62] was one of the first feature detection methods, and it is based on the Moravec Corner Detector. This method shifts a small window in different directions and measures the change in the average light intensity of the image; when the change is significant, the center point of the window is extracted as a corner point. In a flat region, there is no change of intensity in any direction. Along an edge, there is no change of intensity in the edge direction. At a corner, however, there is a significant change of intensity in all directions [68].
The intensity change is characterized by the structure matrix M, whose eigenvalues λ1 and λ2 quantify the magnitude of the change in each direction. The corner, edge, and flat areas of the image can then be computed from the eigenvalues as follows:
  • Flat area: both λ1 and λ2 are very small.
  • Edge: one of λ1 and λ2 is large and the other is small.
  • Corner: both λ1 and λ2 are large and nearly equal.
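A hedged sketch of this eigenvalue-based classification follows; the thresholds are illustrative, and the actual Harris detector scores a corner response R = det(M) − k·trace(M)² rather than comparing eigenvalues directly:

```python
def classify_region(l1, l2, small=0.01, ratio=2.0):
    """Classify a local patch from the eigenvalues of the structure matrix M.

    `small` and `ratio` are illustrative thresholds, not from the Harris paper.
    """
    lo, hi = sorted((l1, l2))
    if hi < small:
        return "flat"    # both eigenvalues small: no intensity change
    if lo < small or hi / lo > ratio:
        return "edge"    # one large, one small: change in one direction only
    return "corner"      # both large and comparable: change in all directions

print(classify_region(0.001, 0.002))  # flat
print(classify_region(5.0, 0.001))    # edge
print(classify_region(5.0, 4.0))      # corner
```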

3.1.2. SIFT

One of the feature methodologies most widely used for its performance is SIFT (Scale Invariant Feature Transform) [67]. This low-level feature methodology has the advantage of being robust to occlusion, clutter, and noise, with a good quantity of key points generated even for small objects [69]. SIFT uses a sequence of four stages. An image pyramid is constructed by repeatedly convolving the input image with Gaussians to produce a set of scale-space images, and adjacent Gaussian images are subtracted to produce a difference-of-Gaussian (DoG) pyramid. The scale space is constructed by convolving an image repeatedly with a Gaussian filter, changing the scales and grouping the outputs into octaves [67,68]. After the scale-space construction is complete, DoG images are computed from adjacent Gaussian-blurred images in each octave [21].
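The DoG construction can be sketched in one dimension (a simplified illustration, not the full SIFT pipeline; kernel radii and sigmas are illustrative):

```python
import math

def gaussian_kernel(sigma, radius):
    """Sampled 1-D Gaussian, normalized to sum to 1."""
    vals = [math.exp(-(i * i) / (2 * sigma * sigma)) for i in range(-radius, radius + 1)]
    s = sum(vals)
    return [v / s for v in vals]

def convolve(signal, kernel):
    """'Same'-size convolution with edge clamping."""
    r = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for k, w in enumerate(kernel):
            j = min(max(i + k - r, 0), len(signal) - 1)
            acc += w * signal[j]
        out.append(acc)
    return out

def difference_of_gaussians(signal, sigma1=1.0, sigma2=1.6):
    """DoG: subtract two Gaussian-smoothed copies of the same signal;
    its extrema mark blob- and edge-like structures at that scale."""
    g1 = convolve(signal, gaussian_kernel(sigma1, 4))
    g2 = convolve(signal, gaussian_kernel(sigma2, 4))
    return [a - b for a, b in zip(g1, g2)]

# A step edge produces a strong DoG response near the discontinuity
# and a response of zero in the flat regions far from it.
step = [0.0] * 10 + [1.0] * 10
dog = difference_of_gaussians(step)
```

In SIFT the same subtraction is performed in 2-D between adjacent levels of each octave, and key points are the local extrema of the resulting DoG stack across both space and scale.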

3.1.3. FAST

Features from Accelerated Segment Test (FAST) [63,70] is a corner detection method that can be used to extract feature points, which are later used to track and map objects in many computer vision tasks. A corner detector should satisfy the following criteria: consistency, insensitivity to noise variation, detection as close as possible to the correct positions (accuracy), and sufficient speed [69]. The segment test criterion operates by considering a circle of sixteen pixels around the corner candidate p. The original detector classifies p as a corner if there is a set of n contiguous pixels in the circle that are all brighter than the intensity $I_p$ of the candidate pixel plus a threshold t, or all darker than $I_p$ minus t [71].
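The segment test can be sketched as follows (an illustrative implementation over a precomputed list of the 16 circle intensities; the real detector adds a high-speed early-rejection test on four of the pixels first):

```python
def is_fast_corner(center, circle, n=12, t=10):
    """Segment test: `circle` holds the 16 pixel intensities on the circle
    around the candidate; it is a corner if n contiguous pixels are all
    brighter than center + t or all darker than center - t (wrapping around)."""
    brighter = [p > center + t for p in circle]
    darker = [p < center - t for p in circle]
    for flags in (brighter, darker):
        doubled = flags + flags  # wrap-around: catch runs crossing index 0
        run = 0
        for f in doubled:
            run = run + 1 if f else 0
            if run >= n:
                return True
    return False

# 13 contiguous bright pixels -> corner; alternating pixels -> not a corner.
print(is_fast_corner(100, [200] * 13 + [100] * 3))  # True
print(is_fast_corner(100, [200, 100] * 8))          # False
```

Because the test is a handful of comparisons per candidate, FAST avoids the convolutions that dominate the cost of Harris or SIFT detection.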

3.1.4. ORB

The ORB (Oriented FAST and Rotated BRIEF) algorithm is a descriptor method comparable to SIFT, with low cost and high speed; it is based on BRIEF (Binary Robust Independent Elementary Features) and FAST. One disadvantage of FAST is its lack of an orientation component. For this, ORB uses a multiscale image pyramid that consists of a sequence of images with different resolutions. After locating the key points, ORB assigns an orientation to each key point depending on its level of intensity. BRIEF takes all key points found by the FAST algorithm and converts them into a binary feature vector so that together they can represent an object. A binary feature vector, also known as a binary feature descriptor, is a feature vector that contains only 1s and 0s. To sum up, each key point is described by a binary feature vector of 128–512 bits [64,65].
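A hedged sketch of the BRIEF idea: each bit records one pairwise intensity comparison inside the patch, and descriptors are matched by Hamming distance. The random sampling pattern and 32-bit length here are illustrative; real BRIEF uses 128–512 comparisons on a smoothed patch.

```python
import random

def brief_descriptor(patch, pairs):
    """BRIEF-style descriptor sketch: each bit is 1 if the first sampled pixel
    is brighter than the second. `patch` is a 2-D list of intensities."""
    bits = 0
    for (y1, x1), (y2, x2) in pairs:
        bits = (bits << 1) | (1 if patch[y1][x1] > patch[y2][x2] else 0)
    return bits

def hamming(a, b):
    """Number of differing bits; the matching metric for binary descriptors."""
    return bin(a ^ b).count("1")

rng = random.Random(0)
# 32 random test pairs inside a 5x5 patch.
pairs = [((rng.randrange(5), rng.randrange(5)), (rng.randrange(5), rng.randrange(5)))
         for _ in range(32)]
patch = [[rng.randrange(256) for _ in range(5)] for _ in range(5)]
d1 = brief_descriptor(patch, pairs)
d2 = brief_descriptor(patch, pairs)
print(hamming(d1, d2))  # 0: identical patches give identical descriptors
```

Because comparing two descriptors reduces to an XOR and a popcount, matching binary descriptors is far cheaper than the Euclidean distances used for SIFT or SURF vectors.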

3.1.5. SURF

Speeded Up Robust Features (SURF) is a scale- and rotation-invariant interest point detector and descriptor proposed by [32]. This algorithm has advantages over previous systems, such as SIFT, because it produces similar matching results with faster calculations. The approach for interest point detection uses a basic Hessian matrix approximation that relies on integral images for image convolutions: the Hessian matrix $H(\mathbf{x}, \sigma)$ at point $\mathbf{x}$ and scale $\sigma$ is built from the convolutions $L_{xx}(\mathbf{x}, \sigma)$, $L_{xy}(\mathbf{x}, \sigma)$, and $L_{yy}(\mathbf{x}, \sigma)$ of the Gaussian second-order derivatives with the image $I$, and its determinant is used to detect interest points. These approximate second-order Gaussian derivatives are evaluated at a very low computational cost using integral images, which allow fast calculation regardless of filter size.
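The integral-image trick that makes SURF's box filters cheap can be sketched directly: after one pass to build a summed-area table, any rectangular sum costs four lookups regardless of the rectangle's size (an illustrative implementation; names are hypothetical):

```python
def integral_image(img):
    """Summed-area table: ii[y][x] = sum of img over rows < y and columns < x.
    Padded with a zero row and column so box sums need no boundary checks."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        for x in range(w):
            ii[y + 1][x + 1] = img[y][x] + ii[y][x + 1] + ii[y + 1][x] - ii[y][x]
    return ii

def box_sum(ii, top, left, bottom, right):
    """Sum over img[top..bottom][left..right] in O(1): four table lookups."""
    return (ii[bottom + 1][right + 1] - ii[top][right + 1]
            - ii[bottom + 1][left] + ii[top][left])

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
ii = integral_image(img)
print(box_sum(ii, 0, 0, 2, 2))  # 45: the whole image
print(box_sum(ii, 1, 1, 2, 2))  # 28: bottom-right 2x2 block (5+6+8+9)
```

SURF approximates each Gaussian second-order derivative with a combination of such box sums, which is why filter size does not affect the evaluation cost.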

3.1.6. BRISK

The Binary Robust Invariant Scalable Keypoints (BRISK) algorithm [66] is a feature point detection and description algorithm with scale and rotation invariance. It constructs the feature descriptor of the local image through the grayscale relationship of random point pairs in the neighborhood of the local image, yielding a binary feature descriptor. The key concept of the BRISK descriptor is a pattern used for sampling the neighborhood of the key point, from which two subsets of pairings are defined: the short-distance pairings and the long-distance pairings. BRISK discards information about image colors, which could otherwise provide more key points for matching; for this reason, the CBRISK algorithm is proposed to maintain the information of the RGB color channels [72]. To decrease computation time, the SBRISK development shifts the binary vector rather than rotating the image pattern or constellation, as many other descriptors do [73].

3.2. Feature-Based: Local Hybrid Transformation

Feature-based panorama generation based on global single transformation has shown good results for pure rotational motion and planar scenes, but in practice, this condition is rarely satisfied due to the movement of the UAV, as shown in Figure 3 [41]. Therefore, ghosting effects frequently appear when the images are aligned. Moreover, the parallax problem remains due to the motion of the optical center [74].
Under local hybrid transformation, mesh-based alignment is reviewed, since it complements the other methodologies [61]. Mesh-based alignment divides images into uniform meshes. Each mesh corresponds to an estimated transformation, with two regions: the overlapped region, which is aligned by the projective transformation, and the nonoverlapped region, which is generally warped using a similarity transformation, calculating a local homography model to avoid potential distortions.

3.2.1. APAP

One mesh-based algorithm is proposed by Zaragoza et al. [74]. Their algorithm, named As Projective As Possible (APAP), is based on the DLT used to calculate the global homography. Instead of a single global estimate, they calculate a location-dependent homography (local homography) using moving DLT (MDLT); this produces flexible warps while staying as close to the global homography as possible. Given the estimated H to align the images, an arbitrary pixel at position x in the source image I is warped to the position x′ in the target image I′ by:
$$\tilde{x}' = H\tilde{x}$$
The result shows an overlapped mesh in which the horizontal lines are preserved, reducing the parallax error.
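The location dependence of moving DLT can be sketched through its weighting scheme: each feature correspondence is weighted by its distance to the pixel being warped, so nearby matches dominate the local homography estimate. The sketch below follows the Gaussian weighting with a floor value described for APAP; the parameter values and names are illustrative.

```python
import math

def mdlt_weights(x_star, keypoints, sigma=8.0, gamma=0.1):
    """Location-dependent weights in the spirit of moving DLT: matches near
    the pixel x_star dominate the local homography estimate, while `gamma`
    floors the weights so distant matches still contribute."""
    weights = []
    for (x, y) in keypoints:
        d2 = (x_star[0] - x) ** 2 + (x_star[1] - y) ** 2
        weights.append(max(math.exp(-d2 / (sigma ** 2)), gamma))
    return weights

kps = [(0, 0), (10, 0), (100, 100)]
w = mdlt_weights((1, 0), kps)
# The nearest match gets the largest weight; the farthest is clipped to gamma.
```

These weights multiply the rows of the DLT system before solving it, which is what turns one global least-squares fit into a smoothly varying family of local homographies.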

3.2.2. SPHP

As previously presented, the APAP result is a global projective warp with the problem of shape/area distortion in the nonoverlapping area; part of the image is stretched and nonuniformly enlarged. This problem is produced by the single perspective with a wide FOV; for this reason, a multiperspective warp is employed in [75]. Combining a projective warp for the overlapped areas with a similarity warp for the nonoverlapped section yields the shape-preserving half-projective warp (SPHP):
$$\begin{pmatrix} x \\ y \\ 1 \end{pmatrix} \cong \begin{pmatrix} \hat{h}_1 & \hat{h}_2 & \hat{h}_3 \\ \hat{h}_4 & \hat{h}_5 & \hat{h}_6 \\ c & 0 & 1 \end{pmatrix} \begin{pmatrix} u \\ v \\ 1 \end{pmatrix}$$
In the transition region $R_\lambda$, the warp changes gradually from the projective transform $H(u,v)$ to the similarity transform $S(u,v)$, which reduces the distortion generated by the projective transform.

3.2.3. AANAP

The global similarity transform performed by SPHP may result in a mismatch if the overlapped region contains distinct image planes, because all points are used to obtain the similarity transform. For this reason, an optimal similarity transformation is proposed in [76]. The process begins with feature point matches between the target and reference images, followed by extrapolation into the nonoverlapping areas using homography linearization. The resultant image has fewer perspective distortions than the result using APAP. Once the global similarity transform is calculated, it is used as a warp on the target image to mitigate the perspective distortion:
$$\hat{H}_i^{(t)} = \mu_h H_i^{(t)} + \mu_s S$$
$$\hat{H}_i^{(r)} = \hat{H}_i^{(t)} \left( H_i^{(t)} \right)^{-1}$$
The local homography is represented by $H_i^{(t)}$, $\hat{H}_i^{(t)}$ represents the updated local transformation, $S$ is the global similarity transform, and $\mu_h$ and $\mu_s$ are weighting coefficients.

4. Aerial Panorama Applications

In the previous section, an introduction to the most used feature-based algorithms was given. In this section, a summary of aerial panoramic applications is presented; these applications were developed either to generate aerial panoramas as the principal task or to use the stitching methodology as a complementary method for a different application.

4.1. Feature-Based: Global Single Transformation

4.1.1. Harris Corner

Harris Corner is still a widely used method because of its low computational cost. New improvements have been proposed, such as applying a prefilter to the characteristic points detected by Harris Corner to reduce ghosting and luminance problems [77]. Another proposal replaces the Gaussian window function with a B-spline function; corner points are then preselected to obtain candidate corners, and an autoadaptive threshold method improves the adaptability of the algorithm. A further improvement applies an adaptive nonmaximal corner suppression algorithm to discard pixels that cannot be corners; the local representative corners are retained, which reduces the corner detection time by 30.2% and improves the stitching speed [78]. Using distinct algorithms in the matching process also enhances the methodology: for example, applying Harris Corner with correlation in the registration improves the accuracy and robustness of aerial panoramas [79]. Another proposal is to combine it with another feature algorithm, such as SURF, in one process, achieving a more robust algorithm than plain Harris Corner, as proposed in [80]. Table 2 presents a summary of Harris Corner examples applied to UAVs.

4.1.2. SIFT

SIFT is one of the most used scale-invariant detectors. Although it is efficient at detecting matching points, its computation time is its disadvantage. To improve the processing time, different approaches have been proposed, such as the one presented in [81]: a binary local image descriptor based on SIFT that reduces the complex operations and runs almost 50% faster than the original algorithm. An improved SIFT method called AH-SIFT is proposed in [29]; its descriptor performs more efficiently than the original SIFT under various levels of geometric and photometric transformations. An optimized projective transformation method using M-least squares to improve SIFT on thermal infrared images, joining images obtained from uncooled thermal infrared video, is proposed in [82]. An approach presented in [83] estimates the speed of a drone and adjusts the velocity of image acquisition, reducing ghosting effects. A global motion model is used to predict the overlapped region based on the world coordinate frame; then, SIFT stitching is applied and image quality is evaluated based on gray relational analysis, improving the accuracy [84]. Using a graphics processing unit (GPU), an implementation called CUDA-SIFT (Compute Unified Device Architecture) [85] achieves real-time mosaic generation and tracking. Another approach presented in [82] is a SIFT stitching process based on a random M-least squares algorithm and super-resolution processes. Some applications use the SIFT process in an earthquake rescue system, where image mosaicing supports an earthquake damage degree (EDD) analysis. This is performed by evaluating gray level co-occurrence matrix (GLCM) features along with coarseness, contrast, metrics, and filters to analyze the EDD [86].
Following the disaster evaluation developed in [87], a methodology using open-source systems for Urban Search and Rescue (USaR) is used to determine the location of possibly trapped victims through fast 3D modeling of fully or partially collapsed buildings using images from UAVs. In the inspection area, some applications use the SIFT mosaicing approach for inspecting photovoltaic systems (PVs), using UAVs with thermal cameras to record videos along GPS-guided trajectories. These images are used to generate a high-resolution image of a PV zone by using SIFT [88]. A measurable aerial panorama based on panoramic images and multiview oblique images is proposed in [89]; it is divided into three major stages: projection, matching, and back projection. The stitching process applies the SIFT methodology to stitch the projected aerial panorama with a down-looking oblique image and the aerial panoramic image after matching the images by their proposed method. Table 3 shows some of the most recent implementations of the SIFT algorithm in UAVs.

4.1.3. FAST

In comparison with Harris Corner, the FAST algorithm can detect more features in the same time; yet, compared with SIFT, the number of features detected is less than half. This can lead to the wrong assumption that SIFT is better than FAST. Nevertheless, in processing time, FAST accomplishes feature detection with simple operations that make it faster than SIFT [90]. This has to be considered when choosing the implementation that best fits the application. A SIFT variation using FAST in each pyramid level instead of DoG is proposed in [91]; such an approach achieves a robust algorithm with more features than FAST alone while remaining faster than SIFT. As aforementioned, FAST has the advantage of computation speed; exploiting this, [92] proposes a real-time application using the FAST feature detector with the Bag-of-Words (BoW) correspondence algorithm to improve correspondence time compared to the brute-force matching algorithm. Sometimes the stitching process uses multitemporal images, which present larger changes in lighting and contrast; when applying any of the feature detection methods, the process will have errors due to the change in grayscale. Reference [93] applies phase congruency (PC) to maintain the image structure regardless of the change in grayscale once the PC images are obtained. A crowd density estimation by joint clustering analysis is presented in [94], where two versions of FAST are tested to detect crowd features, and filtering procedures are used to eliminate the feature points that do not belong to the crowd. Some applications of the algorithm used with drone images are presented in Table 4.

4.1.4. ORB

As previously stated, the ORB methodology has the advantage of faster computation compared to most feature-based methodologies. One example is an aerial image mosaicing process based on ORB that removes mismatches from thousands of putative correspondences by applying locality-preserving matching (LPM) [95]. Another approach, based on Bayesian frameworks, formulates matching as a maximum likelihood problem and solves the geometric algorithm using the expectation maximization (EM) algorithm. To reduce the matching process, principal component analysis (PCA) is used, reducing dimensions and facilitating the feature extraction process without compromising accuracy, as shown in the root mean square error (RMSE) results [96]; the processing time is further improved by using a GPU with CUDA, obtaining a faster matching process compared to SIFT and SURF. Other developments in the ORB methodology involve techniques to relate the features: in [97], a phase correlation preprocessing method obtains the overlapping area between the to-be-stitched image and the reference image, reducing the feature calculation; then, using Hamming distance, the relation between the image matching points is improved compared to the classical ORB methodology. Similarly, using a mask to register locally clustered ORB features and nonmaximal suppression to remove clustered points, only the feature point with the largest response value is retained [98]. Hamming distance is used for the matching step, and finally, PROSAC is applied to eliminate the wrong matches and calculate the transformation matrix between images. The result is an improvement in correct matching points, slightly fewer than with SIFT and at almost the same speed as classic ORB. Table 5 presents a summary of the implementations using ORB.

4.1.5. SURF

SURF-based aerial panoramas are attractive for their accuracy, comparable to SIFT's but obtained in a shorter time. An example is the implementation of the process on workflow technology with a geoprocessing workflow tool called GeoJModelBuilder as a four-step process: first, detecting and registering features; second, KNN for matching points [99]; third, RANSAC for transformation estimation; and finally, warping all images to the same coordinate system. The workflow approach is proposed to give users a flexible way to create a workflow that fulfills their needs; the workflows can be bound to different algorithms for better results or less time consumption. Tests of the SURF algorithm with fast approximate nearest neighbor search (FANN) feature matching [100] were carried out through ROS, using Google Maps for the simulation of the panoramic images. For object tracking, the methodology can be used with a Kanade–Lucas–Tomasi (KLT) tracker to follow a region of interest [101]. The stitching process can also be used for position estimation, as presented in [102], where a position estimation methodology is applied for path planning and distance calculation by the triangle similarity principle and fused images. These implementations are presented in Table 6.

4.1.6. BRISK

As with BRIEF applied in ORB, the BRISK methodology outperforms SIFT and SURF in speed. With similar results and low calculation cost, this makes it ideal for UAV aerial panoramas, as shown with previous methods. An improved BRISK methodology developed to acquire reliable control points for image registration is presented in [103]. The spatial relationship of the key points derived from the coincidence of descriptors is analyzed to eliminate false correspondences; this methodology proves to be 4.7 times faster than classic BRISK. The use of ground control points allows a more accurate position, as shown in [15], where ground control points are used for thermal orthomosaics generated by BRISK, and an RGB camera analyzes the blooming of flowers for an apple orchard management system. This information is summarized in Table 7.

4.2. Feature-Based: Local Hybrid Transformation

Among the feature-based methodologies, the most accurate mesh analyses are based on SIFT and SURF features. Some methods focus on image compositing. Once the UAV obtains the aerial image, mesh-based stitching blending methods are applied to improve the panoramic result, as presented in [104], which proposes color blending based on superpixels, using simple linear iterative clustering after generating the SPHP panoramic image. Since the number of superpixels is much smaller than the number of individual pixels, this improvement reduces the computational complexity and processing time compared to multiband blending, color transfer based on image gradients, and color matching blending. Another approach is suggested in [105]: SURF and Harris Corner features are calculated to obtain the global homography and, applying PROSAC and KNN, fused with MDLT to improve the SPHP algorithm, reducing the ghosting in the overlapped image result. An improvement on AANAP is proposed by using a superpixel methodology to improve the composited image. After relating both images, AANAP improves the alignment accuracy and reduces the perspective distortion [15]; then, seam cutting is applied with superpixel segmentation to reduce ghosting, and image color blending is finally applied. To reduce the distortion generated by the global homography, the AANAP and SPHP algorithms use the similarity transform; however, in urban scenes, these algorithms cannot preserve building lines. New developments propose combining features with inertial navigation systems (INS) in order to improve efficiency or processing time. An indoor application of SIFT and INS is proposed by [106] for camera pose estimation, improving the stitching of drone-captured indoor video frames. Pose estimation via INS calculates the relation between image frames captured by the UAV to select the most related frames and reduce the number of image stitching processes.
Another option is the use of SIFT to estimate the global transformation parameters; however, the result accumulates registration errors and disregards the multiple constraints between images. To improve the stitching performance, [107] uses a shape-preserving transform to preserve the geometric similarity before reprojecting, which attempts to retain the shapes of local regions, and applies multiband fusion for gain compensation to obtain a natural-looking panoramic image. A matching improvement is proposed in [108] using the grid-based motion statistics (GMS) algorithm, which encapsulates motion smoothness as the statistical likelihood of a certain number of feature matches between a region pair, removing mismatches before applying RANSAC. A region-based methodology uses SIFT as the first step to obtain the global transformation, where the overlapped region is divided into small regions and the multiple regions receive different weights depending on the local homography [109]; RANSAC is then used to reduce the outliers, and the results are compared to SIFT, APAP, and AANAP. After the global projection estimation, the thin-plate spline (TPS) with a simple radial basis function formulates the image deformation, due to its good performance in both alignment quality and efficiency, by using robust elastic warping (REW) [110]. REW is a methodology that can be regarded as a combination of the mesh-based model and the direct deformation strategy to remove mismatches. The radial distortion function allows for a precise reconstruction due to its good alignment quality and efficiency [111]; then, by applying the global homography, a good effect can be obtained in the nonoverlapping regions of the target image. Table 8 presents a summary of these implementations.
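To make the thin-plate spline formulation concrete, the following minimal NumPy sketch solves the standard TPS interpolation system with the radial basis U(r) = r² log r plus an affine part. It is a generic illustration of the deformation step, not the REW implementation of [110]:

```python
import numpy as np

def tps_warp(ctrl_src, ctrl_dst, pts):
    """Thin-plate spline interpolation (minimal sketch).

    Solves for the TPS mapping that sends the control points
    ctrl_src -> ctrl_dst exactly, using the radial basis
    U(r) = r^2 log r plus an affine term, then evaluates it at pts.
    """
    def U(r):
        # r^2 log r, with the removable singularity at r = 0 set to 0.
        return np.where(r == 0, 0.0, r**2 * np.log(r + 1e-12))

    n = len(ctrl_src)
    d = np.linalg.norm(ctrl_src[:, None] - ctrl_src[None], axis=-1)
    K = U(d)                                   # (n, n) radial-basis kernel
    P = np.hstack([np.ones((n, 1)), ctrl_src]) # (n, 3) affine part
    L = np.zeros((n + 3, n + 3))
    L[:n, :n], L[:n, n:], L[n:, :n] = K, P, P.T
    rhs = np.zeros((n + 3, 2))
    rhs[:n] = ctrl_dst
    params = np.linalg.solve(L, rhs)           # TPS weights + affine coeffs
    w, a = params[:n], params[n:]
    d2 = np.linalg.norm(pts[:, None] - ctrl_src[None], axis=-1)
    return U(d2) @ w + a[0] + pts @ a[1:]

# A pure translation of the four control corners maps interior
# points by the same shift (the affine part absorbs it exactly).
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
dst = src + np.array([5.0, 2.0])
print(tps_warp(src, dst, np.array([[0.5, 0.5]])))
```

With denser, locally perturbed control points (e.g., the inlier matches of the overlapped region), the radial terms produce the smooth local deformation that the mesh-free elastic warping relies on.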

5. Discussion

The generation of aerial panoramas and mosaics is a very active field, with new approaches each year. The growing trend of this research field can be seen in Figure 4: the period from 2017 to 2019 saw a significant increase compared to 2020 and the first half of 2021, when the number of articles was nearly half that of the previous period. Thus, it can be assumed that interest in this field could increase, given the approaches applied in these techniques to solve some of the main problems of stitching methods and the development of different implementations for new UAV applications or improvements in their processes. Some of the issues addressed are processing time, match relation, hybrid transformation to avoid the most common errors of parallax and ghosting, and image composition, achieved by applying different methodologies and techniques based on feature-registration mosaicing methods. As can be observed in Figure 5, the implementation of mosaicing methodologies increased by almost 46% between 2010 and 2015. The process has thus shifted from applying the stitching technique to aerial images to implementing it on UAVs, focusing on the main problems related to image acquisition from a moving camera, where new solutions based on preprocessing filters improve the feature detection, thus ensuring the relation for the stitching process, or where gain compensation techniques are performed on the resulting mosaic image.
This study found that the main contribution to this area comes from the Asian continent, positioning it as a zone of interest in this research field; for this reason, it is important to follow the progress generated in this region, as shown in Figure 6. As can be noticed, another region whose progress is important to follow is the American continent, which has remained active in this research field. From these regions, the countries that have contributed the most are presented in Figure 7: in first place is China, followed by India, Taiwan, and Korea in Asia; in the case of America, the USA is the country that has contributed the most.
This may be due to the development of new applications based on the mosaicing of aerial images in a wide variety of areas. China has been one of the main countries involved in the growth and development of these technologies, as well as one of the largest producers of drones in the industry. It has innovated in introducing these systems into daily activities, where these techniques are applied in tasks such as surveillance and monitoring of large areas; the process can also be used for further applications, such as photogrammetry for archaeology and heritage maintenance, agricultural and forestry surveillance, civil engineering, digital elevation models and 3D mapping, rural roads, geological infrastructure, road information, urban terrain reconstruction, air and pollution analysis (for environmental awareness), urban configuration, and environmental monitoring, among others, as shown in Figure 8.
Among the main objectives of this review is to present the most widely applied mosaicing techniques for aerial panoramas in drones; as presented in the previous sections, the feature-based approaches are the most implemented methodology, as shown in Figure 9. More specifically, the approaches based on estimating a global single transform from features are applied even more often than the local hybrid transformations, despite the latter being more recent methods with some advantages over the earlier ones. This may be related to the fact that the global methodologies are faster, better documented, and, in some cases, have a lower computational cost compared to the local hybrid transformation approaches.
From the review of global single transform features, the most implemented methodologies are Harris corner, SIFT, FAST, ORB, SURF, and BRISK; these approaches are the most implemented in UAVs. The comparison between these methods shows that classic ORB is one of the most applied methodologies due to its speed relative to other classic methodologies, which makes it a good choice for real-time applications. However, a disadvantage of this method is its low accuracy compared to other methods, such as FAST, on which it is based, and BRISK, which outperforms it under rotation and fast scale changes. Even so, the SURF technique is implemented more often because of its performance compared to SIFT and its speed, close to that of ORB; however, if speed is not a key factor but accuracy and robustness are desired, the most applied method is SIFT, as shown in Figure 10, where new approaches based on GPU computing algorithms show an improvement in the speed of SIFT.
In the local hybrid transformation area, the most notable algorithms are SPHP and APAP, of which SPHP has the advantage of applying a local hybrid transformation with a similarity transformation to reduce distortion and preserve the similarity constraints (Figure 11). New approaches based on improvements in the matching and blending methods were considered within the aforementioned categories. As can be observed from the tables, in recent developments, feature registration is most often implemented with SIFT, SURF, and ORB; the same applies to the SPHP and AANAP methodologies. Figure 12 presents a plot of the principal feature-based transform methodologies implemented on UAVs.
As has been presented, since 2017, there has been an increase in this research area, mainly undertaken by universities and research centers, and it should be pointed out that the main works are carried out in Asia. Nonetheless, many countries have joined this research field, in which new proposals explore the use of machine learning- or artificial intelligence (AI)-based techniques. This does not mean that classic methodologies are outdated, since new approaches propose combining feature-based techniques with different algorithms, such as LPM, SR, PCA, and REW, which provide more robust and efficient methodologies. It is interesting to consider the new developments that can be generated in combination with different areas of image processing.

6. Conclusions

As shown in this work, aerial images have been used in many fields, and only in the last decade have new methods been developed, delivering great progress in the panoramic image field. This work presented the new approaches and methodologies proposed for different applications. The evolution of this field has attracted the attention of many researchers, with China being one of the countries that has contributed most heavily. More countries are joining the development of new mosaic-based techniques, improving panoramic aerial images by exploring different approaches. These improvements can make the generation of aerial images faster and more accurate, or enable more complex applications, such as surveillance and tracking systems focused on solving specific tasks, by combining mosaic generation with additional algorithms. The results presented here show the trend toward feature-based algorithms, combined with machine learning and AI to improve the generation of image mosaics by correcting the errors produced when joining images. The area of drone applications has brought renewed attention to image mosaicing, which in turn drives further development in the field.

Author Contributions

Conceptualization, methodology, J.K.G.-R. and L.A.M.-H.; formal analysis, J.P.B.-R.; investigation, J.K.G.-R.; resources, E.R.-O. and L.A.M.-H.; writing—original draft preparation, J.K.G.-R.; writing—review and editing, J.K.G.-R., L.A.M.-H., J.P.B.-R., E.R.-O. and K.A.C.-G.; visualization, K.A.C.-G.; supervision, J.P.B.-R. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ait-Aoudia, S.; Mahiou, R.; Djebli, H.; Guerrout, E.H. Satellite and Aerial Image Mosaicing—A Comparative Insight. In Proceedings of the 2012 16th International Conference on Information Visualisation, Montpellier, France, 11–13 July 2012; IEEE: Piscataway, NJ, USA, 2012; pp. 652–657. [Google Scholar] [CrossRef]
  2. Ghosh, D.; Kaabouch, N.; Semke, W. Super-Resolution Mosaicing of Unmanned Aircraft System (UAS) Surveillance Video Frames. Int. J. Sci. Eng. Res. 2013, 4, 1–9. [Google Scholar]
  3. Misra, I.; Manthira Moorthi, S.; Dhar, D.; Ramakrishnan, R. An automatic satellite image registration technique based on Harris corner detection and Random Sample Consensus (RANSAC) outlier rejection model. In Proceedings of the 2012 1st International Conference on Recent Advances in Information Technology, RAIT-2012, Dhanbad, India, 15–17 March 2012; IEEE: Piscataway, NJ, USA, 2012; pp. 68–73. [Google Scholar] [CrossRef]
  4. Tsao, P.; Ik, T.U.; Chen, G.W.; Peng, W.C. Stitching aerial images for vehicle positioning and tracking. In Proceedings of the IEEE International Conference on Data Mining Workshops, ICDMW, Singapore, 17–20 November 2018; IEEE: Piscataway, NJ, USA, 2019; pp. 616–623. [Google Scholar] [CrossRef]
  5. Wei, Q.; Lao, S.; Bai, L. Panorama stitching, moving object detection and tracking in UAV Videos. In Proceedings of the 2017 International Conference on Vision, Image and Signal Processing, ICVISP 2017, Osaka, Japan, 22–24 September 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 46–50. [Google Scholar] [CrossRef]
  6. Dong, N.; Ren, X.; Sun, M.; Jiang, C.; Zheng, H. Fast stereo aerial image construction and measurement for emergency rescue. In Proceedings of the 2013 5th International Conference on Geo-Information Technologies for Natural Disaster Management, GiT4NDM 2013, Mississauga, ON, Canada, 9–11 October 2013; IEEE: Piscataway, NJ, USA, 2014; pp. 119–123. [Google Scholar] [CrossRef]
  7. Lenjani, A.; Yeum, C.M.; Dyke, S.; Bilionis, I. Automated building image extraction from 360° panoramas for postdisaster evaluation. Comput. Aided Civ. Infrastruct. Eng. 2020, 35, 241–257. [Google Scholar] [CrossRef] [Green Version]
  8. Li, N.; Liao, T.; Wang, C. Perception-based seam cutting for image stitching. Signal Image Video Process. 2018, 12, 967–974. [Google Scholar] [CrossRef]
  9. Tariq, A.; Gillani, S.M.A.; Qureshi, H.K.; Haneef, I. Heritage preservation using aerial imagery from light weight low cost Unmanned Aerial Vehicle (UAV). In Proceedings of the International Conference on Communication Technologies, ComTech 2017, Rawalpindi, Pakistan, 19–21 April 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 201–205. [Google Scholar] [CrossRef]
  10. Sankey, T.; Donager, J.; McVay, J.; Sankey, J.B. UAV lidar and hyperspectral fusion for forest monitoring in the southwestern USA. Remote Sens. Environ. 2017, 195, 30–43. [Google Scholar] [CrossRef]
  11. Şasi, A.; Yakar, M. Photogrammetric modelling of Hasbey Dar’ülhuffaz (masjid) using an unmanned aerial vehicle. Int. J. Eng. Geosci. 2018, 3, 6–11. [Google Scholar] [CrossRef]
  12. Doğan, Y.; Yakar, M. GIS and three-dimensional modeling for cultural heritages. Int. J. Eng. Geosci. 2018, 3, 50–55. [Google Scholar] [CrossRef] [Green Version]
  13. Dawn, S.; Khera, A.; Agarwal, N.; Arora, A. Panorama Generation from a Video. In Proceedings of the 2018 5th IEEE Uttar Pradesh Section International Conference on Electrical, Electronics and Computer Engineering, UPCON 2018, Uttar Pradesh, India, 2–4 November 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 1–4. [Google Scholar] [CrossRef]
  14. Pajares, G. Overview and Current Status of Remote Sensing Applications Based on Unmanned Aerial Vehicles (UAVs). Photogramm. Eng. Remote Sens. 2015, 81, 281–329. [Google Scholar] [CrossRef] [Green Version]
  15. Yuan, W.; Choi, D. UAV-based heating requirement determination for frost management in apple orchard. Remote Sens. 2021, 13, 273. [Google Scholar] [CrossRef]
  16. Mistry, S.; Patel, A. Image Stitching using Harris Feature Detection. Int. Res. J. Eng. Technol. 2016, 3, 1363–1369. [Google Scholar]
  17. Pandey, A.; Pati, U.C. Image mosaicing: A deeper insight. Image Vis. Comput. 2019, 89, 236–257. [Google Scholar] [CrossRef]
  18. Mustafa, R.; Dhar, P. A method to recognize food using GIST and SURF features. In Proceedings of the 2018 Joint 7th International Conference on Informatics, Electronics and Vision and 2nd International Conference on Imaging, Vision and Pattern Recognition, ICIEV-IVPR 2018, Kitakyushu, Japan, 25–29 June 2018; IEEE: Piscataway, NJ, USA, 2019; pp. 127–130. [Google Scholar] [CrossRef]
  19. Tahir, W.; Majeed, A.; Rehman, T. Indoor/outdoor image classification using GIST image features and neural network classifiers. In Proceedings of the 2015 12th International Conference on High-Capacity Optical Networks and Enabling/Emerging Technologies (HONET), Islamabad, Pakistan, 21–23 December 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 1–5. [Google Scholar] [CrossRef]
  20. Zomet, A.; Levin, A.; Peleg, S.; Weiss, Y. Seamless image stitching by minimizing false edges. IEEE Trans. Image Process. 2006, 15, 969–977. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  21. Lyu, W.; Zhou, Z.; Chen, L.; Zhou, Y. A survey on image and video stitching. Virtual Real. Intell. Hardw. 2019, 1, 55–83. [Google Scholar] [CrossRef]
  22. Ghosh, D.; Kaabouch, N. A survey on image mosaicing techniques. J. Vis. Commun. Image Represent. 2016, 34, 1–11. [Google Scholar] [CrossRef]
  23. Bignalet-Cazalet, F.; Baillarin, S.; Greslou, D.; Panem, C. Automatic and generic mosaicing of satellite images. In Proceedings of the International Geoscience and Remote Sensing Symposium (IGARSS), Honolulu, HI, USA, 25–30 July 2010; IEEE: Piscataway, NJ, USA, 2010; pp. 3158–3161. [Google Scholar] [CrossRef]
  24. Zhang, T.; Lei, B.; Gan, Y.; Hu, Y.; Liu, K. National satellite image coverage using overall planning technique. In Proceedings of the 2016 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Beijing, China, 10–15 July 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 5488–5491. [Google Scholar] [CrossRef]
  25. Hsieh, S.L.; Chen, Y.W.; Chen, C.C.; Chang, T.W. A geometry-distortion resistant image detection system based on log-polar transform and scale invariant feature transform. In Proceedings of the 2011 IEEE International Conference on High Performance Computing and Communications (HPCC), Banff, AB, Canada, 2–4 September 2011; IEEE: Piscataway, NJ, USA, 2011; pp. 893–897. [Google Scholar] [CrossRef]
  26. Kalluri, S.; Csiszar, I.; Kondragunta, S.; Laszlo, I. Non-Meteorological Application of New Generation Geostationary Satellites. In Proceedings of the International Geoscience and Remote Sensing Symposium (IGARSS), Yokohama, Japan, 28 July–2 August 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 8773–8776. [Google Scholar] [CrossRef]
  27. Matungka, R.; Zheng, Y.F.; Ewing, R.L. Aerial image registration using projective polar transform. In Proceedings of the ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing; ICASSP 2009, Taipei, Taiwan, 19–24 April 2009; IEEE: Piscataway, NJ, USA, 2009; pp. 1061–1064. [Google Scholar] [CrossRef]
  28. Singh, A.K.; Gopala Krishna, B.; Srivastava, P.K. Satellite platform stability estimation using image matching. In Proceedings of the 2011 IEEE Recent Advances in Intelligent Computational Systems, RAICS 2011, Trivandrum, India, 22–24 September 2011; IEEE: Piscataway, NJ, USA, 2011; pp. 378–380. [Google Scholar] [CrossRef]
  29. Tang, H.; Tang, F. AH-SIFT: Augmented Histogram based SIFT descriptor. In Proceedings of the International Conference on Image Processing, ICIP, Orlando, FL, USA, 30 September–3 October 2012; IEEE: Piscataway, NJ, USA, 2012; pp. 2357–2360. [Google Scholar] [CrossRef]
  30. Adel, E.; Elmogy, M.; Elbakry, H. Image Stitching System Based on ORB Feature-Based Technique and Compensation Blending. Int. J. Adv. Comput. Sci. Appl. 2015, 6, 55–62. [Google Scholar] [CrossRef] [Green Version]
  31. Alcantarilla, P.F.; Bartoli, A.; Davison, A.J. KAZE features. In Lecture Notes in Computer Science (including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Springer: Berlin/Heidelberg, Germany, 2012; Volume 7577, pp. 214–227. [Google Scholar] [CrossRef]
  32. Bay, H.; Ess, A.; Tuytelaars, T.; Van Gool, L. Speeded-Up Robust Features (SURF). Comput. Vis. Image Underst. 2008, 110, 346–359. [Google Scholar] [CrossRef]
  33. Brown, M.; Lowe, D.G. Automatic panoramic image stitching using invariant features. Int. J. Comput. Vis. 2007, 74, 59–73. [Google Scholar] [CrossRef] [Green Version]
  34. Zou, Y.; Li, G.; Wang, S. The fusion of satellite and unmanned aerial vehicle (UAV) imagery for improving classification performance. In Proceedings of the 2018 IEEE International Conference on Information and Automation (ICIA), Wuyishan, China, 11–13 August 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 836–841. [Google Scholar] [CrossRef]
  35. Emilien, A.V.; Thomas, C.; Thomas, H. UAV & satellite synergies for optical remote sensing applications: A literature review. Sci. Remote Sens. 2021, 3, 100019. [Google Scholar] [CrossRef]
  36. Szeliski, R. Image Alignment and Stitching: A Tutorial. Found. Trends Comput. Graph. Vis. 2007, 2, 1–104. [Google Scholar] [CrossRef]
  37. Bhadane, D.; Pawar, K.N. A Review Paper on Various Approaches for Image Mosaicing. Int. J. Eng. Manag. Res. 2013, 62, 193–195. [Google Scholar]
  38. Torr, P.H.S.; Zisserman, A. Feature Based Methods for Structure and Motion Estimation. In Lecture Notes in Computer Science (including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Springer: Berlin/Heidelberg, Germany, 2000; Volume 1883, pp. 278–294. [Google Scholar] [CrossRef]
  39. Adel, E.; Elmogy, M.; Elbakry, H. Image Stitching based on Feature Extraction Techniques: A Survey. Int. J. Comput. Appl. 2014, 99, 1–8. [Google Scholar] [CrossRef]
  40. Ju, M.H.; Kang, H.B. Panoramic image generation with lens distortions. In Proceedings of the 2013 IEEE International Conference on Image Processing, ICIP 2013, Melbourne, VIC, Australia, 15–18 September 2013; IEEE: Piscataway, NJ, USA, 2013; pp. 1296–1300. [Google Scholar] [CrossRef]
  41. Sovetov, K.; Kim, J.S.; Kim, D. Online Panorama Image Generation for a Disaster Rescue Vehicle. In Proceedings of the 2019 16th International Conference on Ubiquitous Robots (UR), Jeju, Korea, 24–27 June 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 92–97. [Google Scholar] [CrossRef]
  42. Zia, O.; Kim, J.h.; Han, K.; Lee, J.W. 360° Panorama Generation using Drone Mounted Fisheye Cameras. In Proceedings of the 2019 IEEE International Conference on Consumer Electronics (ICCE), Las Vegas, NV, USA, 11–13 January 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 1–3. [Google Scholar] [CrossRef]
  43. Huang, J.; Chen, Z.; Ceylan, D.; Jin, H. 6-DOF VR videos with a single 360-camera. In Proceedings of the 2017 IEEE Virtual Reality (VR), Los Angeles, CA, USA, 18–22 March 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 37–44. [Google Scholar] [CrossRef]
  44. Workman, S.; Greenwell, C.; Zhai, M.; Baltenberger, R.; Jacobs, N. DEEPFOCAL: A method for direct focal length estimation. In Proceedings of the 2015 IEEE International Conference on Image Processing (ICIP), Quebec City, QC, Canada, 27–30 September 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 1369–1373. [Google Scholar] [CrossRef] [Green Version]
  45. Azarbayejani, A.; Pentland, A.P. Recursive Estimation of Motion, Structure, and Focal Length. IEEE Trans. Pattern Anal. Mach. Intell. 1995, 17, 562–575. [Google Scholar] [CrossRef]
  46. Kriegman, D. Homography estimation from planar contours in image sequence. Opt. Eng. 2007, 49, 037202. [Google Scholar] [CrossRef]
  47. Li, E.; Mo, H.; Xu, D.; Li, H. Image Projective Invariants. IEEE Trans. Pattern Anal. Mach. Intell. 2019, 41, 1144–1157. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  48. Tareen, S.A.K.; Saleem, Z. A comparative analysis of SIFT, SURF, KAZE, AKAZE, ORB, and BRISK. In Proceedings of the 2018 International Conference on Computing, Mathematics and Engineering Technologies: Invent, Innovate and Integrate for Socioeconomic Development, iCoMET 2018, Sukkur, Pakistan, 3–4 March 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 1–10. [Google Scholar] [CrossRef]
  49. Szeliski, R. Computer Vision; Texts in Computer Science; Springer: London, UK, 2011; Volume 42, p. 823. ISBN 978-1-84882-935-0. [Google Scholar] [CrossRef]
  50. Fischler, M.A.; Bolles, R.C. Random sample consensus. Commun. ACM 1981, 24, 381–395. [Google Scholar] [CrossRef]
  51. Chum, O.; Matas, J. Matching with PROSAC—Progressive Sample Consensus. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), San Diego, CA, USA, 20–25 June 2005; IEEE: Piscataway, NJ, USA, 2005; Volume 1, pp. 220–226. [Google Scholar] [CrossRef] [Green Version]
  52. Agarwal, A.; Jawahar, C.V.; Narayanan, P.J. A Survey of Planar Homography Estimation Techniques; Technical Report; International Institute of Information Technology: Hyderabad, India, 2005. [Google Scholar]
  53. Reboucas, R.A.; Eller, Q.d.C.; Habermann, M.; Shiguemori, E.H. Visual Odometry and Moving Objects Localization Using ORB and RANSAC in Aerial Images Acquired by Unmanned Aerial Vehicles. In Proceedings of the 2013 BRICS Congress on Computational Intelligence and 11th Brazilian Congress on Computational Intelligence, Ipojuca, Brazil, 8–11 September 2013; IEEE: Piscataway, NJ, USA, 2013; pp. 441–446. [Google Scholar] [CrossRef]
  54. Triggs, B.; McLauchlan, P.F.; Hartley, R.I.; Fitzgibbon, A.W. Bundle Adjustment—A Modern Synthesis. In Lecture Notes in Computer Science (including Subseries Vision Algorithms: Theory and Practice); Springer: Berlin/Heidelberg, Germany, 2000; Volume 1883, pp. 298–372. [Google Scholar] [CrossRef] [Green Version]
  55. Agarwal, S.; Snavely, N.; Seitz, S.M.; Szeliski, R. Bundle Adjustment in the Large. In Proceedings of the Computer Vision—ECCV 2010, Crete, Greece, 5–11 September 2010; Springer: Berlin/Heidelberg, Germany, 2010; pp. 29–42. [Google Scholar] [CrossRef]
  56. Shi, D.; Fan, Z.; Yin, H.; Liu, D.C. Fast GPU-based automatic time gain compensation for ultrasound imaging. In Proceedings of the 2010 4th International Conference on Bioinformatics and Biomedical Engineering, iCBBE 2010, Chengdu, China, 18–20 June 2010; IEEE: Piscataway, NJ, USA, 2010; pp. 1–3. [Google Scholar] [CrossRef]
  57. Burt, P.J.; Adelson, E.H. A multiresolution spline with application to image mosaics. ACM Trans. Graph. TOG 1983, 2, 217–236. [Google Scholar] [CrossRef]
  58. Zhao, N.; Zheng, X. Multi-band blending of aerial images using GPU acceleration. In Proceedings of the 2017 10th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI), Shanghai, China, 14–16 October 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 1–5. [Google Scholar] [CrossRef]
  59. Lee, S.; Lee, S.J.; Park, J.; Kim, H.J. Exposure correction and image blending for planar panorama stitching. In Proceedings of the 2016 16th International Conference on Control, Automation and Systems (ICCAS), Gyeongju, Korea, 16–19 October 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 128–131. [Google Scholar] [CrossRef]
  60. Tian, F.; Shi, P. Image Mosaic using ORB descriptor and improved blending algorithm. In Proceedings of the 2014 7th International Congress on Image and Signal Processing, CISP 2014, Dalian, China, 14–16 October 2014; IEEE: Piscataway, NJ, USA, 2014; pp. 693–698. [Google Scholar] [CrossRef]
  61. Bind, V.S. Robust Techniques for Feature-based Image Mosaicing. Ph.D. Thesis, National Institute of Technology Rourkela, Odisha, India, 2013. [Google Scholar]
  62. Harris, C.; Stephens, M. A Combined Corner and Edge Detector. In Proceedings of the Alvey Vision Conference, Manchester, UK, 31 August–2 September 1988; The Plessey Company PLC.: London, UK, 1988; pp. 147–151. [Google Scholar]
  63. Rosten, E.; Drummond, T. Fusing Points and Lines for High Performance Tracking. In Proceedings of the 10th IEEE International Conference on Computer Vision, Beijing, China, 17–21 October 2005; IEEE: Piscataway, NJ, USA, 2005; Volume 2, pp. 1508–1515. [Google Scholar] [CrossRef]
  64. Rublee, E.; Rabaud, V.; Konolige, K.; Bradski, G. ORB: An efficient alternative to SIFT or SURF. In Proceedings of the 2011 International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; IEEE: Piscataway, NJ, USA, 2011; pp. 2564–2571. [Google Scholar] [CrossRef]
  65. Calonder, M.; Lepetit, V.; Strecha, C.; Fua, P. BRIEF: Binary Robust Independent Elementary Features. In Lecture Notes in Computer Science (including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Crete, Greece, 5–11 September 2010; Springer: Berlin/Heidelberg, Germany, 2010; Volume 6314, pp. 778–792. [Google Scholar] [CrossRef] [Green Version]
  66. Leutenegger, S.; Chli, M.; Siegwart, R.Y. BRISK: Binary Robust invariant scalable keypoints. In Proceedings of the IEEE International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; IEEE: Piscataway, NJ, USA, 2011; pp. 2548–2555. [Google Scholar] [CrossRef] [Green Version]
  67. Lowe, D.G. Distinctive Image Features from Scale-Invariant Keypoints. Int. J. Comput. Vis. 2004, 60, 91–110. [Google Scholar] [CrossRef]
  68. Joshi, H.; Sinha, M. A Survey on Image Mosaicing Techniques. Int. J. Adv. Res. Comput. Eng. Technol. 2013, 2, 365–369. [Google Scholar]
  69. Xiang, T.Z.; Xia, G.S.; Bai, X.; Zhang, L. Image stitching by line-guided local warping with global similarity constraint. Pattern Recognit. 2018, 83, 481–497. [Google Scholar] [CrossRef] [Green Version]
  70. Trajković, M.; Hedley, M. Fast corner detection. Image Vis. Comput. 1998, 16, 75–87. [Google Scholar] [CrossRef]
  71. Rosten, E.; Drummond, T. Machine Learning for High-Speed Corner Detection. In Computer Vision—ECCV 2006; Springer: Berlin/Heidelberg, Germany, 2006; pp. 430–443. [Google Scholar] [CrossRef]
  72. Jing, H.; He, X.; Han, Q.; Niu, X. CBRISK: Colored binary robust invariant scalable keypoints. IEICE Trans. Inf. Syst. 2013, 96, 392–395. [Google Scholar] [CrossRef] [Green Version]
  73. Yang, S.; Li, B.; Zeng, K. SBRISK: Speed-up binary robust invariant scalable keypoints. J. Real-Time Image Process. 2016, 12, 583–591. [Google Scholar] [CrossRef]
  74. Pan, F.; Shang, H. Enhancing Image Mosaicing with Adaptive Local Homographies. In Proceedings of the International Conference on Digital Signal Processing, DSP, Shanghai, China, 19–21 November 2018; IEEE: Piscataway, NJ, USA, 2019. [Google Scholar] [CrossRef]
  75. Chang, C.H.; Sato, Y.; Chuang, Y.Y. Shape-Preserving Half-Projective Warps for Image Stitching. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; IEEE: Piscataway, NJ, USA, 2014; Volume 1, pp. 3254–3261. [Google Scholar] [CrossRef] [Green Version]
  76. Lin, C.C.; Pankanti, S.U.; Ramamurthy, K.N.; Aravkin, A.Y. Adaptive as-natural-as-possible image stitching. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 1155–1163. [Google Scholar] [CrossRef] [Green Version]
  77. Han, S.; Yu, W.; Yang, H.; Wan, S. An Improved Corner Detection Algorithm Based on Harris. In Proceedings of the 2018 Chinese Automation Congress, CAC 2018, Xi’an, China, 30 November–2 December 2018; IEEE: Piscataway, NJ, USA, 2019; pp. 1575–1580. [Google Scholar] [CrossRef]
  78. Yuanting, X.; Yi, L.; Kun, Y.; Chunxue, S. Research on image mosaic of low altitude UAV based on harris corner detection. In Proceedings of the 2019 14th IEEE International Conference on Electronic Measurement & Instruments (ICEMI), Changsha, China, 1–3 November 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 639–645. [Google Scholar] [CrossRef]
  79. Hong, Y.X.; Jie, Z.Q.; Dan Dan, Z.; Xin, S.X.; Jing, X. UAV image automatic mosaic method based on matching of feature points. In Proceedings of the 2013 Chinese Automation Congress, Changsha, China, 7–8 November 2013; IEEE: Piscataway, NJ, USA, 2013; pp. 33–36. [Google Scholar] [CrossRef]
  80. Cheng, C.; Wang, X.; Li, X. UAV image matching based on surf feature and harris corner algorithm. In Proceedings of the 4th International Conference on Smart and Sustainable City (ICSSC 2017), Shanghai, China, 5–6 June 2017; Institution of Engineering and Technology: London, UK, 2017; pp. 1–6. [Google Scholar] [CrossRef]
  81. Ni, Z.S. B-SIFT: A binary SIFT based local image feature descriptor. In Proceedings of the 4th International Conference on Digital Home, ICDH 2012, Guangzhou, China, 23–25 November 2012; IEEE: Piscataway, NJ, USA, 2012; pp. 117–121. [Google Scholar] [CrossRef]
  82. Wang, Y.; Camargo, A.; Fevig, R.; Martel, F.; Schultz, R.R. Image mosaicking from uncooled thermal IR video captured by a small UAV. In Proceedings of the IEEE Southwest Symposium on Image Analysis and Interpretation, Santa Fe, NM, USA, 24–26 March 2008; IEEE: Piscataway, NJ, USA, 2008; pp. 161–164. [Google Scholar] [CrossRef]
  83. Ye, J.G.; Chen, H.T.; Tsai, W.J. Panorama Generation Based on Aerial Images. In Proceedings of the 2018 IEEE International Conference on Multimedia & Expo Workshops (ICMEW), San Diego, CA, USA, 23–27 July 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 1–6. [Google Scholar] [CrossRef]
  84. Xiaoyue, J.; Xiaojia, X.; Jian, H. Real-Time Panorama Stitching Method for UAV Sensor Images Based on the Feature Matching Validity Prediction of Grey Relational Analysis. In Proceedings of the 2018 15th International Conference on Control Automation, Robotics and Vision, ICARCV 2018, Singapore, 18–21 November 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 1454–1459. [Google Scholar] [CrossRef]
  85. Zhang, F.; Yang, T.; Liu, L.; Liang, B.; Bai, Y.; Li, J. Image-Only Real-Time Incremental UAV Image Mosaic for Multi-Strip Flight. IEEE Trans. Multimed. 2021, 23, 1410–1425. [Google Scholar] [CrossRef]
  86. Liu, H.; Lv, M.; Gao, Y.; Li, J.; Lan, J.; Gao, W. Information Processing System Design for Multi-rotor UAV-Based Earthquake Rescue. In Man-Machine-Environment System Engineering; Springer: Singapore, 2020; Volume 645, pp. 321–330. [Google Scholar] [CrossRef]
  87. Verykokou, S.; Ioannidis, C.; Athanasiou, G.; Doulamis, N.; Amditis, A. 3D reconstruction of disaster scenes for urban search and rescue. Multimed. Tools Appl. 2018, 77, 9691–9717. [Google Scholar] [CrossRef]
  88. Ismail, H.; Rahmani, A.; Aljasmi, N.; Quadir, J. Stitching Approach for PV Panel Detection. In Proceedings of the 2020 Advances in Science and Engineering Technology International Conferences (ASET), Dubai, United Arab Emirates, 4 February–9 April 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 1–4. [Google Scholar] [CrossRef]
  89. Hu, D.; Wang, Y.; Hu, Q.; Hu, W. The construction method of measurable aerial panorama based on panoramic image and multi-view oblique images matching. In Proceedings of the 4th International Workshop on Earth Observation and Remote Sensing Applications, EORSA 2016, Guangzhou, China, 4–6 July 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 232–236. [Google Scholar] [CrossRef]
  90. Adel, E.; Elmogy, M.; Elbakry, H. Real time image mosaicing system based on feature extraction techniques. In Proceedings of the 2014 9th IEEE International Conference on Computer Engineering and Systems, ICCES 2014, Cairo, Egypt, 22–23 December 2014; IEEE: Piscataway, NJ, USA, 2014; pp. 339–345. [Google Scholar] [CrossRef]
  91. Wu, L.; Gao, Y.; Zhang, J. An improved SIFT algorithm based on FAST corner detection. In Proceedings of the 2013 9th International Conference on Intelligent Information Hiding and Multimedia Signal Processing, IIH-MSP 2013, Beijing, China, 16–18 October 2013; IEEE: Piscataway, NJ, USA, 2013; pp. 202–205. [Google Scholar] [CrossRef]
  92. Botterill, T.; Mills, S.; Green, R. Real-time aerial image mosaicing. In Proceedings of the International Conference Image and Vision Computing New Zealand, Queenstown, New Zealand, 8–9 November 2010; IEEE: Piscataway, NJ, USA, 2010. [Google Scholar] [CrossRef]
  93. Zhang, X.; Hu, Q.; Ai, M.; Ren, X. A Multitemporal UAV Images Registration Approach Using Phase Congruency. In Proceedings of the International Conference on Geoinformatics, Kunming, China, 28–30 June 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 1–6. [Google Scholar] [CrossRef]
  94. Almagbile, A. Estimation of crowd density from UAVs images based on corner detection procedures and clustering analysis. Geo-Spat. Inf. Sci. 2019, 22, 23–34. [Google Scholar] [CrossRef]
  95. Chen, J.; Luo, L.; Wang, S.; Wu, H. Automatic Panoramic UAV Image Mosaic Using ORB Features and Robust Transformation Estimation. In Proceedings of the 2018 37th Chinese Control Conference (CCC), Wuhan, China, 25–27 July 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 4265–4270. [Google Scholar] [CrossRef]
  96. Yeh, C.C.; Chang, Y.L.; Hsu, P.H.; Hsien, C.H. GPU acceleration of UAV image splicing using oriented fast and rotated brief combined with PCA. In Proceedings of the International Geoscience and Remote Sensing Symposium (IGARSS), Valencia, Spain, 22–27 July 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 5700–5703. [Google Scholar] [CrossRef]
  97. Zhang, Y.; Zhang, J.; Zhang, L.; Wang, S. Research on panorama reconstruction technique of UAV aerial image based on improved ORB algorithm. In Proceedings of the 2019 IEEE 3rd International Conference on Electronic Information Technology and Computer Engineering, EITCE 2019, Xiamen, China, 18–20 October 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 1252–1256. [Google Scholar] [CrossRef]
  98. Wang, S.; Zhang, Y.; Wang, W.; Zhao, Y.; Zhu, S. A Novel Image Mosaic Method Based on Improved ORB and its Application in Police-UAV. In Proceedings of the IEEE 2018 International Congress on Cybermatics: 2018 IEEE Conferences on Internet of Things, Green Computing and Communications, Cyber, Physical and Social Computing, Smart Data, Blockchain, Computer and Information Technology, iThings/Gree, Halifax, NS, Canada, 30 July–3 August 2018; pp. 1707–1713. [Google Scholar] [CrossRef]
  99. Wu, Z.; Yue, P.; Zhang, M.; Tan, Z. A workflow approach for mosaicking UAV images. In Proceedings of the 2016 5th International Conference on Agro-Geoinformatics, Agro-Geoinformatics 2016, Tianjin, China, 18–20 July 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 1–4. [Google Scholar] [CrossRef]
100. Hadrovic, E.; Osmankovic, D.; Velagic, J. Aerial image mosaicing approach based on feature matching. In Proceedings of the ELMAR—International Symposium Electronics in Marine, Zadar, Croatia, 18–20 September 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 177–180. [Google Scholar] [CrossRef]
  101. Micheal, A.A.; Vani, K. Comparative analysis of SIFT and SURF on KLT tracker for UAV applications. In Proceedings of the 2017 IEEE International Conference on Communication and Signal Processing, ICCSP 2017, Chennai, India, 6–8 April 2017; IEEE: Piscataway, NJ, USA, 2018; pp. 1000–1003. [Google Scholar] [CrossRef]
  102. Yue, M.; Yan, Q. UAV remote sensing positioning algorithm based on image registration. In Proceedings of the 2020 IEEE International Conference on Information Technology, Big Data and Artificial Intelligence (ICIBA), Chongqing, China, 6–8 November 2020; IEEE: Piscataway, NJ, USA, 2020; Volume 1, pp. 926–931. [Google Scholar] [CrossRef]
  103. Tsai, C.H.; Lin, Y.C. An accelerated image matching technique for UAV orthoimage registration. ISPRS J. Photogramm. Remote Sens. 2017, 128, 130–145. [Google Scholar] [CrossRef]
  104. Fang, F.; Wang, T.; Fang, Y.; Zhang, G. Fast Color Blending for Seamless Image Stitching. IEEE Geosci. Remote Sens. Lett. 2019, 16, 1115–1119. [Google Scholar] [CrossRef]
105. Leng, J.; Wang, S. UAV Remote Sensing Image Mosaic Technology Combined with Improved SPHP Algorithm. In Proceedings of the 2020 IEEE International Conference on Mechatronics and Automation (ICMA 2020), Beijing, China, 13–16 October 2020; IEEE: Piscataway, NJ, USA, 2020; Volume 2, pp. 1155–1160. [Google Scholar] [CrossRef]
  106. Ramaswamy, A.; Gubbi, J.; Raj, R.; Purushothaman, B. Frame stitching in indoor environment using drone captured images. In Proceedings of the International Conference on Image Processing, ICIP, Athens, Greece, 7–10 October 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 91–95. [Google Scholar] [CrossRef]
107. Wan, Q.; Chen, J.; Luo, L.; Gong, W.; Wei, L. Drone Image Stitching Using Local Mesh-Based Bundle Adjustment and Shape-Preserving Transform. IEEE Trans. Geosci. Remote Sens. 2020, 1–11. [Google Scholar] [CrossRef]
  108. Lan, X.; Guo, B.; Huang, Z.; Zhang, S. An Improved UAV Aerial Image Mosaic Algorithm Based on GMS-RANSAC. In Proceedings of the 2020 IEEE 5th International Conference on Signal and Image Processing, ICSIP 2020, Nanjing, China, 23–25 October 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 148–152. [Google Scholar] [CrossRef]
  109. Xu, Q.; Luo, L.; Chen, J.; Gong, W.; Guo, D. UAV Image Mosaicing Based Multi-Region Local Projection Deformation. In Proceedings of the IGARSS 2020—2020 IEEE International Geoscience and Remote Sensing Symposium, Waikoloa, HI, USA, 26 September–2 October 2020; pp. 1845–1848. [Google Scholar] [CrossRef]
  110. Li, J.; Wang, Z.; Lai, S.; Zhai, Y.; Zhang, M. Parallax-Tolerant Image Stitching Based on Robust Elastic Warping. IEEE Trans. Multimed. 2018, 20, 1672–1687. [Google Scholar] [CrossRef]
  111. Ullah, H.; Zia, O.; Kim, J.H.; Han, K.; Weon Lee, J. Automatic 360° Mono-Stereo Panorama Generation Using a Cost-Effective Multi-Camera System. Sensors 2020, 20, 3097. [Google Scholar] [CrossRef]
  112. Zhou, Y.; Rui, T.; Li, Y.; Zuo, X. A UAV patrol system using panoramic stitching and object detection. Comput. Electr. Eng. 2019, 80, 106473. [Google Scholar] [CrossRef]
  113. Luo, L.; Wan, Q.; Chen, J.; Wang, Y.; Mei, X. Drone image stitching guided by robust elastic warping and locality preserving matching. In Proceedings of the IGARSS 2019—2019 IEEE International Geoscience and Remote Sensing Symposium, Yokohama, Japan, 28 July–2 August 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 9212–9215. [Google Scholar] [CrossRef]
Figure 1. Mosaicing process.
Figure 2. Image acquisition: (a) multiple-camera translational and rotational acquisition; (b) single-camera rotational acquisition.
Figure 3. Parallax introduced by the discrepancy of the camera.
Figure 4. Mosaicing development.
Figure 5. UAV mosaicing implementation.
Figure 6. Development by continent.
Figure 7. Development by country.
Figure 8. Percentage of aerial image applications.
Figure 9. Most used feature methodology.
Figure 10. Global single transforms implemented on UAVs.
Figure 11. Local hybrid transform implemented on UAVs.
Figure 12. Methodology developments by year.
Table 1. UAV and satellite characteristics comparison.
Characteristics | UAVs | Satellites
Flexibility | High | Low
Cloud dependence | No | Yes
Direct meteorological constraint | Wind and precipitation | No
Operator required | Yes | No
Payload | Interchangeable | Permanent
Legislation | Restrictive | None
Data update | Constant refreshing | Periodical
Working time | Short (battery life) | Long (limited to satellite life)
Table 2. Harris corner applied on UAVs.
Author | Advantage
X. Yuanting et al. (2019) [78] | This algorithm improves the stitching speed.
C. Cheng et al. (2017) [80] | Image matching accuracy is improved with less processing time.
Y. Hong et al. (2013) [79] | Efficiency and accuracy are improved by a registration constraint.
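The variants above all build on the same corner measure. As a rough NumPy-only illustration (not the optimized implementations from the cited papers), the Harris response R = det(M) − k·trace(M)² can be computed from a box-filtered structure tensor:

```python
import numpy as np

def harris_response(img, k=0.05, r=2):
    """Harris corner response map R = det(M) - k * trace(M)^2, where M is
    the structure tensor summed over a (2r+1) x (2r+1) window."""
    img = img.astype(float)
    Iy, Ix = np.gradient(img)          # central-difference gradients

    def window_sum(a):
        # Sum over the (2r+1) x (2r+1) neighborhood via shifted adds.
        p = np.pad(a, r, mode="edge")
        out = np.zeros_like(a)
        for dy in range(2 * r + 1):
            for dx in range(2 * r + 1):
                out += p[dy:dy + a.shape[0], dx:dx + a.shape[1]]
        return out

    Ixx = window_sum(Ix * Ix)
    Iyy = window_sum(Iy * Iy)
    Ixy = window_sum(Ix * Iy)
    return (Ixx * Iyy - Ixy ** 2) - k * (Ixx + Iyy) ** 2
```

On a synthetic step corner the response is strongly positive at the corner, negative along the edges, and near zero in flat regions; that separation is the behavior the improved variants in the table exploit.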
Table 3. SIFT applied on UAVs.
Author | Advantage
D. Ghosh et al. (2013) [2] | The SR algorithm improves in effectiveness.
J. Ye et al. (2018) [83] | Speed estimation produces the aerial panorama in a short time, with appropriate aspect ratios and good visual quality.
P. Tsao et al. (2019) [4] | A positioning system based on image stitching and top-view transformation is proposed, relating the stitched imagery to GPS data to calculate the relative UAV position for distance measurement and object localization.
J. Xiaoyue et al. (2018) [84] | Stitching-region prediction based on IMU and GPS information is used for image stitching with SIFT.
S. Verykokou et al. (2018) [87] | Fast 3D modeling of fully or partially collapsed buildings from UAV images is proposed for urban search and rescue tasks.
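All the SIFT-based pipelines above start from the same scale-space idea: keypoints are extrema of a difference-of-Gaussians (DoG). Below is a minimal NumPy sketch of that first stage only — a separable Gaussian blur and a single DoG layer; the cited works of course use full multi-octave pyramids, orientation histograms, and 128-dimensional descriptors:

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur with a truncated kernel (radius ~3*sigma)."""
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    kern = np.exp(-x ** 2 / (2 * sigma ** 2))
    kern /= kern.sum()
    rows = np.apply_along_axis(np.convolve, 1, img.astype(float), kern, mode="same")
    return np.apply_along_axis(np.convolve, 0, rows, kern, mode="same")

def dog_layer(img, sigma, k=1.6):
    """One difference-of-Gaussians layer, G(k*sigma) - G(sigma).
    Bright blobs of matching size show up as strong minima."""
    return gaussian_blur(img, k * sigma) - gaussian_blur(img, sigma)
```

For a bright blob, the minimum of the DoG layer lands on the blob center; full SIFT additionally compares each sample against its neighbors in adjacent scales before accepting a keypoint.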
Table 4. FAST applied on UAVs.
Author | Advantage
T. Botterill, S. Mills, R. Green [92] | Images are registered and stitched together seamlessly in real time.
X. Zhang, Q. Hu, M. Ai et al. [93] | By applying phase congruency, images with color and illumination changes are stitched evenly.
A. Almagbile [94] | The accuracy of the FAST-9 and FAST-12 methodologies, compared in terms of completeness and correctness, is improved.
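FAST owes its speed to a purely comparative test: a pixel is a corner if at least n contiguous pixels on a 16-pixel circle of radius 3 are all brighter, or all darker, than the center by a threshold. The following is a plain-Python sketch of the FAST-9 segment test (not the machine-learned decision-tree implementation the detector normally uses):

```python
import numpy as np

# Offsets (dy, dx) of the 16-pixel Bresenham circle of radius 3.
CIRCLE = [(0, 3), (1, 3), (2, 2), (3, 1), (3, 0), (3, -1), (2, -2), (1, -3),
          (0, -3), (-1, -3), (-2, -2), (-3, -1), (-3, 0), (-3, 1), (-2, 2), (-1, 3)]

def is_fast_corner(img, y, x, t=0.2, n=9):
    """FAST-n segment test: True if at least n contiguous circle pixels are
    all brighter than img[y, x] + t or all darker than img[y, x] - t."""
    p = img[y, x]
    ring = np.array([img[y + dy, x + dx] for dy, dx in CIRCLE])
    for mask in (ring > p + t, ring < p - t):
        run, best = 0, 0
        for v in np.concatenate([mask, mask]):  # doubled to catch wrap-around runs
            run = run + 1 if v else 0
            best = max(best, run)
        if best >= n:
            return True
    return False
```

A step corner passes the test (11 contiguous darker pixels), while a straight edge yields at most 7 contiguous darker pixels and is rejected — which is why FAST responds to corners but not edges.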
Table 5. ORB applied on UAVs.
Author | Advantage
J. Chen et al. (2018) [95] | LPM within a Bayesian framework improves the computation time and efficiency while ensuring accuracy compared with state-of-the-art methods.
O. Zia et al. (2019) [42] | By using fisheye lenses, a good region of overlap is obtained between adjacent cameras.
C. Yeh et al. (2018) [96] | ORB/PCA splice detection is faster and more accurate than the classic SIFT and SURF approaches; in addition, the GPU runs the test 2.6 times faster than the CPU.
Y. Zhang et al. (2019) [97] | The methodology reduces the time needed to complete the panorama reconstruction compared to SIFT and classic ORB.
R. Reboucas et al. (2013) [53] | A fast visual odometry tracking system is developed.
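ORB descriptors are 256-bit binary strings, so the matching step shared by the pipelines above reduces to Hamming distances plus an ambiguity check. A NumPy sketch of brute-force binary matching with a Lowe-style ratio test follows; descriptor extraction itself is omitted, and the `ratio` value is illustrative:

```python
import numpy as np

def hamming_matrix(a, b):
    """Pairwise Hamming distances between two descriptor sets stored as
    uint8 arrays of shape (n, n_bytes) and (m, n_bytes)."""
    xor = np.bitwise_xor(a[:, None, :], b[None, :, :])
    return np.unpackbits(xor, axis=-1).sum(axis=-1)

def match_binary(desc1, desc2, ratio=0.8):
    """Brute-force matching with a nearest/second-nearest ratio test:
    keep a match only if it is clearly better than the runner-up."""
    d = hamming_matrix(desc1, desc2)
    matches = []
    for i in range(d.shape[0]):
        order = np.argsort(d[i])
        if d[i, order[0]] < ratio * d[i, order[1]]:
            matches.append((i, int(order[0])))
    return matches
```

The ratio test is what suppresses ambiguous matches on repetitive UAV imagery (crop rows, rooftops); surviving matches are then typically fed to RANSAC for geometric verification.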
Table 6. SURF applied on UAVs.
Author | Advantage
E. Hadrovic, D. Osmankovic, J. Velagic [100] | The algorithm is relatively fast compared to alignment algorithms based on SIFT feature matching, with high-quality alignment.
M. Yue, Q. Yan [102] | A real-time reconnaissance and monitoring application achieves accurate positioning without increasing the camera accuracy.
A. Micheal, K. Vani [101] | A semiautomatic object tracking method using SIFT or SURF achieves a high detection rate; the region of interest is specified by the user.
Z. Wu, P. Yue, M. Zhang et al. [99] | The workflow approach generates an automatic mosaic of UAV images with the flexibility to edit the workflow according to user needs.
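Whatever the detector (SIFT, SURF, ORB, or BRISK), the global single transform of Figure 10 is usually a 3×3 homography fitted to the matched points. A minimal NumPy sketch of the direct linear transform (DLT) is shown below; in practice it would be wrapped in RANSAC to reject outlier matches and use normalized coordinates for numerical stability:

```python
import numpy as np

def homography_dlt(src, dst):
    """Fit the 3x3 homography H with dst ~ H @ src (homogeneous) from
    n >= 4 point correspondences, via SVD of the DLT system."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def warp_points(H, pts):
    """Apply a homography to an (n, 2) array of points."""
    ph = np.hstack([np.asarray(pts, dtype=float), np.ones((len(pts), 1))])
    q = ph @ H.T
    return q[:, :2] / q[:, 2:3]
```

A single homography is exact only for planar scenes or pure camera rotation; the parallax this assumption introduces is what motivates the local and mesh-based transforms of Figure 11.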
Table 7. BRISK applied on UAVs.
Author | Advantage
C. Tsai, Y. Lin [103] | Applying the proposed image registration scheme improves the positional accuracy of the UAV orthoimage and the correctness of the process.
W. Yuan, D. Choi [15] | Stitching of 100 thermal images is achieved within 30 s, and RGB correlation and classification are improved.
Table 8. Mesh-based methods applied on UAVs.
Author | Advantage
F. Fang et al. [104] | A superpixel image is generated, improving the efficiency and flexibility of the target image and reducing the color differences between the two input images.
J. Leng, S. Wang [105] | The SPHP algorithm is improved, removing ghosting from the stitched image and producing better stitching results.
Y. Zhou et al. (2019) [112] | Stitching from the captured video is improved by eliminating ghosts caused by moving objects, and an object detection module provides high detection accuracy.
Y. Yuan et al. [15] | The SLIC algorithm generates superpixels in the seam-cutting and color-blending stages, affording spatial coherency and improving efficiency.
Q. Wan et al. [107] | The local alignment model introduces parallax error as a constraint term in the minimum-energy function and uses mesh-based deformation to accelerate the calculation.
L. Luo, Q. Wan, J. Chen et al. [113] | Accuracy is evaluated via RMS error, showing an improvement in processing time over APAP, SPHP, and REW.
Q. Xu, L. Luo, J. Chen et al. [109] | The accuracy of the method is improved compared to the most-used mesh analyses, while the computational cost remains comparable to that of AANAP.
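Several entries above (Fang et al., Leng and Wang, Yuan et al.) fight the same symptom: visible seams and color jumps in the overlap region. The simplest baseline they improve on is linear feather blending, sketched here in NumPy for two pre-aligned grayscale strips; this assumes the overlap lies in whole columns, whereas the cited pipelines blend along an optimal seam with locally varying weights:

```python
import numpy as np

def feather_blend(left, right, overlap):
    """Blend two horizontally aligned strips whose trailing/leading
    `overlap` columns cover the same scene, with a linear ramp."""
    h, w1 = left.shape
    w2 = right.shape[1]
    out = np.zeros((h, w1 + w2 - overlap))
    out[:, :w1 - overlap] = left[:, :w1 - overlap]
    out[:, w1:] = right[:, overlap:]
    alpha = np.linspace(1.0, 0.0, overlap)   # weight on the left image
    out[:, w1 - overlap:w1] = (alpha * left[:, w1 - overlap:]
                               + (1 - alpha) * right[:, :overlap])
    return out
```

A fixed ramp removes the hard intensity step but still ghosts moving objects, since both exposures contribute everywhere in the overlap; mesh-based and seam-cutting methods replace the ramp with local warps and seam-aware weights, which is what removes the ghosting noted in the table.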