Article

Automated Measurement of Geometric Features in Curvilinear Structures Exploiting Steger’s Algorithm

by Nicola Giulietti 1,*, Paolo Chiariotti 1 and Gian Marco Revel 2

1 Department of Mechanical Engineering, Politecnico di Milano, Via La Masa 1, 20156 Milan, Italy
2 Department of Industrial Engineering and Mathematical Science, Università Politecnica delle Marche, Via Brecce Bianche 12, 60131 Ancona, Italy
* Author to whom correspondence should be addressed.
Sensors 2023, 23(8), 4023; https://doi.org/10.3390/s23084023
Submission received: 24 February 2023 / Revised: 12 April 2023 / Accepted: 13 April 2023 / Published: 16 April 2023
(This article belongs to the Special Issue Computer Vision and Sensor Technology)

Abstract

Accurately assessing the geometric features of curvilinear structures in images is of paramount importance in many vision-based measurement systems targeting technological fields such as quality control, defect analysis, biomedical, aerial, and satellite imaging. This paper lays the basis for the development of fully automated vision-based measurement systems targeting elements that can be treated as curvilinear structures in the resulting image, such as cracks in concrete elements. In particular, the goal is to overcome the main limitation of the well-known Steger's ridge detection algorithm in these applications, namely that its input parameters must be identified manually, which has prevented its extensive use in the measurement field. This paper proposes an approach that makes the selection of these input parameters fully automated. The metrological performance of the proposed approach is discussed, and the method is demonstrated on both synthesized and experimental data.

1. Introduction

The line detection task is typically addressed in computer vision by two image filtering approaches: edge detection and ridge detection. Despite being similar in terms of the final goal, these two methods embed essential differences. An edge filter is typically a first-derivative operator that measures how fast the intensity level changes across the image. These filters (e.g., Sobel, Canny, Roberts [1], to cite some) detect the boundaries between areas of different intensity values, e.g., high and low gray values, but their output consists of a double line, one on each side of the target. Conversely, ridge detection algorithms work as second-derivative operators and provide a single line as output for each target line, since they detect lines that are darker or brighter than their neighboring pixels. This feature has made ridge detectors very suitable not only for line detection tasks, but also for measurement purposes (e.g., line width, line extension, etc.), because it allows the centerline position to be detected with high accuracy.
The computer vision literature has produced several ridge detection algorithms over the years. These algorithms are also widely used in industrial contexts to automatically perform what used to be manual operations [2,3,4]. One well-established ridge detection algorithm that can locate, connect, and measure several geometric features of lines, e.g., width, is the well-known Steger's line detection algorithm [5]. This algorithm, which makes the extraction of curvilinear structures possible, has found numerous applications in various fields since 1998. In [6], Zhang et al. use this method for detecting edges in images to develop a novel method for image resizing that preserves the image edge structures. The method is also widely used in medical imaging: Dobbe et al. [7] exploit the algorithm to detect vessel centerlines in the analysis of microcirculatory images, Fleming et al. [8] propose an application to the analysis of skin lesions, while Zhang et al. [9] apply it to the quantitative measurement of neurites. Examples of microcirculatory geometry [7] and ophthalmic applications in the extraction of blood vessels [10] are also present. In the civil sector, Steger's line detector has been used for the identification of roads from satellite images and for the study of traffic with the automatic recognition of lines of cars [11]. In physics, Steger's algorithm has been used for recognizing gravitational waves from the study of time-frequency diagrams [12], and in document analysis to automate the process of digitizing lines in engineering drawings [13]. In machine vision, it has been widely exploited in 3D reconstruction techniques through stereo vision [14,15] and laser structured-light techniques [16,17,18,19]. The Steger algorithm is used in [20] to delineate the central line of cracks and fractures in images using the concept of fractional differential. In [21], the line detection algorithm is compared with a deep neural network for the identification of filamentous complexes of proteins.
Despite its widely recognized benefits, Steger's algorithm also presents some drawbacks. Indeed, the algorithm requires accurate tweaking of its input parameters, which mainly relate to the width of the line to be identified in the image and its contrast with respect to the background. An incorrect selection of these input parameters does not guarantee the correct identification of the centerline or of the width of the curvilinear structure. This has prevented the use of Steger's approach in fully automated measurement systems targeted at the extraction of the geometrical features of structures with varying width and contrast levels.
In this context, this work aims at overcoming these issues by proposing a fully automated strategy to select the optimal input parameters for the Steger algorithm, thus paving the way to its exploitation in measurement applications targeting variable-width/contrast curvilinear structures, such as cracks or scratches over surfaces, laser blade profile measurement, checking the alignment between surfaces, and so on. The developed method is restricted to the measurement of a single curved feature in an image. The paper is organized as follows: Section 2 discusses the theory behind the proposed approach. Section 3 discusses the metrological characterization of the proposed method, while Section 4 presents the results of the experimental campaign set up to validate the working strategy. Section 5 draws the main conclusions of the work.

2. Optimizing Steger’s Algorithm Input Parameters

Lines in 2D images can be modeled through different characteristic profiles that involve the gray values along the direction perpendicular to the line. In [5,22], Steger discusses asymmetric bar-shaped, parabolic, and Gaussian line profiles. In this paper we consider only bar-shaped profiles, because they are more common in real applications and they effectively represent the profiles identifiable on cracks, scratches, or defects on surfaces, which appear as high-contrast lines with very sharp edges in the framed images. Equation (1) represents an asymmetrical bar-shaped line profile (dark line over a bright background) of half-width w, asymmetry a ∈ [0, 1), and line contrast h with respect to the background. Note that the same considerations also apply to bright lines on dark backgrounds (the sign in Equation (1) is simply inverted) and that w represents the distance of the centerline to the side edges, so the actual line width is 2w. The asymmetry parameter a describes lines whose two sides have different surrounding intensity values; it is equal to zero if there is no asymmetry. Figure 1 shows an example of an asymmetrical bar-shaped line profile (dark line over a bright background) of half-width w = 1 pixel, asymmetry a = 0.2, and line contrast h = 1 with respect to the background.
$$f_a(x) = \begin{cases} 0, & x < -w \\ h, & |x| \le w \\ h\,a, & x > w \end{cases} \tag{1}$$
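For illustration purposes, a minimal Python sketch of this profile and of a synthetic test line is reported below; the function names (bar_profile, synthesize_line_image) and default image sizes are ours and do not belong to any reference implementation.

```python
import numpy as np

def bar_profile(x, w=1.0, a=0.2, h=1.0):
    """Asymmetric bar-shaped profile of Equation (1): 0 before the line,
    h inside |x| <= w, and h*a beyond it."""
    f = np.zeros_like(x, dtype=float)
    f[np.abs(x) <= w] = h
    f[x > w] = h * a
    return f

def synthesize_line_image(rows=64, cols=256, w=5.0, a=0.0, h=1.0):
    """Dark horizontal line over a bright background: the profile is
    subtracted from a uniform unit background (the sign change mentioned
    in the text for dark lines)."""
    y = np.arange(rows, dtype=float) - rows / 2.0  # signed distance from the centerline
    return np.tile((1.0 - bar_profile(y, w, a, h))[:, None], (1, cols))
```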
The original Steger's line detection algorithm is described in detail in [5,22]. Apart from the target image, the algorithm takes as input the contrast value h (defining the contrast of the line with respect to the background) and the σ parameter, which defines the aperture of the Gaussian kernel to be convolved with the original image. The algorithm provides as output the position and width of each detected line with sub-pixel accuracy. The contrast value depends solely on the intensity difference between the line and the surrounding pixels. In the Steger algorithm [5], the contrast is related to an upper threshold u and a lower threshold l, which represent the hysteresis threshold parameters. The lower threshold can be set equal to 10% of the upper threshold; both relate to the contrast according to Equations (2) and (3).
$$u = \frac{2\sqrt{3}\,h}{\sqrt{2\pi}\,\sigma^{2}\,e^{3/2}} \tag{2}$$
$$l = 0.1\,u \tag{3}$$
Any pixel whose second derivative is above the upper threshold is marked as belonging to the line, as is any pixel whose response lies between the upper and lower thresholds but is close enough, i.e., connected, to a pixel whose response is above the upper threshold. All other pixels of the image are not considered as belonging to a line. The relations between u, l, and the line contrast h suggest that, if the contrast of a line with respect to the background is high, it will be easier for the algorithm to isolate the line from the surrounding background.
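As a minimal sketch, assuming the reconstruction of Equations (2) and (3) given above, the two hysteresis thresholds can be computed directly from h and σ (the function name is ours):

```python
import numpy as np

def hysteresis_thresholds(h, sigma):
    """Upper and lower hysteresis thresholds of Equations (2) and (3)
    for a line of contrast h and a Gaussian kernel of parameter sigma."""
    u = 2.0 * np.sqrt(3.0) * h / (np.sqrt(2.0 * np.pi) * sigma**2 * np.exp(1.5))
    l = 0.1 * u
    return u, l
```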
While the line contrast can be easily estimated, e.g., by calculating the histogram associated with the intensity distribution of the image, the σ value strongly depends on the width of the line to be detected, which cannot be known a priori. Choosing the σ parameter involves the definition of an upper bound: a σ value that is too high will cut out any line narrower than a certain width, since the smoothing is too severe. Moreover, Steger establishes a lower bound on the σ value through inequality (4), which represents the point at which the convolution between the image and the second derivative of the Gaussian kernel reaches its maximum.
$$\sigma \ge \frac{w}{\sqrt{3}} \tag{4}$$
The σ parameter therefore assumes the role of a scale-space parameter that drives the width range of the targeted line.

2.1. Automating a Line Width Measurement

A dedicated analysis was performed to test the dependency of Steger's line detector on the σ value. The rationale of the analysis is reported in the flow chart shown in Figure 2. Since the contrast value of the synthesized line is known, it is possible to separate the contributions of the various input parameters of the algorithm.
A synthesized line is generated with known width and asymmetry. The line contrast h is known, so the upper and lower threshold parameters can be calculated analytically for each σ value according to Equations (2) and (3). Starting from σ_i = σ_1 = σ_min, Steger's line detector is applied to the image. The mean width w̄(σ_i) of the detected line is then stored; it is calculated as the arithmetic mean of the widths returned by the Steger algorithm at each of the m identified line points.
$$\bar{w}(\sigma_i) = \frac{1}{m}\sum_{j=1}^{m} w_j(\sigma_i) \tag{5}$$
The σ parameter is then increased by an incremental value s. These steps are repeated for increasing σ_i values up to σ_i = σ_max. The interval [σ_min, σ_max] is chosen to be compatible with the line width under analysis. In this way, it is possible to intersect the function w̄ = w̄(σ) with the bound reported in expression (4) in order to obtain the w̄_s and σ_s values that fulfill Steger's inequality. The approach described above was tested on a synthesized horizontal line with a constant width of 5 pixels, assuming σ_min = 0.4, σ_max = 5, s = 0.2, and a = 0 (Figure 3). The nonzero w̄(σ) values obtained were linearly interpolated, making it possible to generate a continuous w̄(σ) function. The intersection of Steger's inequality (4) with the w̄(σ) function returns a value of w̄_s = 5.019 pixels, corresponding to a line width estimation error of 0.019 pixels.
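The sweep-and-intersect procedure can be sketched as follows. The snippet reuses hysteresis_thresholds() from the previous sketch; steger_line_widths is a hypothetical stand-in for whatever Steger implementation is available, assumed to return the per-point widths (in pixels) of the detected line, or an empty sequence when no line is found.

```python
import numpy as np

def measure_width(image, h, steger_line_widths,
                  sigma_min=0.4, sigma_max=5.0, s=0.2):
    """Sweep sigma, record the mean detected width w_bar(sigma), then
    intersect the sampled curve with Steger's bound w = sqrt(3)*sigma
    (the equality case of inequality (4))."""
    sigmas, wbars = [], []
    for sigma in np.arange(sigma_min, sigma_max + 1e-9, s):
        u, l = hysteresis_thresholds(h, sigma)       # Equations (2) and (3)
        widths = steger_line_widths(image, sigma, u, l)  # hypothetical detector call
        if len(widths):                              # keep nonzero detections only
            sigmas.append(sigma)
            wbars.append(np.mean(widths))            # Equation (5)
    sig, wbar = np.asarray(sigmas), np.asarray(wbars)
    gap = wbar - np.sqrt(3.0) * sig                  # positive below the bound line
    # the sweep is assumed to bracket the crossing of the bound
    k = int(np.argmax(gap <= 0))                     # first sample past the crossing
    t = gap[k - 1] / (gap[k - 1] - gap[k])           # linear interpolation
    sigma_s = sig[k - 1] + t * (sig[k] - sig[k - 1])
    return sigma_s, np.sqrt(3.0) * sigma_s           # (sigma_s, w_bar_s)
```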
To verify the potential sensitivity to the incremental value s, this value was varied from 0.05 to 1 with a step of 0.05 for the same σ range. The results of this iterative analysis are reported in Figure 4 in terms of the absolute difference between the measured value w̄_s and the target value. As shown in Figure 4, no matter the step s, the error remains very small and well below one pixel. As for the number of iterations, a value of s = 0.05, with σ_min = 0.4 and σ_max = 5, results in 92 iterations, while s = 0.1 reduces them to 46. If the very same procedure is to be applied in an actual line width measurement, a trade-off is necessarily needed between the required accuracy and the execution time of the algorithm, which grows as the iteration step is reduced.
The test was then repeated to verify the sensitivity to the asymmetry parameter a. The following parameters were kept fixed: σ_min = 0.4, σ_max = 5, s = 0.2, and w = 5 pixels, while the asymmetry parameter a ranged from 0 to 1 with an increasing step of 0.05. The results, in terms of the absolute difference between the measured value w̄_s and the target value, are reported in Figure 5. As shown in the graph, the error increases as the imposed asymmetry increases.
Finally, the following parameter values were considered: the target width w̄ was varied from 2 to 20 pixels, adopting an incremental step of 1 pixel, with a = 0, s = 0.1, σ_min = 0.4, and σ_max = 15. As the graph in Figure 6 shows, the absolute error e_s increases as the target width increases. The error e_s is calculated according to the following equation:
$$e_s = \left| \bar{w}_s - \bar{w} \right| \tag{6}$$
Nevertheless, the percentage error remains well below 1% (see Figure 7), with an average value of 0.47% over the range of widths investigated.

2.2. Optimizing the Sigma Parameter Selection

Section 2.1 demonstrated that it is possible to exploit Steger's line detection algorithm to calculate the position and width of curvilinear structures in 2D images automatically, with no manual selection of input parameters. This section investigates the possibility of further optimizing this process to improve the metrological performance of a measurement system exploiting the approach.
By intersecting inequality (4) with the w̄(σ) function, which is obtained by linearly interpolating the nonzero w̄(σ) values, it is possible to obtain the mean width of the target curvilinear structure (w̄_s). Although this technique is very robust and returns a low error even when varying the target mean width and the asymmetry of the generated synthetic line, it is easy to see that the point identified (Figure 3) is not the one that minimizes the overall width error. In fact, from the graph shown in Figure 3, it can be seen that, at low σ values, the algorithm fails to detect the line, thus leading to a null w̄_s. From a certain value of σ, which from now on we will call σ* (the minimum sigma at which the algorithm identifies the line and thus returns a width value greater than zero), the algorithm identifies the line and therefore its mean width. From here on, the measured w̄_s(σ) tends to increase as the σ value increases. If we sketch the error trend from σ* onward, we can see that the minimum error is located at σ* and not at σ_s (Figure 8). More specifically, at σ_s the error is e_s = 0.019 pixels, while at σ* the error is e* = 5 × 10⁻⁷ pixels.
To test the performance of the algorithm at different line widths, the following iterative analysis was performed. The w̄ values were varied from 2 to 20 pixels in steps of 1 pixel, with a = 0, s = 0.1, σ_min = 0.4, and σ_max = 15. As the graph in Figure 9 shows, the absolute error increases as the target width increases. The same happens for the percentage error (Figure 10).
This high performance can only be achieved through the use of a very fine step s (see Figure 11). In practice, it is not possible in all applications to run the line detection algorithm for many iterations (e.g., with a brute-force approach), as the computing time would increase excessively, especially with high-resolution images.
Figure 12 shows the (σ*, w̄*) progression for synthesized line widths ranging between 2 and 20 pixels. The σ parameter incremental step was set to s = 0.1.
By performing a linear regression on these points, we obtain a straight line of equation r(σ) = 2.551σ + 0.458, with a mean square error MSE = 0.014 pixels and a squared correlation coefficient R² = 0.999. This interpolation line represents an optimization line for the σ parameter.
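The fit of the optimization line from a set of (σ*, w̄*) points is a plain least-squares regression; a minimal numpy sketch follows (the function name is ours, and the coefficients cited in the comment are those reported above):

```python
import numpy as np

def fit_optimization_line(sigma_star, w_star):
    """Least-squares fit of the optimization line r(sigma) = m*sigma + q
    through the (sigma*, w_bar*) points collected on synthetic lines;
    the paper reports m = 2.551 and q = 0.458 for 2-20 pixel widths."""
    m, q = np.polyfit(sigma_star, w_star, 1)
    mse = float(np.mean((np.polyval([m, q], sigma_star) - w_star) ** 2))
    return m, q, mse
```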
Given the results shown, it is clear that a procedure identifying the optimal sigma parameter would pave the way to an automated use of the Steger algorithm. We therefore propose one possible optimization approach to tackle this issue; other algorithms could be exploited for the same purpose. The one addressed in this paper was selected to ease implementation also in embedded systems characterized by low computing power. To better describe the proposed sigma optimization procedure, let us consider the flow chart reported in Figure 13. Starting from an input image containing a curvilinear structure (1), Steger's line detection algorithm is applied iteratively (2). In this phase, all the input parameters (h, s) must be defined according to the scenario; specifically, the step s is chosen according to the maximum computing time allowed. At each iteration, the value of sigma, which starts at 0.6 (i.e., the lowest value that can be used by the algorithm [5]), is increased by s. In the loop, the horizontal distance d_i of the point (σ_i, w̄(σ_i)) to the optimization line r(σ) is then calculated (Equation (7)).
$$d_i = \left| \frac{\bar{w}(\sigma_i) - 0.458}{2.551} - \sigma_i \right| \tag{7}$$
The loop continues until the following condition is fulfilled: d_i > d_{i−1} and w̄(σ_{i−1}) > 0. Thanks to this process, there is no longer any need to define a minimum and maximum value for the σ parameter in iteration (2). The output of the process is σ* = σ_{i−1}, i.e., the first sigma value at which the point comes closest to the optimization line while resulting in a mean width greater than zero. If the value of s chosen in (2) is sufficiently small, the value of w̄_opt, which in this case coincides with w̄*, is obtained directly. If the value of s is high because the computing time is to be kept low, then, starting from the point (σ*, w̄*) (Figure 14b), a horizontal line is drawn up to the optimization line (Figure 14c). The σ value corresponding to the intersection of this horizontal line with the optimization line represents the σ_opt value (Figure 14d) (3). Once this value is obtained, Steger's algorithm can be applied to the input image with the optimal sigma value, obtaining the w̄_opt value (4) as output.
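A compact sketch of the whole loop, under the same assumptions as the previous snippets (hysteresis_thresholds() from Section 2 and the hypothetical steger_line_widths stand-in), might look as follows; the default m and q are the fitted coefficients reported above.

```python
import numpy as np

def optimize_sigma(image, h, steger_line_widths,
                   s=0.2, m=2.551, q=0.458, sigma_start=0.6):
    """Sigma optimization loop of Figure 13: increase sigma by s until the
    horizontal distance d_i to the optimization line grows while the
    previous step detected a line, then project onto r(sigma)."""
    def mean_width(sigma):
        u, l = hysteresis_thresholds(h, sigma)       # Equations (2) and (3)
        widths = steger_line_widths(image, sigma, u, l)  # hypothetical detector call
        return float(np.mean(widths)) if len(widths) else 0.0

    sigma, prev_d, prev_sigma, prev_w = sigma_start, None, None, 0.0
    while True:
        w_bar = mean_width(sigma)
        d = abs((w_bar - q) / m - sigma)             # Equation (7)
        if prev_d is not None and d > prev_d and prev_w > 0:
            break                                    # sigma* = prev_sigma, w_bar* = prev_w
        prev_d, prev_sigma, prev_w = d, sigma, w_bar
        sigma += s
    sigma_opt = (prev_w - q) / m                     # horizontal projection onto r(sigma)
    return sigma_opt, mean_width(sigma_opt)          # (sigma_opt, w_bar_opt)
```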

3. Metrological Performance Assessment

3.1. Target-to-Camera Relative Orientation Dependence

Changes in the target-to-camera relative pose give rise to increased errors in the assessment of geometric features. These errors, which are mainly due to perspective distortion, can be reduced by applying perspective distortion mitigation approaches. However, these approaches typically require feature-rich images to work properly, a condition that is borderline when targeting images presenting only curvilinear structures. Given this, it is the authors' opinion that it is worth discussing the target-to-camera dependence with no perspective distortion mitigation pre-processing, as this better represents the majority of the working conditions one might face when measuring geometrical features on curvilinear structures. The dependence of the developed method on the target-to-camera relative orientation was determined through the measurement of a real curvilinear structure in a controlled environment. To this end, a 6-DoF (degrees of freedom) anthropomorphic robot was used. A plate embedding a curvilinear feature of constant width (1.25 mm) was attached to the end-effector of the robot, while three fiducial markers were placed on the plate to provide pixel-to-millimeter conversion factors, as well as to make eventual perspective correction possible.
The reference system of the end-effector is shown in Figure 15. The z axis is normal to the plane hosting the line, and the origin of the reference system is at the center of the line. The same optical system exploited in Section 4 was used as the imaging system. The reflex camera was placed on a tripod at a known distance from the target. The parallelism between the camera sensor surface and the target surface was guaranteed through a custom-made alignment system composed of a laser projector and a mirror (Figure 16). A laser was rigidly mounted on the target surface (3) and a mirror (2) was attached to the front surface of the imaging lens. Through manual control of the robot, the end-effector was positioned in such a way that the laser beam, reflecting on the mirror surface, scattered back to the emission point. After the alignment operation, the laser and mirror were removed and the acquisition started. To ensure repeatability in mounting the laser, a dedicated mounting tool was used, which made it possible to rigidly fix the laser to the target surface.
Three types of acquisitions were performed:
  • Controlled rotation of the target surface from −30° to +30°, with steps of 0.5°, rotating around the x axis, for a total of 60 images acquired;
  • Controlled rotation of the target surface from −30° to +30°, with steps of 0.5°, rotating around the y axis, for a total of 60 images acquired;
  • Controlled rotation of the target surface with random rotation angles from −30° to +30° around the x and y axes, for a total of 360 images acquired.
The first two tests were carried out to separate the individual contributions of the relative angles lying between the surfaces. The third type of test, on the other hand, was used to estimate the type A uncertainty of the system as the target-to-camera relative pose angles vary. All these measurements were repeated at different distances: 700 mm, 1000 mm, 1300 mm, and 2400 mm. The approach discussed in the previous section was then applied to all the acquired images.

3.1.1. Target Rotated around the x Axis

A controlled rotation around the x axis was imposed on the target surface. The rotation angles ranged from −30° to +30°, with an angular step of 0.5°. The measured mean width value and the pixel-to-millimeter conversion factor were calculated on each image acquired, for each rotation angle and each working distance tested.
Figure 17 shows the dependence of the absolute error, i.e., the absolute difference between the mean width measured by the algorithm and the target width (1.25 mm), on the rotation angle around the x axis for each working distance tested. The outliers relating to the measurements carried out at a distance of 2400 mm are due to the fact that, at this distance, the algorithm is not always able to correctly identify the markers used. It is easy to notice that the measurement results do not depend on the target-to-camera relative distance. This is an intrinsic benefit of the ridge detection algorithm used: once the line position is detected, the algorithm measures the line width perpendicularly to its center through the identification of the edge-to-background intensity transitions, which reduces the importance of the line width in terms of pixel count. Indeed, this concept is clearly highlighted in Figure 18, where the width of the line, reported in pixels, is shown for each rotation angle around the x axis and each working distance tested. As is logical to expect, when moving away from the target, the number of pixels identifying the line width decreases, as does the absolute error in pixels. The residual error might be due to an imperfect parallelism between the measurement plane and the plane embedding the target line.

3.1.2. Target Rotated around the y Axis

A controlled rotation around the y axis of the reference system shown in Figure 15 was imposed on the target surface in order to cover an angular range from −30° to +30°, with an angular step of 0.5°. The test was repeated for each working distance identified in the introduction of Section 3.1.
Figure 19 and Figure 20 show the dependency of the width absolute error, in mm and pixels, respectively, on the rotation angle around the y axis. Again, the absolute error was calculated as the difference between the mean width measured by the proposed algorithm and the target width (1.25 mm). The outliers relating to the measurements carried out at a distance of 2400 mm are due to the fact that, at this distance, the algorithm is not always able to correctly identify the markers used. The same considerations of Section 3.1.1 hold.

3.1.3. Target Rotated around the x and y Axes

To understand the effect of a combined rotation of the target around the x and y axes, a dedicated test was carried out. Pure random rotations around these axes were imposed on the target; the pure rotation movement of the end-effector was imposed to keep the same target-to-camera distance during all the different target orientations. Each angular value of the two axes could range from −30° to +30°, and the random rotation values were assigned according to a uniform distribution for each rotation axis. The test was repeated for all four distances (i.e., 700 mm, 1000 mm, 1300 mm, and 2400 mm), resulting in a total of 360 images collected. The proposed algorithm was then applied to each acquired image to calculate the average width of the detected line.
Figure 21 and Figure 22 show the trend of the standard deviation as the maximum random absolute angle varies. The standard deviation at an angle θ_xy,i is estimated as the standard deviation of the distribution of the line widths identified in the ranges −θ_xy,i ≤ θ_x ≤ θ_xy,i and −θ_xy,i ≤ θ_y ≤ θ_xy,i. The first figure shows the standard deviation in mm, the second in pixels.
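A minimal sketch of this binned standard deviation estimate, assuming arrays of per-image rotation angles and measured widths (the function name is ours, and each range is assumed to contain at least two samples), is given below:

```python
import numpy as np

def std_vs_max_angle(theta_x, theta_y, widths, max_angles):
    """For each candidate angle, the standard deviation of the widths
    measured with both |theta_x| and |theta_y| within that angle."""
    theta_x = np.abs(np.asarray(theta_x, dtype=float))
    theta_y = np.abs(np.asarray(theta_y, dtype=float))
    widths = np.asarray(widths, dtype=float)
    return np.array([np.std(widths[(theta_x <= t) & (theta_y <= t)], ddof=1)
                     for t in max_angles])
```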
Similarly to what happened with the controlled rotations around the θ_x and θ_y angles alone, the target-to-camera distance does not affect the trend of the standard deviation of the dataset (Figure 21). Conversely, the higher the rotation angle, the higher the standard deviation and the measurement error.
A further interesting aspect is linked to the analysis of the variability of the sigma value with respect to the target-to-camera relative distance. Figure 23 shows the dispersion of the sigma values at each distance, normalized to the equivalent pixel width of the line. For example, for a line that is 7 pixels thick, the optimal sigma value selected by the proposed algorithm varies from 1.0 to 2.4 (no matter the target-to-camera angle). The variability range spans from 5.8 to 8.4 for an equivalent line width of 30 pixels (i.e., an image taken from a closer distance); in this latter case, the sigma value can assume 13 different values, assuming s = 0.2. This clearly demonstrates how fundamental the choice of a correct sigma value is and how much it can vary, even in images taken at the same distance from the target. Without the proposed automatic optimization technique, the Steger algorithm could never have returned the results obtained without a priori knowledge of the optimal sigma value.

3.2. Performance Comparison on a Reference Target

The metrological performance of the developed method was tested on a reference target by performing multiple acquisitions at different target-to-camera distances while keeping the target-to-camera θ_x and θ_y angles null. The reference target was manufactured by fused deposition modeling (FDM) 3D printing. As shown in Figure 24, the target embedded a central groove of 5 mm (nominal design value) in width.
The reference target and the camera were mounted on an optical bench, and their relative distance was varied between 700 mm and 1675 mm at discrete steps of 25 mm. An image was acquired at each target-to-camera distance, thus resulting in a total dataset of 40 images (i.e., the first image acquired at 700 mm, the last at 1675 mm). This dataset made it possible to estimate the expanded uncertainty U = 0.018 mm (coverage factor k = 2) associated with the measurement system. Moreover, the sample mean of the groove width distribution, 4.657 mm, was also estimated. Given the difference with respect to the nominal width of the groove (5 mm), and being aware of the uncertainty associated with the dimensional features of FDM 3D-printed parts, two further high-precision optical measurement systems were also exploited to assess the reference groove width value:
  • A custom telecentric-based imaging system (TIS—Figure 25) in backlight arrangement (declared expanded uncertainty: U_TIS = 25.5 μm), used for “in-production” dimensional quality control [23];
  • A Wenglor MLSL132 laser profilometer (LP—Figure 26; measured expanded uncertainty at a working distance of 150 mm: U_LP = 11 μm).
Table 1 reports the width values measured by the three systems. It is worth noting that all the width values of Table 1 are averaged over several cross-sections at different heights along the groove. Since both the TIS and the LP are affected by the target-to-device pose, a double alignment approach was adopted: (a) the same alignment procedure described in Section 3.1 was exploited to guarantee that the target was perpendicular to the optical axis of the TIS camera; (b) the horizontal grooves were used as targets to align the laser line of the LP on the target.
It is interesting to notice that all three devices shift to lower width values with respect to the nominal one (5 mm). Moreover, all three measurements are compatible, thus demonstrating the validity of the approach developed.
This is even more evident when looking at Figure 27, which reports the averaged width values (ŵ) estimated using the proposed approach for different target-to-camera working distances. It is indeed evident that the proposed approach makes it possible to obtain results compatible with those provided by the other instruments, even at working distances varying over quite a wide range. To provide a clearer idea of the impact of the target-to-camera working distance on the acquired image, Figure 28 shows two images acquired at working distances of 700 mm and 1675 mm. Despite the clear size difference between the sub-areas of the images embedding the groove, the estimated width values are still compatible with those measured by the TIS and LP.

4. Experimental Validation

The whole approach developed was tested on a setup (Figure 29) specifically arranged to make it possible to acquire images of grooves of variable width. The target groove was artificially created by mounting two plastic parts (Figure 29 (3)) on the mobile and fixed components of a micrometric stage (Figure 29 (4)) (Newport 3-axis motion controller ESP300; uncertainty ±0.01 mm). The scene was framed with a camera (Figure 29 (1)) mounted on a tripod (Figure 29 (2)). The target was illuminated homogeneously, to avoid illumination gradient problems. The distance between the camera and the target was fixed at 500 mm. The camera used in this setup was a 24 Mp Nikon D7200 equipped with a 60 mm f/2.8 Nikon Nikkor macro lens.
Two ArUco [24] fiducial markers (5) were used to identify the target-to-camera relative pose and the pixel-to-millimeter conversion factor. The width of the framed groove was varied through the micrometric stage between 0.1 and 2.5 mm. A step of 0.2 mm was adopted in varying the groove width, thus resulting in a total of 13 acquired images. The histogram of the region surrounding the line (corresponding to the real groove) was calculated to estimate the contrast parameter h, as this is not known a priori. Figure 30 reports an example of the histogram (b) extracted for the region of interest containing the groove (3) in an image of the dataset (a). As the groove is darker than the background, it is easy to identify the histogram peak representing the pixels belonging to the groove (1) and the peak referring to the background pixels (2). The absolute difference between the grayscale values of the two identified peaks represents the h parameter to be used in the Steger algorithm.
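A minimal sketch of this histogram-based contrast estimation is shown below; splitting the histogram at the mid gray level to separate the two peaks is our simplification, and noisy histograms may require smoothing or a proper peak finder.

```python
import numpy as np

def estimate_contrast(roi):
    """Contrast h as the distance between the two dominant histogram peaks
    of the grayscale ROI: the dark peak (groove) and the bright one
    (background)."""
    hist, _ = np.histogram(roi, bins=256, range=(0, 256))
    dark_peak = int(np.argmax(hist[:128]))          # groove pixels
    bright_peak = 128 + int(np.argmax(hist[128:]))  # background pixels
    return bright_peak - dark_peak                  # h parameter
```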
Table 2 shows the results of the measurement campaign. The absolute error, i.e., the difference between the value set on the micrometric stage (w) and the value measured by the proposed method (w̄), is given in mm (e*) and in pixels (e*,p). The pixel-to-millimeter conversion was performed using the conversion factor found during the calibration phase of the system. No clear relationship between error and line width emerges, and the error is always well below one pixel; the mean error is 0.015 mm, equivalent to 0.383 pixels. Indeed, comparing the groove width generated by moving the micrometric stage with the width value measured by the proposed method yields a correlation coefficient R² = 0.99, indicating a very strong correlation between the two measurements.
It is interesting to notice that the line detection algorithm starts to correctly identify the groove's centerline from a certain σ value onward. For example, as shown in Figure 31, for σ = 1.4 the algorithm does not identify the groove's centerline, whereas for σ = 1.5 it identifies the centerline throughout its whole length; in this case, σ* = 1.5 is the optimum σ value. As σ increases, the algorithm will continue to correctly identify the position of the groove's centerline but will tend to overestimate its width.
To further validate the proposed approach, an application to a real crack, i.e., a crack on a concrete surface (Figure 32), is discussed hereafter. This application is actually the target application for which the whole approach was developed, as the ultimate goal is the development of a measurement system specifically targeted at the assessment of the geometric features of superficial lesions, such as cracks.
A total of 100 pictures were acquired by varying the target-to-camera relative pose, to verify the robustness of the approach to use by a human operator. This pose variation was obtained by moving the camera closer to and further away from the wall and taking the pictures from different heights. All pictures were taken with the same optical set-up used for the artificial groove (Nikon D7200 equipped with a 60 mm f/2.8 Nikon Nikkor macro lens), but with a human operator handling the camera. The crack width distribution, normalized to the mean value, is reported in Figure 33. The type A uncertainty associated with the measurements is estimated to be 0.0019 mm. If a coverage factor of k = 2 is considered, an expanded uncertainty value of 0.0038 mm is identified.
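Interpreting the type A uncertainty as the standard deviation of the mean (our assumption, following standard GUM practice), the expanded uncertainty can be computed as in the following sketch:

```python
import numpy as np

def type_a_expanded_uncertainty(widths_mm, k=2):
    """Type A standard uncertainty of the mean width (standard deviation
    of the mean) and the expanded uncertainty U = k * u_A."""
    x = np.asarray(widths_mm, dtype=float)
    u_a = np.std(x, ddof=1) / np.sqrt(x.size)  # standard deviation of the mean
    return u_a, k * u_a
```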

5. Discussion and Conclusions

The aim of this work was to lay the basis for the development of a measurement system targeting the automated measurement of the geometric features of curvilinear structures, such as those identifiable in building elements (e.g., concrete cracks). As the main drawback in exploiting the Steger algorithm (a well-known ridge detection approach providing line identification and width measurement) is the need to manually tune the parameters governing the algorithm, this paper has proposed an approach to overcome this issue, thus paving the way to a wider application of the algorithm for measurement purposes. Indeed, the authors have proposed a method to automatically identify the two parameters that most affect the correct identification of the line and the assessment of its width, i.e., the h and σ parameters. It was shown that the h parameter can be extracted by analyzing the histogram of the acquired image as the absolute difference between the two peaks representing the curvilinear structure and the background. As for the σ parameter, an approach to identify its optimal value was developed and discussed in detail.
The metrological characterization of the proposed system was performed through dedicated tests (Section 3). A preliminary analysis of the influence of the target-to-camera relative pose was performed first: a 6-DoF anthropomorphic robot was used to acquire images at different relative target-to-camera angles and distances. The analysis of the acquired datasets made it possible to demonstrate the following:
  • Contrary to what one might expect, the error in calculating the line width does not depend on the distance, and therefore on the number of pixels representing the line in the framed image. In fact, the width is not calculated through the convolution with the Gaussian profile alone, but through the use of an asymmetrical bar-shaped profile; the width of the bar-shaped profile therefore does not influence the calculation of the line width itself.
  • The error increases as the angular misalignment between the measuring plane and the sensor plane increases. This happens because the perspective error becomes more marked and the pixel-to-millimeter conversion factor becomes more inaccurate.
  • By varying the target-to-camera relative working angle within the range ±30°, the maximum error is always below 0.100 mm, no matter the working distance, and therefore the number of pixels representing the line in the framed image.
  • The standard deviation of the measurement increases in a quadratic manner as the angle of misalignment increases. With a maximum camera-to-target absolute misalignment angle of ±5°, a standard deviation of 0.003 mm is obtained, while at ±30° a standard deviation of 0.03 mm is estimated. This shows how, according to the application and therefore to the required metrological specifications, it is possible to set different acceptability ranges in terms of admitted angular misalignment.
  • Based on the line width in pixels, it is possible to establish specific ranges of the sigma value representing the boundaries for searching for the optimal sigma value. The maximum admissible angular range is ±30°.
A further test was made to compare the performance of the solution with other non-contact measurement systems, specifically, a telecentric imaging system and a laser profilometer. The test was performed at different target-to-camera distances to demonstrate the invariance of the approach with respect to this variable. The results obtained show the compatibility of the measurements performed with the different systems, hence the robustness of the approach developed.
The whole method was tested on synthesized lines with an asymmetrical bar-shaped profile and a width ranging between 2 and 20 pixels. A maximum error of e* = 1.4 × 10⁻⁵ pixels, corresponding to 7 × 10⁻⁵%, in calculating the average line width was identified. The methodology developed was also validated on real lines of variable width (from 0.1 to 2.5 mm), resulting in a mean absolute error of 0.015 mm, equivalent to 0.383 pixels, in the measurement of the average line width. As further validation, the approach was tested on a real crack on a concrete surface. The type A uncertainty associated with the measurements was estimated by taking 100 pictures of the crack, obtaining an expanded uncertainty value of 0.0038 mm (coverage factor k = 2). The results obtained are highly promising. The authors are currently working on an approach to isolate the crack in the image, thus making it possible to further optimize the contrast parameter h in the case of a surface containing multiple elements of variable contrast, such as aggregates in concrete.

Author Contributions

Conceptualization, N.G., P.C. and G.M.R.; methodology, N.G. and P.C.; software, N.G.; validation, N.G. and P.C.; formal analysis, N.G. and P.C.; investigation, N.G. and P.C.; resources, N.G., P.C. and G.M.R.; data curation, N.G. and P.C.; writing—original draft preparation, N.G.; writing—review and editing, P.C.; visualization, N.G. and P.C.; supervision, G.M.R.; project administration, P.C. and G.M.R.; funding acquisition, P.C. and G.M.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research activity was carried out within the EnDurCrete (New Environmental friendly and Durable conCrete, integrating industrial by-products and hybrid systems, for civil, industrial and offshore applications) project, funded by the European Union’s Horizon 2020 research and innovation programme under grant agreement no 760639.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
MSE	Mean Squared Error
FDM	Fused Deposition Modeling
LP	Laser Profilometer
TIS	Telecentric-based Imaging System
Symbols
w	Line half-width
a	Line asymmetry
h	Line contrast
f_a	Asymmetrical bar-shaped line profile
σ	Steger algorithm sigma parameter
u	Steger algorithm upper threshold
l	Steger algorithm lower threshold
w̄	Line mean width
m	Number of identified line points
w̄_s	Line mean width obtained with Steger's inequality
σ_s	Sigma parameter corresponding to w̄_s
s	Sigma parameter incremental step
σ*	Sigma found by the proposed method
e_s	Error using σ_s
e*	Error using σ*
r	Proposed optimization line for the σ parameter
σ_opt	Sigma obtained using the optimization line
w̄_opt	w̄ obtained using the optimization line
d_i	Horizontal distance of the point (σ_i, w̄(σ_i)) to the optimization line r(σ)
θ_x	Rotation angle around the x axis
θ_y	Rotation angle around the y axis
θ_xy	Rotation angle around the x and y axes

References

  1. Burnham, J.; Hardy, J.; Meadors, K.; Picone, J. Comparison of the Roberts, Sobel, Robinson, Canny, and Hough image detection algorithms. In Comparison of Edge Detection Algorithms, MS State DSP Conference; Image Processing Group: Starkville, MS, USA, 1997.
  2. Shirmohammadi, S.; Ferrero, A. Camera as the instrument: The rising trend of vision based measurement. IEEE Instrum. Meas. Mag. 2014, 17, 41–47.
  3. Kinsner, M.; Capson, D.; Spence, A. Accurate measurement of surface grid intersections from close-range video sequences. IEEE Trans. Instrum. Meas. 2012, 61, 1019–1028.
  4. Chen, T.; Wang, Y.; Xiao, C.; Wu, Q.J. A machine vision apparatus and method for can-end inspection. IEEE Trans. Instrum. Meas. 2016, 65, 2055–2066.
  5. Steger, C. An unbiased detector of curvilinear structures. IEEE Trans. Pattern Anal. Mach. Intell. 1998, 20, 113–125.
  6. Zhang, G.X.; Cheng, M.M.; Hu, S.M.; Martin, R.R. A shape-preserving approach to image resizing. Comput. Graph. Forum 2009, 28, 1897–1906.
  7. Dobbe, J.G.; Streekstra, G.J.; Atasever, B.; Van Zijderveld, R.; Ince, C. Measurement of functional microcirculatory geometry and velocity distributions using automated image analysis. Med. Biol. Eng. Comput. 2008, 46, 659–670.
  8. Fleming, M.G.; Steger, C.; Zhang, J.; Gao, J.; Cognetta, A.B.; Pollak, I.; Dyer, C.R. Techniques for a structural analysis of dermatoscopic imagery. Comput. Med. Imaging Graph. 1998, 22, 375–389.
  9. Zhang, Y.; Zhou, X.; Witt, R.M.; Sabatini, B.L.; Adjeroh, D.; Wong, S.T. Dendritic spine detection using curvilinear structure detector and LDA classifier. Neuroimage 2007, 36, 346–360.
  10. Owen, C.G.; Newsom, R.S.; Rudnicka, A.R.; Ellis, T.J.; Woodward, E.G. Vascular response of the bulbar conjunctiva to diabetes and elevated blood pressure. Ophthalmology 2005, 112, 1801–1808.
  11. Leitloff, J.; Hinz, S.; Stilla, U. Vehicle detection in very high resolution satellite images of city areas. IEEE Trans. Geosci. Remote Sens. 2010, 48, 2795–2806.
  12. Anderson, W.G.; Balasubramanian, R. Time-frequency detection of gravitational waves. Phys. Rev. D 1999, 60, 102001.
  13. Jonk, A.; Van Den Boomgaard, R.; Smeulders, A. Grammatical inference of dashed lines. Comput. Vis. Image Underst. 1999, 74, 212–226.
  14. Lemaitre, C.; Miteran, J.; Matas, J. Definition of a model-based detector of curvilinear regions. In Proceedings of the International Conference on Computer Analysis of Images and Patterns, Vienna, Austria, 27–29 August 2007.
  15. Hinz, S.; Stephani, M.; Schiemann, L.; Zeller, K. An image engineering system for the inspection of transparent construction materials. ISPRS J. Photogramm. Remote Sens. 2009, 64, 297–307.
  16. Sun, J.; Zhang, G.; Wei, Z.; Zhou, F. Large 3D free surface measurement using a mobile coded light-based stereo vision system. Sens. Actuators A Phys. 2006, 132, 460–471.
  17. Wong, A.; Niu, P.; He, X. Fast acquisition of dense depth data by a new structured light scheme. Comput. Vis. Image Underst. 2005, 98, 398–422.
  18. Wei, Z.; Zhou, F.; Zhang, G. 3D coordinates measurement based on structured light sensor. Sens. Actuators A Phys. 2005, 120, 527–535.
  19. Minnetti, E.; Chiariotti, P.; Paone, N.; Garcia, G.; Vicente, H.; Violini, L.; Castellini, P. A smartphone integrated hand-held gap and flush measurement system for in line quality control of car body assembly. Sensors 2020, 20, 3300.
  20. Wang, W.; Li, R.; Wang, K.; Lang, F.; Chen, W.; Zhao, B. Crack and fracture central line delineation on Steger and hydrodynamics with improved fractional differential. Int. J. Wavelets Multiresolut. Inf. Process. 2020, 18, 2050037.
  21. Wagner, T.; Lusnig, L.; Pospich, S.; Stabrin, M.; Schönfeld, F.; Raunser, S. Two particle-picking procedures for filamentous proteins: SPHIRE-crYOLO filament mode and SPHIRE-STRIPER. Acta Crystallogr. Sect. D Struct. Biol. 2020, 76, 613–620.
  22. Steger, C. Unbiased extraction of lines with parabolic and Gaussian profiles. Comput. Vis. Image Underst. 2013, 117, 97–112.
  23. Baleani, A.; Castellini, P.; Chiariotti, P.; Paone, N.; Roccetti, D.; Zampetti, L.; Zannini, M.; Zitti, S. Dimensional measurements in production line: A comparison between a custom-made telecentric optical profilometer and on-the-market measurement systems. In Proceedings of the 2021 IEEE International Workshop on Metrology for Industry 4.0 & IoT (MetroInd4.0 & IoT), Virtual, 7–9 June 2021; pp. 693–698.
  24. Romero-Ramirez, F.J.; Muñoz-Salinas, R.; Medina-Carnicer, R. Speeded up detection of squared fiducial markers. Image Vis. Comput. 2018, 76, 38–47.
Figure 1. Example of an asymmetrical bar-shaped line profile (dark line over a bright background) of half-width w = 1 pixel, asymmetry a = 0.2, and line contrast h = 1 with respect to the background.
Figure 2. Procedure to identify the relationship between the line width w̄(σ_i) and the σ parameter.
Figure 3. The measured nonzero w̄(σ) values are linearly interpolated, obtaining the w̄(σ) function (dashed line). Intersecting Steger's inequality (Equation (4)) with the w̄(σ) function returns w̄_s = 5.019 pixels (corresponding to the red dot). (a.u.: arbitrary unit).
Figure 4. Variation of the absolute error in the measurement of the line mean width as the step s changes, for the autonomous measurement of w̄ according to the procedure described in Figure 2. (a.u.: arbitrary unit).
Figure 5. Absolute error trend in the measurement of the line mean width as the asymmetry a changes. (a.u.: arbitrary unit).
Figure 6. Absolute error e_s trend in the measurement of the line mean width as the target w̄ changes.
Figure 7. Absolute percentage error e_s trend in the measurement of the line mean width as the target w̄ changes.
Figure 8. Detail of the graph in Figure 3, with the error shown on the abscissa. The minimum error is located at σ*. (a.u.: arbitrary unit).
Figure 9. Absolute error e* trend in the measurement of the line mean width as the target w̄ changes.
Figure 10. Absolute percentage error e* trend in the measurement of the line mean width as the target w̄ changes.
Figure 11. Error e* trend as the parameter s increases. (a.u.: arbitrary unit).
Figure 12. The σ optimization line (dashed line) vs. the (σ*, w̄*) points obtained by iteratively applying the algorithm on synthetic line widths ranging from 2 to 20 pixels with s = 0.1. (a.u.: arbitrary unit).
Figure 13. Proposed sigma optimization technique flowchart.
Figure 14. Example of sigma optimization technique application (a.u.: arbitrary unit). After all the necessary (σ, w̄) points (a) are calculated, starting from the point (σ*, w̄*) (b), a horizontal line is drawn up to the optimization line (c). The σ value corresponding to the intersection of this horizontal line with the optimization line represents the σ_opt value (d).
Figure 15. An anthropomorphic robot is used in order to define the working range of the developed device in a controlled environment.
Figure 16. The image acquisition system (1) is placed at a known distance from the target (4). The parallelism between the sensor surface and the target surface is guaranteed through a system made of a laser (3) and a mirror (2). The two surfaces are considered parallel when the laser beam reflected by the mirror returns to the source.
Figure 17. Absolute error (i.e., difference between the mean width measured by the proposed algorithm and the target width) in millimeters for each rotation angle (around the x axis) and each working distance tested.
Figure 18. Absolute error (i.e., difference between the mean width measured by the proposed algorithm and the target width) in pixels for each rotation angle (around the x axis) and each working distance tested.
Figure 19. Absolute error (i.e., difference between the mean width measured by the proposed algorithm and the target width) in mm for each rotation angle (around the y axis) and each working distance tested.
Figure 20. Absolute error (i.e., difference between the mean width measured by the proposed algorithm and the target width) in pixels for each rotation angle (around the y axis) and each working distance tested.
Figure 21. Standard deviation in mm vs. absolute angle.
Figure 22. Standard deviation in pixels vs. absolute angle.
Figure 23. Dispersion of the sigma values at each distance, normalized to the equivalent pixel width of the target line.
Figure 24. A reference target was produced in FDM 3D printing with a built-in central groove of 5 mm (nominal design value) width.
Figure 25. Telecentric-based imaging system: (a) optical set-up; (b) silhouette from the backlight arrangement.
Figure 26. Test set-up of the Wenglor MLSL132 laser profilometer used to evaluate the reference groove width.
Figure 27. ŵ(x) calculated using the proposed solution according to the distance x of the camera from the target. In the graph, the results of the measurements carried out with the LP and TIS are superimposed in terms of average value and uncertainty bounds.
Figure 28. Acquisition setup. On the left, a picture framed at a distance of 700 mm; on the right, a picture framed at a distance of 1675 mm.
Figure 29. Experimental set-up to generate grooves of known, variable width. The target groove was artificially created by mounting two plastic parts (3) on the mobile and fixed components of a micrometric stage (4). The scene was framed with a camera (1) mounted on a tripod (2). Two fiducial ArUco markers (5) were used to identify the target-to-camera relative pose and the pixel-to-millimeter conversion factor.
Figure 30. Histogram (b) of the grayscale image (a) of the area of interest, including the groove under test (3). Two main peaks can be identified in the histogram: on the left, the peak representing the area belonging to the groove (1); on the right, the peak representing the area belonging to the background (2).
Figure 31. The optimization algorithm identifies the transition point between the unidentified groove condition (a) and the correctly identified one (b). As the σ value increases, the algorithm continues to identify the centerline well but overestimates its width value (c), represented by the green lines in the figure.
Figure 32. Target concrete wall crack.
Figure 33. Distribution, normalized with respect to the average value, of the mean width values measured on 100 images by the same operator.
Table 1. Comparisons between the average groove width measured with the three different devices.

Device               ŵ [mm]    U (k = 2) [mm]
Proposed solution    4.657     0.018
TIS                  4.660     0.025
LP                   4.667     0.006
Table 2. Comparisons between the groove width set on the micrometric stage (w) and the corresponding groove width measured through the proposed vision-based method (w̄). The absolute error is given in mm (e*) and pixels (e*,p).

w [mm]    w̄ [mm]    e* [mm]    e*,p [pixels]
0.100     0.128      0.028      0.726
0.300     0.301      0.001      0.032
0.500     0.473      0.027      0.699
0.700     0.674      0.026      0.686
0.900     0.872      0.028      0.741
1.100     1.091      0.009      0.229
1.300     1.294      0.006      0.154
1.500     1.506      0.006      0.164
1.700     1.697      0.003      0.088
1.900     1.918      0.018      0.480
2.100     2.096      0.004      0.100
2.300     2.280      0.020      0.534
2.500     2.513      0.013      0.346