Abstract
Classical work on line segment detection is knowledge-based: it relies on carefully designed geometric priors built from image gradients, pixel groupings, or Hough transform variants. Current deep learning methods instead discard this prior knowledge and replace the priors by training deep networks on large, manually annotated datasets. Here, we reduce the dependency on labeled data by building on classic knowledge-based priors while using deep networks to learn features. We add line priors to a deep network through a trainable Hough transform block. The Hough transform provides prior knowledge about global line parameterizations, while the convolutional layers learn local gradient-like line features. On the Wireframe (ShanghaiTech) and York Urban datasets we show that adding prior knowledge improves data efficiency, as line priors no longer need to be learned from data.
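The global line parameterization that the trainable Hough transform block builds on can be illustrated with a plain (non-trainable) Hough transform: each edge pixel votes for all lines through it under the normal parameterization ρ = x·cos θ + y·sin θ. The sketch below is illustrative only; the function name and accumulator layout are assumptions, not the paper's implementation.

```python
import numpy as np

def hough_line_transform(edge_map, num_angles=180):
    """Accumulate votes in (rho, theta) space for a binary edge map.

    Each edge pixel votes for every line passing through it, using the
    normal parameterization rho = x*cos(theta) + y*sin(theta).
    """
    h, w = edge_map.shape
    thetas = np.linspace(0.0, np.pi, num_angles, endpoint=False)
    diag = int(np.ceil(np.hypot(h, w)))      # maximum possible |rho|
    accumulator = np.zeros((2 * diag + 1, num_angles))  # rho in [-diag, diag]

    ys, xs = np.nonzero(edge_map)            # coordinates of edge pixels
    for theta_idx, theta in enumerate(thetas):
        rhos = xs * np.cos(theta) + ys * np.sin(theta)
        rho_idx = np.round(rhos).astype(int) + diag  # shift to non-negative index
        # Unbuffered add: pixels voting for the same (rho, theta) bin all count.
        np.add.at(accumulator[:, theta_idx], rho_idx, 1)
    return accumulator, thetas
```

A peak in the accumulator identifies a global line: for a horizontal row of edge pixels, all votes land in a single (ρ, θ = π/2) bin, whereas local convolutional features alone would only see short gradient segments.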
Copyright information
© 2020 Springer Nature Switzerland AG
Cite this paper
Lin, Y., Pintea, S.L., van Gemert, J.C. (2020). Deep Hough-Transform Line Priors. In: Vedaldi, A., Bischof, H., Brox, T., Frahm, J.M. (eds) Computer Vision – ECCV 2020. Lecture Notes in Computer Science, vol 12367. Springer, Cham. https://doi.org/10.1007/978-3-030-58542-6_20
DOI: https://doi.org/10.1007/978-3-030-58542-6_20
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-58541-9
Online ISBN: 978-3-030-58542-6