
U-Net: Deep Learning for Extracting Building Boundary Collected by Drone of Agadir’s Harbor

Conference paper: Digital Technologies and Applications (ICDTA 2021)

Part of the book series: Lecture Notes in Networks and Systems (LNNS, volume 211)

Abstract

Identifying a specific object in an image may be a trivial task for humans, but it is often quite challenging for machines. The field has recently witnessed groundbreaking research with cutting-edge results; however, applying this research to real-world problems remains a challenge. The approach used here is based on a model trained on a dataset, and this model is then used to detect homes in sample images. All the images were extracted from unmanned aerial vehicle (UAV) recordings. This paper presents a method for segmenting building footprints using the U-Net architecture, in order to produce footprints with higher accuracy and without manual digitizing.
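
For illustration only, the sketch below shows a minimal U-Net of the kind referred to in the abstract, written in TensorFlow/Keras. It is not the authors' implementation; the input size, network depth, filter counts, loss, and optimizer are assumptions made for this example.

    # Minimal illustrative U-Net for binary building-footprint segmentation.
    # Generic sketch only, NOT the paper's implementation; all hyperparameters
    # (input size, depth, filters, loss, optimizer) are assumptions.
    import tensorflow as tf
    from tensorflow.keras import layers, Model

    def conv_block(x, filters):
        """Two 3x3 convolutions with ReLU, as in the original U-Net."""
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        return x

    def build_unet(input_shape=(256, 256, 3)):
        inputs = layers.Input(shape=input_shape)

        # Contracting path: convolution blocks followed by max pooling.
        c1 = conv_block(inputs, 64)
        p1 = layers.MaxPooling2D()(c1)
        c2 = conv_block(p1, 128)
        p2 = layers.MaxPooling2D()(c2)
        c3 = conv_block(p2, 256)
        p3 = layers.MaxPooling2D()(c3)

        # Bottleneck.
        b = conv_block(p3, 512)

        # Expanding path: upsampling, skip connections, convolution blocks.
        u3 = layers.Conv2DTranspose(256, 2, strides=2, padding="same")(b)
        c4 = conv_block(layers.concatenate([u3, c3]), 256)
        u2 = layers.Conv2DTranspose(128, 2, strides=2, padding="same")(c4)
        c5 = conv_block(layers.concatenate([u2, c2]), 128)
        u1 = layers.Conv2DTranspose(64, 2, strides=2, padding="same")(c5)
        c6 = conv_block(layers.concatenate([u1, c1]), 64)

        # 1x1 convolution + sigmoid gives a per-pixel building probability.
        outputs = layers.Conv2D(1, 1, activation="sigmoid")(c6)
        return Model(inputs, outputs)

    if __name__ == "__main__":
        model = build_unet()
        model.compile(optimizer="adam",
                      loss="binary_crossentropy",
                      metrics=["accuracy"])
        model.summary()

In practice the UAV orthoimagery would be tiled into fixed-size patches with matching binary building masks before training; the predicted probability maps can then be thresholded and vectorized to obtain footprint polygons.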

Copyright information

© 2021 The Author(s), under exclusive license to Springer Nature Switzerland AG

Cite this paper

Chafiq, T., Hachimi, H., Raji, M., Zerraf, S. (2021). U-Net: Deep Learning for Extracting Building Boundary Collected by Drone of Agadir’s Harbor. In: Motahhir, S., Bossoufi, B. (eds) Digital Technologies and Applications. ICDTA 2021. Lecture Notes in Networks and Systems, vol 211. Springer, Cham. https://doi.org/10.1007/978-3-030-73882-2_11
