
Rubber tapping line detection in near-range images via customized YOLO and U-Net branches with parallel aggregation heads convolutional neural network

  • Original Article
  • Published in: Neural Computing and Applications

Abstract

Current convolutional neural network architectures for image-related tasks increasingly take the form of directed acyclic graphs with multiple output nodes. This makes them well suited to rubber tapping line detection, which calls for several output types, such as bounding boxes, pixel points, or edges. This paper presents multibranch deep convolutional networks that output bounding boxes and pixel segmentation masks by adapting the YOLOv3 and U-Net structures. We propose column-wise argmax and column-wise Softmax functions with redundant mask outputs to improve pixel classification accuracy. Experiments with these networks show that segmentation loss functions such as Dice’s coefficient, Focal loss, and Tversky’s index behave differently for tapping line prediction, as measured by Hausdorff distance and F1-score. A network with multiple mask predictions compensates for their individual weaknesses and yields higher tapping line detection accuracy than any single prediction. In terms of image processing, the column-wise Softmax and argmax algorithms outperformed an edge-thinning algorithm at detecting line vertices.
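
The abstract does not give implementation details for the proposed column-wise operations; purely as an illustration, the following Python/NumPy sketch shows one plausible reading of column-wise Softmax and column-wise argmax applied to a single-channel map of mask logits. The (rows, columns) layout, the confidence threshold, and the function names are assumptions, not the authors’ implementation.

    import numpy as np

    def columnwise_softmax(logits):
        # Normalize each image column (axis 0 = rows) into a probability
        # distribution over row positions.
        z = logits - logits.max(axis=0, keepdims=True)  # numerical stability
        e = np.exp(z)
        return e / e.sum(axis=0, keepdims=True)

    def columnwise_argmax_vertices(prob, threshold=0.5):
        # For every column, keep the most probable row as a tapping-line
        # vertex if its probability clears the (hypothetical) threshold.
        rows = prob.argmax(axis=0)
        keep = prob.max(axis=0) >= threshold
        cols = np.arange(prob.shape[1])
        return np.stack([rows[keep], cols[keep]], axis=1)

    # Toy usage: a 4 x 6 logit map whose strongest response drifts downward,
    # loosely mimicking a diagonal tapping line.
    logits = np.full((4, 6), -2.0)
    for c in range(6):
        logits[min(c // 2, 3), c] = 3.0
    vertices = columnwise_argmax_vertices(columnwise_softmax(logits))
    print(vertices)  # one (row, column) vertex per confident column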


Data availability

The near-range tapping line dataset is available upon request from the corresponding author, Thanate Khaorapapong.

Code availability

Source code is made available at https://github.com/rwttr/NearcamTaplineDetector.

Notes

  1. Tensor sizes are given in width \(\times \) height \(\times \) depth order; this convention applies throughout the paper.

  2. For the stability of training dynamics in single-precision floating point, the regularization factor was set to 2/m, where m is the number of learnable weights (a minimal sketch follows these notes).

  3. Intel 4.0 GHz hexa-core CPU with Nvidia GTX 1070Ti GPU.
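
Note 2 does not state the exact form of the penalty; purely as an illustration, the sketch below (Python/NumPy, which the paper does not necessarily use) assumes a plain L2 weight penalty whose coefficient is 2/m, with m counted over every learnable weight. The function name and toy tensor shapes are hypothetical.

    import numpy as np

    def l2_penalty(weights):
        # Illustrative L2 term with coefficient 2/m, where m is the total
        # number of learnable weights across all tensors (see note 2).
        m = sum(w.size for w in weights)
        coeff = 2.0 / m
        return coeff * sum(float(np.sum(w.astype(np.float64) ** 2)) for w in weights)

    # Toy usage: m = 3*3*16 + 16 = 160, so the coefficient is 2/160 = 0.0125.
    conv_w = np.random.randn(3, 3, 16).astype(np.float32)
    bias = np.zeros(16, dtype=np.float32)
    print(l2_penalty([conv_w, bias]))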


Acknowledgements

Our appreciation goes to the Graduate School at Prince of Songkla University for research funding and to the Computer Engineering Department at Prince of Songkla University for computing infrastructure. We also thank the rubber latex farmers, Mr. Jan and Mrs. Ree, for providing the rubber trees. Sadly, Mrs. Ree has been experiencing amnesia since her accident; we express our deepest sympathy and wish her and her family all the best.

Author information

Authors and Affiliations

Authors

Corresponding author

Correspondence to Thanate Khaorapapong.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Wongtanawijit, R., Khaorapapong, T. Rubber tapping line detection in near-range images via customized YOLO and U-Net branches with parallel aggregation heads convolutional neural network. Neural Comput & Applic 34, 20611–20627 (2022). https://doi.org/10.1007/s00521-022-07475-z

