Vehicle Color Recognition with Vehicle-Color Saliency Detection and Dual-Orientational Dimensionality Reduction of CNN Deep Features

  • Original Paper
  • Published in Sensing and Imaging

Abstract

Color is one of the most stable attributes of a vehicle and is often used as a valuable cue in important applications. Complex environmental factors, such as illumination, weather and noise, cause considerable diversity in the visual appearance of vehicle color, so vehicle color recognition in complex environments remains a challenging task. State-of-the-art methods roughly take the whole image for color recognition, but many parts of the image, such as car windows, wheels and background, contain no color information, which has a negative impact on recognition accuracy. In this paper, a novel vehicle color recognition method using local vehicle-color saliency detection and dual-orientational dimensionality reduction of convolutional neural network (CNN) deep features is proposed. The novelty of the proposed method is twofold: (1) a local vehicle-color saliency detection method is proposed to determine the vehicle-color region of the vehicle image and exclude the influence of non-color regions on recognition accuracy; (2) a dual-orientational dimensionality reduction strategy is designed to greatly reduce the dimensionality of the deep features learnt by the CNN, which mitigates the storage and computational burden of subsequent processing while improving recognition accuracy. Furthermore, a linear support vector machine is adopted as the classifier and trained on the dimensionality-reduced features to obtain the recognition model. Experimental results on a public dataset demonstrate that the proposed method achieves superior recognition performance over state-of-the-art methods.
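To make the proposed pipeline easier to follow, the sketch below illustrates its two novel components in Python under clearly stated assumptions: the local vehicle-color saliency step is approximated here by a simple saturation-and-brightness mask (a placeholder, since the abstract does not give the paper's detection rule), and the dual-orientational dimensionality reduction is sketched in the spirit of two-dimensional PCA applied along both the row and column orientations of a feature matrix. All function names and parameters (color_saliency_mask, dual_orientational_reduce, k_rows, k_cols) are illustrative and not taken from the paper, and a single 2-D feature map per image is assumed purely for readability.

    import numpy as np
    from sklearn.svm import LinearSVC


    def color_saliency_mask(image_rgb, sat_thresh=0.15, val_thresh=0.2):
        # Crude stand-in for local vehicle-color saliency detection: keep pixels
        # with sufficient saturation and brightness, which tends to suppress
        # windows, wheels and dark background. This rule is an assumption,
        # not the paper's detector.
        img = image_rgb.astype(np.float32) / 255.0
        mx = img.max(axis=2)
        mn = img.min(axis=2)
        sat = (mx - mn) / (mx + 1e-6)
        return (sat > sat_thresh) & (mx > val_thresh)


    def dual_orientational_reduce(feature_maps, k_rows=8, k_cols=8):
        # Reduce a stack of 2-D feature maps F (n x h x w) from both orientations,
        # in the spirit of two-dimensional PCA: rows are projected with U
        # (h x k_rows) and columns with V (w x k_cols).
        F = np.asarray(feature_maps, dtype=np.float64)
        centered = F - F.mean(axis=0)
        G_row = sum(C @ C.T for C in centered) / len(F)   # h x h scatter matrix
        G_col = sum(C.T @ C for C in centered) / len(F)   # w x w scatter matrix
        U = np.linalg.eigh(G_row)[1][:, -k_rows:]         # leading row-orientation eigenvectors
        V = np.linalg.eigh(G_col)[1][:, -k_cols:]         # leading column-orientation eigenvectors
        reduced = np.stack([U.T @ C @ V for C in F])      # n x k_rows x k_cols
        return reduced, U, V


    def train_color_classifier(feature_maps, labels):
        # Flatten the reduced (k_rows x k_cols) matrices and train a linear SVM,
        # matching the final classification stage described in the abstract.
        reduced, U, V = dual_orientational_reduce(feature_maps)
        clf = LinearSVC()
        clf.fit(reduced.reshape(len(reduced), -1), labels)
        return clf, U, V

In the actual method, the deep features are extracted by a CNN from the saliency-selected vehicle-color region before the reduction step; the sketch only shows how a dual-orientational (row- and column-wise) projection can shrink a feature matrix before a linear SVM is trained on the flattened result.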


Acknowledgements

The work in this paper was supported by the National Natural Science Foundation of China (Nos. 61531006, 61372149, 61370189, 61471013, and 61602018), the Importation and Development of High-Caliber Talents Project of Beijing Municipal Institutions (Nos. CIT&TCD20150311, CIT&TCD201404043), the Beijing Natural Science Foundation (Nos. 4142009, 4163071), the Science and Technology Development Program of Beijing Education Committee (Nos. KM201410005002, KM201510005004), and the Funding Project for Academic Human Resources Development in Institutions of Higher Learning Under the Jurisdiction of Beijing Municipality.

Author information

Corresponding author

Correspondence to Li Zhuo.

About this article

Cite this article

Zhang, Q., Li, J., Zhuo, L. et al. Vehicle Color Recognition with Vehicle-Color Saliency Detection and Dual-Orientational Dimensionality Reduction of CNN Deep Features. Sens Imaging 18, 20 (2017). https://doi.org/10.1007/s11220-017-0173-8
