Abstract
Motion plays a crucial role in understanding videos, and most state-of-the-art neural models for video classification incorporate motion information, typically in the form of optical flow extracted by a separate off-the-shelf method. Since computing optical flow frame by frame is expensive, incorporating motion information has remained a major computational bottleneck for video understanding. In this work, we replace the external, heavy computation of optical flow with internal, lightweight learning of motion features. We propose a trainable neural module, dubbed MotionSqueeze, for effective motion feature extraction. Inserted in the middle of any neural network, it learns to establish correspondences across frames and convert them into motion features, which are readily fed to the next downstream layer for better prediction. We demonstrate that the proposed method provides a significant gain on four standard benchmarks for action recognition at only a small additional cost, outperforming the state of the art on the Something-Something V1 & V2 datasets.
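For illustration, below is a minimal PyTorch-style sketch of how a module of this kind could be inserted between layers of a video network: it correlates the features of each frame with those of the next frame within a small search window, reads out the best-matching displacement at every position with a soft-argmax over the correlation scores, and encodes the resulting dense motion field into features that are added back to the input. The class name `MotionSqueezeSketch`, the search radius, the soft-argmax readout, and the residual fusion are illustrative assumptions, not the authors' exact design.

```python
# A hypothetical, minimal MotionSqueeze-style block (illustrative sketch only).
import torch
import torch.nn as nn
import torch.nn.functional as F


class MotionSqueezeSketch(nn.Module):
    """Correlate adjacent frames, read out displacements with a soft-argmax,
    and fuse the encoded motion field back into the features residually."""

    def __init__(self, channels: int, max_disp: int = 3):
        super().__init__()
        self.max_disp = max_disp                      # local search radius
        # candidate (dx, dy) offsets, one per correlation channel
        dy, dx = torch.meshgrid(
            torch.arange(-max_disp, max_disp + 1, dtype=torch.float32),
            torch.arange(-max_disp, max_disp + 1, dtype=torch.float32),
            indexing="ij",
        )
        self.register_buffer("offsets", torch.stack([dx, dy]).reshape(2, -1))
        # small head that turns the 2-channel motion field into C-channel features
        self.encode = nn.Sequential(
            nn.Conv2d(2, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def local_correlation(self, f1: torch.Tensor, f2: torch.Tensor) -> torch.Tensor:
        # correlate every position of f1 with a (2*max_disp+1)^2 window in f2
        b, c, h, w = f1.shape
        p = self.max_disp
        f2_pad = F.pad(f2, [p, p, p, p])
        corr = []
        for dy in range(2 * p + 1):
            for dx in range(2 * p + 1):
                f2_shift = f2_pad[:, :, dy:dy + h, dx:dx + w]
                corr.append((f1 * f2_shift).sum(dim=1, keepdim=True) / c)
        return torch.cat(corr, dim=1)                 # (B, (2*p+1)^2, H, W)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, T, C, H, W) features of a clip from some intermediate layer
        b, t, c, h, w = x.shape
        f1 = x[:, :-1].reshape(-1, c, h, w)           # frames 0 .. T-2
        f2 = x[:, 1:].reshape(-1, c, h, w)            # frames 1 .. T-1
        prob = self.local_correlation(f1, f2).softmax(dim=1)
        # expected displacement (soft-argmax) -> dense 2-channel motion field
        flow = torch.einsum("bnhw,cn->bchw", prob, self.offsets)
        motion = self.encode(flow).reshape(b, t - 1, c, h, w)
        # repeat the last motion map so the temporal length matches, then fuse
        motion = torch.cat([motion, motion[:, -1:]], dim=1)
        return x + motion


# toy usage: 2 clips, 8 frames, 64-channel feature maps of size 28x28
feats = torch.randn(2, 8, 64, 28, 28)
out = MotionSqueezeSketch(channels=64, max_disp=3)(feats)
print(out.shape)  # torch.Size([2, 8, 64, 28, 28])
```

In practice such a block would sit at an intermediate stage of a 2D CNN backbone, so that correspondences are computed on mid-level feature maps rather than raw frames, and the enriched features flow on to the remaining layers of the network.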
Acknowledgements
This work was supported by the Samsung Advanced Institute of Technology (SAIT), and also by the Basic Science Research Program (NRF-2017R1E1A1A01077999, NRF-2018R1C1B6001223) and the Next-Generation Information Computing Development Program (NRF-2017M3C4A7069369) through the National Research Foundation of Korea (NRF), funded by the Ministry of Science and ICT.
Copyright information
© 2020 Springer Nature Switzerland AG
About this paper
Cite this paper
Kwon, H., Kim, M., Kwak, S., Cho, M. (2020). MotionSqueeze: Neural Motion Feature Learning for Video Understanding. In: Vedaldi, A., Bischof, H., Brox, T., Frahm, J.-M. (eds.) Computer Vision – ECCV 2020. Lecture Notes in Computer Science, vol. 12361. Springer, Cham. https://doi.org/10.1007/978-3-030-58517-4_21
DOI: https://doi.org/10.1007/978-3-030-58517-4_21
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-58516-7
Online ISBN: 978-3-030-58517-4
eBook Packages: Computer Science, Computer Science (R0)