VIDEO SUMMARIZATION USING BACKGROUND SUBTRACTION TECHNIQUES

About this article

Received: 24/02/22 | Revised: 28/04/22 | Published: 11/05/22

Authors

1. Ngo Huu Huy, TNU – University of Information and Communication Technology
2. Le Hung Linh, TNU – University of Information and Communication Technology
3. Nguyen Duy Minh, TNU – University of Information and Communication Technology
4. Ngo Thi Thu Hang, Kim Dong Secondary School - Ha Long City, Quang Ninh

Abstract


Multimedia information systems are used widely and in diverse forms in research and practical applications, and video is one of the most common data types among them. However, managing and using videos raises difficulties, such as organizing storage or finding events within a video. This study therefore presents a simple and efficient video summarization method based on the background subtraction technique. First, consecutive frames are extracted from the input video. These frames are then preprocessed, for example by converting them to grayscale and smoothing them. Background subtraction is then applied to detect movement in the current frame relative to the previous frame; if motion is detected, the frame is saved to the output video. We also propose a video summarization algorithm. Experimental results demonstrate the effectiveness of this method, especially for video surveillance.
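As a rough illustration of the pipeline the abstract describes (frame extraction, grayscale conversion and smoothing, frame-to-frame background subtraction, and motion-gated saving), the following Python/OpenCV sketch shows one plausible realization. It is a minimal sketch, not the authors' implementation: the function name summarize, the file names, and the parameter values (blur kernel size, difference threshold, minimum number of changed pixels) are illustrative assumptions, not values taken from the paper.

import cv2

# Illustrative parameters (assumptions, not values from the paper)
BLUR_KERNEL = (21, 21)   # Gaussian smoothing kernel size
DIFF_THRESHOLD = 25      # per-pixel intensity change treated as motion
MIN_MOTION_PIXELS = 500  # frames with fewer changed pixels are skipped

def summarize(input_path: str, output_path: str) -> None:
    cap = cv2.VideoCapture(input_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
    ok, frame = cap.read()
    if not ok:
        raise ValueError(f"Cannot read video: {input_path}")

    height, width = frame.shape[:2]
    writer = cv2.VideoWriter(
        output_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (width, height)
    )

    # Preprocess the first frame: grayscale conversion + smoothing
    prev_gray = cv2.GaussianBlur(
        cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), BLUR_KERNEL, 0
    )

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.GaussianBlur(
            cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), BLUR_KERNEL, 0
        )
        # Background subtraction: difference against the previous frame
        diff = cv2.absdiff(prev_gray, gray)
        _, mask = cv2.threshold(diff, DIFF_THRESHOLD, 255, cv2.THRESH_BINARY)
        # Keep the frame for the output video only if motion is detected
        if cv2.countNonZero(mask) >= MIN_MOTION_PIXELS:
            writer.write(frame)
        prev_gray = gray

    cap.release()
    writer.release()

if __name__ == "__main__":
    summarize("input.mp4", "summary.mp4")  # hypothetical file names

Using the previous frame as the background model, as in this sketch, keeps the method simple and adaptive to gradual scene changes; a static or running-average background model would be a straightforward variant.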

Keywords


Background subtraction; Motion detection; Motion tracking; Video summarization; Video surveillance


DOI: https://doi.org/10.34238/tnu-jst.5582
