FAST AND ROBUST MODEL FOR MULTIPLE OBJECTS TRACKING USING KEY-FRAME DETECTION AND CO-TRAINED CLASSIFIER
About this article
Received: 06/10/20                Revised: 30/11/20                Published: 30/11/20

Abstract
This paper proposes a new approach to multiple object tracking for real-time video tracking applications. The new tracking method improves tracking speed and reduces track fragmentation and confusion by using two convolutional neural networks to detect and distinguish the targets. This mechanism ensures real-time capability because the deep learning detector does not have to run continuously, while the targets' positions are still updated constantly and accurately. We call this a co-training mechanism. The keyframe detection model is a Single Shot Detector that also operates as a data generator; the second neural network is a classifier trained on data collected from the main detector. The tracker combines these techniques into what we name the Detector-Classifier Tracker (DCT). This article explains the working mechanism of DCT in full and presents the test results of the combined method, obtained from frame-processing experiments on data from long-range thermal imaging cameras.
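The abstract describes the co-training loop only at a high level; the following is a minimal sketch of how such a detector-classifier tracker could be structured. It is an illustration under assumptions, not the paper's implementation: ssd_detect, PatchClassifier, the keyframe interval, and the candidate search step are hypothetical placeholders introduced here for clarity.

"""Sketch of a detector-classifier tracking (DCT) loop: a heavy SSD runs
only at keyframes and its detections are reused as training labels for a
lightweight classifier that updates the tracks on the frames in between.
All names and numeric values below are illustrative assumptions."""

KEYFRAME_INTERVAL = 30   # assumed: run the SSD detector every 30 frames
SEARCH_STEP = 8          # assumed: stride of candidate boxes around the last position


def ssd_detect(frame):
    """Placeholder for the Single Shot Detector (keyframe detection model).
    Expected to return a list of (x, y, w, h) target boxes."""
    raise NotImplementedError


class PatchClassifier:
    """Placeholder for the lightweight CNN co-trained online from
    target/background patches generated by the SSD detector."""

    def fit(self, patches, labels):
        pass

    def score(self, patch):
        """Target-vs-background confidence in [0, 1]."""
        return 0.0


def crop(frame, box):
    x, y, w, h = box
    return frame[y:y + h, x:x + w]


def shifted(box, dx, dy):
    x, y, w, h = box
    return (x + dx, y + dy, w, h)


def best_candidate(frame, box, classifier):
    """Score a small grid of boxes around the previous position and keep
    the one the classifier rates as most target-like."""
    candidates = [shifted(box, dx, dy)
                  for dx in (-SEARCH_STEP, 0, SEARCH_STEP)
                  for dy in (-SEARCH_STEP, 0, SEARCH_STEP)]
    return max(candidates, key=lambda b: classifier.score(crop(frame, b)))


def track(frames):
    classifier = PatchClassifier()
    tracks = []                                  # current target boxes

    for i, frame in enumerate(frames):
        if i % KEYFRAME_INTERVAL == 0:
            # Keyframe: run the SSD detector and use its detections as
            # labelled data for the classifier (the co-training step).
            tracks = ssd_detect(frame)
            pos = [crop(frame, b) for b in tracks]
            neg = [crop(frame, shifted(b, 4 * SEARCH_STEP, 4 * SEARCH_STEP))
                   for b in tracks]              # nearby background patches
            classifier.fit(pos + neg, [1] * len(pos) + [0] * len(neg))
        else:
            # Between keyframes: update each track with the cheap classifier
            # instead of re-running the deep detector.
            tracks = [best_candidate(frame, b, classifier) for b in tracks]
        yield list(tracks)

The point of the structure is visible in the loop: the expensive detector runs only at keyframes, where its output doubles as training data for the classifier that maintains the tracks in between, which is what keeps the method real-time while the target positions stay continuously updated.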
DOI: https://doi.org/10.34238/tnu-jst.3678