TNU Journal of Science and Technology

CHARACTER RECOGNITION FOR LICENSE PLATE RECOGNITION TRAFFIC CAMERA IN VIETNAM

About this article

Received: 18/05/20                Revised: 28/05/20                Published: 31/05/20

Authors

1. Le Huu Ton, University of Science and Technology of Hanoi
2. Nguyen Hoang Ha, University of Science and Technology of Hanoi

Abstract


Optical Character Recognition (OCR) is an active research direction with many practical applications, including digit classification for license plate recognition on traffic cameras. OCR models usually deploy a single classifier for all categories in the dataset. However, classification difficulty varies across classes: some characters are more easily misclassified than others, so classification performance is not uniform across the classes. In this paper, we deploy a two-stage classifier to improve classification accuracy on the difficult classes. The first classifier is applied to all classes, while the second is applied only to the difficult classes in order to refine the predictions made by the first. Experimental results on two datasets, SVHN and license plate characters, demonstrate that the proposed method improves the classification accuracy of some difficult classes by 1.4%.
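The two-stage routing described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the two classifiers are stand-in functions (the paper uses trained CNN models), and the set of confusable characters is a hypothetical example of the kind of hard-class set the second stage would target.

```python
# Illustrative sketch of a two-stage classifier: stage 1 predicts over all
# classes; stage 2 is consulted only when stage 1 outputs a label from a
# known set of easily confused classes, refining that prediction.

CONFUSABLE = {"0", "D", "8", "B"}  # hypothetical hard-class set

def stage1_predict(sample):
    # Placeholder for the first, all-class classifier.
    return sample["coarse_label"]

def stage2_predict(sample):
    # Placeholder for the refinement classifier trained only on hard classes.
    return sample["fine_label"]

def two_stage_predict(sample):
    label = stage1_predict(sample)
    if label in CONFUSABLE:
        # Route hard cases to the specialist classifier.
        label = stage2_predict(sample)
    return label

easy = {"coarse_label": "7", "fine_label": "7"}
hard = {"coarse_label": "0", "fine_label": "D"}
print(two_stage_predict(easy))  # stage 1 alone decides
print(two_stage_predict(hard))  # refined by stage 2
```

In practice the routing test could also be based on stage-1 confidence rather than on label membership; the label-set rule above matches the abstract's description of a second classifier dedicated to the difficult classes.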


Keywords


Image processing; optical character recognition; convolutional neural network; deep learning; image classification.

References


[1]. C. Yao, X. Bai, B. Shi, and W. Liu, “Strokelets: A learned multi-scale representation for scene text recognition,” In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014, pp. 4042-4049.

[2]. M. Buta, L. Neumann and J. Matas, "FASText: Efficient Unconstrained Scene Text Detector," 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, 2015, pp. 1206-1214, doi: 10.1109/ICCV.2015.143.

[3]. T. Q. Phan, P. Shivakumara, S. Tian, and C. L. Tan, “Recognizing text with perspective distortion in natural scenes,” In Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2013, pp. 569-576.

[4]. A. Krizhevsky, I. Sutskever, and G. Hinton, “ImageNet Classification with Deep Convolutional Neural Networks,” In Proceedings of the 25th International Conference on Neural Information Processing Systems - Volume 1 (NIPS), 2012.

[5]. A. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, and H. Adam, MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications, 2017. [Online]. Available: http://arxiv.org/abs/1704.04861 [Accessed May 12, 2018].

[6]. D. Ho, E. Liang, I. Stoica, P. Abbeel, and X. Chen, Population Based Augmentation: Efficient Learning of Augmentation Policy Schedules, 2019, [Online]. Available: https://arxiv.org/abs/1905.05393 [Accessed May 12, 2019].

[7]. M. Galar, A. Fernández, E. Barrenechea, H. Bustince, and F. Herrera, “A review on ensembles for the class imbalance problem: bagging-, boosting-, and hybrid-based approaches,” IEEE Trans. Syst., Man, Cybernet., Part C: Appl. Rev., vol. 42, no. 4, pp. 463-484, 2012.

[8]. Y. Freund and R. E. Schapire, “A decision-theoretic generalization of on-line learning and an application to boosting,” J. Comput. Syst. Sci., vol. 55, no. 1, pp. 119-139, 1997.

[9]. L. Rokach, “Ensemble-based classifiers,” Artif. Intell. Rev., vol. 33, pp. 1-39, 2010.

[10]. Y. Netzer, T. Wang, A. Coates, A. Bissacco, B. Wu, and A. Y. Ng, “Reading Digits in Natural Images with Unsupervised Feature Learning,” NIPS Workshop on Deep Learning and Unsupervised Feature Learning, 2011.

