
A SOLUTION FOR AUTOMATED WATER METER READING FROM IMAGES BY APPLYING DEEP LEARNING

About this article

Received: 11/09/23                Revised: 06/11/23                Published: 06/11/23

Authors

1. Pham Xuan Tich Email to author, University of Transport and Communications
2. Nguyen Dinh Duong, University of Transport and Communications

Abstract


In this paper, we propose an automated meter reading (AMR) method for water meters based on deep learning. We design a two-stage method using Rotational Region Convolutional Neural Networks (R2CNN). The first stage uses an R2CNN network to detect the digit region; the second stage applies another R2CNN network to recognize the digits in an image cropped to that alphanumeric region. The recognized digits are then processed and sorted to obtain the meter reading. In most AMR studies, the datasets are not available to the research community because the images belong to service companies; therefore, in this study we created a new dataset and used it to train and test the proposed method. The result is a deep learning pipeline that determines water meter readings from images of the dial face with high accuracy. This pipeline has been integrated into the Citywork software, initially to help managers verify that the readings recorded manually by employees match the readings in the images.
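As a rough illustration (not the authors' implementation), the final post-processing step described above can be sketched as follows: the second-stage recognizer returns (digit, bounding box) detections in arbitrary order, and sorting them left to right by box center assembles the meter reading. The function name and box format here are assumptions for the sketch only.

```python
def assemble_reading(detections):
    """Assemble a meter reading from unordered digit detections.

    detections: list of (digit, (x_min, y_min, x_max, y_max)) tuples,
    as might be produced by a digit-recognition stage.
    """
    # Sort detections by the horizontal center of each bounding box,
    # so digits are read left to right as on the meter dial.
    ordered = sorted(detections, key=lambda d: (d[1][0] + d[1][2]) / 2)
    return "".join(str(d[0]) for d in ordered)

# Example: detections returned out of order by the recognizer.
dets = [(7, (120, 10, 150, 50)), (0, (20, 10, 50, 50)), (3, (70, 10, 100, 50))]
print(assemble_reading(dets))  # -> 037
```

In practice the same idea extends to rotated boxes by sorting on the rotated-box center; the axis-aligned case shown here keeps the sketch minimal.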

Keywords


Automated meter reading; Deep learning; Convolutional neural network; Recurrent neural network; Rotational region CNN

References


[1] L. Neumann and J. Matas, “Real-time scene text localization and recognition,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2012, pp. 3538–3545.

[2] M. Jaderberg, K. Simonyan, A. Vedaldi, and A. Zisserman, “Synthetic data and artificial neural networks for natural scene text recognition,” Computer Science, June 2014, doi: 10.48550/arXiv.1406.2227.

[3] T. Wang, D. J. Wu, A. Coates, and A. Y. Ng, “End-to-end text recognition with convolutional neural networks,” in 21st International Conference on Pattern Recognition (ICPR), 2012, pp. 3304–3308.

[4] A. Bissacco, M. Cummins, Y. Netzer, and H. Neven, “PhotoOCR: Reading text in uncontrolled conditions,” in Proceedings of the IEEE International Conference on Computer Vision, 2013, pp. 785–792.

[5] T. D. Le, D. T. Nguyen, and Q. B. Truong, “Identification of some types of longan (through leaves) using image and deep learning technology,” TNU Journal of Science and Technology, vol. 228, no. 02, pp. 128 – 135, 2023.

[6] Q. T. Nguyen, Q. U. Nguyen, K. P. Phung, M. T. Nguyen, and M. S. Nguyen, “Detecting and measuring environmental disasters based on image segmentation deep learning technique,” TNU Journal of Science and Technology, vol. 227, no. 16, pp. 140–148, 2022.

[7] B. Shi, X. Bai, and C. Yao, “An end-to-end trainable neural network for image-based sequence recognition and its application to scene text recognition,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, pp. 2298-2304, Nov. 2017.

[8] C.-Y. Lee and S. Osindero, “Recursive Recurrent Nets with Attention Modeling for OCR in the Wild,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 2231–2239.

[9] R. Laroca, V. Barroso, M. A. Diniz, G. R. Gonçalves, W. R. Schwartz, and D. Menotti, “Convolutional Neural Networks for Automatic Meter Reading,” Journal of Electronic Imaging, vol. 28, no. 01, pp. 1–14, 2019, doi: 10.1117/1.JEI.28.1.013023.

[10] M. L. W. Concio, F. S. Bernardo, J. M. Opulencia, G. L. Ortiz, and J. R. I. Pedrasa, “Automated Water Meter Reading Through Image Recognition,” in TENCON 2022 - 2022 IEEE Region 10 Conference (TENCON), Nov. 2022, doi: 10.1109/TENCON55691.2022.9977678.

[11] Y. Liang, Y. Liao, S. Li, W. Wu, T. Qiu, and W. Zhang, “Research on water meter reading recognition based on deep learning,” Scientific Reports, vol. 12, Art. no. 12861, 2022, doi: 10.1038/s41598-022-17255-3.

[12] Y. Jiang, X. Zhu, X. Wang, S. Yang, W. Li, H. Wang, P. Fu, and Z. Luo, “R2CNN: Rotational Region CNN for Orientation Robust Scene Text Detection,” Computer Science, June 2017, doi: 10.48550/arXiv.1706.09579.

[13] S. Ren, K. He, R. Girshick, and J. Sun, “Faster R-CNN: Towards real-time object detection with region proposal networks,” Advances in Neural Information Processing Systems, vol. 28, 2015, doi: 10.48550/arXiv.1506.01497.

[14] R. Girshick, J. Donahue, T. Darrell, and J. Malik, “Rich feature hierarchies for accurate object detection and semantic segmentation,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014, doi: 10.48550/arXiv.1311.2524.




DOI: https://doi.org/10.34238/tnu-jst.8741

TNU Journal of Science and Technology
Rooms 408, 409 - Administration Building - Thai Nguyen University
Tan Thinh Ward - Thai Nguyen City
Phone: (+84) 208 3840 288 - E-mail: jst@tnu.edu.vn