Facial recognition of masked people using MediaPipe Facemesh and deep learning algorithms
Subject Areas: Information Technology
Mansour Hesabi Moghaddam 1, Hamid Reza Ghaffary 2, Mahdi Khazaiepoor 3
1 - Computer Engineering Department, Birjand Branch, Islamic Azad University, Birjand, Iran
2 - Computer Engineering Department, Ferdos Branch, Islamic Azad University, Khorasan, Iran
3 - Computer Engineering Department, Birjand Branch, Islamic Azad University, Birjand, Iran
Keywords: Face Recognition, Mask, MediaPipe, Deep Learning
Abstract:
In response to the fundamental need for accurate recognition of masked faces, this paper presents an approach that combines parallel two-stage deep learning methods with hybrid meta-heuristic algorithms. The challenges associated with masked-face recognition are addressed through a comprehensive framework that leverages state-of-the-art technologies and diverse inputs. The method employs a parallel algorithmic strategy that optimizes the detection of faces both with and without masks: a dedicated algorithm handles unmasked faces, while a separate algorithm handles masked faces. In addition, integrating multiple data sources, including masked-face images and readings from temperature sensors, increases recognition accuracy. The main focus of this research is data clustering: datasets are organized by volume, classification is then performed with a proposed convolutional neural network, duplicate features are removed from each cluster, and parallel post-processing is carried out by the separate algorithms. Two hybrid algorithms are introduced in this study, and as data volume grows, additional algorithms can be inserted easily, providing scalability and increasing accuracy. This approach demonstrates the ability to significantly improve the accuracy and efficiency of complex facial recognition systems and addresses an important need in fields ranging from security to public health and beyond. As technology advances and research in this area progresses, further improvements in the accuracy of masked-face recognition remain promising.
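The routing step described in the abstract (one algorithm for unmasked faces, a separate one for masked faces, with MediaPipe FaceMesh supplying facial landmarks) can be illustrated with the short Python sketch below. This is a minimal sketch under stated assumptions, not the authors' implementation: the looks_masked heuristic, the masked_branch and unmasked_branch callables, and the numeric thresholds are hypothetical placeholders introduced only for illustration; the MediaPipe FaceMesh calls follow the library's documented legacy "solutions" API.

# Hypothetical sketch of the two-branch masked/unmasked routing; not the paper's code.
import cv2
import mediapipe as mp
import numpy as np

mp_face_mesh = mp.solutions.face_mesh


def extract_landmarks(image_bgr):
    """Return the 468 normalized FaceMesh landmarks as an (N, 3) array, or None."""
    with mp_face_mesh.FaceMesh(static_image_mode=True,
                               max_num_faces=1,
                               min_detection_confidence=0.5) as mesh:
        result = mesh.process(cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB))
    if not result.multi_face_landmarks:
        return None
    return np.array([[p.x, p.y, p.z]
                     for p in result.multi_face_landmarks[0].landmark],
                    dtype=np.float32)


def looks_masked(image_bgr, landmarks):
    """Stand-in mask test: low colour variation over the lower half of the face.

    The paper does not specify its mask detector; this heuristic only makes the
    routing below runnable.
    """
    h, w = image_bgr.shape[:2]
    lower = landmarks[landmarks[:, 1] > np.median(landmarks[:, 1])]  # lower-face points
    xs = np.clip((lower[:, 0] * w).astype(int), 0, w - 1)
    ys = np.clip((lower[:, 1] * h).astype(int), 0, h - 1)
    return image_bgr[ys, xs].std() < 25.0  # arbitrary illustrative threshold


def recognize(image_bgr, masked_branch, unmasked_branch):
    """Route the face to the masked or unmasked recognition branch."""
    landmarks = extract_landmarks(image_bgr)
    if landmarks is None:
        return None  # no face detected
    branch = masked_branch if looks_masked(image_bgr, landmarks) else unmasked_branch
    return branch(landmarks)  # each branch maps landmark features to an identity

In the full system described by the abstract, the two branches would be the proposed convolutional networks trained separately on masked and unmasked data, and the temperature-sensor readings would enter as an additional input at the classification stage.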