*Result*: Automatic face detection based on bidirectional recurrent neural network optimized by improved Ebola optimization search algorithm.
*Further Information*
*Face detection is a multidisciplinary research subject that draws on fundamental computer algorithms, image processing, and pattern recognition. Neural networks, in turn, have been widely developed to address challenges such as feature extraction and pattern detection. The presented study investigates the use of deep neural networks (DNNs) in building face detection systems, introducing a novel optimized deep network for this task. After preprocessing stages for contrast enhancement and data augmentation, the images are fed to a bidirectional recurrent neural network (BRNN). The network is optimized via a novel enhanced version of the Ebola optimization search algorithm to achieve higher accuracy. The suggested procedure is evaluated on the Georgia Tech Face Database (GTFD), and the results indicate that it significantly outperforms comparative methods, attaining an accuracy of 94.3%, a precision of 93.51%, a recall of 94.53%, and an F1-score of 92.47%. Furthermore, the method is resilient to various challenges, achieving an accuracy of 95.6% under occlusions, 96.3% under lighting variations, 94.8% under pose variations, and 92.4% under low-resolution conditions. Simulation results confirm that the suggested technique yields considerably higher accuracy than the comparative approaches.
(© 2024. The Author(s).)*
*Declarations Competing interests The authors declare no competing interests.*
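As a rough illustration of the pipeline the abstract describes (contrast enhancement followed by a bidirectional recurrent classifier), the sketch below assumes PyTorch and OpenCV and treats image rows as the recurrent sequence. The GRU cells, the CLAHE settings, the 64x64 input size, and the 50-class output (GTFD contains 50 subjects) are illustrative choices, not the authors' configuration, and the Ebola-based hyperparameter optimization step is omitted.

```python
# Minimal sketch (not the authors' code): CLAHE contrast enhancement followed by
# a bidirectional recurrent classifier that reads image rows as time steps.
import cv2
import numpy as np
import torch
import torch.nn as nn

class BRNNFaceClassifier(nn.Module):
    """Bidirectional GRU over image rows, with a linear classification head."""
    def __init__(self, img_size=64, hidden=128, n_classes=50):
        super().__init__()
        self.rnn = nn.GRU(input_size=img_size, hidden_size=hidden,
                          num_layers=2, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):             # x: (batch, rows, cols)
        out, _ = self.rnn(x)          # out: (batch, rows, 2 * hidden)
        return self.head(out[:, -1])  # classify from the last time step

def preprocess(gray_uint8, size=64):
    """Contrast enhancement (CLAHE), resize, and scaling to [0, 1]."""
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(gray_uint8)
    resized = cv2.resize(enhanced, (size, size))
    return torch.from_numpy(resized.astype(np.float32) / 255.0)

if __name__ == "__main__":
    model = BRNNFaceClassifier()
    dummy = np.random.randint(0, 256, (150, 150), dtype=np.uint8)  # stand-in grayscale face
    logits = model(preprocess(dummy).unsqueeze(0))                 # add batch dimension
    print(logits.shape)                                            # torch.Size([1, 50])
```

In this reading of the abstract, the improved Ebola optimization search algorithm would be wrapped around a training loop for this model to tune weights or hyperparameters; its update rules are not specified here, so they are not reproduced in the sketch.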