Using Artificial Intelligence for Identity Verification in Police Missions
Subject area: Artificial Intelligence Tools in Software and Data Engineering
Mohammad Yavari 1, Mehdi Hamidi 2
1 - Department of Law, May.C., Islamic Azad University, Maybod, Iran.
2 - Head of the Office of Applied Research, Yazd Provincial Police Command
Extended Abstract
Article type: Research Article
Article history: Received 12 May 2024; Revised 12 July 2024; Accepted 25 Oct. 2024; Published 30 Oct. 2024
Introduction and Problem Statement: Law enforcement agencies are increasingly adopting AI for crime prediction and identity verification. However, the transition from static to dynamic authentication systems—such as facial recognition—faces significant technical, legal, and ethical hurdles. The core problem lies in balancing the efficiency of AI-powered surveillance with the protection of fundamental human rights, as technical inaccuracies and lack of transparency threaten civil liberties.
Research Objective and Questions: This research aims to analyze the dual nature of AI in policing, specifically focusing on how identity verification tools can improve public safety while addressing the challenges of privacy violation and algorithmic bias. It seeks to answer how technical limitations and legal frameworks impact the successful deployment of these technologies.
Methodology: This study is a review article. It synthesizes existing research and international legal frameworks, including the Universal Declaration of Human Rights (UDHR), the International Covenant on Civil and Political Rights (ICCPR), and the General Data Protection Regulation (GDPR), to evaluate the current state of AI implementation in security and law enforcement.
Findings: The study identifies major technical challenges, including high error rates (false positives/negatives) in real-world conditions like low lighting. Legal findings highlight that traditional privacy laws are ill-equipped for the digital age, leading to potential violations of the right to anonymity. Furthermore, AI bias often stems from non-representative training data, resulting in higher error rates for women and minority communities.
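The disparity described above is typically quantified by comparing false positive and false negative rates across demographic groups. A minimal sketch of such a bias audit follows; the data, group labels, and function names are illustrative assumptions, not from the article.

```python
# Sketch: measuring demographic disparity in a face matcher's error rates.
# Each audit record pairs a ground-truth label with the system's decision
# and the subject's demographic group (hypothetical data).

from collections import defaultdict

def error_rates_by_group(records):
    """Return per-group false positive and false negative rates.

    records: iterable of (group, is_true_match, system_said_match) tuples.
    """
    counts = defaultdict(lambda: {"fp": 0, "neg": 0, "fn": 0, "pos": 0})
    for group, truth, decision in records:
        c = counts[group]
        if truth:
            c["pos"] += 1          # genuine match pair
            if not decision:
                c["fn"] += 1       # missed a true match
        else:
            c["neg"] += 1          # impostor pair
            if decision:
                c["fp"] += 1       # false match
    return {
        g: {
            "false_positive_rate": c["fp"] / c["neg"] if c["neg"] else 0.0,
            "false_negative_rate": c["fn"] / c["pos"] if c["pos"] else 0.0,
        }
        for g, c in counts.items()
    }

# Hypothetical audit log: (group, ground truth, system decision)
log = [
    ("A", True, True), ("A", False, False), ("A", False, False), ("A", True, True),
    ("B", True, False), ("B", False, True), ("B", True, True), ("B", False, False),
]
rates = error_rates_by_group(log)
```

On this toy log the matcher makes no errors for group A but misses half of group B's true matches, the kind of gap that motivates the diverse-training-data recommendation below.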
Conclusion: While AI holds immense potential for reducing response times and combating organized crime, its success depends on a delicate balance with ethical values. The research recommends developing robust legal frameworks for biometric data, increasing public transparency, using diverse training datasets to mitigate bias, and educating police forces on the ethical limitations of the technology.
Keywords: Artificial Intelligence, Face Recognition, Privacy, Bias, Surveillance.
Cite this article: M. Yavari, M. Hamidi. (2024). Using Artificial Intelligence for Identity Verification in Police Missions. Journal of Artificial Intelligence Tools in Software and Data Engineering (AITSDE), 2(1), pages.
© M. Yavari, M. Hamidi. Publisher: Yazd Campus (Ya.C.), Islamic Azad University
