Impact of biases on AI-based computer software
DOI: https://doi.org/10.33936/isrtic.v9i1.7406
Keywords: biases, algorithmic fairness, computer systems, machine learning, artificial intelligence
Abstract
The objective of this research is to analyze biases and their impact in fields such as personnel selection, facial recognition, recidivism prediction, medical diagnosis, and credit evaluation. The presence of bias degrades the effectiveness and accuracy of AI-based applications and thereby exacerbates social inequalities. To this end, the PRISMA method was used to search for theoretical contributions, filter them against predefined inclusion criteria, and select the relevant scientific articles. The findings of the study reveal the following: facial recognition applications exhibit racial bias stemming from unbalanced training data; recruitment and recidivism prediction systems are prone to gender and racial errors because the complexity of their algorithms hinders scrutiny and understanding; and medical analysis applications deliver faulty clinical diagnoses because inappropriate techniques disadvantage certain groups of patients. These findings demonstrate the intricate relationship between data selection, algorithm design, and the ethical principles that should govern the deployment of AI-based applications. Future scientific contributions should investigate performance metrics on both balanced and unbalanced data, together with the techniques needed to correct the inequalities present in those data volumes.
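To make the closing recommendation concrete, the following minimal Python sketch (not part of the article; the function names, variable names, and toy data are illustrative assumptions) shows one way group-level performance metrics and a naive rebalancing step might be computed with NumPy.

# Hypothetical sketch (assumption: all names and toy data are invented
# for illustration) of comparing performance metrics across demographic
# groups, as the abstract's closing recommendation suggests.
import numpy as np

def group_rates(y_true, y_pred, group):
    """Per-group selection rate P(pred=1 | g) and true-positive rate."""
    rates = {}
    for g in np.unique(group):
        mask = group == g
        positives = mask & (y_true == 1)
        rates[g] = {
            "selection_rate": y_pred[mask].mean(),
            "tpr": y_pred[positives].mean() if positives.any() else float("nan"),
        }
    return rates

def disparate_impact(rates):
    """Smallest-to-largest selection-rate ratio; 1.0 means parity."""
    sr = [r["selection_rate"] for r in rates.values()]
    return min(sr) / max(sr)

# Toy unbalanced data: group "b" is under-represented, mimicking the
# unbalanced training sets the review identifies as a source of bias.
rng = np.random.default_rng(0)
group = np.array(["a"] * 800 + ["b"] * 200)
y_true = rng.integers(0, 2, size=1000)
# The model is accurate for the majority group but nearly blind for "b".
y_pred = np.where(group == "a", y_true, (rng.random(1000) < 0.2).astype(int))

rates = group_rates(y_true, y_pred, group)
print(rates)
print("disparate impact:", disparate_impact(rates))

# One naive correction for unequal data volumes: oversample group "b"
# until it matches the majority count, then retrain on balanced_idx.
idx_b = np.flatnonzero(group == "b")
extra = rng.choice(idx_b, size=800 - idx_b.size, replace=True)
balanced_idx = np.concatenate([np.arange(group.size), extra])

On this toy data the disparate-impact ratio falls well below the 0.8 "four-fifths" threshold commonly used in fairness audits; real mitigation would require domain-appropriate metrics and validation rather than simple oversampling.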
License
Copyright (c) 2025 Freddy Aníbal Jumbo Castillo, Johnny Paul Novillo Vicuña, Camilly Yuliana Pacheco Ordoñez, Joselyn Franco Avila

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Articles submitted to this journal for publication will be released for open access under a Creative Commons Attribution Non-Commercial No Derivative Works licence (http://creativecommons.org/licenses/by-nc-nd/4.0).
The authors retain copyright and are therefore free to share, copy, distribute, perform, and publicly communicate the work under the following conditions: credit must be given to the author and any changes indicated (in any reasonable manner, but not in a way that suggests the author endorses the use of the work); the work may not be used for commercial purposes; and if the work is remixed, transformed, or built upon, the modified material may not be distributed.