Exploring the Ethics and Risks of Artificial Intelligence in Automated Decision Making
Keywords:
Automated Decision Making, Artificial Intelligence Ethics, Algorithmic Bias, Explainable AI (XAI), Human-in-the-Loop, Risk Assessment Framework

Abstract
The adoption of Artificial Intelligence (AI) in Automated Decision-Making (ADM) systems is transforming crucial sectors such as law enforcement, healthcare, and finance. However, the growing autonomy of these systems raises new concerns about algorithmic opacity, the so-called "black box" phenomenon, which poses a serious obstacle to accountability, transparency, and the protection of basic human rights amid large-scale automation. This research examines the ethical consequences and technical risks that accompany ADM systems, focusing on how algorithmic bias and the lack of system interpretability affect the principles of social justice and individual freedom. The study applies a qualitative approach through a literature review of 45 scientific articles and case studies published between 2020 and 2026, dissecting central themes related to global ethical standards and accountability structures in the use of algorithms. The data show that the majority of the ADM systems studied (around 70%) contain "historical bias" derived from training data, triggering discriminatory outcomes against marginalized communities. Furthermore, a substantial regulatory gap was identified: contemporary legal systems cannot clearly define who should be held accountable for errors resulting from automated decisions. The study concludes that the efficiency offered by ADM has not been matched by ethical maturity, so human oversight remains essential. The implementation of Explainable AI (XAI) and mandatory algorithm audits are crucial measures to mitigate these risks, and international regulatory standardization governing legal liability in the artificial intelligence ecosystem is needed going forward.
License
Copyright (c) 2026 Suro Jalil, Waliyur Rohman (Authors)

This work is licensed under a Creative Commons Attribution 4.0 International License.