Exploring the Ethics and Risks of Artificial Intelligence in Automated Decision-Making
Keywords:
Automated Decision-Making, AI Ethics, Algorithmic Bias, Explainable AI (XAI), Human-in-the-Loop Oversight, Risk Assessment Framework

Abstract
The transformation of critical sectors such as law enforcement, healthcare, and finance is now strongly shaped by the adoption of Artificial Intelligence (AI) in Automated Decision-Making (ADM) mechanisms. However, the growing autonomy of these systems raises new concerns about algorithmic opacity, the so-called "black box" phenomenon, which poses a serious obstacle to guaranteeing accountability, transparency, and the protection of fundamental human rights amid large-scale automation. This study examines the ethical consequences and technical risks that accompany ADM systems, focusing on how algorithmic bias and limited interpretability affect social justice and individual liberty. The study applies a qualitative approach, reviewing 45 scholarly articles and case studies published between 2020 and 2026 and analyzing central themes around global ethical standards and accountability structures in algorithmic deployment. The data show that a majority of the ADM systems examined (around 70%) contain "historical bias" originating in their training data, producing discriminatory outcomes for marginalized communities. The review also identifies a substantial regulatory gap: contemporary legal systems are not yet able to define who is liable for errors produced by automated decisions. The study concludes that the efficiency ADM offers has not been matched by ethical maturity, so human oversight remains indispensable. Implementing Explainable AI (XAI) and mandatory algorithm audits are crucial measures for mitigating these risks. Going forward, internationally standardized regulation governing legal responsibility in the AI ecosystem is needed.
License
Copyright (c) 2026 Suro Jalil, Waliyur Rohman (Authors)

This article is licensed under a Creative Commons Attribution 4.0 International License.
