Applying Artificial Intelligence to Text Analysis to Improve Interaction in Games Based on Natural Language Processing
Keywords:
Artificial Intelligence, Natural Language Processing, Text Analysis, Game Interaction, Non-Player Character (NPC), Network Security

Abstract
The growth of the video game industry has driven increasingly immersive and realistic play experiences, yet Non-Player Character (NPC) dialogue systems remain a constraint: they are typically static and decision-tree based, which limits the flexibility of player interaction. This study aims to implement Artificial Intelligence (AI), specifically Natural Language Processing (NLP), to analyze players' text input and produce more dynamic, context-aware interaction. The method integrates NLP models for intent classification and sentiment analysis so that the system can understand both the intent and the emotion expressed in a player's natural language. The system was developed using the System Development Life Cycle (SDLC) approach and implemented as an interactive game prototype, together with an evaluation of network latency performance (Quality of Service) and a firewall-based security system on a cloud server architecture. Test results show that the NLP model interprets sentence structure with high accuracy and that data exchange remains stable and secure, allowing NPCs to give adaptive, varied, and context-appropriate responses. Applying AI to text analysis thus proves effective in overcoming the limitations of static dialogue, increasing player engagement, and creating a more responsive interaction experience in virtual environments.
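The pipeline described above (player text in, intent and sentiment out, adaptive NPC reply back) can be sketched as follows. This is a minimal illustrative stand-in: the keyword rules, intent labels, and reply strings below are assumptions for demonstration only and do not reflect the actual NLP model or data used in the study.

```python
# Sketch of the abstract's pipeline: classify a player's intent and
# sentiment from free text, then pick an NPC response that adapts to both.
# All keyword lists, labels, and replies here are illustrative assumptions.

INTENT_KEYWORDS = {
    "greet": {"hello", "hi", "greetings"},
    "trade": {"buy", "sell", "trade", "price"},
    "quest": {"quest", "task", "mission", "help"},
}

POSITIVE = {"please", "thanks", "great", "good"}
NEGATIVE = {"stupid", "hate", "bad", "annoying"}

def classify(text: str) -> tuple:
    """Return (intent, sentiment) for a raw player utterance."""
    words = set(text.lower().split())
    # Intent: first intent whose keyword set overlaps the utterance.
    intent = next(
        (name for name, kws in INTENT_KEYWORDS.items() if words & kws),
        "unknown",
    )
    # Sentiment: crude lexicon score (positive hits minus negative hits).
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    sentiment = "positive" if score > 0 else "negative" if score < 0 else "neutral"
    return intent, sentiment

def npc_response(text: str) -> str:
    """Choose an NPC reply that adapts to both intent and sentiment."""
    intent, sentiment = classify(text)
    replies = {
        "greet": "Well met, traveler!",
        "trade": "Take a look at my wares.",
        "quest": "There is a task that needs doing...",
        "unknown": "I'm not sure I follow.",
    }
    # A negative tone changes the NPC's delivery, not just its content.
    prefix = "Hmph. " if sentiment == "negative" else ""
    return prefix + replies[intent]
```

In the study itself, the keyword matching above would be replaced by trained intent-classification and sentiment-analysis models, with the response step served from a cloud backend behind a firewall.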
License
Copyright (c) 2026 Moh. Rafael Kamil Ardiansyah, Nabila Ambarwati (Authors)

This work is licensed under a Creative Commons Attribution 4.0 International License.