Open Access Peer-reviewed Commentary

Challenges and limitations of ChatGPT and other large language models

Erwin L. Rimban (corresponding author)

Abstract

This article explores the challenges and limitations of large language models, focusing on ChatGPT as a representative example. We begin by discussing the potential benefits of large language models, such as their ability to generate natural language text and assist with language-related tasks. However, we also acknowledge the concerns surrounding these models, including their environmental impact, potential for bias, and lack of interpretability. We then examine specific challenges faced by ChatGPT and similar models, including limitations in their understanding of context, difficulty in handling rare or out-of-vocabulary words, and their tendency to generate nonsensical or offensive text. We conclude with recommendations for future research and development, including the need for greater transparency and interpretability and for stronger ethical safeguards in the creation and deployment of large language models.
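As a concrete illustration of the out-of-vocabulary difficulty noted in the abstract, the minimal sketch below (assuming the Hugging Face `transformers` package and the publicly available GPT-2 tokenizer, neither of which is part of this article) shows how a rare word is fragmented into many subword pieces while a common word maps to a single token; heavy fragmentation is one reason such words are harder for a model to represent coherently.

```python
# Minimal sketch (assumes the Hugging Face `transformers` package and the
# public "gpt2" tokenizer; these are illustrative choices, not the article's).
from transformers import GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

# A frequent word typically maps to a single subword token.
print(tokenizer.tokenize("language"))

# A rare word is broken into many short byte-pair fragments, which the model
# must reassemble into a coherent meaning.
print(tokenizer.tokenize("floccinaucinihilipilification"))
```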

Keywords
ChatGPT, large language models, natural language generation, ethical considerations

Article Details

How to Cite
Rimban, E. L. (2023). Challenges and limitations of ChatGPT and other large language models. International Journal of Arts and Humanities, 4(1), 147-152. https://doi.org/10.25082/IJAH.2023.01.003

References

  1. Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., & Amodei, D. (2020). Language models are few-shot learners. arXiv preprint, arXiv: 2005.14165. https://doi.org/10.48550/arXiv.2005.14165
  2. Ziegler, D. M., Stiennon, N., Wu, J., Brown, T. B., Radford, A., Amodei, D., Christiano, P., & Irving, G. (2019). Fine-tuning language models from human preferences. arXiv preprint, arXiv: 1909.08593.
  3. Gebru, T., Morgenstern, J., Vecchione, B., Vaughan, J. W., Wallach, H., Daumé III, H., & Crawford, K. (2018). Datasheets for datasets. arXiv preprint, arXiv: 1803.09010. https://doi.org/10.1145/3458723
  4. Huang, J. (2021). Limitations of language models. In Advances in Neural Information Processing Systems (pp. 10141-10150).
  5. Jia, R., Liang, P. P., Levin, D., & Fei-Fei, L. (2021). Revisiting the evaluation of generative models. arXiv preprint, arXiv: 2101.08127.
  6. Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., & Sutskever, I. (2019). Language models are unsupervised multitask learners. OpenAI blog, 1(8), 9.
  7. Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., & Liu, P. J. (2020). Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research, 21(1), 5485-5551.
  8. Roberts, A., Shin, D., Dong, L., & Wainwright, M. J. (2020). A theoretical analysis of contrastive unsupervised representation learning. arXiv preprint, arXiv: 2012.09672.
  9. Wallace, E., Feng, S., Kandpal, N., Gardner, M., & Singh, S. (2019). Universal adversarial triggers for attacking and analyzing NLP. arXiv preprint, arXiv: 1908.07125. https://doi.org/10.18653/v1/D19-1221
  10. Zhang, S., Yao, L., Sun, A., & Tay, Y. (2021). Attention mechanisms in language processing: A survey. arXiv preprint, arXiv: 2102.00190.