A new study from the University of Bath and the Technical University of Darmstadt has concluded that large language models (LLMs) like ChatGPT do not pose an existential threat to humanity.
The research, presented at the 62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024), indicates that these models cannot learn independently or acquire new skills without explicit instruction, making them inherently controllable and predictable.
The study highlights that while LLMs excel in language proficiency and can follow instructions, they lack the ability to master new skills autonomously.
This finding challenges the prevailing narrative that LLMs could develop complex reasoning skills and pose a threat to humanity.
Dr. Harish Tayyar Madabushi, a computer scientist at the University of Bath and co-author of the study, emphasised that concerns about LLMs acquiring hazardous abilities like reasoning and planning are unfounded.
The research team, led by Professor Iryna Gurevych, conducted experiments to test the ability of LLMs to complete tasks they had never encountered before, the so-called emergent abilities.
They discovered that LLMs’ capabilities are largely due to a process called in-context learning (ICL), where models perform tasks based on examples provided to them. This ability, combined with their instruction-following skills and linguistic proficiency, accounts for both their strengths and limitations.
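To make the idea concrete, here is a minimal sketch of what in-context learning looks like in practice: rather than the model acquiring a new skill, a few worked examples are placed directly in the prompt and the model is asked to continue the pattern. The sentiment-labelling task and example reviews below are hypothetical, chosen purely to illustrate the prompting pattern the researchers describe, and are not drawn from the study itself.

```python
# Hypothetical illustration of in-context learning (ICL):
# the "skill" comes from examples supplied in the prompt, not from training.

few_shot_examples = [
    ("The film was a delight from start to finish.", "positive"),
    ("I regret buying this blender; it broke in a week.", "negative"),
]

new_input = "The hotel staff were friendly and the room was spotless."

# Build the prompt: each example pairs an input with its label,
# and the final line leaves the label blank for the model to fill in.
prompt_lines = ["Label each review as positive or negative.", ""]
for text, label in few_shot_examples:
    prompt_lines.append(f"Review: {text}")
    prompt_lines.append(f"Sentiment: {label}")
    prompt_lines.append("")
prompt_lines.append(f"Review: {new_input}")
prompt_lines.append("Sentiment:")

prompt = "\n".join(prompt_lines)
print(prompt)  # This string would then be sent to an LLM of your choice.
```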
While finding no existential threat, the study acknowledges the potential for misuse of LLMs, such as generating fake news or facilitating fraud. Dr. Tayyar Madabushi cautioned against enacting regulations based on perceived existential threats, instead urging focus on addressing the real risks associated with AI misuse.
Professor Gurevych added that while AI does not pose a threat in terms of emergent complex thinking, it is crucial to control the learning process of LLMs and focus future research on other potential risks.
The study suggests that users should provide explicit instructions and examples to LLMs for complex tasks, as relying on them for advanced reasoning without guidance is likely to result in errors.
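As a rough illustration of that advice, the sketch below contrasts an unguided request with one that spells out the instructions and includes a worked example. The date-conversion task is a made-up example used only to show the difference in prompting style, not a task from the study.

```python
# Hypothetical contrast between an unguided prompt and a guided one.

# Likely to produce inconsistent results: no instructions, no examples.
unguided_prompt = "Convert: 3rd of March 2021"

# More reliable: an explicit instruction plus a worked example in the prompt.
guided_prompt = (
    "Convert each date to ISO 8601 (YYYY-MM-DD) format.\n"
    "Date: 21st of July 1969 -> 1969-07-21\n"
    "Date: 3rd of March 2021 -> "
)

print(guided_prompt)  # The instruction and example constrain the model's output.
```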
Overall, the research provides a clearer understanding of LLMs’ capabilities and limitations, encouraging the continued development and deployment of these technologies without undue fear of existential risks.