Covering Scientific & Technical AI | Monday, October 7, 2024

Is AI an Existential Threat to Humanity’s Future? 

The rise of artificial intelligence (AI) systems has sparked significant concern and anxiety about the implications of the technology, particularly the potential for advanced AI to pose an existential threat. 

According to the Center for AI Safety (CAIS), addressing the potential existential threat posed by AI must be considered a global priority, on par with other significant societal risks like pandemics and nuclear conflict. This perspective reflects a prevalent concern within the AI industry that such risks may start to emerge unless AI technology is strictly regulated on a global scale. 

Several factors have contributed to the belief that AI poses an existential threat, including the hypothetical scenario of AI systems achieving superintelligence and exceeding human cognitive abilities. As AI systems become more autonomous, there are also concerns about the technology making decisions without human oversight. 

A report commissioned by the US State Department warned that the most advanced AI systems could pose an extinction-level threat to humans in a worst-case scenario. The findings of this report were based on interviews with more than 200 individuals, including senior executives from leading AI firms and cybersecurity researchers. 

While these concerns are certainly valid, a groundbreaking study from researchers at the University of Bath and the Technical University of Darmstadt challenges this narrative, finding that AI, at least in the form of today’s large language models, does not pose an existential threat to humanity.

The study was published in the proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024) - the leading international conference focused on natural language processing. 

The findings of the research highlight that while ChatGPT and other large language models (LLMs) can follow instructions and show proficiency in language, they lack the ability to master new skills without explicit instruction. As a result, they remain fundamentally predictable, controllable, and safe. 

As these AI models continue to evolve, they are likely to become more sophisticated at following detailed prompts. However, according to this study, they are unlikely to acquire the complex reasoning skills necessary for autonomous decision-making. 

“The prevailing narrative that this type of AI is a threat to humanity prevents the widespread adoption and development of these technologies and also diverts attention from the genuine issues that require our focus,” said Dr. Harish Tayyar Madabushi, a computer scientist at the University of Bath and co-author of the new study.


The research was primarily based on testing the LLMs on their “emergent abilities”, such as contextual understanding, complex reasoning, and interactive learning. 

The collaborative research team, led by Professor Iryna Gurevych, conducted several experiments to test the emergent abilities of LLMs. This included testing them on tasks they had never encountered before. 

Assessing emergent abilities is challenging because LLMs have a remarkable capacity for in-context learning (ICL): adapting to new situations and producing relevant outputs based on the context they are given. 
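To make the distinction concrete, the sketch below contrasts a zero-shot prompt with a few-shot (ICL) prompt on a simple sentiment task. The model, task, and prompts are illustrative rather than drawn from the study; the small gpt2 model merely stands in for a much larger LLM, so its outputs will be rough.

```python
# Minimal illustration of in-context learning (ICL): the model's weights are
# never updated; it adapts from worked examples placed directly in the prompt.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # stand-in model

# Zero-shot: the task is stated with no worked examples.
zero_shot = "Label the sentiment of this review: 'The film was a waste of time.'\nSentiment:"

# Few-shot (ICL): two worked examples precede the new input.
few_shot = (
    "Review: 'A delightful, moving story.'\nSentiment: positive\n"
    "Review: 'Clumsy plot and wooden acting.'\nSentiment: negative\n"
    "Review: 'The film was a waste of time.'\nSentiment:"
)

for prompt in (zero_shot, few_shot):
    out = generator(prompt, max_new_tokens=2, do_sample=False)
    print(repr(out[0]["generated_text"][len(prompt):]))  # continuation only
```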

One of the key goals of the study was to determine which abilities genuinely emerge without ICL and whether functional linguistic abilities in instruction-tuned models originate from ICL rather than being intrinsic.
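A hypothetical sketch of that attribution logic follows, using made-up thresholds rather than the study’s actual criteria: an ability counts as intrinsic only if it shows up without any in-context examples.

```python
# Illustrative (not the study's harness): classify where an apparent ability
# comes from by comparing zero-shot and few-shot scores against a baseline.
def attribute_ability(zero_shot: float, few_shot: float,
                      baseline: float, margin: float = 0.05) -> str:
    if zero_shot > baseline + margin:
        return "intrinsic"            # present with no in-context examples
    if few_shot > baseline + margin:
        return "in-context learning"  # appears only once examples are given
    return "absent"

# Made-up numbers: random guessing scores 0.50 on a binary task.
print(attribute_ability(zero_shot=0.51, few_shot=0.83, baseline=0.50))
# -> in-context learning: the "ability" is driven by the prompt examples
```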

The study evaluated 20 models across 22 tasks using two settings and multiple metrics, including bias tests and manual analysis. Four model families - GPT, T5, Falcon2, and LLaMA - were chosen based on their known abilities and performance. 
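For a sense of scale, here is a sketch of what such an evaluation grid might look like; the model and task names below are placeholders, not the study’s exact lineup, and run_eval is a stand-in for a real scoring call.

```python
# Illustrative evaluation grid: every model is scored on every task in both
# settings, and the results are collected for metrics and manual analysis.
from itertools import product

models = [f"model_{i:02d}" for i in range(20)]    # placeholder names
tasks = [f"task_{i:02d}" for i in range(22)]      # placeholder names
settings = ["zero-shot", "few-shot"]

def run_eval(model: str, task: str, setting: str) -> float:
    return 0.0  # placeholder for the real scoring call

results = {(m, t, s): run_eval(m, t, s)
           for m, t, s in product(models, tasks, settings)}
print(len(results), "evaluations")  # 20 models x 22 tasks x 2 settings = 880
```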

The findings revealed that the LLMs’ abilities mainly come from ICL rather than from genuinely “learning” new information beyond the examples provided to them. This highlights the crucial difference between following instructions and possessing the reasoning abilities required to become completely autonomous or achieve superintelligence. The researchers concluded that LLMs lack emergent complex reasoning abilities. 

While the study claims to put existential threat fears to rest, the researchers warn that the findings do not mean AI poses no threat at all. Instead, the research demonstrates that “claims about the emergence of complex thinking skills related to specific dangers lack evidence”. According to the researchers, further experiments are needed to understand the other risks posed by AI models.

AIwire