AI researchers claim that large language models like ChatGPT have critical vulnerabilities.


Large Language Models (LLMs) such as OpenAI’s ChatGPT and Google Bard have gained significant popularity since 2022, leading to substantial investments in their development and sparking an AI race. These AI tools are commonly integrated into chatbots and utilize the vast amount of information available on the internet to generate responses to user prompts.

However, researchers from AI security startup Mindgard and Lancaster University have issued a warning about the potential vulnerabilities of LLMs. They have discovered that certain sections of these models can be replicated within a matter of days and at a cost of only $50. The implications of this replication include targeted attacks, exposure of confidential data, bypassing of security measures, providing inaccurate responses, and enabling further focused attacks.

TechXplore reports that these findings will be presented at CAMLIS 2023, the Conference on Applied Machine Learning for Information Security. The researchers’ work demonstrates the feasibility of cost-effective replication of critical aspects of existing LLMs.

In their study, the researchers employed a tactic known as “model leeching” to understand the functioning of LLMs. They focused on ChatGPT-3.5-Turbo and successfully replicated key elements of the model in a copy one hundred times smaller. This replicated model served as a testing ground for probing ChatGPT itself, raising the success rate of attacks against the original model by 11%.
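The article does not detail the researchers' actual method, but the general pattern of a model-extraction attack like “model leeching” can be sketched in three steps: query the target as a black box, harvest the prompt/response pairs, and train a much smaller surrogate on them. The toy Python below illustrates only that pattern; the `target_model`, its behavior, and all function names are invented stand-ins, not the systems used in the study.

```python
# Toy sketch of a "model leeching"-style extraction attack.
# Everything here is an illustrative stand-in, not the study's method.

def target_model(prompt):
    """Black-box 'model': the attacker only observes outputs."""
    # Stand-in behavior: label a prompt as a question or a statement.
    return "question" if prompt.strip().endswith("?") else "statement"

def harvest(prompts):
    """Step 1: query the target and record prompt/response pairs."""
    return [(p, target_model(p)) for p in prompts]

def train_surrogate(dataset):
    """Step 2: fit a far smaller surrogate from the harvested pairs.
    Here, 'training' is just learning a last-character cue."""
    cues = {}
    for prompt, label in dataset:
        cues.setdefault(prompt.strip()[-1], []).append(label)
    # Keep the majority label observed for each cue character.
    return {c: max(set(labels), key=labels.count) for c, labels in cues.items()}

def surrogate_predict(model, prompt, default="statement"):
    """Step 3: use the cheap replica in place of the target."""
    return model.get(prompt.strip()[-1], default)

prompts = ["What time is it?", "The sky is blue.", "Is this safe?", "Models can leak."]
surrogate = train_surrogate(harvest(prompts))
print(surrogate_predict(surrogate, "Could this be copied?"))  # -> question
```

The point of the sketch is that the attacker never needs the target's weights: enough query/response pairs let a small replica mimic the behavior closely enough to probe for weaknesses offline.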

Dr. Peter Garraghan, a professor at Lancaster University and CEO of Mindgard, expressed both fascination and concern about the discovery. He emphasized that the work demonstrates how security vulnerabilities can transfer between closed-source and open-source machine learning models, which is worrying given the widespread reliance on publicly available models.

The research also highlights the existence of latent weaknesses within AI technologies, which may be shared across different models. While businesses are eager to invest in their own LLMs for various applications, the researchers emphasize the importance of recognizing and addressing the associated cyber risks.

Garraghan stressed the need for careful consideration of cyber risks when adopting and deploying LLM technology. Despite its transformative potential, understanding and measuring these risks are vital for businesses and scientists alike.

The researchers’ findings shed light on the potential vulnerabilities within LLMs and the need for enhanced security measures. As the popularity of these models continues to grow, it is crucial to address these concerns to ensure the safe and responsible use of AI technology.

(Picture Source: Leon Neal/Getty Images)

ⓒ 2023 TECHTIMES.com All rights reserved. Do not reproduce without permission.