## ChatGPT Discreetness Efforts Riddled With Flaws as Researchers Reveal Data Extraction Via Simple Trick

Researchers have come together to study ChatGPT and their findings reveal that a cost-effective trick can retrieve real user contact information from the AI chatbot, uncovering real phone numbers and email addresses of certain individuals and companies, something that the AI promised to safeguard.

The researchers, coming from Google DeepMind, Cornell, Carnegie Mellon University, ETH Zurich, The University of Washington, and the University of California Berkeley, shared their latest findings in a new published study. They managed to extract ChatGPT’s training data, revealing that large language models powering AI obtain user data from the internet without consent.

Several incidents involving the chatbot’s willingness to divulge user data further prove its vulnerability. The researchers were able to prompt ChatGPT to reveal real phone numbers and email addresses by using simple word repetition commands, leading to privacy and security concerns regarding its capabilities.

Despite OpenAI claims of patching the vulnerability, recent reports say that this issue remains unresolved, casting doubts on the AI’s discretion.

OpenAI has been in the hot seat due to privacy concerns and unauthorized access to user data online. Despite promises to not use paying customer data further, privacy and security remain as top concerns against ChatGPT, OpenAI, and the entire AI industry. The revelation from researchers stresses the need for developers to strengthen the AI against potential vulnerabilities.

Furthermore, there were lingering fears of ChatGPT being potent enough to create malware, further exposing users’ crucial data to potential malicious attacks.

The COVID-19 pandemic has largely accelerated discussions about AI usage and privacy concerns, and with the rapid spread of information online, prioritizing data safety and privacy is crucial for developers and users alike. Researchers’ uncovering of the AI’s susceptibility to simple attacks offers a clarion call for re-assessment and development of robust methods to protect sensitive data within AI systems.

## Conclusion
ChatGPT’s once-promised discreetness has been put in question as researchers successfully extracted real user contact information and other data, highlighting the urgency for reinforcing privacy and data security measures within AI systems. This discovery underscores the need for oversight and strict protocols to safeguard user privacy and prevent potentially harmful information exposure.

By admin

I have over 10 years of experience in the cryptocurrency industry and I have been on the list of the top authors on LinkedIn for the past 5 years. I have a wealth of knowledge to share with my readers, and my goal is to help them navigate the ever-changing world of cryptocurrencies.