Researchers Find GPT-4 Can Conduct Cyberattacks Without Human Intervention
A recent study has revealed that the developer version of a leading artificial intelligence model, known for its advanced language processing capabilities, can autonomously carry out cyberattacks, including hacking websites and extracting data from online databases. The finding marks a significant shift in the cybersecurity landscape, suggesting that individuals or groups without technical hacking skills could potentially deploy AI to execute cyberattacks.
The research evaluated the hacking capabilities of several AI models, including the most advanced versions offered by a well-known AI organization as well as openly available models. These models, typically used to generate human-like text responses, were adapted to interact with web browsers, digest documentation on hacking techniques, and apply those techniques in real-world scenarios.
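To make that setup concrete, the sketch below shows the general shape of such agent scaffolding: a chat model is given a tool it can call, and a loop feeds tool results back to the model until it answers. This is an illustrative assumption, not the researchers' actual harness; the OpenAI-style tool-calling API is used as a stand-in, and the model name, prompt, and fetch_page helper are hypothetical. The example only retrieves a benign page and contains no attack logic.

```python
# Minimal sketch of LLM agent scaffolding: a chat model with one browsing
# tool it may call in a loop. Illustrative only; not the study's harness.
import json
import requests
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def fetch_page(url: str) -> str:
    """Hypothetical 'browser' tool: return the raw text of a web page."""
    return requests.get(url, timeout=10).text[:4000]  # truncate to fit context

# Describe the tool so the model knows it can request page fetches.
tools = [{
    "type": "function",
    "function": {
        "name": "fetch_page",
        "description": "Fetch a web page and return its contents as text.",
        "parameters": {
            "type": "object",
            "properties": {"url": {"type": "string"}},
            "required": ["url"],
        },
    },
}]

messages = [{"role": "user", "content": "Summarize the page at https://example.com"}]

# Agent loop: let the model call the tool until it replies in plain text.
for _ in range(5):  # cap iterations so the loop always terminates
    response = client.chat.completions.create(
        model="gpt-4",  # assumed model name for illustration
        messages=messages,
        tools=tools,
    )
    message = response.choices[0].message
    if not message.tool_calls:
        print(message.content)  # final answer, no further tool use requested
        break
    messages.append(message)  # keep the tool request in the transcript
    for call in message.tool_calls:
        url = json.loads(call.function.arguments)["url"]
        messages.append({
            "role": "tool",
            "tool_call_id": call.id,
            "content": fetch_page(url),
        })
```

The essential design point, as described in the study, is that the model itself decides when and how to use the tool; the surrounding code merely executes its requests and returns the results.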
The AI models were tasked with a series of hacking challenges of varying complexity, from simple database breaches using standard techniques to more sophisticated exploits involving web programming languages. The most advanced model proved remarkably proficient, succeeding at a majority of the challenges and even uncovering a previously unknown vulnerability in a live website.
The study also highlighted the cost efficiency of using AI for hacking, estimating the operational cost to be significantly lower than that of hiring experienced cybersecurity professionals.
Following the study, details emerged of efforts by leading AI and technology firms to combat the misuse of AI by malicious actors, including state-affiliated hackers. These efforts aimed to prevent AI from being used to enhance malware or conduct intelligence operations. Even so, the study's findings stand in stark contrast to earlier assurances from these companies, suggesting that AI's potential to aid cyberattacks may have been underestimated.
The discrepancy between the independent research findings and the companies’ statements has prompted calls for further independent evaluation of AI technologies. Experts argue that such assessments are crucial for understanding the real-world implications of AI and for developing strategies to mitigate potential harms, ensuring the responsible use of AI in cybersecurity and beyond.