News

Global CEOs Highlight Cybersecurity Concerns with Generative AI, PwC Survey Reveals

In a recent global survey conducted by PricewaterhouseCoopers (PwC), 64% of CEOs identified cybersecurity as their primary concern regarding the risks associated with generative AI. This apprehension is set against a backdrop of escalating cyberattacks, with a McKinsey report projecting annual damages from such attacks to reach approximately $10.5 trillion by 2025, a threefold increase from 2015.

The PwC survey, which polled nearly 5,000 CEOs between October and November 2023, also revealed that over half of these business leaders expect generative AI to amplify the spread of misinformation within their organizations. These concerns emerge as numerous companies rapidly integrate generative AI into new product offerings. The PwC report highlights the critical societal responsibility CEOs bear in ensuring responsible AI use within their organizations.

OpenAI’s Initiatives to Mitigate Negative Impacts of Generative AI

In response to these concerns, OpenAI, a frontrunner in generative AI technology, announced several projects aimed at countering the technology’s harmful effects. At the World Economic Forum in Davos, the company’s vice president of global affairs detailed collaborations with the US Defense Department to develop open-source cybersecurity software.

Additionally, OpenAI outlined its strategies for addressing the influence of AI on elections, with numerous voting events scheduled around the world this year. The company’s image generator, DALL-E, is designed with safeguards to refuse requests to generate images of real people, including political figures. OpenAI also prohibits applications of its technology that could discourage people from voting.

To enhance transparency in AI-generated content, OpenAI plans to introduce features enabling users to identify the tools used in image production. Furthermore, ChatGPT, another OpenAI product, will soon incorporate real-time news with proper attribution and links, aiding users in discerning the origins and reliability of information.

AI in Political Campaigns and Regulation

The use of AI in political campaigns is already evident: AI-generated songs featuring India’s Prime Minister Narendra Modi have gained popularity ahead of the country’s elections. However, the rise of deepfakes, AI-generated content crafted to mislead, has raised concerns, particularly around elections. In response, tech giants like Google, Meta, and TikTok now mandate labels on election-related advertisements that use AI.

In the United States, several states, including California, Michigan, Minnesota, Texas, and Washington, have enacted laws either banning political deepfakes or requiring their disclosure, reflecting a growing regulatory movement to address the challenges that advanced AI technologies pose to political processes.

This global focus on the implications of generative AI underscores the need for a balanced approach, where the benefits of innovation are harnessed while mitigating risks and ensuring ethical use, especially in sensitive areas like cybersecurity and political discourse.