A new government report warns that by 2025 artificial intelligence (AI) could not only increase the risk of cyber-attacks but also undermine trust in online content.

Additionally, the report warns that AI might assist in planning biological or chemical attacks by terrorists. However, some experts question whether AI will evolve as predicted.

On Thursday, Prime Minister Rishi Sunak is expected to address the opportunities and challenges posed by this technology. The government report focuses on generative AI, the type of system that currently powers popular chatbots and image-generation tools. It draws on declassified information from intelligence agencies.

The report highlights the possibility that by 2025, generative AI could be used by non-state violent actors to gather knowledge relevant to physical attacks, including those involving chemical, biological, and radiological weapons.

It acknowledges efforts by companies to thwart these threats but points out variations in the effectiveness of such safeguards.

While barriers currently exist to access the knowledge, raw materials, and equipment necessary for attacks, the report suggests that these barriers may diminish, potentially accelerated by AI.

The report also suggests that AI will likely play a role in facilitating “faster-paced, more effective, and larger scale” cyberattacks by 2025. Joseph Jarnecki, a researcher specializing in cyber threats, points out that AI could aid hackers in mimicking official language, a skill they have found challenging to acquire.

Prime Minister Sunak’s speech on Thursday is expected to outline the UK government’s strategy to ensure AI safety and establish the UK as a global leader in AI safety.

In his address, Sunak is expected to acknowledge the benefits of AI, such as new knowledge, economic growth opportunities, advances in human capabilities, and the potential to solve previously insurmountable problems.

However, he will also address the new dangers and fears associated with AI and commit to mitigating these risks to ensure a better future for all.

The speech will set the stage for a government summit next week, which will focus on the regulation of “Frontier AI” — highly capable systems able to perform a wide range of tasks and surpass today’s most advanced models. The potential threat such advanced AIs pose to humanity remains a subject of debate among experts.

Another report, published by the Government Office for Science, indicates that many experts consider such advanced AIs a very low-likelihood risk with few plausible pathways to realization.

To pose a risk to human existence, these AIs would need control over vital systems, the ability to enhance their own programming, the capacity to evade human oversight, and a sense of autonomy. The timelines and plausibility of when these capabilities might emerge remain uncertain.

While major AI companies generally agree on the need for regulation, some critics question the summit’s focus, suggesting that it prioritizes long-term risks and may not fully align with the technical realities of AI development.

The government reports aim to moderate concerns about futuristic threats and emphasize the gap between political positions and technical realities.

Last Updated: 26 October 2023