
UK launches world’s first guidelines to put cybersecurity at AI’s core

01 December 2023

In a significant milestone for the global technology community, the UK has released the first global guidelines aimed at securing artificial intelligence (AI) systems against cyber threats.

(Image: Shutterstock)

Developed by the UK's National Cyber Security Centre (NCSC) in collaboration with the US Cybersecurity and Infrastructure Security Agency (CISA), these guidelines have received endorsements from 18 countries, including all G7 members.

The primary objective is to promote the safe and secure development of AI technology. Offering recommendations for developers and organisations, the guidelines advocate for a "secure by design" approach, emphasising the integration of cybersecurity practices from the initial design phase through development, deployment, and ongoing operations.

The guidelines build on the achievements of the world’s first international summit on AI safety, held at Bletchley Park last month. There, more than 25 countries, including the US and China, along with the EU, signed the historic ‘Bletchley Declaration’, an agreement to establish a common approach to overseeing AI.

The guidelines focus on four key areas: secure design, secure development, secure deployment, and secure operation and maintenance. Specific recommendations and best practices are outlined for each phase, providing developers with a framework for strengthening their security practices.

The launch event in London, attended by over 100 key industry, government and international partners, featured speakers from Microsoft, the Alan Turing Institute, and various cybersecurity agencies. 

NCSC CEO Lindy Cameron emphasised the proactive nature of security, stating that "security is not a postscript to development, but a core requirement throughout".

The agreement comes on the heels of the executive order signed by US President Joe Biden, which requires AI developers to disclose safety test results to the US government prior to public release. US agencies were also urged to establish testing standards addressing anticipated chemical, biological, radiological, nuclear, and cybersecurity risks.

US Secretary of Homeland Security Alejandro Mayorkas praised the UK’s new guidelines, stating they provide a “common-sense path to designing, developing, deploying, and operating AI, with cybersecurity at its core”.

The 18 endorsing countries include Australia, Canada, Chile, Czechia, Estonia, France, Germany, Israel, Italy, Japan, New Zealand, Nigeria, Norway, Poland, the Republic of Korea, Singapore, the United Kingdom, and the United States.

UK Science and Technology Secretary Michelle Donelan claimed that the guidelines serve as a testament to the UK's leadership in the development of AI.

“Just weeks after we brought world leaders together at Bletchley Park to reach the first international agreement on safe and responsible AI, we are once again uniting nations and companies in this truly global effort,” she said.

The guidelines mark a significant step toward establishing a common global understanding of the cyber risks surrounding AI and the strategies for mitigating them. The challenge now lies in their widespread adoption by developers worldwide to ensure the safe and secure deployment of AI technology across critical sectors.
