
What would happen if AI fell into the wrong hands?

21 February 2018

The Electronic Frontier Foundation (EFF), together with academic and civil society organisations, has released a report on the risks of the malicious use of AI and the steps that can be taken to prevent it.

Artificial Intelligence (AI) and machine learning techniques are spreading across many sectors and will one day be seen as a normal part of business. AI will remain a buzzword in the coming years, not only because it can deliver valuable insights to businesses across industry, but also because of the many reports of it going horribly wrong. Despite its increasing sophistication, we need to stop and think about the risks if this technology falls into the wrong hands.

One area of focus for the EFF is the potential interaction between computer insecurity and AI. At present, computers lack high levels of security, which makes them a poor platform on which to host AI technologies.

The new report, ‘The Malicious Use of Artificial Intelligence’, looks closely at this problem, as well as at the implications of AI for physical and political security.

You can download the report here.


Coda Systems