Fake audio and video content has been identified as the most dangerous criminal use of artificial intelligence, according to a new report from University College London. In preparing the report, the researchers consulted AI specialists from academia, private business, the police, government, and public security agencies.
The final study, published in the journal Crime Science, identifies the most significant criminal uses of AI expected over the next twenty years. The threats were ranked by severity: the harm they could cause, the potential profit to criminals, how easy they would be to carry out, and how difficult they would be to stop.
The authors ranked fake audio and video content highest because it is difficult to detect and its range of uses is remarkably wide, from discrediting public figures and stoking tensions to extorting money. Widespread fake content could also erode trust in audio and visual evidence generally, a harm to society in its own right.
Beyond fake audio and video, five other AI-enabled crimes were judged to be of serious concern: the use of self-driving cars as weapons, tailored phishing messages (spear phishing), attacks that disrupt AI-controlled systems, the harvesting of data for blackmail, and AI-generated text-based fake news.
UCL professor Lewis Griffin said: “As the capabilities of AI technologies expand, so does the potential for their criminal exploitation. In order to properly prepare for possible threats that AI poses to society, we need to determine what these threats can be and how they can affect our lives.”
Among AI-enabled crimes of “medium severity,” which are nevertheless ranked among the most profitable, the experts listed the fraudulent sale of goods and services, including AI-driven targeted advertising.
The crimes of least concern, according to the experts, are those that, while harmful to individual victims, are difficult to carry out at scale. One example is small “burglar bots” that enter homes through vents or cat flaps.
According to the authors, today's environment, in which data confers power on those who hold it, is ideal for the development of new types of AI-enabled crime. Unlike many traditional offenses, the experts note, digital crimes are extremely easy to share, repeat, and even sell, allowing new criminal methods not only to be broadcast worldwide but also to be offered as a service.