The ETSI Securing Artificial Intelligence Industry Specification Group (SAI ISG) last month released its first Group Report, ETSI GR SAI 004, which outlines the problem statement for securing AI. ETSI SAI is the first standardization initiative dedicated to securing AI.
The Report describes the problem of securing AI-based systems and solutions, with a focus on machine learning, and the challenges relating to confidentiality, integrity and availability at each stage of the machine learning lifecycle. It also points out some of the broader challenges of AI systems, including bias, ethics and explainability. A number of different attack vectors are outlined, along with several real-world use cases and attacks.
“There are a lot of discussions around AI ethics but none on standards around securing AI. Yet, they are becoming critical to ensure security of AI-based automated networks. This first ETSI Report is meant to come up with a comprehensive definition of the challenges faced when securing AI. In parallel, we are working on a threat ontology, on how to secure an AI data supply chain, and how to test it,” explains Alex Leadbeater, Chair of ETSI SAI ISG.
To identify the issues involved in securing AI, the first step was to define AI. For the ETSI group, artificial intelligence is the ability of a system to handle representations, both explicit and implicit, and procedures to perform tasks that would be considered intelligent if performed by a human. This definition still covers a broad spectrum of possibilities. However, a limited set of technologies is now becoming feasible, largely driven by the evolution of machine learning and deep-learning techniques, and by the wide availability of the data and processing power required to train and implement such technologies.
Numerous approaches to machine learning are in common use, including supervised, unsupervised, semi-supervised and reinforcement learning. Within these paradigms, a variety of model structures might be used, with one of the most common approaches being the use of deep neural networks, where learning is carried out over a series of hierarchical layers that mimic the behaviour of the human brain.
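The hierarchical layers mentioned above can be illustrated with a minimal sketch (not taken from the Report): a toy two-layer feedforward network in NumPy, where each layer applies a learned linear map followed by a nonlinearity. The weights here are random placeholders standing in for trained parameters.

```python
import numpy as np

# Toy feedforward network with two hierarchical layers.
# Random weights stand in for parameters that training would learn.
rng = np.random.default_rng(42)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)   # layer 1: 3 inputs -> 4 hidden units
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)   # layer 2: 4 hidden -> 2 output scores

def relu(z):
    # Nonlinearity applied between layers.
    return np.maximum(z, 0.0)

def forward(x):
    h = relu(W1 @ x + b1)   # hidden representation learned by layer 1
    return W2 @ h + b2      # output scores, one per class

scores = forward(np.array([0.1, -0.5, 2.0]))
print(scores.shape)  # (2,)
```

Each additional layer re-represents the output of the one below it, which is the "series of hierarchical layers" the Report refers to.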
Various training techniques can be used as well, such as adversarial learning, where the training set contains not only samples that reflect the desired outcomes, but also adversarial samples, which are intended to challenge or disrupt the expected behaviour.
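To make the idea of an adversarial sample concrete, here is a minimal sketch (an illustration, not from the Report) using the fast gradient sign method on a toy linear classifier: the input is nudged, within a bound epsilon, in the direction that most increases the loss, flipping the model's prediction.

```python
import numpy as np

# Toy linear classifier: score = w . x + b, predicted label = sign(score).
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    return np.sign(w @ x + b)

def adversarial_sample(x, y, epsilon):
    # Fast gradient sign method (FGSM): for a linear model with a
    # margin loss, the gradient of the loss w.r.t. x is -y * w.
    # Step epsilon in the sign of that gradient.
    grad = -y * w
    return x + epsilon * np.sign(grad)

x = np.array([2.0, 0.5, 1.0])
y = predict(x)                              # clean prediction: 1.0
x_adv = adversarial_sample(x, y, epsilon=1.5)
print(predict(x), predict(x_adv))           # 1.0 -1.0: prediction flipped
```

In adversarial training, such perturbed samples are mixed back into the training set (with their correct labels) so the model learns to resist them.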
While the term ‘artificial intelligence’ originated at a conference in the 1950s at Dartmouth College in Hanover, New Hampshire, USA, the real-life use cases described in the ETSI Report show how much it has evolved since. Such cases include ad-blocker attacks, malware obfuscation, deepfakes, handwriting reproduction, human voice mimicry and fake conversation (the latter having already drawn considerable comment in connection with chatbots).