Layer I (cybersecurity foundations).
Layer I covers the basic cybersecurity knowledge and practices that need to be applied to all ICT environments that host / operate / develop / integrate / maintain / supply / provide AI systems. Existing cybersecurity good practices presented in this layer can be used to ensure the security of the ICT environment that hosts the AI systems.
Risk management is the basic cybersecurity practice for ensuring that an enterprise is secure: it identifies and evaluates threats, vulnerabilities and potential impacts, and measures the resulting risks.
According to the NIS 2 directive, all essential and important entities for the functioning of society need to assess and mitigate their risks. Therefore, the first step in securing AI systems and their life cycle is to operate in a secure environment, i.e. to secure the ICT infrastructure that hosts the AI systems.
The various types of threats to ICT infrastructures are listed below.
• Adversarial threats. These stem from individuals, groups, organisations or nations with malicious intent (e.g. denial-of-service attacks, non-authorised access, masquerading of identity).
• Accidental threats. These are caused unintentionally, often through legitimate components. Human errors are a typical accidental threat; they usually occur during the configuration or operation of devices or information systems, or during the execution of processes.
• Environmental threats. These include natural disasters (floods, earthquakes), human-caused disasters (fire, explosions) and failures of supporting infrastructures (power outage, communication loss).
• Vulnerabilities. These are existing weaknesses that might be exploited by an attacker.
For the identification of general cybersecurity threats, AI stakeholders wishing to secure their ICT infrastructure can use the annual ENISA Threat Landscape report on the state of the cybersecurity threat landscape, or similar reports such as the annual technical threat reports published by other organisations (e.g. the Open Web Application Security Project, OWASP).
Security management includes two main phases.
• Risk analysis. Threat/vulnerability/impact analyses and risk estimations are conducted on all assets within the perimeter of the assessment (e.g. components of medical devices, cyber assets within a hospital’s infrastructure).
• Risk management. Risks are treated by selecting and implementing appropriate countermeasures (Figure 4). The appropriate selection of controls requires a cost-benefit analysis, to determine the risks the manufacturer is willing to accept and compare the costs of those risks against the benefits.
The major focus lies on enterprises and the identification, analysis and evaluation of threats and vulnerabilities, along with the estimation of risk levels to the respective enterprise assets. The outcome of a risk analysis is a list of threats to all assets of the enterprise ICT system, together with the corresponding risk levels of these threats to all assets.
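The outcome described above — a ranked list of threats per asset with corresponding risk levels — can be sketched in a few lines. The sketch below is illustrative only: the asset names, threats and the simple likelihood × impact scoring scheme are hypothetical examples, not taken from the report.

```python
# Minimal risk-analysis sketch: risk level = likelihood x impact,
# evaluated per (asset, threat) pair. All names and scores are hypothetical.

LIKELIHOOD = {"low": 1, "medium": 2, "high": 3}
IMPACT = {"minor": 1, "moderate": 2, "severe": 3}

def risk_level(likelihood: str, impact: str) -> int:
    """Combine ordinal likelihood and impact into a numeric risk score."""
    return LIKELIHOOD[likelihood] * IMPACT[impact]

# Hypothetical assessment perimeter: assets with identified threats.
assessment = [
    ("patient-db",   "unauthorised access", "medium", "severe"),
    ("ml-inference", "denial of service",   "high",   "moderate"),
    ("backup-power", "power outage",        "low",    "severe"),
]

# Outcome of the analysis: threats per asset, ranked by risk level.
ranked = sorted(
    ((asset, threat, risk_level(lik, imp)) for asset, threat, lik, imp in assessment),
    key=lambda row: row[2],
    reverse=True,
)
for asset, threat, score in ranked:
    print(f"{asset}: {threat} -> risk {score}")
```

In practice the scoring scales and the treatment thresholds (accept, mitigate, transfer, avoid) would come from the organisation's risk management methodology, not from a fixed table like this one.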
Threat agents and attackers’ profiles in AI ecosystems.
AI stakeholders need to be aware of their adversaries in the operational environment. Three key components characterise potential adversaries: means, motive and opportunity. An attack occurs if the attacker has the means to execute it, the opportunity to do so (e.g. an exploitable vulnerability) and a motive to target the victim in question.
AI stakeholders and operators need to analyse potential attackers in order to estimate their risk levels more realistically and accurately and to undertake appropriate countermeasures.
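The means–motive–opportunity characterisation can be expressed as a simple screening step when building attacker profiles. The sketch below is a hypothetical illustration (the adversary names and attributes are invented): an adversary is retained for risk estimation only when all three components are present.

```python
# Illustrative sketch: an attack is considered feasible only when
# means, motive and opportunity are all present. Names are hypothetical.
from dataclasses import dataclass

@dataclass
class Adversary:
    name: str
    has_means: bool        # tools and skills to execute the attack
    has_motive: bool       # a reason to target this victim
    has_opportunity: bool  # e.g. an exploitable vulnerability is reachable

    def can_attack(self) -> bool:
        return self.has_means and self.has_motive and self.has_opportunity

candidates = [
    Adversary("script kiddie", has_means=True, has_motive=True, has_opportunity=False),
    Adversary("insider", has_means=True, has_motive=True, has_opportunity=True),
]

# Only adversaries with all three components are retained for risk estimation.
relevant = [a.name for a in candidates if a.can_attack()]
print(relevant)  # -> ['insider']
```

A real attacker analysis would of course grade these components rather than treat them as booleans, and would feed the result into the likelihood side of the risk estimation.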
Adversarial threats are mainly caused by people who have a deliberate intention to cause harm. Typically, these threat actors are referred to as attackers or adversaries. In the literature, cyber threat actor lists and taxonomies are still being developed, and most of these lists identify the following intentional threat actors: insider attackers, cyber terrorists, hacktivists / civil activists, organised cybercriminals, script kiddies, state-sponsored attackers, commercial industrial espionage agents, cyberwarriors / individual cyber fighters, cyber vandals and black hat hackers.
Today there is no universally accepted standard taxonomy of attackers, and new definitions and proposed taxonomies are still emerging. In 2021, ENISA defined 11 attacker types by consolidating, refining and improving previous taxonomies; these types reflect the current threat landscape and can be mapped to other taxonomies in use by MS and EU bodies. The attackers target ICT infrastructures hosting AI systems/products, or AI systems at any stage of their life cycle.
Cybersecurity legislation and policies.
The operators of ICT infrastructures need to be aware of and comply with all EU legislation, recommendations and directives, from the cybersecurity strategy in 2013 to the NIS 2 directive and the Cyber Resilience Act in 2022.
Several pieces of legislation and policy have been developed to ensure the most effective responses, and ICT infrastructures need to comply with them. NIS 2 and the EU Cybersecurity Act are considered Europe's two most important and far-reaching pieces of cybersecurity legislation, while the general data protection regulation (GDPR) is the key personal data protection act; they emphasise supply chain security and privacy respectively, both of which are highly relevant to the life cycle of AI systems.
The EU’s common security and defence policy (CSDP) is another important element, since it is the main instrument of the EU for dealing with new and unconventional security threats and serves to prepare a possible common European defence of the EU. Since AI is considered a technology that will play a crucial role for defending the EU, it is also important that this policy is considered.
The EU Cybersecurity Act establishes a cybersecurity certification framework for products and services. This framework provides EU-wide certification schemes as a comprehensive set of rules, technical requirements, standards and procedures, ensuring public trust in the cybersecurity of IT products and services: it becomes visible that a product has been checked and certified to conform to high cybersecurity standards. AI-related products will gain trustworthiness if they are certified and, in the years to come, various cybersecurity schemes will be developed for AI products to specify their security requirements.
Another important initiative is the European Cybersecurity Competence Centre, which aims to increase Europe’s cybersecurity capacities and competitiveness, working together with a Network of National Coordination Centres to build a strong cybersecurity community. Also, the establishment of national computer security incident response teams (CSIRTs) is an essential step to facilitate the building of cyber capacity both within and across nations and to make it more effective.
The European Cybersecurity Competence Centre will guide AI stakeholders in enhancing the cybersecurity of their products and advancing their research and development efforts. CSIRTs will gain the necessary capabilities to guide stakeholders in responding to attacks on AI systems or in using AI technologies to defend their infrastructures.
These are just some of the instruments developed under the EU cybersecurity strategy, which aims to build resilience to cyber threats and ensure that citizens and businesses benefit from trustworthy digital technologies.
In addition, the new legislative framework (NLF) improves market surveillance, introduces rules to better protect both consumers and professionals from unsafe products (of EU or non-EU origin), sets rules for accreditation and establishes a common legal framework for industrial products. The NLF will enhance the security of AI-based products.
The European Chips Act is relevant to AI security because semiconductors are a key platform technology of the 21st century that will be used for AI developments and for embedding strong security measures. The proposed act will support the EU's position in the globalised semiconductor industry.
The Cyber Resilience Act will set new cybersecurity rules for digital products and ancillary services. This initiative will also promote the security of AI products, since it aims to address market needs and protect consumers from insecure products by introducing common cybersecurity rules for manufacturers and vendors of tangible and intangible digital products.
The EU legislative instruments and policies are mature and embrace AI system trustworthiness. The upcoming challenge is translating the legal and policy requirements into technical requirements, design specifications and concrete testing and assessment of AI systems.
The common cybersecurity practices need to be complemented with additional practices that meet the security requirements of AI systems. Due to the dynamic and multifaceted nature of these systems, the following additional challenges need to be addressed.
• AI risk assessments should be dynamic and combined with anomaly detection approaches, as for ICT systems in general.
• Measuring AI threats and evaluating AI risks require the development of a widely accepted scaling system that can meet common social and ethical values.
• A taxonomy of AI attackers needs to advance the existing taxonomies, in order to better understand the motives, capabilities, objectives and psychological profiles of the AI adversaries.
• Evaluation of an AI product against a static set of requirements can quickly become outdated, therefore dynamic RM and conformity assessment throughout the entire AI life cycle are required.
• No new standards or legislative instruments are needed, but there is a need for targeted guidelines, best practices and tools that will help the evaluation of AI-cybersecurity and trustworthiness.
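The first challenge above — making AI risk assessments dynamic by combining them with anomaly detection — can be illustrated with a minimal sketch. The blending function, weights and scores below are hypothetical assumptions, not a method prescribed by the report: a static base risk score is adjusted at run time by a live anomaly signal.

```python
# Hedged sketch of a dynamic risk assessment: a static base risk score
# is blended with a live anomaly-detection signal. The weighting scheme
# and all numeric values are hypothetical.

def dynamic_risk(base_risk: float, anomaly_score: float, weight: float = 0.5) -> float:
    """Blend a static risk estimate with a live anomaly score (both in [0, 1])."""
    return min(1.0, (1 - weight) * base_risk + weight * anomaly_score)

# A quiet system keeps roughly its static risk level;
# an anomaly spike raises the effective risk and can trigger reassessment.
print(dynamic_risk(0.4, 0.1))  # low anomaly activity
print(dynamic_risk(0.4, 0.9))  # anomaly spike escalates the risk
```

A deployed system would replace the fixed weight with a calibrated model and re-run the conformity checks whenever the effective risk crosses an agreed threshold, in line with the call above for dynamic RM throughout the AI life cycle.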