Layer III, Framework for AI Cybersecurity Practices (FAICP), from ENISA

Layer III (Sectoral AI).

Layer III collects best practices that sectoral stakeholders can use to secure their AI systems. High-risk AI systems, as identified in the AI Act (e.g. those used in critical infrastructure, medical devices or vehicles), are listed in this layer to raise operators' awareness of the need to adopt good cybersecurity practices.


The emergence of industrial automation and control systems, AI, smart grids and autonomous devices has made the energy sector a target for cyberattacks, while the existing interconnectivity and the rapidly growing complexity of the underlying infrastructures increase the security threats and their cascading effects.

The energy sector uses different IoT devices (e.g. miniaturised sensors to monitor transmission pipelines); drilling rigs and robots to inspect and repair infrastructure; virtual power plants, microgrids or cloud management services for solar energy and building automation; new applications with close integration of demand and response, providing unparalleled flexibility; and expanded telecommunication infrastructures and networks with increased usage of mobile devices. However, all of these technologies (mostly from foreign manufacturers) have many vulnerabilities and a high number of potential attack points, increasing the cybersecurity challenge in the clean energy industry, as described in the 2019 ENISA report Industry 4.0 Cybersecurity: Challenges and recommendations.

Therefore, operators, stakeholders and networks must urgently focus on security as part of their ICT infrastructure, in order to enhance their information security and privacy practices and address the origins of their main security problems. These may include remote work during operations and maintenance, the use of technologies with known vulnerabilities, new highly interconnected services, a limited cybersecurity culture among vendors, suppliers and contractors, data networks between on- and offshore facilities, and outdated control systems in facilities.


Many medical devices – from glucose meters, insulin pumps, virtual home assistants and cardioverter defibrillators to smart wearable devices, sophisticated software and hospital equipment, along with medical services and applications – are connected over the network and often use AI technologies. Although new connected medical devices help in fighting the increasing cost of healthcare – by reducing the need for hospitalisation, developing personalised therapies and creating intelligent point-of-care diagnostic tools – they also introduce new cybersecurity risks and their interoperability, security and resilience levels are considered to be low.

There are three primary attack vectors through which connected medical devices might be compromised.

• Devices. Cybercriminals exploit device vulnerabilities in memory, firmware, physical interfaces, web interfaces or network services. Other weaknesses, such as insecure default settings, outdated components and insecure update mechanisms, can also be exploited. Outdated legacy devices are the main targets, because of their unpatched vulnerabilities.

• Communication channels. A device can be compromised by attacking the channels that connect it to other devices; spoofing and denial-of-service attacks are common in this vector. Conventional wireless sensor networks consist of nodes equipped with antennas that broadcast radio signals in all directions, making them prone to eavesdropping: while patient data is transmitted from the body area network to the caregiver device, an attacker can capture it with little effort, breaching patient privacy. The captured data can then be used to pose as an authorised member of the network and launch an impersonation attack.

• Applications and software. Cybercriminals can exploit vulnerabilities in web applications and related software for connected devices; for example, web applications can be targeted to steal user credentials or push malware. There is therefore an urgent need for solutions that let manufacturers identify, estimate, mitigate and audit, by design, all cybersecurity risks of connected devices (hardware, software and integrated medical frameworks consisting of various modular components), in order to ensure their security and resilience and progress towards a resilient and trustworthy EU healthcare ecosystem.
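The spoofing and impersonation risks described for communication channels can be reduced with message authentication and replay protection. The sketch below illustrates the idea with Python's standard hmac module; the key handling and message format are invented for illustration, and a real device would use a securely provisioned key and authenticated encryption (an authentication tag alone does not stop eavesdropping, since the payload would also need to be encrypted).

```python
import hashlib
import hmac
import secrets

# Hypothetical shared key, provisioned to both the sensor and the
# caregiver device at pairing time (illustrative only).
shared_key = secrets.token_bytes(32)

def send_reading(payload: bytes, nonce: bytes) -> tuple[bytes, bytes, bytes]:
    """Sensor side: tag nonce+payload so the receiver can verify origin and freshness."""
    tag = hmac.new(shared_key, nonce + payload, hashlib.sha256).digest()
    return payload, nonce, tag

def accept_reading(payload: bytes, nonce: bytes, tag: bytes, seen: set) -> bool:
    """Caregiver side: reject forged tags and replayed nonces."""
    expected = hmac.new(shared_key, nonce + payload, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, tag) or nonce in seen:
        return False
    seen.add(nonce)
    return True

seen = set()
msg = send_reading(b"glucose=5.4 mmol/L", secrets.token_bytes(16))
print(accept_reading(*msg, seen))   # genuine, fresh reading: accepted
print(accept_reading(*msg, seen))   # replay of the same message: rejected
```

Tracking seen nonces is what defeats the replay of captured traffic; without it, an eavesdropper could resend a validly tagged message and impersonate the sensor.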

Regulators around the globe have increasingly pursued medical device cybersecurity as a policy objective over the past few years. In the EU, the first piece of guidance on the cybersecurity of medical devices (MDCG 2019-16) was issued by the EU's Medical Device Coordination Group in December 2019 and revised in July 2020.

The EU has included the health sector among its critical information infrastructures and is developing cybersecurity legislation and directives that impose cybersecurity and privacy risk management (e.g. GDPR, NIS), supply chain security (e.g. NIS 2), secure authentication and access of healthcare e-services (e.g. eIDAS) and cybersecurity certification (e.g. CSA, AI liability directive, European Chips Act).


New generations of cars are making use of advances in the field of AI. Autonomous vehicles are systems that rely on autonomous driving capabilities using AI in a perception–planning–control pipeline. Designing an autonomous driving system is a challenging problem that requires tackling a wide range of environmental conditions (lighting, weather, etc.) and multiple complex tasks. These include road following, obstacle avoidance, abiding by traffic laws, a smooth driving style, manoeuvre coordination with other elements of the ecosystem (e.g. vehicles, scooters, bikes, pedestrians) and control of the vehicle's commands.

The joint ENISA/JRC report Cybersecurity challenges in the uptake of artificial intelligence in autonomous driving analyses cybersecurity vulnerabilities related to AI, identifies related challenges and provides recommendations for securing autonomous vehicles.

Five hypothetical scenarios are presented to illustrate the exploitation of AI vulnerabilities in an automotive context, using both classical cybersecurity and AI-specific vulnerabilities:

• adversarial perturbations against image processing models for street sign recognition and lane detection;

• man-in-the-middle attacks on the planning module;

• data poisoning attacks on stop sign detection;

• attacks related to large-scale deployment of rogue firmware after hacking backend servers of original equipment manufacturers;

• attacks related to sensor/communication jamming and global navigation satellite system spoofing.
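The first scenario, an adversarial perturbation against an image-processing model, can be illustrated with a minimal fast-gradient-sign (FGSM-style) sketch. The toy logistic classifier below stands in for a street-sign recogniser; its weights, input and step size are all invented for illustration, and real attacks target deep vision models rather than a linear model.

```python
import numpy as np

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical trained weights and a "stop sign" input the model
# classifies correctly (chosen so the clean score is high).
w = np.array([0.5, -0.5] * 8)
x = w.copy()

def stop_sign_score(x: np.ndarray) -> float:
    """Probability that the input contains a stop sign (toy model)."""
    return sigmoid(float(w @ x))

clean = stop_sign_score(x)            # 0.982: confidently "stop sign"

# FGSM: step each input component against the gradient of the score.
# For this linear model the gradient w.r.t. x is simply w; epsilon is
# deliberately large so the toy example flips cleanly.
epsilon = 1.0
x_adv = x - epsilon * np.sign(w)

adv = stop_sign_score(x_adv)          # 0.018: the sign is no longer detected
print(f"clean: {clean:.3f}  adversarial: {adv:.3f}")
```

The point of the scenario is that the perturbation is small and structured rather than random: it is computed from the model's own gradient, which is why gradient-based defences and input sanitisation feature in the ENISA/JRC recommendations.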

The 2019 report ENISA Good Practices for Security of Smart Cars had already identified security measures against AI vulnerabilities, such as models being tricked by adversarial attacks and data falsification or manipulation.

The International Telecommunication Union (ITU) Focus Group on AI for autonomous and assisted driving supports standardisation activities for services and applications enabled by AI systems. The group focuses on the behavioural evaluation of AI responsible for the dynamic driving task in accordance with the 1949 and 1968 Conventions on Road Traffic of the UNECE Global Forum for Road Safety. In 2021, the group also published the report FGAI4AD-02 – Automated driving safety data protocol – Ethical and legal considerations of continual monitoring.


As modern networks become more sophisticated, the integration of AI lets the telecommunications industry benefit from data recovered from networks, mobile applications, customer insights, profiles, technology, billing and services, helping the industry with self-optimising networks, security and predictive measures. AI use cases in telecoms include the following.

• Network optimisation. Networks are managed by AI systems and ML algorithms that predict and detect network abnormalities. AI is also used to optimise and configure networks, giving end users stable network performance.

• Virtual assistants and chatbots. The telecommunications industry is leveraging the power of AI to implement chatbots and virtual assistants, which can deliver round-the-clock support and assistance to customers without any waiting time.

• Predictive maintenance. AI-enabled predictive analytics is helping the telecom sector to maintain high levels of service and products to customers.

• Security and fraud detection. ML algorithms are used to detect and prevent fraudulent activities. AI-driven alerts can notify customers and telecom operators in real time.
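The abnormality-detection idea underlying the network optimisation and fraud detection use cases can be sketched very simply: learn a baseline from normal traffic, then flag measurements that deviate too far from it. The example below uses a basic z-score rule on synthetic link-latency data; the data, threshold and metric are invented for illustration, and production systems use far richer models.

```python
import numpy as np

rng = np.random.default_rng(42)

# Baseline: synthetic "normal" link-latency measurements (ms),
# standing in for a week of telemetry from a healthy link.
baseline = rng.normal(loc=20.0, scale=2.0, size=1000)
mean, std = baseline.mean(), baseline.std()

def is_anomalous(sample_ms: float, k: float = 4.0) -> bool:
    """Flag a measurement more than k standard deviations from the baseline."""
    return abs(sample_ms - mean) > k * std

normal_reading = 21.5   # within the usual range
spike = 45.0            # e.g. congestion or a flooding attack

print(is_anomalous(normal_reading), is_anomalous(spike))
```

In an operator's pipeline the same pattern drives the real-time alerts mentioned above: the detector runs continuously on telemetry streams and notifies customers or the network operations centre when a deviation crosses the threshold.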

The ITU Focus Group on Machine Learning for Future Networks including 5G has published a technical specification on a unified architecture for ML in 5G and future networks. The presented logical architecture establishes a common vocabulary and nomenclature for ML functions and their interfaces to allow standardisation and interoperability for ML functions in 5G and future networks.

The Dutch Radiocommunications Agency published the report Managing AI use in telecom infrastructures – Advice to the supervisory body on establishing risk-based AI supervision, which addresses the current and future risks of applying AI in the telecom sector, along with their supervision and ways to mitigate them.

New challenges

Horizontal threats and cybersecurity challenges exist in every economic sector (automotive, energy, health, etc.), independently of how AI is being used. Fragmented recommendations, best practices, solutions and tools for horizontal issues become stumbling blocks for guiding sectoral stakeholders. Collaboration among sectoral stakeholders and information sharing and analysis centres (ISACs) is recommended to best address horizontal challenges. Sector-specific issues and mitigation measures need to be listed and published to serve as ‘lessons learned’ for other sectors.
