Introduction to ETSI’s AI Security Standard
The European Telecommunications Standards Institute (ETSI) has introduced the EN 304 223 standard, marking a significant milestone in the global effort to secure artificial intelligence (AI) systems. This European Standard outlines essential security requirements that organizations must embed into their AI governance frameworks, particularly as machine learning technologies become integral to core business operations.
Recognized as the first globally applicable European Standard for AI cybersecurity, ETSI EN 304 223 has received formal endorsement from national standards bodies, reinforcing its authority across international markets. It complements regulatory efforts such as the EU AI Act by addressing unique AI risks often overlooked by traditional cybersecurity approaches.
Addressing AI-Specific Security Challenges
AI systems present distinct security challenges, including vulnerabilities like data poisoning, model obfuscation, and indirect prompt injection attacks. The ETSI standard covers a broad spectrum of AI technologies, from deep neural networks and generative AI to basic predictive models, while excluding only AI strictly confined to academic research.
Clarifying Responsibility in AI Security
A core challenge in enterprise AI adoption is defining accountability for security risks. ETSI tackles this by specifying three primary roles within AI security governance: Developers, System Operators, and Data Custodians.
In practice, these roles may overlap. For example, a financial services company customizing an open-source AI model for fraud detection may act as both Developer and System Operator. This dual role entails rigorous obligations such as securing deployment infrastructure and maintaining thorough documentation of training data provenance and model audit trails.
The introduction of Data Custodians formalizes the responsibility of managing data permissions and integrity, directly impacting Chief Data and Analytics Officers (CDAOs). These custodians are tasked with ensuring that AI system usage aligns with the sensitivity of the training data, effectively embedding a security checkpoint within data management workflows.
Security by Design and Risk Mitigation
ETSI’s standard emphasizes that AI security must be incorporated at the design phase, not appended after deployment. Organizations are required to perform threat modeling that addresses AI-specific attack vectors such as membership inference and model obfuscation.
Developers must also limit system functionality to reduce the attack surface. For instance, if a multi-modal AI model supports text, image, and audio processing but only text is needed, the unused modalities must be disabled or secured to prevent exploitation. This approach challenges the prevalent practice of deploying large, general-purpose foundation models, advocating instead for smaller, specialized solutions where appropriate.
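The modality-gating idea above can be sketched as a thin admission check in front of the model. This is a minimal illustration, not anything prescribed by the standard; the `Request` type and `ENABLED_MODALITIES` allow-list are hypothetical names for this example.

```python
from dataclasses import dataclass

# Only the modality this deployment actually needs is enabled;
# image and audio inputs are deliberately rejected to shrink the
# attack surface, per the security-by-design principle.
ENABLED_MODALITIES = {"text"}

@dataclass
class Request:
    modality: str
    payload: bytes

def admit(request: Request) -> bool:
    """Reject any input type outside the approved attack surface."""
    return request.modality in ENABLED_MODALITIES
```

A gateway like this sits in front of a multi-modal model so that disabled modalities can never be reached from the outside, rather than relying on the model itself to ignore them.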
Asset Management and Supply Chain Transparency
The standard mandates comprehensive asset inventories detailing model interdependencies and connections to combat hidden or shadow AI risks. Without full visibility, organizations cannot effectively secure AI systems.
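One way to make the "full visibility" requirement concrete is to diff the models actually running in production against a central inventory. The sketch below uses illustrative model and team names; the inventory schema is an assumption for this example, not part of the standard.

```python
# Minimal sketch of an AI asset inventory recording interdependencies
# and ownership (all names are hypothetical).
inventory = {
    "fraud-detector-v3": {"depends_on": ["open-llm-base"], "owner": "risk-team"},
    "open-llm-base": {"depends_on": [], "owner": "platform-team"},
}

def find_shadow_ai(deployed_models, inventory):
    """Return models observed in production but missing from the inventory."""
    return sorted(set(deployed_models) - set(inventory))
```

Running `find_shadow_ai` against a list of deployed endpoints surfaces untracked "shadow AI" that cannot be secured because nobody knows it exists.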
Additionally, supply chain security is a critical concern, especially for enterprises relying on third-party or open-source AI components. System Operators must justify the use of any poorly documented models and maintain documentation of associated security risks.
Procurement teams are now expected to reject ‘black box’ AI solutions. Developers must provide cryptographic hashes for model components to verify authenticity, and publicly sourced training data needs detailed audit trails including source URLs and timestamps. These measures facilitate post-incident investigations, especially when assessing data poisoning during training.
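In practice, the two verification measures above come down to hashing model artifacts and timestamping data sources. The following is a minimal sketch using Python's standard library; the function names and the audit-record shape are assumptions for illustration.

```python
import hashlib
from datetime import datetime, timezone

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a model artifact, to be compared
    against the digest published by the Developer."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # stream in 1 MiB chunks
            h.update(chunk)
    return h.hexdigest()

def record_provenance(source_url: str) -> dict:
    """Audit-trail entry for publicly sourced training data:
    source URL plus a UTC retrieval timestamp."""
    return {
        "source_url": source_url,
        "retrieved_at": datetime.now(timezone.utc).isoformat(),
    }
```

A mismatch between the computed and published digest means the artifact was altered in transit or at rest; the provenance records are what make a later data-poisoning investigation tractable.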
Operational Controls and Lifecycle Management
Enterprises offering AI through external APIs must implement controls such as rate limiting to defend against adversarial attacks aimed at model reverse engineering or data poisoning.
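A common way to implement such rate limiting is a per-client token bucket. This is a generic sketch of the technique, not an implementation mandated by the standard; the rate and capacity parameters are illustrative.

```python
import time

class TokenBucket:
    """Per-client rate limiter: throttles the rapid, repeated queries
    that model-extraction and probing attacks depend on."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate              # tokens replenished per second
        self.capacity = capacity      # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Consume one token if available; deny the request otherwise."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

Keyed by API credential or client IP, this caps how fast any one caller can interrogate the model while leaving normal traffic unaffected.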
The standard treats major updates, such as retraining, as new deployments that require renewed security evaluations. Continuous monitoring must also extend beyond uptime metrics to detecting “data drift,” since unexpected shifts in input or output distributions can indicate tampering or poisoning as well as ordinary model degradation.
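A basic drift check compares live inputs against a training-time baseline. The standardized mean-shift score below is one simple choice among many (not the standard's prescribed method), and the threshold of 3.0 is an illustrative assumption.

```python
from statistics import mean, stdev

def drift_score(baseline, live):
    """Standardized shift between the training-time baseline and live
    inputs for one numeric feature; larger means more drift."""
    s = stdev(baseline)
    return abs(mean(live) - mean(baseline)) / s if s else float("inf")

def drifted(baseline, live, threshold=3.0):
    """Flag a feature whose live distribution has moved suspiciously far."""
    return drift_score(baseline, live) > threshold
```

In production this would run per feature on a rolling window, with flagged drift triggering a security review rather than only a model-quality one.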
At the end of an AI model’s lifecycle, secure decommissioning processes must involve Data Custodians to prevent data leaks from discarded hardware or cloud resources.
Governance and Training Requirements
ETSI EN 304 223 calls for tailored cybersecurity training programs that ensure developers are proficient in secure AI coding practices and that general staff are aware of AI-related social engineering risks.
Scott Cadzow, Chair of ETSI’s Technical Committee for Securing Artificial Intelligence, stated, “ETSI EN 304 223 represents an important step forward in establishing a common, rigorous foundation for securing AI systems.”
He added, “With AI increasingly embedded in critical infrastructure, clear and practical guidance that reflects technological complexity and deployment realities is essential. This framework enables organizations to build AI systems that are resilient, trustworthy, and secure by design.”
Implications for Safer AI Innovation
By enforcing clear role definitions, audit trails, and supply chain transparency, the ETSI standard provides a structured approach to mitigating AI risks and supporting compliance with future regulations.
An upcoming Technical Report (ETSI TR 104 159) will focus on applying these security principles to generative AI, addressing challenges such as deepfakes and disinformation.
Overall, ETSI EN 304 223 offers enterprises a comprehensive blueprint for securely integrating AI technologies, supporting innovation while protecting against emerging cyber threats.