ETSI TR 104 119 V1.1.1 (2025-09)
6 Approach for Documenting AI-enabled Systems

6.1 Overview
Documentation, in general, supports a large variety of needs and always has to be tailored to specific situations: there is no one-size-fits-all format or method. This flexibility is particularly crucial for AI systems, as they often operate in dynamic environments and serve diverse stakeholders with varying technical expertise and regulatory requirements. The present document sets out an approach based on the following ideas and concepts.

What is being documented is considered the documentation item. This could include an ML model, an algorithm, evaluation results, datasets, processes, or design decisions (see clause 6.2).

Documentation is created depending on who is creating it and for whom it is intended. This is determined by the documentation stakeholder and allows for different perspectives on the AI system and different abilities to understand the content of the documentation (see clause 6.3).

Moreover, documentation should be seen as an ongoing process, i.e. re-activated whenever the system is retrained, updated, or otherwise modified. The documentation trigger describes when, within the system life cycle, documenting should start (see clause 6.4).

The documentation method addresses how the documentation will be created. This involves selecting suitable methods and formats that best represent the subject, such as textual descriptions, diagrams, or structured templates. Applying established templates and standards facilitates clear, accurate, and consistent documentation, ensuring it remains accessible and reliable for all stakeholders (see clause 6.5).

Finally, the level and quality of documentation encompass both the required level of detail, determined by the complexity and risk associated with the system, and the quality characteristics needed to make the documentation accurate, complete, and fit for purpose (see clause 6.6).
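The five concepts above (item, stakeholder, trigger, method, and level/quality) can be pictured as the fields of a single planning record. The following Python sketch is purely illustrative; the class name, field names, and example values are assumptions and not part of the approach defined in the present document:

```python
from dataclasses import dataclass

@dataclass
class DocumentationPlan:
    """Illustrative record tying together the concepts of clause 6:
    what is documented, for whom, when, how, and at which level of detail."""
    item: str            # documentation item, e.g. an ML model or dataset (clause 6.2)
    stakeholder: str     # intended audience, e.g. auditor, developer (clause 6.3)
    trigger: str         # life-cycle event that (re)starts documenting (clause 6.4)
    method: str          # technique/format, e.g. template, diagram (clause 6.5)
    level_of_detail: str = "standard"  # driven by complexity and risk (clause 6.6)

# Documentation as an ongoing process: a plan is re-activated whenever
# the system is retrained, updated, or otherwise modified.
plan = DocumentationPlan(
    item="credit-scoring model v2",
    stakeholder="auditor",
    trigger="model retrained",
    method="model card template",
    level_of_detail="high",
)
```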
6.2 Documentation items (what to document)
In the context of AI system documentation, a documentation item defines what is being documented to ensure transparency, accountability, and regulatory alignment across the AI system life cycle. Crucially, the documentation item is not the document itself but the subject of documentation, i.e. something that requires formal representation due to its relevance to process or system performance or compliance. A documentation item should be a distinct workflow, artifact, or component that is part of the engineering, training, or operation process of an AI-based system and warrants structured and traceable documentation.

EXAMPLE 1: A neural network model, the training workflow used to develop it, or the deployment pipeline supporting its operation are all documentation items. Each of these may require dedicated documentation that captures relevant information for stakeholders like developers, auditors, and regulators.

Each documentation item should be considered central to enabling understanding, transparency, and traceability, and to demonstrating conformance to standards and regulations. Documentation items are diverse in nature and evolve over the AI system's life cycle. Proper documentation should ensure that stakeholders can understand how the system was developed, validated, deployed, or monitored.

EXAMPLE 2: Documentation items include datasets, data pipelines and workflows, AI models, user interfaces, regulatory compliance artifacts (e.g. audit logs, certifications, or risk assessments, which serve to document compliance-relevant items), as well as environmental conditions and use cases.

Each documentation item should serve stakeholders with a specific set of relevant information, whether they are end users, auditors, developers, or regulatory bodies. Information elements are granular units of descriptive or operational detail.
These elements should define the specific attributes, characteristics, and context necessary to fully understand the documentation item in question. Information elements are the "about" part of documentation - they detail the attributes, properties, and metadata that provide depth and clarity to the record.

EXAMPLE 3: In a model card, individual information elements might represent the following:
- Intended Purpose: The primary function and target use cases of the AI model.
- Data Provenance: Specifics on the origin of training data, such as the source or collection method.
- Risk Management Details: Descriptions of identified risks and the mitigation measures in place.
- Performance Metrics: Quantitative measures such as accuracy, F1 score, or robustness under stress conditions.

In high-risk AI systems, as outlined in regulatory frameworks like the European AI Act, information elements extend to cover critical details such as dataset scope, human oversight protocols, and cybersecurity measures. Each element represents a concrete requirement derived from legal texts, ensuring that every documentation item's documentation meets transparency and compliance standards.

Documentation items can be organized into several high-level categories, each addressing distinct facets of the AI system:
• Process Documentation: Records the life cycle processes, including development, testing, and operational procedures.
• Tools Documentation: Focuses on the software, libraries, and platforms used throughout the AI life cycle.
• Data Documentation: Covers all aspects of the data used in the AI system.
• Algorithms and Models Documentation: Concentrates on the core AI properties, such as model architecture, hyperparameters, and the algorithms used for training.
• Project and Regulatory Documentation: Encompasses non-technical records such as requirements specifications, risk and test reports, and compliance files.
• System Architecture and Environment Documentation: Describes the technical environment, including hardware, network configurations, and security measures.
• User Instructions and Interfaces Documentation: Includes user manuals, interface guides, and training materials that facilitate effective interaction with the system.

The documentation for each documentation item should be composed of several nested information elements, arranged in a logical order. For instance, a datasheet documenting training data for a high-risk AI system might start with an overview (intended purpose and scope) and then drill down into technical specifics such as data provenance, preparation techniques, risk management, and security protocols. This structure not only aids clarity and ease of access but also supports incremental updates, allowing stakeholders to modify individual elements without having to overhaul the entire document.

In summary, by defining documentation items as comprehensive records built from discrete information elements, organizations can ensure that all critical dimensions - from technical design and development to regulatory compliance and user guidance - are transparently and systematically recorded.
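The idea of a documentation item as a record built from discrete information elements can be sketched in a few lines of Python. The model card contents below are invented for illustration, and the set of required elements is taken from EXAMPLE 3 above, not from any normative requirement:

```python
# Hypothetical model card built from the information elements named in EXAMPLE 3.
model_card = {
    "intended_purpose": "Triage of incoming support tickets",
    "data_provenance": {"source": "internal ticket archive", "collection": "export 2024-01"},
    "risk_management": ["misrouting risk mitigated by human review of low-confidence cases"],
    "performance_metrics": {"accuracy": 0.91, "f1_score": 0.88},
}

# Information elements this (assumed) card layout treats as mandatory.
REQUIRED_ELEMENTS = {"intended_purpose", "data_provenance",
                     "risk_management", "performance_metrics"}

def missing_elements(card: dict) -> set:
    """Return the required information elements absent from a model card,
    supporting incremental updates of individual elements."""
    return REQUIRED_ELEMENTS - card.keys()
```

Because each information element is a separate key, a stakeholder can update one element (e.g. refresh the performance metrics after retraining) without overhauling the whole record.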
6.3 Documentation stakeholders (roles)

6.3.1 General
Different stakeholders involved in AI development, deployment, and regulation have specific responsibilities that should be supported by transparent, clear documentation. These stakeholders range from those who create and provide AI technologies to those who integrate, use, or are impacted by them. ISO/IEC 22989 [i.4] offers a framework that defines these roles, outlining the various entities that contribute to AI systems - from developers and producers to customers and regulators. The roles are organized hierarchically in a tree-like structure (see Figure 1) to reflect their relationships and subcategories. This structure helps clarify how broad stakeholder categories break down into more specific roles, which is important for understanding responsibilities and interactions across the AI life cycle. Each stakeholder's role is associated with distinct documentation requirements to support trustworthiness, from ensuring system accuracy and fairness to verifying compliance with data protection laws like the GDPR. Furthermore, documentation requirements for AI providers, producers, and users emphasize transparency, particularly around data handling, algorithmic decision-making, and system monitoring.

Figure 1: ISO/IEC 22989 [i.4] Stakeholders

Since different stakeholders have different documentation needs and requirements, it is difficult to identify a one-size-fits-all approach, i.e. a single documentation file that meets everyone's needs at the same time. For this reason, customizing AI documentation to stakeholders is recommended, in line with best practices in technical writing [i.87].
6.3.2 Audience analysis
In ISO/IEC/IEEE 26514 [i.24], audience analysis is the process of determining who will use the information included in the documentation. The standard requires this process to be conducted taking into consideration factors such as users' background, experience, and education, their familiarity with technical language, the ways in which they might use the software, their learning stages (e.g. novice, expert), and how often they use the software. Groups of users who share characteristics and needs constitute an audience. Audience analysis is an important step in planning, writing, and reviewing technical documentation, as it determines the content, structure, and use of the intended information. Consequently, customizing AI documentation to the stakeholders can affect the modality and techniques used for the documentation, as well as the items included in the documentation and how they are presented. While specific customization depends on the results of the audience analysis, a sketch can be provided of how modalities, techniques, and items can be tailored to the needs of stakeholders by assuming they constitute specific audiences. For each type of stakeholder, a reference is provided, where possible, to standards representing a starting point for structuring appropriate documentation.
6.3.3 Stakeholder categories and documentation requirements

6.3.3.1 AI Provider
Documentation Requirements: For AI providers, documentation should ensure transparency about the AI technologies being offered, including detailed descriptions of the AI models, algorithms, and data processing techniques used. Additionally, this documentation should include comprehensive records of testing methods, validation procedures, and ongoing monitoring protocols to maintain system trustworthiness over time. Documentation should also explain how regulatory standards (such as the EU AI Act or GDPR) have been integrated into the system's design and how risk management practices address potential harm or bias. Providers should offer explicit documentation on how updates or modifications are communicated to stakeholders, ensuring continuous compliance. This documentation is crucial for establishing the trustworthiness of AI solutions, especially when these solutions are integrated into larger systems. ISO/IEC/IEEE 26514 [i.24], addressed to designers and developers of software user documentation, provides an analysis of their requirements. It covers both approaches to standardization: a) process standards, which specify the way in which documentation products are to be developed; and b) documentation product standards, which specify the characteristics and functional requirements of the documentation.

EXAMPLE: An AI service provider might need to supply comprehensive documentation on how their model ensures fairness and accuracy, alongside performance metrics and bias mitigation strategies, to reassure customers and partners of the system's reliability and ethical integrity. In this case, the provider might also need to include detailed logs showing compliance with GDPR requirements for handling personal data and records of any third-party audits conducted to verify the system's fairness and transparency.
This added level of detail helps reassure customers and regulators that the AI system operates within the defined legal and ethical boundaries.
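The "detailed logs" mentioned in the example could take many forms; the following Python sketch shows one hypothetical way a provider might record personal-data handling events as machine-readable entries. The field names (`event`, `lawful_basis`, `data_categories`) are assumptions chosen for illustration, not a prescribed schema:

```python
import json
from datetime import datetime, timezone

def compliance_log_entry(event: str, lawful_basis: str, data_categories: list) -> str:
    """Serialize one illustrative personal-data handling event as JSON,
    so a provider can hand reviewers machine-readable evidence."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event": event,
        "lawful_basis": lawful_basis,
        "data_categories": data_categories,
    }
    return json.dumps(entry)

record = compliance_log_entry(
    event="training-data ingestion",
    lawful_basis="legitimate interest",
    data_categories=["contact details"],
)
```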
6.3.3.2 AI Producer
Documentation Requirements: AI producers require highly detailed documentation throughout the development life cycle. This includes technical specifications, design documents, testing protocols, and deployment records. This documentation should cover the entire development pipeline, from data preprocessing techniques to model selection and training processes. Additionally, it should include comprehensive version control and change logs that track adjustments made throughout development, ensuring traceability for auditing purposes. Detailed error analysis, stress testing outcomes, and compliance with industry standards should also be documented, along with mechanisms for post-deployment monitoring and system maintenance. Such documentation is vital for internal audits, ensuring compliance with industry standards, and facilitating external evaluations. Since AI producers manage user documentation, the ISO/IEC/IEEE 26511 [i.27] and ISO/IEC/IEEE 26513 [i.29] standards are relevant in this context. The former supports the interests of software users by driving the realization of consistent, complete, accurate, and usable documentation; it is addressed to managers responsible for the development and production of user documentation. The latter provides documentation requirements for testers and assessors of user documentation.

EXAMPLE: A model designer would need to document the entire model creation process, including data preprocessing techniques, model selection rationale, and validation results, to ensure that the AI system can be thoroughly reviewed for trustworthiness by auditors or evaluators. This documentation should also include details on how the system complies with medical regulations, such as the Health Insurance Portability and Accountability Act (HIPAA) in the U.S., and should outline procedures for addressing patient privacy and ensuring the accuracy of diagnosis results.
In this case, the producer would need to ensure that any updates to the model, such as retraining on new data, are thoroughly documented and retrievable for future audits.
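The change logs and traceability described above can be pictured as a small append-only structure. This is a minimal sketch under assumed field names (`version`, `change`, `reason`); a real producer would typically rely on existing version-control and ML-metadata tooling rather than hand-rolled code:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ModelChangeLog:
    """Illustrative change log supporting traceability across retraining cycles."""
    entries: List[dict] = field(default_factory=list)

    def record(self, version: str, change: str, reason: str) -> None:
        """Append one adjustment made during development or maintenance."""
        self.entries.append({"version": version, "change": change, "reason": reason})

    def history(self, version: str) -> List[dict]:
        """Retrieve all recorded changes for a given model version, e.g. for an audit."""
        return [e for e in self.entries if e["version"] == version]

log = ModelChangeLog()
log.record("1.1", "retrained on 2024 data", "data drift detected")
log.record("1.1", "decision threshold adjusted", "false-positive rate too high")
```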
6.3.3.3 AI Customer
Documentation Requirements: AI customers typically do not create their own documentation, but they should ensure that the AI provider has supplied sufficient guidance and trustworthiness assurances. This should include instructions for deployment and use, certification reports, compliance assessments, or summaries of the system's capabilities and limitations to verify that the AI system meets their requirements. Additionally, documentation on how to handle exceptional cases, such as system errors or biased outputs, should be provided to ensure that the customer can implement the system in a controlled and compliant manner. AI customers should deploy the AI system according to the instructions for use supplied by the provider in the technical documentation. Under the AI Act, this is mandatory for deployers of high-risk AI systems. A list of possible requirements for AI customer documentation can be found in ISO/IEC/IEEE 26512 [i.28].

EXAMPLE: An AI customer might assess an ISO 9001-like certification from the provider, verifying that it aligns with their quality and safety standards without needing deep technical expertise.
6.3.3.4 AI Partner
Documentation Requirements: For AI partners, the documentation should be detailed and precise to support their specialized tasks. AI auditors, for example, require extensive documentation on AI system design, data integrity, and quality and risk management systems, often in a machine-readable format like JSON, to perform thorough audits. Similarly, AI system integrators need detailed integration manuals and API documentation to ensure seamless incorporation of AI components into larger systems.

EXAMPLE: An AI auditor conducting a fairness audit of an AI recruitment tool would need access to extensive documentation on the system's training data, bias detection methods, and performance outcomes across different demographic groups. In addition to these details, the auditor may also require documented proof of external audits, certification from third-party evaluators, and records of any bias remediation actions taken. These materials help ensure that the system not only meets ethical standards but also operates within the legal frameworks for fair hiring practices.
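The machine-readable audit material mentioned above might, for instance, include per-group outcome statistics exported as JSON. The figures and group names below are invented for illustration; the computation shown (a simple selection rate per group) is one common fairness-audit quantity, not a requirement of the present document:

```python
import json

# Hypothetical per-group outcomes an auditor might request in machine-readable form.
outcomes = {
    "group_a": {"applicants": 200, "selected": 30},
    "group_b": {"applicants": 180, "selected": 27},
}

def selection_rates(data: dict) -> dict:
    """Compute the selection rate per demographic group."""
    return {group: round(v["selected"] / v["applicants"], 3)
            for group, v in data.items()}

# JSON export the recruitment-tool provider could hand to the auditor.
audit_export = json.dumps({"selection_rates": selection_rates(outcomes)})
```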
6.3.3.5 AI Subject
For instance, a data subject might require documentation that explains how their personal data is processed by an AI system, including information on data retention policies, consent mechanisms, and privacy safeguards, presented in clear, non-technical language. In this case, it is also essential for the documentation to explain how users can revoke their consent for data usage, how long their data will be retained, and the specific measures the system uses to protect their privacy. This level of transparency builds user trust and aligns with privacy laws like GDPR.
6.3.3.6 Relevant Authorities
Documentation Requirements: Relevant authorities, such as regulators and notified bodies, do not generate their own documentation but instead receive and review documentation to ensure regulatory compliance and policy enforcement. This includes reports on AI system transparency, accountability measures, and ethical considerations submitted by AI providers. Documentation should also include detailed records of transparency measures, accountability protocols, and data protection strategies, particularly for high-risk AI systems. Regulators may also request audits of the AI system's ethical frameworks, such as bias detection and mitigation procedures, and evidence of periodic reviews to ensure continuous compliance.

EXAMPLE: A notified body needs access to detailed documentation, including performance metrics, algorithmic transparency reports, and conformity assessment records, to verify that high-risk AI systems meet EU requirements before being placed on the market. Similarly, a regulator might review documentation that outlines how an AI system complies with applicable legislation, such as GDPR, including records of data protection impact assessments, to ensure that the system adheres to societal and ethical norms. In this context, the documentation should also outline any corrective actions taken in response to compliance failures, including updates to the AI system or adjustments to its deployment process. This helps regulators ensure that the system remains compliant over time and that any issues are promptly addressed.
6.4 AI-system life cycle
An AI system life cycle encompasses the comprehensive series of stages involved in the creation, deployment, and maintenance of an AI-based system. This life cycle helps in structuring the development process to ensure effective, reliable, and ethical AI-based solutions. The life cycle typically includes several phases, each critical to the success of the AI-based system. Various AI-system life cycle descriptions have been developed for different purposes and, accordingly, emphasize different aspects and vary in their level of detail. One of the best known is the Cross-Industry Standard Process for Data Mining (CRISP-DM) [i.43], which has become the de facto standard process model for data mining, analytics, and machine learning projects. Its focus is on providing a clearly defined, process-oriented framework that enables companies to carry out data mining projects efficiently and effectively. At the same time, however, it lacks the perspective of other stakeholders, like the persons affected. ISO/IEC 22989 [i.4], clause 6, describes a life cycle process that covers all major aspects from the 'inception stage' (requirements engineering) to the 'retirement' of a system. As a drawback, it lacks the granularity that might help to inspect valuable aspects regarding transparency. ISO/IEC TR 29119-11 [i.23], Figure A.2, describes a detailed machine learning workflow that covers all relevant aspects but omits some crucial feedback loops. To be as generically applicable as possible, an adaptation of a generic AI-system life cycle, called the long chain of responsibilities [i.42], is used in the present document. This concept emphasizes the necessity of considering a broad spectrum of responsibilities across different stages of AI system development and deployment, from the conceptualization and design phases through to real-world applications and impacts.
It is very similar to the machine learning workflow proposed by ISO/IEC TR 29119-11 [i.23] but uses the same workflow terminology as ISO/IEC 22989 [i.4]. Its major drawback for the purpose of the present document is that it does not take the requirements engineering phase into account. Therefore, the long chain of responsibilities proposed by [i.42] has been extended by [i.39], explicitly to analyse possible mechanisms that provide transparency. The life cycle model outlined in the present document distinguishes between a System Life Cycle, a Data Life Cycle, and a Model Life Cycle, as depicted in Figure 2 and described below.

System Life Cycle: For the system life cycle, the phases are differentiated as Inception, Analysis & Design, Implementation & Integration, and Deployment & Operation. These phases further encompass phases from the data and model life cycles. For documentation purposes, special attention is given to KPI and Requirements Gathering as well as to the Data Life Cycle and the Model Life Cycle:
• KPI and Requirements Gathering: This phase is the first step in the system development process, involving a systematic approach to identify, specify, and manage both requirements and Key Performance Indicators (KPIs). The aim is to understand the needs of customers, legal obligations, and other relevant factors. The outcome of this process is a set of documents that detail various requirements for different stakeholders, specifying what the AI-based system is expected to achieve. These documents also include informal target criteria, benefit and risk assessments, as well as clearly defined KPIs, which serve as measurable benchmarks to assess the system's performance against its intended goals.

Data Life Cycle: Refers to the various stages that a dataset goes through, from its initial collection to its eventual deletion.
Below, the different stages in the data life cycle are described:
• Data Collection & Extraction: Data can be freshly collected from various sources such as sensors, databases, and surveys. Alternatively, data can be extracted or created through processes like simulations, experiments, or computational models. The process of data collection may be subject to legal constraints and may also need to satisfy specific requirements. This phase of the data life cycle falls under the Inception, Analysis & Design phase of the system life cycle.
• Data Preparation & Processing: The construction of a data set consists of multiple phases that depend on the specific data at hand and the task to be performed. This phase falls under the Implementation & Integration phase of the system life cycle. In the context of a classification task, the following preparation and processing can be performed:
- Data labelling: the output variable to be predicted needs to be identified if it is already part of the data, or labelled by hand if it is not.
- Data cleaning: redundant information in the data as well as erroneous or missing values need to be dealt with.
- Data transformation: parts of the data may need to be transformed from one format or structure to another.
- Data integration: data collected or extracted from multiple sources have to be combined to create a unified dataset.
- Data storage: storing the processed data in a way that ensures it is secure, organized, and accessible.
• Data Monitoring & Maintenance: it is important to ensure that data remain accurate, up-to-date, and usable over time. This phase falls under the Deployment & Operation phase of the system life cycle. This may be achieved through the following activities:
- Data updates: regularly updating the dataset with new data or correcting outdated information.
- Data quality monitoring: continuously checking data for issues such as errors, inconsistencies, degradation, or drifts.
- Data deletion and destruction: permanently removing data that are no longer needed or should be deleted for compliance or privacy reasons.

Model Life Cycle: Refers to the stages an AI or machine learning model goes through, from its initial design through to its eventual retirement. Below, the different stages of the model life cycle are described:
• Experimentation: There is a plethora of choices to be made when settling on a machine learning method. Various model types, such as an artificial neural network, a support vector machine, or a decision tree, might be viable choices. Each type needs to be specified in terms of hyperparameters (in the context of an ANN, for example, the number of layers, the number of neurons per layer, the activation function(s), the learning rate, the batch size, and the stopping criterion). There are various software packages and tools that may hide some of the actual complexity behind such approaches and set parameters to default values. One can either choose a method for which the data is suitable and/or which requires little preprocessing, or design the details of the preprocessing with the chosen procedure in mind. This phase of the model life cycle falls under the Inception, Analysis & Design phase of the system life cycle.
• Model Training & Model Evaluation: involves both training the model and evaluating its performance on the training set to assess its ability to learn patterns. This phase falls under the Implementation & Integration phase of the system life cycle. Below are some activities undertaken in this stage:
- Training the model: involves feeding training data to the model to learn patterns and relationships.
- Hyperparameter tuning: optimizing hyperparameters (e.g. learning rate, number of layers) to maximize performance.
- In-training evaluation: assessing model performance on the training data by measuring metrics like accuracy, loss, or error rates.
• Model Validation: involves testing the model on a separate validation or test dataset to ensure it generalizes well to unseen data. This phase falls under the Implementation & Integration phase of the system life cycle. Some activities involved in model validation include:
- Cross-validation: techniques like k-fold cross-validation ensure that the model does not overfit the training data and works well with new data.
- Bias and fairness checks: examining the model for potential biases in its predictions to ensure fair outcomes, especially in critical applications.
• Model Integration & Deployment: in this stage, the trained model is deployed into a production environment and integrated into real-world systems to perform an intended task. This phase falls under the Implementation & Integration phase of the system life cycle. Below are some activities undertaken at this stage of the life cycle:
- Infrastructure setup: ensuring computational resources are adequate for production.
- Security: implementing robust security protocols to protect the model and the underlying data.
• Model Monitoring & Maintenance: involves continuously tracking the model's performance and maintaining its quality over time. This phase falls under the Deployment & Operation phase of the system life cycle. The following activities are involved:
- Performance monitoring: tracking real-time performance metrics to detect issues like data drift.
- Error handling: managing and addressing any performance drops or issues detected post-deployment.
- Model retraining: regularly updating models with new data to keep them relevant.

Figure 2: High-level overview of system, model and data life cycle phases

While procedures for each phase can and should be documented once they are decided upon, they may be adapted during execution, which makes it important to revise the documentation after execution; in the case of recurrent procedures, each time.
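The assignment of data and model life-cycle phases to system life-cycle phases described above can be summarized in a small lookup table. The phase names follow the text; the dictionary representation itself is a purely illustrative sketch:

```python
# Illustrative mapping of the data and model life-cycle phases onto the
# four system life-cycle phases, as described in clause 6.4.
SYSTEM_PHASE = {
    # data life cycle
    "Data Collection & Extraction": "Inception, Analysis & Design",
    "Data Preparation & Processing": "Implementation & Integration",
    "Data Monitoring & Maintenance": "Deployment & Operation",
    # model life cycle
    "Experimentation": "Inception, Analysis & Design",
    "Model Training & Model Evaluation": "Implementation & Integration",
    "Model Validation": "Implementation & Integration",
    "Model Integration & Deployment": "Implementation & Integration",
    "Model Monitoring & Maintenance": "Deployment & Operation",
}

def phases_in(system_phase: str) -> list:
    """List the data/model phases whose documentation falls under one system phase."""
    return [p for p, s in SYSTEM_PHASE.items() if s == system_phase]
```

Such a table can help decide, for a given documentation trigger (e.g. retraining), which system phase's documentation needs to be revised.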
In the case of continuous procedures, the documentation needs to be revised on a regular basis.

6.5 Documentation Techniques for Effective Information Management
6.5.1 General
An AI documentation approach serves as a high-level strategy or framework that outlines how documentation is created, organized, and maintained. It defines the what (the content and scope) and why (the purpose and goals) of documentation. In contrast, a documentation technique refers to the specific methods or tools used to create, organize, or present documentation. Techniques represent the how of documentation - they are the practical steps, formats, or tools employed to bring the broader documentation approach to life. Because of this complementary relationship, an AI documentation approach frequently incorporates one or more documentation techniques to achieve its objectives. For example, a high-level approach like Model Cards (used for documenting machine learning models) might employ techniques such as questionnaires, templates, and visual documentation to implement the strategy effectively. In this way, the approach provides the overarching framework, while the techniques offer the practical means to execute it. This clause provides a comprehensive overview of key documentation techniques, their advantages, and challenges, helping teams choose the right tools and methods to create clear, accessible, and effective AI documentation.
6.5.2 (Motivation and) Overview
A documentation technique is a combination of a specific technique to represent information (e.g. text) and a specific format (e.g. a list). Documentation techniques are the foundation of specific documentation approaches and methodologies (e.g. Model Cards). Each approach is based on at least one documentation technique. While the documentation item determines which documentation techniques might be applicable, fitting documentation approaches can be chosen to address the specific needs of stakeholder groups. This clause provides an overview of common modalities and formats, as well as known documentation approaches, and how they relate to the different stakeholders, documentation items, and regulatory requirements. These relationships can serve as guidance for choosing suitable options tailored to specific needs. They also provide an overview of compatible and complementary approaches. If one approach covers only a subset of the documentation items, other suitable approaches can be identified to complement it in order to cover the remaining items. Additionally, the relationships help identify which items may need to be requested in addition to those provided by a supplier relying on a given approach. This ensures that specific stakeholder needs or regulatory requirements are fully accommodated. The relationships can also indicate which items can be represented in different approaches in case the documentation needs to be converted or integrated from one approach into another. Existing standards that can be considered with regard to the modalities, approaches, stakeholders, and quality aspects are summarized as well. Documentation techniques make it possible to show both how quality requirements and how legal obligations for AI systems have been met. In research, legislative practice, and organizational collaboration, documentation techniques can be differentiated with regard to text, image, and interactive paradigms.
On the one hand, text documentation ensures that the architecture and processes of the system are transparent and compliant with industry standards, supporting consistent performance. Datasheets for data sets, on the other hand, verify the quality of the data, e.g. to ensure that a data set is accurate, representative and free from unwanted bias, which is critical for compliance with ethical and legal standards. Additionally, process flowcharts and diagrams increase system clarity, make performance issues assessable and ensure continuous quality improvement.
6.5.3 Questionnaires
Questionnaires are structured sets of questions designed to gather specific information in a systematic way. They are often used to collect metadata, feedback, or details about datasets, models, or processes.

• Advantages:
- Ensures consistency in data collection.
- Easy to distribute and analyse.
- Useful for large teams or standardized processes.

• Challenges:
- Limited flexibility in capturing nuanced or domain-specific details.
- May require follow-up for clarification.
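The structured nature of questionnaires also lends itself to automation. The following minimal Python sketch (question texts and answers are illustrative examples, not a standardized template) shows how unanswered questions can be flagged for the follow-up the technique typically requires:

```python
# Sketch: a questionnaire held as structured data so that completeness
# can be checked automatically. Question texts and answers below are
# illustrative examples, not a standardized template.

QUESTIONS = [
    "What is the intended purpose of the dataset?",
    "How was the data collected?",
    "Which preprocessing or labelling steps were applied?",
]

def unanswered(responses: dict) -> list:
    """Return the questions that still lack a non-empty answer."""
    return [q for q in QUESTIONS if not responses.get(q, "").strip()]

responses = {
    "What is the intended purpose of the dataset?": "Benchmarking OCR models",
    "How was the data collected?": "",  # left blank: needs follow-up
}
```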
6.5.4 Information Sheets
Information sheets are static documents that provide detailed information in a narrative or report-like style. They are often used to communicate key details about a system, model, or dataset.

• Advantages:
- Comprehensive and detailed.
- Highly customizable for specific audiences.
- Useful for regulatory compliance and formal reporting.

• Challenges:
- Static nature limits real-time updates.
- Can become lengthy and difficult to maintain.

ETSI TR 104 119 V1.1.1 (2025-09)
6.5.5 Checklists
Checklists are lists of items, tasks, or requirements that need to be completed or verified. They ensure consistency and completeness in processes.

• Advantages:
- Simple and easy to use.
- Ensures no steps are missed.
- Useful for compliance and quality assurance.

• Challenges:
- May oversimplify complex processes.
- Requires regular updates to remain relevant.
6.5.6 Templates
Templates are predefined structures or formats for documenting information. They ensure consistency across documents and make it easier to create new documentation.

• Advantages:
- Saves time and effort.
- Ensures uniformity across documents.
- Easy to customize for different use cases.

• Challenges:
- May not fit all documentation needs.
- Requires initial setup and maintenance.
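As a minimal illustration, the sketch below builds a documentation template using only the Python standard library; the template fields are illustrative placeholders:

```python
# Sketch: a reusable documentation template built on the standard
# library. The field names are illustrative placeholders.
from string import Template

MODEL_SUMMARY = Template(
    "Model: $name (version $version)\n"
    "Intended purpose: $purpose\n"
)

def render(**fields) -> str:
    # safe_substitute leaves missing fields visible as "$field" instead
    # of raising, so incomplete documents are easy to spot in review.
    return MODEL_SUMMARY.safe_substitute(**fields)

doc = render(name="triage-classifier", version="1.2", purpose="ticket routing")
```

Leaving unfilled placeholders visible, rather than failing, supports the review step: a reviewer can see at a glance which parts of the template were never completed.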
6.5.7 White Papers
White papers are authoritative reports that provide in-depth information on a specific topic, often used to explain methodology, results, and implications.

• Advantages:
- Highly detailed and formal.
- Useful for communicating complex ideas to a technical audience.
- Builds credibility and authority.

• Challenges:
- Time-consuming to produce.
- May not be accessible to non-technical stakeholders.
6.5.8 Knowledge Graphs
Knowledge graphs are network representations of information that show relationships between different entities. They help in understanding complex systems and their interconnections.

• Advantages:
- Provides a holistic view of complex systems.
- Can be queried programmatically for insights.
- Useful for organizing and visualizing relationships.

• Challenges:
- Requires expertise in graph theory and knowledge representation.
- May be excessive for simple systems.
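Because knowledge graphs can be queried programmatically, even a plain triple store illustrates the idea. A minimal Python sketch with illustrative entity names:

```python
# Sketch: a knowledge graph as a set of subject-predicate-object
# triples with a small query helper. Entity names are illustrative.

TRIPLES = {
    ("model_a", "trained_on", "dataset_x"),
    ("dataset_x", "collected_by", "team_1"),
    ("model_a", "evaluated_with", "benchmark_y"),
}

def objects(subject: str, predicate: str) -> set:
    """All objects linked to `subject` via `predicate`."""
    return {o for s, p, o in TRIPLES if s == subject and p == predicate}
```

For example, `objects("model_a", "trained_on")` answers which dataset the model was trained on. A real deployment would typically use an RDF store or graph database rather than an in-memory set.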
6.5.9 Visual Techniques (Diagrams, Flowcharts, Infographics)
Visual documentation uses elements like diagrams, flowcharts, and infographics to communicate complex information in an intuitive way.

• Advantages:
- Simplifies complex systems and processes.
- Improves accessibility for non-expert stakeholders.
- Enhances understanding through visual representation.

• Challenges:
- May oversimplify details.
- Requires design skills to create effective visuals.
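Diagrams can also be generated from data so they stay in sync with the rest of the documentation. A minimal Python sketch that emits Graphviz DOT text (the pipeline steps are illustrative):

```python
# Sketch: generating Graphviz DOT text from data, so a flowchart can be
# versioned alongside the rest of the documentation. The pipeline steps
# are illustrative.

STEPS = ["collect data", "train model", "evaluate", "deploy"]

def to_dot(steps) -> str:
    edges = "\n".join(f'  "{a}" -> "{b}";' for a, b in zip(steps, steps[1:]))
    return "digraph pipeline {\n" + edges + "\n}"
```

Keeping the diagram source as text makes it diffable and versionable, which mitigates the maintenance challenge of hand-drawn visuals.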
6.5.10 Interactive Techniques
Interactive documentation allows users to interact with the content, for example by running code, exploring data, or navigating through dynamic elements.

• Advantages:
- Provides hands-on learning experiences.
- Encourages exploration and experimentation.
- Supports real-time updates and collaboration.

• Challenges:
- Requires technical infrastructure and expertise.
- May not be accessible to non-technical users.
6.5.11 Domain-Specific Language (DSL)
A domain-specific language is a specialized programming or markup language designed for a particular application domain, used to create structured, machine-readable documentation.

• Advantages:
- Ensures standardization and precision.
- Easy to integrate into automated pipelines.
- Tailored to the specific needs of a domain.

• Challenges:
- Requires domain expertise to create and understand.
- Limited usability for non-technical stakeholders.
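As a minimal illustration, the sketch below parses a tiny, hypothetical line-based documentation DSL into a machine-readable dictionary; the field names are invented for this example and do not refer to any standardized DSL:

```python
# Sketch: parsing a tiny, hypothetical line-based documentation DSL
# into a dictionary. The field names are invented for this example.

DOC = """\
model: credit-scoring-v2
task: binary classification
metric accuracy: 0.91
"""

def parse(text: str) -> dict:
    entries = {}
    for line in text.splitlines():
        if ":" in line:
            key, value = line.split(":", 1)  # split on the first colon only
            entries[key.strip()] = value.strip()
    return entries
```

Once parsed, such entries can feed automated pipelines, which is the main advantage claimed for DSL-based documentation above.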
6.5.12 Summary
A documentation approach and documentation techniques work hand in hand. The approach defines the overall strategy and goals, while the techniques provide the practical tools and methods to implement that strategy. For example, a Model Card (approach) might use questionnaires, templates, and visual documentation (techniques) to achieve its goal of providing transparent and standardized documentation for a machine learning model. By understanding this relationship, teams can effectively combine high-level strategies with practical tools to create documentation that is both comprehensive and accessible.

In Tables 1 and 2, these documentation techniques are mapped to the existing AI documentation approaches as listed in Annex D. The techniques that are best aligned with the specific needs and responsibilities of the relevant stakeholders are proposed.

Table 1: Mapping of Documentation Techniques to AI Documentation Approaches

• Questionnaire-Based: Datasheet for Datasets (see clause D.4.3)
• Information Sheet (Static Document): Model Facts Label (see clause D.4.1); Model Cards (see clause D.2.1); Method Card (see clause D.2.2); Risk Cards (see clause D.4.2); FactSheets (see clause D.3.1)
• Interactive Techniques: Dataset Nutrition Label (see clause D.1.3); Data Cards (see clause D.1.4)
• Domain-Specific Language: DescribeML (see clause D.1.2)
• Visual Techniques: System Cards (see clause D.3.2)

Table 2: Documentation Techniques Aligned with Stakeholder Needs and Responsibilities

• AI Provider: Interactive Techniques; Visual Techniques
- Reason: AI providers need to ensure transparency for a wide range of stakeholders, including customers, regulators and partners. Web-based and Google Docs formats allow for real-time updates, collaboration, and version control, which are critical for maintaining compliance with evolving regulations like the EU AI Act and GDPR. Visual formats also simplify complex technical details for non-technical audiences.
- Example: An AI provider offering a facial recognition system could use a web-based dashboard to document model performance metrics, bias mitigation strategies, and GDPR compliance. Infographics could summarize how the system handles data privacy and user consent.

• AI Producer: Domain-Specific Language; Interactive Techniques; Visual Techniques
- Reason: AI producers require highly detailed and technical documentation to track the development life cycle, including design, testing, and deployment. A DSL ensures precision in documenting technical specifications, while web-based formats support traceability and version control. Visual formats help communicate testing outcomes and compliance records to internal and external auditors.
- Example: A producer developing a medical AI system could use a DSL to document data preprocessing techniques, model selection and validation results. Web-based formats could be used to document updates and retraining processes, while infographics could illustrate preprocessing pipelines and model architecture for clarity during audits.

• AI Customer: Information Sheet; Interactive Techniques; Visual Techniques
- Reason: AI customers need clear, concise and user-friendly documentation to understand how to integrate and operate AI systems within their workflows. Static documents and infographics provide easy-to-digest summaries of system capabilities, limitations, and compliance certifications. Web-based formats ensure access to the latest updates and operational guides.
- Example: A customer using an AI-powered recruitment tool could receive a static document summarizing the system's fairness metrics and compliance with hiring regulations. A web-based portal could provide step-by-step instructions for integrating the tool into the HR systems.

• AI Partner: Domain-Specific Language; Questionnaire-Based
- Reason: AI partners, such as system integrators and auditors, require detailed and structured documentation to perform their specialized tasks. A DSL ensures precision in integration manuals and API documentation, while questionnaires help auditors gather specific information for compliance assessments.
- Example: An AI auditor evaluating a recruitment tool could use a questionnaire to document details on training data, bias detection methods, and performance outcomes.

• AI Subject: Information Sheet; Visual Techniques
- Reason: AI subjects, such as data subjects or end-users, need transparent and accessible documentation to understand how their data is used and the implications of AI-driven decisions. Static documents and infographics simplify complex concepts and ensure compliance with transparency requirements under regulations like GDPR.
- Example: A data subject using a healthcare AI app could receive a static document explaining how their data is processed, their rights to opt out, and the measures in place to protect their privacy. Infographics could illustrate the data life cycle and anonymization techniques.

• Relevant Authorities: Interactive Techniques; Visual Techniques
- Reason: Regulators and policymakers require comprehensive and accessible documentation to verify compliance with legal and ethical standards. Web-based and Google Docs formats facilitate the submission, review, and updating of compliance reports. Visual formats help present transparency measures, accountability protocols, and ethical considerations in a clear and concise manner.
- Example: A notified body assessing a high-risk AI system could review web-based documentation detailing performance metrics, algorithmic transparency, and conformity assessments. Infographics could summarize bias mitigation strategies and data protection measures.
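The mapping in Table 1 can also be held as a machine-readable lookup, e.g. to find which approaches rely on a given technique. The following Python sketch transcribes one reading of the table (clause references omitted for brevity):

```python
# Sketch: Table 1 as a lookup structure. The mapping below transcribes
# one reading of the table; clause references are omitted for brevity.

TECHNIQUE_BY_APPROACH = {
    "Datasheet for Datasets": "Questionnaire-Based",
    "Model Facts Label": "Information Sheet",
    "Model Cards": "Information Sheet",
    "Method Card": "Information Sheet",
    "Risk Cards": "Information Sheet",
    "FactSheets": "Information Sheet",
    "Dataset Nutrition Label": "Interactive Techniques",
    "Data Cards": "Interactive Techniques",
    "DescribeML": "Domain-Specific Language",
    "System Cards": "Visual Techniques",
}

def approaches_using(technique: str) -> list:
    """All approaches that rely on the given technique, sorted by name."""
    return sorted(a for a, t in TECHNIQUE_BY_APPROACH.items() if t == technique)
```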
6.6 Quality aspects of documentation
6.6.1 General
The information contained in the technical documentation should follow established principles of information quality. ISO/IEC/IEEE 26514 [i.24], clause 7, identifies six key principles: correctness, consistency, comprehensibility, conciseness, minimalism, and accessibility.
6.6.2 Correctness
The information provided in the technical documentation should accurately reflect the AI system's actions and expected results for the specific version being documented. This includes details on functionalities, limitations, and behaviour. Any updates or changes made to the AI system (e.g. new features, bug fixes) should be reflected promptly and correctly in the corresponding documentation.
6.6.3 Consistency
The technical documentation should maintain a consistent structure and layout. Consistency applies to all elements including screens, pages, text formatting (headings, spacing, fonts), graphics, icons, colours, signal words, and audio-visual elements. Additionally, consistent terminology should be used for user interface elements, data, fields, tasks, pages, and processes within the documentation and the AI system itself.
6.6.4 Comprehensibility
The technical documentation should be easily understood by all relevant stakeholders. Information should be readily understood by the least experienced stakeholder within the expected audience. This is particularly important when serving a diverse user base with varying levels of experience, skills, training and knowledge. Terminology selection plays a vital role in achieving comprehensibility. Technical documentation should opt for terms commonly used within the stakeholder's environment or the application domain. For instance, it is preferable that documentation for a medical AI system employs medical terminology readily understood by healthcare professionals, rather than complex technical terms related to the underlying algorithms. Usability testing can be employed to validate the comprehensibility of the documentation.
6.6.5 Conciseness
Information within the technical documentation should be presented concisely, both in terms of format and media, avoiding unnecessary repetition or duplication. While repetition can be a useful tool for educational purposes, technical documentation should prioritize clarity and efficiency.
6.6.6 Minimalism
The technical documentation should be minimal, containing only essential information needed for stakeholders to understand concepts, perform tasks, and troubleshoot issues. Technical documentation should avoid including content that is not strictly necessary for accomplishing these objectives. A minimalist approach ensures stakeholders are not overwhelmed by extraneous information and can focus on the core functionalities of the AI system. At the same time, however, minimalism should not compromise the completeness of the technical documentation.
6.6.7 Accessibility
The technical documentation should be accessible to all expected stakeholder groups, regardless of their abilities or environments, considering factors like language, format, and accessibility needs. This includes ensuring technical availability, legibility, and findability of the information. For example, documentation for visually impaired stakeholders may require alternative formats such as screen-reader-compatible text. Websites and mobile applications containing the documentation should adhere to accessibility guidelines outlined in relevant standards and established good practices. Guidelines concerning accessibility are, for example, provided in ETSI EG 204 061 [i.25].

The principles of information quality may need to be implemented differently depending on the target audience, the risk level of the AI application, and the domain. In addition, trade-offs between principles should be considered, e.g. conciseness vs. comprehensibility, correctness vs. accessibility, comprehensibility vs. minimalism. Finally, EN 301 549 [i.26] provides some hints on documenting accessibility and compatibility features, as well as making documentation accessible (clause 12.1 in particular), relationships to Directive 2016/2102 on Web Accessibility [i.51], and assessment criteria for determining conformance.
6.6.8 Systematic understanding
A systematic understanding refers to a structured and comprehensive approach to comprehending complex systems, processes, or subjects. In the context of AI systems, it involves an organized knowledge of how various components, such as data, algorithms, and the corresponding infrastructure, interact.
6.7 Documentation Approach
To derive a structured documentation approach, the present document describes the basic understandings and prerequisite considerations in clauses 4, 6 and 7. Already existing approaches are discussed in clause 5. From this foundation, the following structured documentation approach is compiled in three main steps (see also Figure 3 for a visualization):

Step 1: Understand and identify the purpose of the documentation artifacts

In this step, an understanding of the motivation and purpose for the documentation activity (see clause 4) is developed and defined. In particular, the applicable requirements from regulatory obligations (see clause 7.3 with reference to the EU AI Act) should be selected.

Step 2: Identify the selected documentation aspects per document

During this step, one or more documents (documentation artifacts) should be identified. To structure this identification process, several aspects of the intended document(s) should be selected. Each of the following aspects depends on the documentation purpose identified in step 1:

• Identification of the documentation item (i.e. the subject of documentation) - see clause 6.2.
• Identification of the documentation stakeholders (audience, authors, involved parties) - see clause 6.3.
• Identification of the phase(s) within the AI system life cycle when the document is to be created/maintained - see clause 6.4.
• Identification of the documentation technique(s) which best serves the intended purpose for the identified stakeholders - see clause 6.5.

The aspects listed above can be considered specific to each identified document (documentation artifact) and furthermore have cross-dependencies among each other. For example, the identified audience (stakeholders) may suggest particular documentation techniques fitting their level of understanding, or the type of document in focus can only be created within a specific phase of the AI life cycle.

Step 3: Identify the document contents (information elements) and create/assemble the document

In this step, each of the identified documents is detailed with reference to its contents (information elements, see clause 6.2) and then finally created or compiled. To support the process of deriving the document contents, the following activities are suggested:

• For high-risk AI systems and the need to comply with the EU AI Act: consideration of recommended documentation approaches as described within clauses 7.3.2 to 7.3.8.
• Selection of an already existing documentation scheme (see clause 5.1 and Annex D) which fits the defined documentation aspects of step 2.
• For technical documentation required by the EU AI Act for high-risk AI systems (see clause 7.3.8): selection of applicable documentation templates.
• For other documentation: identification of the information elements according to the defined documentation aspects of step 2.
• Creation of the documents by describing or filling in the information elements.

Figure 3: Documentation approach depicted as structured workflow
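The aspects selected in step 2 can be captured as a small record per planned document. A minimal Python sketch with illustrative field values:

```python
# Sketch: the step-2 aspects of the documentation approach captured as
# one record per planned document. Field values are illustrative.
from dataclasses import dataclass

@dataclass
class DocumentPlan:
    purpose: str        # step 1: why the document exists
    item: str           # clause 6.2: subject of documentation
    stakeholders: list  # clause 6.3: audience and authors
    phase: str          # clause 6.4: when it is created/maintained
    technique: str      # clause 6.5: how it is represented

plan = DocumentPlan(
    purpose="EU AI Act technical documentation",
    item="ML model",
    stakeholders=["AI Provider", "Notified Body"],
    phase="verification and validation",
    technique="Information Sheet",
)
```

Keeping one such record per documentation artifact also makes the cross-dependencies between aspects explicit and reviewable.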
7 Guidance for EU AI Act Compliant Documentation
7.1 Introduction to the EU AI Act
The EU Artificial Intelligence Act (AI Act), officially Regulation (EU) 2024/1689 [i.2], is a comprehensive regulatory framework aimed at ensuring trustworthy, human-centered AI in Europe. Its goal is to protect health, safety, fundamental rights, and EU values while encouraging AI innovation and implementation. The AI Act applies to any AI system marketed or used within the EU, regardless of the provider's geographical location. It adopts a risk-based approach to regulation, classifying AI systems into four categories: prohibited, high-risk, limited-risk, and minimal-risk. These are often illustrated as a pyramid (see Figure 4), where regulatory obligations increase with risk severity.

At the top of the pyramid are prohibited AI practices (Art. 5), which are banned outright due to their inherent threat to fundamental rights or safety. These include systems for real-time biometric identification in public by law enforcement (with narrow exceptions), social scoring, and manipulative or exploitative AI targeting vulnerable groups. Such systems cannot be placed on the market under any conditions.

High-risk AI systems (Chapter III) form the core of the regulatory framework. These include AI used in critical domains such as law enforcement, critical infrastructure, employment, education, and health. The EU AI Act requires that providers of high-risk systems comply with stringent obligations, including conformity assessment and comprehensive documentation. These systems are the focus of the documentation guidance in this clause.

Below that are AI systems subject to transparency obligations (Art. 50), such as chatbots, deepfake generators, or biometric categorization tools. While not high-risk, the EU AI Act requires that users be informed of their AI nature to ensure minimal transparency. These fall under the limited-risk tier.

At the base of the pyramid are minimal-risk systems, including most AI used for personal, recreational, or low-impact applications. These systems are not regulated under the Act, though voluntary codes of conduct and good documentation practices are encouraged.

Obligations for General Purpose AI (GPAI) models, especially those with systemic risk (Art. 51), are addressed in a dedicated subsection, in line with Chapter V of the Act. For a concise overview of the foundational pillars and operationalization of Trustworthy AI in alignment with the EU AI Act, see Annex B.

Figure 4: EU AI Risk Pyramid

The predominant regulatory responsibilities under the AI Act are imposed on providers, entities that develop or market AI systems, especially concerning high-risk AI systems. This encompasses providers based outside the EU when their systems or outputs are utilized within the EU. Although deployers (professional users) have responsibilities, these are more limited in scope. The AI Act delineates the necessary compliance components in legal terminology, but does not provide recommendations on how requirements should be documented. This clause examines the responsibilities of high-risk AI system providers, offering a comprehensive analysis of the documentation requirements mandated by Art. 9 to 15. The present document offers practical guidance for organizing documentation in compliance with international standards (ISO/IEC 22989 [i.4], ISO/IEC 24028 [i.1] and ISO/IEC 42001 [i.10]) and incorporates prominent documentation techniques and approaches evaluated in the present document. The objective is to assist providers in generating thorough, verifiable proof of compliance.
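The risk tiers and their headline obligations can also be summarized in a small lookup structure. This Python sketch is only an illustration; actual classification follows the Act's detailed criteria, not a keyword table:

```python
# Sketch: the four risk tiers and their headline obligations as a
# lookup. Illustration only; real classification follows the detailed
# criteria of the Act, not a table like this.

TIER_OBLIGATIONS = {
    "prohibited": "may not be placed on the market (Art. 5)",
    "high-risk": "conformity assessment and technical documentation (Chapter III)",
    "limited-risk": "transparency obligations, e.g. disclosing the AI nature (Art. 50)",
    "minimal-risk": "not regulated; voluntary codes of conduct encouraged",
}

def obligations(tier: str) -> str:
    return TIER_OBLIGATIONS[tier.lower()]
```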
7.2 Mapping of EU AI Act Stakeholder
The EU AI Act introduces a legal framework that closely maps to the roles given in clause 6.3, assigning specific responsibilities to each stakeholder to ensure AI systems meet safety, transparency, and ethical standards. Both the ISO and the EU AI Act frameworks aim to clarify who is accountable for different aspects of AI systems' development and usage, promoting trust through detailed record-keeping and compliance documentation. While ISO/IEC 22989 [i.4] focuses on functional roles (who does what in practice), the AI Act focuses on legal accountability (who is responsible under the law). Thus, some EU AI Act roles, such as provider, map to multiple ISO roles, depending on whether the stakeholder is creating, modifying, integrating, or deploying the AI system. Regulatory bodies under the AI Act (market surveillance authorities, notified bodies, etc.) are clearly represented in the ISO standard under "Regulators" and "Policy Makers".

Figure 5: Mapping of ISO/IEC 22989 [i.4] Stakeholders to EU AI Act Roles

Figure 5 illustrates how the stakeholder roles defined in the ISO/IEC 22989 [i.4] standard correspond to the roles established under the EU AI Act. It provides a visual mapping of these stakeholders, showing overlaps and distinctions, and uses colour coding to highlight specific responsibilities, such as "Providers", "Distributors", "Deployers", and "Relevant Authorities". This visual helps clarify how the ISO standard and EU AI Act terminology align to support accountability across the entire AI life cycle. Table 3 provides additional explanations regarding the mapping.

Table 3: Mapping of EU AI Act Roles to ISO/IEC 22989 [i.4] Stakeholders

• Provider
- ISO/IEC 22989 stakeholder(s): AI Provider; AI Producer; AI Partner (Auditor/Evaluator, when ensuring conformity); AI Customer (in internal deployment scenarios)
- Explanation: The entity placing the AI system on the market or putting it into service under their name. The ISO AI Provider is the most direct match. If the system is developed in-house, the AI Producer or AI Customer can also act as the Provider.

• Importer
- ISO/IEC 22989 stakeholder(s): AI Provider (when importing and assuming compliance responsibilities)
- Explanation: If the importer places the system on the EU market under their own name or brand, they functionally become an AI Provider under ISO.

• Distributor
- ISO/IEC 22989 stakeholder(s): AI Partner; AI Provider (if distributing under their own name or modifying the system)
- Explanation: If they distribute unchanged systems, they act more as a commercial partner. If they modify or rebrand, they align with the ISO AI Provider.

• Authorized Representative
- ISO/IEC 22989 stakeholder(s): AI Partner
- Explanation: Represents non-EU Providers for regulatory compliance. Acts as an intermediary stakeholder in ISO but typically supports Provider obligations.

• Deployer
- ISO/IEC 22989 stakeholder(s): AI Customer; AI Producer; AI Partner (e.g. system integrator)
- Explanation: Entities that use AI systems under their authority. In ISO, AI Customer is the closest equivalent. AI Producer or AI Partner may also deploy systems internally or as part of integration.

• Affected Person
- ISO/IEC 22989 stakeholder(s): AI Subject
- Explanation: Individuals impacted by the AI system's outputs or decisions. This is a direct mapping to ISO's AI Subject.

• Market Surveillance Authority
- ISO/IEC 22989 stakeholder(s): Regulator
- Explanation: Ensures marketplace compliance. Directly aligns with the ISO role of Regulator.

• Data Protection Authority
- ISO/IEC 22989 stakeholder(s): Regulator
- Explanation: Oversees data governance and privacy compliance, particularly for systems processing personal data.

• Notified Body
- ISO/IEC 22989 stakeholder(s): AI Partner (Evaluator, Auditor); Regulator (in conformity roles)
- Explanation: Performs third-party conformity assessments under the EU AI Act. ISO refers to these stakeholders as either evaluators (AI Partner) or part of the regulatory oversight framework.

• European Commission / AI Office
- ISO/IEC 22989 stakeholder(s): Policy Maker
- Explanation: Coordinates the implementation and governance of the EU AI Act across the EU. Directly matches ISO's Policy Maker.

NOTE: Although ISO roles are used almost throughout the present document, the EU AI Act nomenclature is used in this clause to simplify the reference to the AI Act.
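A condensed subset of the Table 3 correspondences can be kept machine-readable, e.g. when translating role names between the two vocabularies in documentation tooling. A minimal Python sketch (only primary correspondences are included):

```python
# Sketch: a condensed subset of the Table 3 role mapping, useful when
# translating documentation between the two vocabularies. Only primary
# correspondences are kept; see Table 3 for the full mapping.

ISO_BY_EU_ROLE = {
    "Provider": ["AI Provider", "AI Producer"],
    "Deployer": ["AI Customer"],
    "Affected Person": ["AI Subject"],
    "Market Surveillance Authority": ["Regulator"],
    "European Commission / AI Office": ["Policy Maker"],
}

def iso_roles(eu_role: str) -> list:
    return ISO_BY_EU_ROLE.get(eu_role, [])
```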
7.3 Documentation Guidance for High-Risk AI Systems
7.3.1 General
High-risk AI systems are permitted on the EU market only if they comply with a series of essential context requirements set out in Chapter III, Section 2 of the AI Act [i.2]. These requirements span risk management, data governance, technical documentation, record-keeping, transparency, human oversight, and performance (accuracy, robustness, cybersecurity). Each context requirement corresponds to an Article (Art. 9 through 15). Providers of conforming high-risk AI systems document compliance with each of these obligations. Below, each Article's context requirement is analysed, the specific documentation tasks needed are identified, and documentation approaches or techniques are recommended. The goal is to guide providers in creating documentation that not only meets the legal minimums but is organized and effective for demonstrating compliance. The mandated context requirements for AI providers are presented in Figure 6 and further detailed in the following clauses. The identification and specification of these contextual requirements directly forms the structure and contents of the technical documentation, ensuring traceability from system objectives and constraints to documented design decisions, risk controls, and life cycle artifacts.

Figure 6: Overview of context requirements for technical documentation of High-Risk AI Systems
7.3.2 Risk Management System (Art. 9)
Context Requirement:
According to the EU AI Act, providers of high-risk AI systems establish a risk management system and operate it throughout the AI system's life cycle. This is a continuous, iterative process of identifying, analysing, and mitigating risks. The provider documents the risk management system and keeps it up to date. In practice, Art. 9 of the AI Act requires providers to perform a thorough hazard and risk assessment before market release and to update it based on post-market monitoring. Key steps include identifying reasonably foreseeable risks to health, safety, and fundamental rights; estimating and evaluating their severity and probability; implementing measures to mitigate or eliminate those risks; and testing the AI system to validate risk mitigations. The identified risks cover not just the intended use but also reasonably foreseeable misuse of the AI system.

Documentation Tasks:
The provider establishes comprehensive risk management documentation by adhering to a structured documentation process (see clause 6.7). At a minimum, the following documents are included [i.2]:
• Risk identification: The document lists identified risks to health, safety, or fundamental rights (e.g. discriminatory bias, technical malfunctions), including intended purpose, context of use, and vulnerable groups affected. (Art. 9(2)(a) and Art. 9(9))
• Risk analysis and evaluation: The document describes each identified risk under both intended purpose and reasonably foreseeable misuse scenarios. Additionally, the document sets out other risks that may arise, based on data from post-market monitoring. (Art. 9(2)(b), Art. 9(2)(c))
• Risk mitigation: The document describes the mitigation measures implemented for each risk that cannot be eliminated (e.g. design modifications, safeguards, training data improvements, warnings in the user instructions). It should map each mitigation to the corresponding risk and indicate the resulting residual risk. (Art. 9(4), Art. 9(5)(a)-(c))
• Residual risk justification: The document justifies why the residual risk is judged acceptable. (Art. 9(5))
• Risk-based testing: The document includes a description of the testing carried out to identify the most appropriate and targeted risk management measures, ensuring that the high-risk AI system performs consistently for its intended purpose. This involves testing in real-world conditions, throughout development and in any event prior to placing on the market or putting into service. Additionally, the document reflects metrics and probabilistic thresholds related to the testing procedures. (Art. 9(6), Art. 9(7), Art. 9(8), Art. 60)

[Figure: Overview of requirements for high-risk AI systems — Risk Management System (clause 7.3.2): establish and document an iterative process to manage reasonably foreseeable risks ([i.2] Art. 9); Data and Data Governance (clause 7.3.3): establish governance for and use high-quality training, validation and test data ([i.2] Art. 10); Record-Keeping (clause 7.3.4): design and implement recording capabilities incl. automatic event logging ([i.2] Art. 12); Transparency and Information to Deployers (clause 7.3.5): develop and document systems for easy understanding by users ([i.2] Art. 13); Human Oversight (clause 7.3.6): design facilities for human supervision during use by suitable interfaces ([i.2] Art. 14); Accuracy, Robustness and Cybersecurity (clause 7.3.7): attain an appropriate level of quality in accordance with the intended purpose ([i.2] Art. 15); Technical Documentation (clause 7.3.8) ([i.2] Art. 11 and Annex IV)]

ETSI TR 104 119 V1.1.1 (2025-09)

Recommended Documentation Approaches:
Providers are encouraged to adopt structured, standardized documentation methods to effectively document risk management practices. Recommended approaches include:
• Risk Management Standards (ISO 31000 [i.13], ISO 14971 [i.15]): Adapt established safety engineering and medical device risk management frameworks for structured risk plans and comprehensive logging in AI-specific contexts.
• Datasheets for Datasets: Link data-related risk mitigations directly into Datasheets, clearly documenting data quality, representativeness, and bias reduction measures.
• Model Cards: Employ Model Card templates (see clause D.2.1) to document system performance, robustness checks, and bias evaluation results, providing clear evidence for risk-based testing.
• Assurance Cases: Develop structured safety arguments (goal → argument → evidence) to comprehensively integrate risk documentation (see clause D.4.4). Assurance Cases should reference Risk Cards, Model Cards, and test reports, systematically demonstrating how identified risks are mitigated and safety objectives are achieved.
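The risk-to-mitigation mapping described above can be sketched as a minimal machine-readable risk register. This is an illustrative structure only; field names, scales, and the severity-times-probability scoring are assumptions, not prescribed by the AI Act:

```python
from dataclasses import dataclass, field

@dataclass
class RiskEntry:
    """One row of an illustrative Art. 9 risk register (hypothetical schema)."""
    risk_id: str
    description: str            # identified risk (cf. Art. 9(2)(a))
    scenario: str               # intended use or foreseeable misuse (cf. Art. 9(2)(b))
    severity: int               # assumed scale: 1 (negligible) .. 5 (critical)
    probability: int            # assumed scale: 1 (rare) .. 5 (frequent)
    mitigations: list = field(default_factory=list)   # cf. Art. 9(4), 9(5)
    residual_risk: str = ""     # justification of acceptability (cf. Art. 9(5))

    def priority(self) -> int:
        # Simple severity x probability score, as used in many risk matrices
        return self.severity * self.probability

register = [
    RiskEntry("R-001", "Discriminatory bias in credit scoring",
              "intended use", severity=4, probability=3,
              mitigations=["balanced training data", "bias testing before release"],
              residual_risk="acceptable after mitigation; monitored post-market"),
]

# Highest-priority risk drives mitigation planning and testing focus
top = max(register, key=RiskEntry.priority)
print(top.risk_id, top.priority())  # → R-001 12
```

Keeping the register as structured data makes it easy to generate the tabular risk file for the technical documentation and to trace each mitigation back to its risk.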
7.3.3 Data and Data Governance (Art. 10)
Context Requirement:
According to the EU AI Act, high-risk AI systems that use data for training, validation, or testing meet the strict data quality and governance requirements of Art. 10. The datasets used are relevant, representative, complete, and as accurate as possible for the AI system's intended purpose. They have appropriate statistical properties and do not introduce unjust bias, especially toward demographic groups. Data collection and processing follow clear governance procedures and comply with data protection laws and ethical standards. The goal is to ensure that the AI system relies on high-quality, well-managed data that supports fair and reliable outputs.

Documentation Tasks:
The provider establishes comprehensive data documentation by adhering to a structured documentation process (see clause 6.7). At a minimum, the documentation includes the following key information elements for all datasets used in training, validation, and testing [i.2]:
• Data collection and origin: The document describes design choices made during the development and management of datasets that affect how the AI system is trained, validated, and tested. Furthermore, the document demonstrates the data's origin and how the data is collected, including a description of the data collection protocols (e.g. web scraping, public datasets, or user-generated data). Additionally, the document lists annotation, labelling, cleaning, updating, enrichment and aggregation procedures, and any data augmentation or synthesis techniques used. Moreover, the assessment results of the availability, quantity and suitability of the datasets are included. (Art. 10(2)(a), 10(2)(b), 10(2)(c), 10(2)(e))
• Representativeness and relevance: The document describes the data's intended purpose, relevance, errors, statistical properties and representativeness. The documentation reflects alignment with the AI system's intended purpose and context of use. It specifies the data context, including properties specific to the contextual, geographical, behavioural, and functional setting of the AI system's intended use. (Art. 10(3), 10(4))
• Data Preparation: The document demonstrates that datasets are, "to the best extent possible, free of errors and complete". Additionally, the document includes summary statistics (e.g. class distributions, missing data rates, label error checks), data-preparation procedures (e.g. updating, labelling, annotation, enrichment, aggregation, and cleaning via outlier removal), and any known limitations. Providers document the resolution of any existing quality issues. (Art. 10(2)(c), 10(3))
• Bias Assessment and Mitigation: The document reflects potential dataset bias and the respective detection, mitigation and prevention measures. Additionally, the document includes findings of negative impacts on the health and safety of persons, causes of discrimination, and impacts on fundamental rights, as well as corrective actions such as data augmentation, training adjustment or targeted data collection. Where personal data is processed, the documentation indicates the reasons why the processing of special categories of personal data was strictly necessary, as well as a demonstration of compliance with applicable data protection laws. (Art. 10(2)(f), 10(2)(g), 10(3), 10(5), with cross-references to GDPR (Reg. 2016/679), Reg. 2018/1725, and Dir. 2016/680)

Recommended Documentation Approaches:
To systematically fulfil these documentation requirements, providers are advised to employ the following proven techniques:
• Datasheets for Datasets: Utilize structured templates or questionnaires to capture detailed metadata on data origin, composition, quality metrics, and bias mitigation efforts. Datasheets provide comprehensive evidence supporting regulatory reviews (Gebru et al. [i.19], clause D.1.1).
• Data Statements or Data Cards: Offer concise yet thorough documentation focusing particularly on ethical aspects, data representativeness, and bias considerations, suitable especially for NLP and sensitive-data scenarios (clause D.1.4).
• Data Nutrition Labels: Present key data quality indicators concisely, offering quick readability and clarity on dataset characteristics and representativeness (clause D.1.3).
• Bias Mitigation Logs: Maintain explicit records of bias assessments, adjustments, and corrective actions. Such logs enhance transparency and support risk management documentation aligned with ISO/IEC TR 24028 [i.1].

All data governance and quality documentation feeds directly into the Technical Documentation (Annex IV), explicitly fulfilling the EU AI Act's requirement to document dataset characteristics. Providers substantiate all claims regarding data quality and representativeness with quantitative evidence, such as demographic analyses or statistical breakdowns. This structured documentation not only ensures compliance with Art. 10 but also provides crucial evidence for risk management purposes (Art. 9), particularly regarding bias mitigation and data integrity, strengthening the overall AI system's transparency, trustworthiness, and regulatory compliance.
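The summary statistics named above (class distributions, missing data rates) can be produced with a short script and pasted into a datasheet. A minimal sketch, assuming records are dictionaries with a `label` field (the function and field names are illustrative):

```python
from collections import Counter

def dataset_summary(records, label_key="label"):
    """Illustrative datasheet statistics: class distribution and
    per-field missing-value rates (missing = value is None)."""
    n = len(records)
    labels = Counter(r.get(label_key) for r in records)
    fields = {k for r in records for k in r}
    missing = {f: sum(1 for r in records if r.get(f) is None) / n
               for f in sorted(fields)}
    return {
        "n_records": n,
        "class_distribution": {k: v / n for k, v in labels.items()},
        "missing_rates": missing,
    }

# Toy dataset standing in for real training records
records = [
    {"label": "approved", "age": 34},
    {"label": "approved", "age": None},
    {"label": "rejected", "age": 51},
    {"label": "rejected", "age": 29},
]
summary = dataset_summary(records)
print(summary["class_distribution"])  # → {'approved': 0.5, 'rejected': 0.5}
print(summary["missing_rates"]["age"])  # → 0.25
```

Regenerating such figures from the current dataset version keeps the datasheet's quantitative evidence synchronized with the data actually used.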
7.3.4 Record-Keeping (Art. 12)
Context Requirement:
High-risk AI systems which comply with the EU AI Act are designed to facilitate the recording of events ("logs") during operation, as appropriate for their intended purpose. The logs facilitate traceability, allowing for the reconstruction of system functionality, especially during instances of failure or unexpected behaviour. Providers establish automatic logging mechanisms and guarantee the retention of logs for subsequent review.

Documentation Tasks:
The provider establishes comprehensive record-keeping and logging documentation by adhering to a structured documentation process (see clause 6.7). At a minimum, the documentation includes the following key information elements [i.2]:
• Logging Specifications: The document describes the events logged during the AI system's operation. For biometric systems, the provider indicates at least how the period of use of the AI system, the input data compared with a reference database, the matches found during the comparison, and the identification of the natural persons involved in the verification of the results are recorded. (Art. 12(1), 12(2), 12(3), Annex III(1), Art. 79(1))
• Log Access and Analysis: If a competent authority requests generated logs, the document describes the access methods and analysis tools provided to access the generated logs. This includes any available interfaces, such as administrative dashboards or APIs, as well as tools provided for audit or incident response purposes. If logs are encrypted or require special handling, particularly in cases involving personal data, those procedures are described. Providers ensure that sufficient information is included to enable competent authorities, auditors, or incident responders to obtain and interpret the logs effectively. (Art. 12(2), Art. 21(2))
• Data Protection Considerations: If logs include personal data, the document addresses compliance with applicable data protection laws (e.g. GDPR), including the lawful basis, storage security, and access restrictions. (Art. 19(1), GDPR (Reg. 2016/679))

Recommended Documentation Approaches:
Providers are advised to adopt the following documentation techniques:
• Log Structure and Sample Entries: Clearly document log formats with illustrative examples, such as: "[2025-05-01 10:30:15] INPUT ID=abc123, Decision=Approved, Score=0.87". These entries clarify how logs directly support traceability and accountability.
• Integration with Risk Management: Demonstrate the linkage of logging mechanisms with risk mitigation strategies. For example, logs can provide auditability for explainability-related risks, reinforcing oversight and compliance with ISO/IEC 42001 [i.11].
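The sample entry format above can be reproduced with Python's standard logging module; this is a minimal sketch (logger name, field names, and persistence strategy are assumptions, not part of the cited requirement):

```python
import logging

# Formatter reproducing the illustrative entry shape from the text:
# "[YYYY-MM-DD HH:MM:SS] INPUT ID=..., Decision=..., Score=..."
logging.basicConfig(format="[%(asctime)s] %(message)s",
                    datefmt="%Y-%m-%d %H:%M:%S", level=logging.INFO)
log = logging.getLogger("ai_system.audit")

def log_decision(input_id: str, decision: str, score: float) -> str:
    """Emit one traceability entry; a real deployment would attach
    a file or remote handler to guarantee retention (Art. 12)."""
    entry = f"INPUT ID={input_id}, Decision={decision}, Score={score:.2f}"
    log.info(entry)
    return entry

line = log_decision("abc123", "Approved", 0.87)
```

Emitting one structured entry per decision is what later allows the reconstruction of system behaviour during audits or incident response.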
7.3.5 Transparency and Information to Deployers (Art. 13)
Context Requirement:
High-risk AI systems which comply with the EU AI Act are transparent enough to allow deployers to understand and use their outputs correctly. This includes clear, accurate, and accessible instructions for use, and the information needed to interpret the system's output and behaviour. Providers also ensure that deployers receive adequate documentation and training.

Documentation Tasks:
The provider establishes comprehensive Instructions for Use (User Manual) as part of a structured documentation process. At a minimum, the documentation includes the following key information elements [i.2]:
• Intended Purpose: The document states the intended purpose of the AI system and reasonably foreseeable misuse which may lead to risks to health, safety, or fundamental rights. (Art. 13(3)(b)(i), (iii))
• Installation, Maintenance and Updates: The document specifies the required computational hardware resources, the expected lifetime of the AI system, and essential maintenance measures including software updates. (Art. 13(3)(e))
• Instructions for Use: The document includes concise, complete, correct, clear and accessible information guiding the use of the system, addressed to deployers, e.g. in digital format. (Art. 13(2))
• Output Interpretation Guide: The document explains the meaning of the system's outputs and behaviour in light of the AI system's capabilities and the applicable human oversight measures. (Art. 13(3)(d), 13(3)(b)(iv); Art. 14(4)(b)-(d))
• Performance: The document sets out the AI system's level of accuracy, robustness and cybersecurity. If persons are affected, the document describes the AI system's performance regarding the affected parties on which the system is intended to be used. (Art. 13(3)(b)(ii), (v))

Recommended Documentation Approaches:
Providers are advised to adopt established user documentation standards and techniques, including:
• Adopt ISO/IEC/IEEE 26511:2018 [i.27]: Use established standards for structured, clear, and user-oriented documentation.
• Include Practical Aids: Add quick reference guides, threshold tables, flowcharts, and explainability summaries to support usability.
• Ensure Documentation Consistency: Align user instructions with the technical documentation, clearly reflecting system features, limitations, and oversight mechanisms.
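A threshold table from an Output Interpretation Guide can double as an executable artefact, keeping the user manual and the deployed behaviour consistent. A minimal sketch; the thresholds and guidance texts are purely hypothetical, not derived from Art. 13:

```python
# Illustrative interpretation table for deployers: score ranges mapped to
# recommended actions (values are assumptions for this sketch).
INTERPRETATION_GUIDE = [
    (0.90, "High confidence: output may be used, subject to routine oversight."),
    (0.60, "Medium confidence: human review recommended before acting."),
    (0.00, "Low confidence: do not rely on the output; escalate to an expert."),
]

def interpret(score: float) -> str:
    """Return the deployer guidance for a given model confidence score."""
    for threshold, guidance in INTERPRETATION_GUIDE:
        if score >= threshold:
            return guidance
    return INTERPRETATION_GUIDE[-1][1]

print(interpret(0.72))  # → the medium-confidence guidance
```

Generating the printed threshold table in the manual from the same `INTERPRETATION_GUIDE` structure avoids drift between documentation and system behaviour.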
7.3.6 Human Oversight (Art. 14)
Context Requirement:
High-risk AI systems which comply with the EU AI Act are designed to allow effective human oversight to prevent or reduce risks to health, safety, or fundamental rights. When the system is designed appropriately, human overseers are able to understand it, interpret its output, and intervene or shut it down when necessary. Human oversight is documented in two ways: (1) in the design documentation, showing what oversight measures are built into the system, and (2) as guidance in the user instructions, detailing how the oversight is to be performed. As user guidance is addressed in clause 7.3.5, this clause focuses on documenting the design rationale, mechanisms, and technical implementation of human oversight in accordance with Art. 14.

Documentation Tasks:
The provider establishes comprehensive human oversight documentation as part of a structured documentation process (see clause 6.7). At a minimum, the documentation includes the following key information elements [i.2]:
• Oversight Measures: The document specifies the technical and organizational human oversight measures, including measures built into the AI system and measures to be implemented by the deployer. (Art. 14(3))
• Risk Mitigation: The document explains how the oversight measures effectively mitigate risks to health, safety, and fundamental rights. (Art. 14(2))
• Interface for Oversight: The document describes the user interface elements that enable interpretation, monitoring, and control of system behaviour. (Art. 14(1), 14(4)(c), 14(4)(d))
• Override and Stop Controls: The document defines how human operators can override or reverse the system's outputs and safely interrupt its operation. This includes the design, logic, and accessibility of override functions, as well as the implementation and operation of the stop function. (Art. 14(4)(d), (e))
• Automation Bias Mitigation: The document describes the measures implemented to reduce the risk of automation bias, where human overseers may over-rely on AI outputs. This can include interface design choices such as requiring human confirmation for critical decisions, displaying confidence levels, or using alerts to encourage the operator's involvement. (Art. 14(4)(b))

Recommended Documentation Approaches:
Providers are advised to adopt established human-system interaction standards and practices, including:
• Oversight Scenarios: Use real-world use cases to illustrate oversight procedures.
• Human Factors Standards: Apply ergonomic and usability documentation methods (e.g. ISO 9241-210 [i.14]) to ensure interface accessibility and interpretability.
• Quality Management Linkage: Reference quality management system procedures (Art. 17), such as post-market oversight reviews and operator feedback loops.
• Training Annex: Reference or annex any training resources developed for human overseers to support operational readiness and compliance.
• Oversight Plan Template: Include a reusable plan outlining deployer oversight tasks (e.g. monitoring frequency, response actions, warning signs).

Consistency is maintained between the design documentation and the user instructions, such that oversight features described in the system design (e.g. override buttons, alert systems) are also reflected in the Instructions for Use.
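One built-in measure mentioned above, requiring human confirmation for critical decisions, can be sketched as a simple decision gate. Everything here (the threshold value, function names, the callback interface) is a hypothetical illustration, not a prescribed Art. 14 mechanism:

```python
# Minimal sketch of a human-confirmation gate as an automation-bias
# mitigation: low-confidence decisions are routed to a human overseer.
CRITICAL_THRESHOLD = 0.95  # assumed: below this, confirmation is required

def decide(score: float, human_confirm) -> str:
    """Return the final decision, deferring to a human overseer whenever
    the model's confidence is below the critical threshold.

    human_confirm(proposed, score) -> bool represents the operator's
    confirm/override action (e.g. via a UI dialog showing the score).
    """
    proposed = "approve" if score >= 0.5 else "reject"
    if score >= CRITICAL_THRESHOLD:
        return proposed  # high confidence: routine oversight only
    # The operator sees the proposal and the confidence level, and must act
    return proposed if human_confirm(proposed, score) else "escalated"

# Example: an operator who declines forces escalation instead of auto-action
result = decide(0.7, human_confirm=lambda d, s: False)
print(result)  # → escalated
```

Documenting the gate's logic alongside the design rationale makes it straightforward to show how the override path and the confidence display mitigate over-reliance on the system.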
7.3.7 Accuracy, Robustness, and Cybersecurity (Art. 15)
Context Requirement:
Art. 15 of the EU AI Act mandates that high-risk AI systems achieve and maintain an appropriate level of accuracy, robustness, and cybersecurity throughout their life cycle. These characteristics are essential to ensure the system operates reliably under expected conditions, withstands disturbances, and is protected against manipulation or misuse.

Documentation Tasks:
At a minimum, the provider includes the following information elements as part of a structured documentation process [i.2]:
• System Performance Summary: The document provides an overview of how the AI system ensures accuracy, robustness, and cybersecurity throughout its life cycle. It declares the achieved accuracy levels, along with the relevant metrics, benchmarks and measurement methodologies. (Art. 15(1), 15(2), 15(3))
• Robustness Report: The document demonstrates the AI system's resilience to internal faults, environmental variations, and interaction with users or other systems. It describes technical design measures (e.g. redundancy solutions, fail-safes, alerts, recovery modes) and organizational measures. (Art. 15(4))
• Learning Feedback: If an AI system continues to learn after being placed on the market, the document specifies the mitigation measures that prevent biased feedback loops. (Art. 15(4))
• Vulnerability Mitigation: The document describes the measures implemented to prevent, detect, respond to, resolve, and control attacks and inputs designed to cause the AI model to make a mistake. (Art. 15(5))

Recommended Documentation Approaches:
• Model Cards: Use Model Cards to document accuracy metrics, robustness considerations, intended use conditions, and evaluation results (e.g. precision, recall, calibration). These facilitate clarity and consistency across system documentation (clause D.2.1).
• Validation Reports: Maintain detailed test reports and performance logs covering both standard and stress conditions.
• Cybersecurity Standards Integration: Align mitigation documentation with best practices from AI-specific cybersecurity frameworks (e.g. ISO/IEC 27001 [i.12], ISO/IEC TR 24028 [i.1]) to strengthen conformity.

Documented evidence of system accuracy, resilience, and security directly supports the technical documentation required under Annex IV and strengthens the system's conformity assessment and certification under Art. 43-44 of the EU AI Act [i.2].
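The precision and recall metrics named for Model Cards follow their standard definitions; a short, self-contained computation (the toy labels are illustrative) shows exactly what would be reported:

```python
def precision_recall(y_true, y_pred, positive=1):
    """Standard precision and recall, as reported in a validation summary.
    precision = TP / (TP + FP); recall = TP / (TP + FN)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Toy evaluation set standing in for a real validation run
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 1, 0, 0, 1, 0, 1, 0]
p, r = precision_recall(y_true, y_pred)
print(f"precision={p:.2f} recall={r:.2f}")  # → precision=0.75 recall=0.75
```

Declaring which metric definitions and thresholds were used, alongside the values themselves, is what makes the performance summary verifiable by an assessor.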
7.3.8 Technical Documentation (Art. 11)
High-risk AI systems which comply with the EU AI Act have comprehensive and up-to-date technical documentation prior to market placement. The technical documentation builds upon the contextual requirements described above, operationalizing them into the structured evidence required by Art. 11 and Annex IV of the AI Act. This documentation provides sufficient information to authorities to verify conformity (Art. 11, Annex IV). Annex IV lists a detailed but minimum set of required documentation items. SMEs and start-ups may utilize a simplified EU-prescribed documentation form (Art. 11(1)). Well-formed technical documentation is clear, comprehensive, and up to date; it is retained for 10 years (Art. 18) and updated to reflect any system changes.

Documentation Requirements:
According to Annex IV, at a minimum, the technical documentation includes [i.2]:
• General AI System Description: The document defines the intended purpose of the AI system, provider details, system version, interactions with external hardware/software or other AI systems, and deployment formats (e.g. embedded hardware, APIs). It also includes user-interface descriptions, hardware environment specifications, and, where applicable, illustrations of physical products containing the AI. (Art. 11(1), Annex IV(1)(a-h); see also clause 7.3.5)
• Design and Development: The document details the process for the AI system's development, including methods and procedural steps, usage and integration of pre-trained systems, system logic, algorithms, key assumptions, classification strategies, optimization objectives, and any significant technical trade-offs. It also describes the AI system's architecture, component interactions, and computational resources utilized. (Art. 11(1), Annex IV(2)(a-c, f); see also clauses 7.3.6 and 7.3.7)
• Data Documentation: The document describes the datasets used for training, detailing their provenance, selection, representativeness, labelling, cleaning, and enrichment methodologies. It provides evidence supporting data quality, representativeness, and suitability for the intended purpose, including the data protection measures applied. (Art. 11(1), Annex IV(2)(d); see also clause 7.3.3)
• Human Oversight Measures: The document presents an assessment of the technical and organizational measures enabling human oversight as defined in Art. 14. It describes how the system facilitates human interpretation of, and appropriate responses to, system outputs. (Art. 11(1), Annex IV(2)(e); see also clause 7.3.6)
• Validation and Testing Reports: The document summarizes all validation and testing procedures (incl. data), accuracy and robustness metrics, as well as cybersecurity and bias assessments. It refers to acceptance criteria on quality characteristics from, e.g. the ISO/IEC standards on software quality [i.6], [i.7], or [i.8]. It includes signed and dated test reports along with test logs. (Art. 11(1), Annex IV(2)(g-h); see also clauses 7.3.3 and 7.3.7)
• Cybersecurity Measures: The document describes the cybersecurity protocols implemented to protect the AI system against AI-specific vulnerabilities, such as adversarial attacks and data manipulation. (Art. 11(1), Annex IV(2)(h); see also clause 7.3.7)
• AI System Monitoring and Control: The document describes the system's operational capabilities and limitations, highlighting accuracy across targeted user groups, foreseeable unintended outcomes, and risks to health, safety, fundamental rights, or of discrimination. It also clearly specifies the input data requirements. (Art. 11(1), Annex IV(3); see also clauses 7.3.5 and 7.3.6)
• Performance Metrics Appropriateness: The document provides a justification and rationale for selecting specific performance metrics, demonstrating their suitability for evaluating the AI system's intended functionalities and outputs. (Art. 11(1), Annex IV(4); see also clause 7.3.7)
• Risk Management System: The document summarizes the risk management procedures implemented in compliance with Art. 9, including identified risks, applied mitigations, and the justification for residual risk acceptability. (Art. 11(1), Annex IV(5); see also clause 7.3.2)
• Life Cycle Changes Record: The document maintains an ongoing record of all significant modifications and updates made to the AI system throughout its life cycle. (Art. 11(1), Annex IV(6); see also clause 7.3.3)
• Standards and Compliance Declaration: The document includes references to all fully or partially applied harmonised standards or any alternative measures employed for compliance. It also contains a copy of the official EU Declaration of Conformity. (Art. 11(1), Annex IV(7-8); see also clause 7.3.8)
• Post-Market Performance Evaluation System: The document details the processes established for ongoing post-market performance monitoring and evaluation of the AI system, including a monitoring plan as required by Art. 72(3). (Art. 11(1), Annex IV(9); see also clause 7.3.4)

Information Elements:
Through an analysis of the AI Act, and in particular Annex IV, the following list identifies the information that the AI Act requires to be included in the technical documentation. Such information is referred to as "information elements". Given the substantial number of these elements, Figure 7 presents a visual overview for more efficient navigation. To enhance clarity and facilitate the understanding of technical documentation obligations under the AI Act, the documentation items and corresponding information elements are structured into three primary categories:
1) AI System information: This category encompasses details pertaining to the AI system itself. It includes, but is not limited to, information about the system's architecture, its development life cycle, and its intended purpose.
2) Data information: This category focuses on the data utilized in the training, validation, testing, and operation of the AI system. It covers details such as data collection methods, processing procedures, and test reports, among others.
3) Controls information: This category addresses the safeguards (i.e. controls) implemented to mitigate risks associated with the AI system. These controls can also be defined as risk mitigation measures and apply to various stages and components of the AI system. For instance, controls may target the system itself (e.g. human oversight, accuracy), the underlying data (e.g. data transparency, quality), or the development process (e.g. risk management, quality assurance).
It is important to note that this categorization is not explicitly presented in the AI Act, nor are the documentation items and information elements presented hierarchically in the legislative text as suggested in Figure 7. However, this hierarchical structure is very useful for understanding the documentation requirements in the AI Act, as it provides a structured approach compared to the simple list of items in Annex IV.

Figure 7: High-Risk: Documentation Items and Information Elements required by the AI Act

Recommended Documentation Approaches:
While the AI Act provides an extensive list of documentation items and corresponding information elements to be documented, it does not offer technical details on how to do so. Therefore, it is crucial to have documentation schemes that provide more detailed guidance on how to document each of these items. Clauses 7.3.2 to 7.3.7 offer structured guidelines for the documentation of the AI requirements set forth in Art. 9 to 15.
• Structured Technical Dossier: Maintain a clearly structured, regularly updated master technical file explicitly aligning documentation elements with the Art. 11 and Annex IV requirements.
• Compliance Traceability Matrix: Provide a clear mapping matrix linking each AI Act requirement (Art. 9 to 15) directly to the corresponding evidence sections within the technical documentation.
• Use of Simplified Forms for SMEs: Small and medium-sized enterprises may opt to fulfil the Annex IV documentation requirements via a simplified form developed by the European Commission. When used, this form is accepted by notified bodies for the purposes of conformity assessment, in accordance with Art. 11(1).
• Reference to Documentation Techniques: Specific documentation techniques suitable for each requirement category (e.g. risk management, data governance, human oversight, robustness) are further discussed in the corresponding clauses of clause 7.
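A compliance traceability matrix of the kind recommended above can be kept as a small machine-readable mapping and checked automatically. All file paths and section names here are illustrative assumptions for this sketch:

```python
# Illustrative traceability matrix: each AI Act requirement mapped to the
# dossier file holding its evidence (hypothetical layout).
TRACEABILITY = {
    "Art. 9 Risk management":          "dossier/05_risk_management.md",
    "Art. 10 Data and governance":     "dossier/02_data_documentation.md",
    "Art. 11 Technical documentation": "dossier/00_master_file.md",
    "Art. 12 Record-keeping":          "dossier/06_logging.md",
    "Art. 13 Transparency":            "dossier/03_instructions_for_use.md",
    "Art. 14 Human oversight":         "dossier/04_oversight_design.md",
    "Art. 15 Accuracy/robustness":     "dossier/01_validation_reports.md",
}

def missing_evidence(present_files):
    """List requirements for which no evidence file exists yet."""
    return [req for req, path in TRACEABILITY.items()
            if path not in present_files]

# Dossier-in-progress with only the risk management evidence written
gaps = missing_evidence({"dossier/05_risk_management.md"})
print(len(gaps))  # → 6 requirements still lack evidence
```

Running such a check in the documentation build pipeline surfaces coverage gaps long before a conformity assessment.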
7.4 Documentation Requirements for GPAI models
7.4.1 General
Under the AI Act, General-Purpose AI (GPAI) models are defined as models that display significant generality and are capable of competently performing a wide range of distinct tasks (Art. 3(63)). All GPAI providers should comply with the obligations in Art. 53, including keeping up-to-date technical documentation, publishing a summary of the training data used, establishing a policy to respect copyright, and providing documentation to downstream deployers (Art. 53). Models released under a free and open-source licence, with publicly available weights and architecture, are exempt from certain obligations unless they are classified as systemic-risk GPAI (Art. 53(2)). A GPAI model is presumed to pose systemic risk if it has high-impact capabilities based on benchmarks (e.g. when it was trained using a total computational power greater than 10²⁵ FLOPs), or if it is designated as such by the Commission (Art. 51). Systemic-risk GPAI model providers should comply with additional obligations under Art. 55, including model evaluation and adversarial testing, conducting a model-specific systemic risk assessment and mitigation, reporting serious incidents, and maintaining adequate cybersecurity. Providers should notify the Commission within two weeks if they determine that a model meets the systemic-risk criteria (Art. 52(1)).
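The 10²⁵ FLOP presumption can be sanity-checked against the widely used 6·N·D approximation of training compute (N model parameters, D training tokens). Note that this approximation is a community rule of thumb, not part of the AI Act, and the parameter/token counts below are hypothetical:

```python
SYSTEMIC_RISK_FLOPS = 1e25  # presumption threshold under the AI Act

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Common 6*N*D estimate of total training compute in FLOPs
    (an approximation, not a legally defined measurement)."""
    return 6.0 * n_params * n_tokens

def presumed_systemic_risk(n_params: float, n_tokens: float) -> bool:
    return estimated_training_flops(n_params, n_tokens) > SYSTEMIC_RISK_FLOPS

# Hypothetical model A: 70e9 params on 15e12 tokens -> 6.3e24 FLOPs (below)
print(presumed_systemic_risk(70e9, 15e12))   # → False
# Hypothetical model B: 400e9 params on 15e12 tokens -> 3.6e25 FLOPs (above)
print(presumed_systemic_risk(400e9, 15e12))  # → True
```

A provider tracking cumulative training compute this way can see when a planned training run approaches the threshold that triggers the two-week notification duty.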
7.4.2 GPAI Models without Systemic Risk
Under Art. 53 of the AI Act, all GPAI model providers, regardless of systemic risk status, should maintain the following technical documentation and provide sufficient information to deployers, at a minimum [i.2]:
• Technical Documentation: The document describes the model's architecture, intended tasks, training process, computational and energy resources used, evaluation results, known limitations, and the technical means to integrate the GPAI model. (Art. 53(1)(a); see also Annex XI)
• Training Data Summary: The document includes a public summary, published by the provider, describing the datasets used to train the model. (Art. 53(1)(d))
• Copyright Compliance Policy: The documentation explains how the provider complies with copyright obligations related to Art. 4(3) of Directive (EU) 2019/790. (Art. 53(1)(c))
• Downstream Documentation: The document includes technical documentation for downstream providers. (Art. 53(1)(b); Annex XII)
7.4.3 GPAI Models with Systemic Risk
In addition to the baseline requirements listed in clause 7.4.2, GPAI models with systemic risk should meet the following requirements [i.2]:
• Model Evaluation and Adversarial Testing: The document includes test and evaluation results, including adversarial testing. (Art. 55(1)(a))
• Systemic Risks: The document describes the systemic risks, their origins, and the mitigation measures applied. (Art. 55(1)(b))
• Serious Incident Reporting: The document describes the procedures in place for tracking and documenting serious incidents and possible corrective measures. (Art. 55(1)(c))
• Cybersecurity Protections: The document details the cybersecurity protection for the GPAI model as well as its infrastructure. (Art. 55(1)(d))
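The serious-incident tracking procedure can be anchored in a fixed record structure so that every incident is documented consistently. This is a minimal sketch; Art. 55(1)(c) does not prescribe a format, and all field names are assumptions:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class IncidentRecord:
    """Illustrative serious-incident record for a systemic-risk GPAI model."""
    incident_id: str
    occurred_at: datetime
    description: str
    severity: str                 # assumed scale, e.g. "serious", "critical"
    corrective_measures: list     # possible corrective measures taken/planned
    reported_to_authority: bool = False  # flipped once notification is logged

incident = IncidentRecord(
    incident_id="INC-2025-001",
    occurred_at=datetime(2025, 5, 1, 10, 30, tzinfo=timezone.utc),
    description="Model produced unsafe output despite safety filter",
    severity="serious",
    corrective_measures=["filter update", "additional adversarial testing"],
)
print(incident.reported_to_authority)  # → False until notification is recorded
```

Keeping the notification status inside the record itself makes it easy to audit which incidents still await reporting.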
7.4.4 Documentation
Figure 8 gives an overview of the information elements for GPAI models with and without systemic risk:

Figure 8: GPAI models: Documentation Items and Information Elements required by the AI Act

Recommended Documentation Approaches:
Providers are encouraged to adopt structured, standardized documentation methods to efficiently comply with the technical and risk-based obligations under Art. 53 and 55 of the AI Act. Recommended approaches include:
• Risk Management Frameworks (ISO 31000 [i.13], ISO 14971 [i.15]): Use general and sector-specific risk management standards to structure risk identification, mitigation, and documentation processes. These frameworks help produce a comprehensive risk file in line with Annex IV(5), covering known risks, mitigation measures, and residual risk justifications.
• Datasheets for Datasets: Integrate datasheets into documentation workflows to capture dataset origin, representativeness, data processing methods, and bias mitigation strategies. These are directly relevant for fulfilling the public training data summary under Art. 53(1)(d) and Annex IV(2)(d).
• Model Cards: Use standardized Model Card formats to document model purpose, architecture, limitations, performance metrics, and robustness evaluations. Model Cards support both technical documentation (Art. 53(1)(a)) and transparency obligations toward deployers (Annex XII).

Annex A: Sample Documentation Scenarios

A.1 Healthcare Use Case

A.1.1 Use Case Description
Health Educational Conversational Agent (HECA) v2 Virtual Assistant (VA) is a Generative AI-powered conversational agent that specializes in providing information about medical products and services. The assistant is trained to comprehend medical documentation using advanced Natural Language Understanding (NLU) techniques implemented by Large Language Models (LLMs).
The primary objective is to effectively comprehend user inputs by leveraging NLU principles and to provide precise and contextually relevant responses related to the field of health and medicine. In addition, the agent has a fallback system using LLMs trained on peer-reviewed medical articles. This ensures that users have a seamless experience even when they ask unclear or unidentified questions. The HECA v2 Virtual Assistant is specifically designed to manage confidential medical information with the utmost confidentiality. HECA v2 is applied within the Horizon Europe AI4HF project to support the development of a comprehensive and standardized methodological framework for the trustworthy and ethical provision of personalized risk assessment and care plans for individuals living with Chronic Heart Failure (https://www.ai4hf.com/about-ai4hf). According to the EU Artificial Intelligence Act (Art. 6 and Annex III), AI systems used in healthcare for diagnosis, prognosis, or clinical decision support are classified as high-risk [i.3]. As AI4HF falls under this category, HECA v2, being an integral part of the framework, should be fully documented to ensure compliance with the AI Act requirements for high-risk AI systems.

A.1.2 Documentation Approach

A.1.2.1 Key Documentation Requirements
High-risk AI systems should maintain comprehensive and up-to-date technical documentation, as required in clause 7.3.8. This clause provides illustrative examples for documenting the first two key documentation requirements:
• General AI System Description.
• Design and Development Documentation.

A.1.2.2 Example 1: General AI System Description
The General AI System Description is a mandatory documentation element derived from Art. 11 and Annex IV of the AI Act.
The following steps outline the application of the proposed documentation approach described in clause 6.7:

Step 1: Understand and identify the purpose of the documentation artifacts

Purpose: To communicate essential general information about the AI system to downstream users and stakeholders (researchers, patients, clinicians), enabling a common understanding of system capabilities, intended use, and limitations.

Step 2: Identify the selected documentation aspects per document

Documentation Item (clause 6.2): AI System
Documentation Stakeholder (clause 6.3): Provider (AI Developer)
Phase of Documentation (clause 6.4): Monitoring & Maintenance (living document, updated as required)
Documentation Technique (clause 6.5): Structured tabular format

Step 3: Identify the document contents (information elements) and create/assemble the document

Document: Table "General AI System Description". Information items are presented in the following structured table.

Table A.1: General AI System Description

Purpose
System Name: HECA V2 Virtual Assistant
Provider Name: CERTH/ITI
Version: The current version of the system is HECA Version B: a Generative AI-powered conversational agent with a two-LLM architecture for healthcare applications (https://heca.iti.gr/e086a922-812c-4f9c-91bc-94e22d431c1f). Previous versions:
- Generation A / CERTH Intelligent Personal Agent (CIPA): Focused on natural language interaction (text/voice) for indoor tasks, without LLMs.
- Generation B / CIPA Educational Virtual Assistant (EVA): Smartphone-based conversational agent using the RASA framework, NLU, dialogue flow, and AI planning. Deployed in a Smart Home for the energy and health domains.
- HECA Version A / Health Education Agent v1: Educational Virtual Assistant for diabetes management and patient education. Integrated into a self-management app, based on NLP/NLU models (Chrodis Plus JA).
Intended Purpose: The primary objective of the LLM-based virtual medical assistant is to enhance the management, accessibility, and reuse of healthcare data, particularly for critical chronic conditions such as rare cancers and heart failure. The assistant is designed to respond to user inquiries in an intuitive and user-friendly manner, supporting improved data governance and providing information to aid clinical decision-making, while not performing autonomous clinical decisions.
Use Case: HECA v2 serves as a core component within the AI4HF project, supporting the development of a standardized framework for ethical and trustworthy AI. It provides a conversational interface for personalized risk information retrieval and data interpretation, aligned with the FUTURE-AI guidelines [i.86].
Target Audience: Researchers, patients, health professionals.
Scope of the application: The system is intended for public use in the healthcare domain as an informational tool to support research, patient education, and clinical decision-making processes.

System Description
General functionality: HECA v2 is an AI-powered virtual medical assistant that provides accurate, domain-specific responses to user queries about chronic heart failure. It uses a two-LLM architecture: the first LLM retrieves relevant Question-Answer (QA) pairs from pre-stored embeddings in ChromaDB, while the second LLM generates the final answer based on these retrieved pairs. The system operates on pre-collected and processed medical data, with no live interaction with IoT devices or external sensors. A specialized scoring mechanism ensures that responses are contextually relevant and semantically aligned with the user's query, enabling trustworthy and precise conversational support.
Scale of deployment: The system is designed for global use and is accessible without user registration, supporting anonymized interactions. As such, there is no predefined limit on the number of users or connected endpoints, enabling broad scalability across diverse geographic regions. No geographic, sectoral, or institutional restrictions are imposed by the system architecture.
Interaction with external systems: The AI system interacts with external software components through standardized communication protocols, specifically HTTP requests and WebSockets. It relies on an internal ChromaDB instance for data retrieval and is currently deployed in an in-house environment, with planned migration to cloud infrastructure. The system incorporates multiple integrated Large Language Model (LLM) components that interact internally to support its core functionality. The system does not interface with external hardware (e.g. IoT devices) or third-party AI systems beyond its defined architecture.
Software and firmware details: The system is accessed via a web browser and operates through standard HTTP and WebSocket protocols. It does not rely on specific client-side software or firmware versions. Updates are applied to the server-side components as needed, with no required user-side updates, ensuring compatibility across common web platforms.
Deployment formats: Web-based service (browser interface); no packaged deployment required.
Update and maintenance: Updates are performed as needed, in alignment with European regulatory requirements and applicable contractual obligations. Updates may include enhancements to LLM components, QA content, and supporting infrastructure. Updates are deployed to the server environment through controlled processes; no user-side downloads or actions are required.
Hardware requirements: The system is accessed via client devices (PC or mobile) equipped with a standard web browser; no specific hardware is required on the user side beyond browser compatibility (Chrome recommended). On the server side, the system operates on infrastructure with GPU capabilities (minimum 48 GB GPU RAM), with optimal performance achieved using multiple GPUs to support LLM inference and database retrieval operations.
User Interface: The system provides a React-based web front end accessible through standard web browsers. It presents a conversational chatbot interface developed within the framework of the Horizon Europe AI4HF project. The interface allows users to enter free-text queries and receive natural language responses based on curated medical knowledge. The interaction is focused on guided question-answer dialogue to support informational needs and understanding of chronic heart failure; the system does not generate autonomous clinical recommendations or decisions.

A.1.2.3 Example 2: Design and Development Documentation

The Design and Development Documentation, as required by Art. 11(1) and Annex IV(2)(a-c, f) of the AI Act, constitutes a broad and complex obligation that encompasses multiple documentation artifacts spanning the entire development life cycle of the AI system, as outlined in clause 7.3.8. In this example, the proposed documentation approach (see clause 6.7) is illustrated for one core component of the system architecture: the QA model. To support clarity, a high-level component diagram of the overall system architecture is provided first, followed by a demonstration of how the QA component can be documented using the Model Card technique. The component diagram is illustrated in Figure A.1.

Figure A.1: HECA V2 Virtual Assistant system architecture

The HECA V2 Virtual Assistant is an AI-powered, web-based system designed to provide reliable, domain-specific conversational support in the healthcare domain. Its architecture consists of a modular backend pipeline and an interactive frontend interface.

Conceptual Workflow:
• The system is initially trained on a curated set of healthcare-related documents.
• Once trained, users can interact with the assistant via a web-based frontend, which communicates with the backend to deliver accurate, context-aware responses.

System Components:

• Frontend Interface: Provides an intuitive, web-based conversational interface for users to submit queries and receive responses. Additionally, it manages communication between the user and backend services.

• Backend Architecture:
- Text Processing Module: Analyses the input documents during system training, performing preprocessing steps to extract and structure relevant information.
- OpenOrca LLM Module: Utilizes the OpenOrca Large Language Model to automatically generate Question-Answer (QA) pairs from the pre-processed documents.
- QA Generation: Refines and prepares the generated QA pairs for storage and retrieval.
- FAISS Database (Vector Encoding Database): Stores vectorized representations of the QA pairs using Facebook AI Similarity Search (FAISS), enabling efficient semantic retrieval during user interactions.
- Textgen Module: Generates dynamic responses to user queries, leveraging relevant QA pairs retrieved from the FAISS database. It comprises two LLMs:
  Mistral 7B LLM: Responds to queries directly related to information stored in the FAISS database.
  MedAlpaca LLM: Handles queries that fall outside the trained knowledge base, providing responses to broader medical questions.

The following steps outline the application of the proposed documentation approach described in clause 6.7 for the documentation of the "QA Generation" component.

Step 1: Understand and identify the purpose of the documentation artifacts

Purpose: To communicate essential technical and ethical information about the QA model to downstream users (researchers, patients, and health professionals), enabling safe and effective use of the AI assistant.
Step 2: Identify the selected documentation aspects per document

Documentation Item (clause 6.2): QA Model (based on two LLMs with ChromaDB)
Documentation Stakeholder (clause 6.3): Provider (AI Developer)
Phase of Documentation (clause 6.4): Implementation & Integration (Model Training, Evaluation, and Deployment)
Documentation Technique (clause 6.5, D2.1): Model Card in structured tabular format

Step 3: Identify the document contents (information elements) and create/assemble the document

Document: Model Card for QA Model. Information items (1-7) are presented in the Model Card template below.

Table A.2: Model Card for QA Model

Model Overview
Name: HECA V2 QA Model
Version: HECA Version B
Description: The QA Model is a core component of the HECA V2 Virtual Assistant. It enables the system to generate domain-specific, context-aware answers to healthcare-related queries by retrieving and processing QA pairs derived from curated medical documents.

Purpose
Intended Use: The QA Model component is designed to automatically generate high-quality Question-Answer (QA) pairs from curated healthcare-related documents, enabling efficient semantic retrieval and explainable conversational support within the HECA V2 Virtual Assistant.
Realizable Capabilities:
- Sense: Visual, Auditory, Olfactory, Gustatory, Tactile
- Process Knowledge: Factual, Procedural, Conceptual, Metacognitive
- Act: Physical, Non-physical (Agents)
- Communicate: Visual, Auditory, Olfactory, Tactile, Textual, Gestural, …
Primary Users: Researchers, health professionals, patients.
Use Cases: Conversational support for healthcare education, research assistance, clinical support (non-decision-making).
Domain: Healthcare, with a focus on chronic heart failure.
Usage Scope: Public-facing web application, accessible globally via browser.
Model Details
Architecture: Hybrid retrieval-generation architecture using: 1) the OpenOrca LLM to generate QA pairs during training; 2) a FAISS-based semantic retrieval engine; 3) a Textgen module with the Mistral 7B and MedAlpaca LLMs to generate responses.
Data Sources: Curated healthcare documents related to chronic heart failure. Data is pre-approved and processed internally. No live connection to clinical systems.
Training Process: A retrieval-augmented generation method is applied to generate QA pairs from the source documents using the OpenOrca LLM. These QA pairs are post-processed and then stored in the FAISS vector database. The process is repeated iteratively as new documents or updates become available.
Fallback Mechanism: If no matching QA pairs are found in FAISS, the MedAlpaca LLM provides a general fallback answer to medical queries (with disclaimers for limitations).

Evaluation & Performance
Metrics: Retrieval precision, response relevance (manual review), semantic similarity, user satisfaction ratings.
Validation: Manual validation by medical domain experts and NLP engineers during the testing phase; iterative refinement based on test results.
Limitations: The model does not cover all possible medical questions. No real-time clinical data integration. Responses may not reflect the most recent medical guidelines. The model cannot replace professional medical advice.

Ethical Considerations
Fairness: Training data is curated to ensure diversity of medical content and representation across genders and demographic groups where applicable. Regular audits of QA pair generation are conducted to identify and mitigate potential biases in content or language.
Explainability: The architecture supports traceability; responses are generated via QA pairs retrieved from an indexed database (FAISS Database). Mistral 7B answers are based on retrievable inputs, and fallback answers (from the MedAlpaca LLM) are flagged to indicate their more general nature.
Transparency: The system provides a clear description of its intended purpose, its two-LLM architecture, and how it generates responses based on curated medical knowledge. All updates are logged and versioned in internal technical system documentation, with major changes recorded in a public changelog for transparency.
Accountability: The system's compliance with the EU AI Act requirements and alignment with the FUTURE-AI guidelines establish a framework for accountability, supported by comprehensive documentation and internal governance processes by CERTH/ITI.
Privacy & Security: User interactions are anonymized, and no personally identifiable information is stored. The model is trained and operates solely on pre-approved and processed internal medical data, with no live connection to clinical systems or direct use of patient data in training or inference. This design aims to prevent privacy intrusions related to data usage consent.

Maintenance & Updates
Update Frequency: Planned quarterly updates to the QA pair database and model tuning. Emergency updates as required (e.g. guideline changes).
Monitoring: System activity is monitored through server logs; user feedback mechanisms are planned; regular review of QA pairs and LLM outputs.
Changelog: All updates are logged and versioned in internal system documentation; major changes are recorded in a public changelog for transparency.

Contact & Governance
Maintainer: CERTH/ITI AI Development Team.
Updates: Updates are reviewed and approved by the CERTH/ITI team leader and the AI4HF coordinator.
Compliance: Aligned with the requirements of the EU AI Act Art. 11 and Annex IV and with the FUTURE-AI guidelines.

NOTE: The capabilities listed in the section "Intended Use" are an illustrative example of additionally provided information, to provide required information on AI system capabilities to support transparency (see clause 7.3.5 and [i.89], [i.90], [i.91]).
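The retrieval-and-fallback behaviour documented above (semantic lookup of stored QA pairs, with a flagged general fallback when no sufficiently similar pair is found) can be sketched as follows. This is an illustrative toy, not the HECA implementation: an in-memory list with token-overlap scoring stands in for the FAISS/ChromaDB vector store, the threshold value is assumed, and the stored answers and routing function names are hypothetical placeholders for the Mistral 7B and MedAlpaca LLM calls.

```python
# Illustrative sketch of the QA routing described in the Model Card.
# An in-memory list with Jaccard token overlap stands in for the
# FAISS/ChromaDB vector store; answers are canned placeholders for LLM output.

QA_STORE = [
    {"q": "what are common symptoms of chronic heart failure",
     "a": "Typical symptoms include breathlessness, fatigue and ankle swelling."},
    {"q": "how is chronic heart failure diagnosed",
     "a": "Diagnosis usually combines clinical examination, imaging and blood tests."},
]

SIMILARITY_THRESHOLD = 0.3  # assumed value; tuned during validation in practice

def similarity(a: str, b: str) -> float:
    """Toy semantic score: Jaccard overlap of lowercase word sets."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def retrieve(query: str):
    """Return the best-matching stored QA pair and its similarity score."""
    best = max(QA_STORE, key=lambda p: similarity(query, p["q"]))
    return best, similarity(query, best["q"])

def answer(query: str) -> dict:
    """Route to the retrieval-based answer or to the flagged fallback."""
    pair, score = retrieve(query)
    if score >= SIMILARITY_THRESHOLD:
        return {"answer": pair["a"], "source": "retrieved", "score": score}
    # No sufficiently similar QA pair: general fallback, flagged as such,
    # mirroring the documented MedAlpaca fallback with disclaimers.
    return {"answer": "General medical information only; consult a professional.",
            "source": "fallback", "score": score}

r1 = answer("what are the symptoms of chronic heart failure")  # in-domain query
r2 = answer("tell me about broken bones")                      # out-of-domain query
```

The design point the sketch makes explicit is the one the Model Card's Explainability row relies on: every response carries a "source" flag, so fallback answers remain distinguishable from answers grounded in the curated QA store.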
A.2 AI-Based Person Detection for Construction Machinery

A.2.1 Use Case Description

This use case focuses on the creation and preparation of a domain-specific image and video dataset that can later be used for training and testing AI systems designed for detecting persons around construction machinery. The current stage of the project is centered on data collection and annotation under varied environmental and operational conditions. The dataset includes labelled frames extracted from videos recorded using Zed 2i stereo cameras in realistic construction scenarios.

Although no AI model has been fully trained or deployed within this use case, preliminary use of YOLOv8 models has supported pre-annotation and evaluation workflows. The dataset itself is being designed to serve as a foundation for the future development and validation of AI-based person detection systems. These systems may ultimately be integrated into mobile machinery to improve safety during reversing or swing operations.

This initiative is conducted in collaboration with the Construction Future Lab and the Federal Institute for Occupational Safety and Health (BAuA) and places emphasis on ethical data use, traceability, annotation quality, and regulatory preparedness. The resulting dataset and processes support the reproducibility of AI safety research and align with the EU AI Act requirements (see clause 7.3.3).

A.2.2 Documentation Approach

A.2.2.1 Key Documentation Requirements

High-risk AI systems should maintain comprehensive and up-to-date technical documentation, as detailed in clause 7.3.8, and should fulfil the obligations of documenting data governance.
This clause provides illustrative examples for documenting the first two key documentation requirements:

• General AI System Description.
• Data collection and origin.

A.2.2.2 Example 1: General AI System Description

AI-based Person Detection for Construction Machinery refers to a potential AI-driven safety application that could use computer vision to identify individuals in hazardous zones around construction equipment. Although no such system has been developed within the scope of this project, the present documentation is intended to serve as a conceptual framework for how such systems might be described and assessed in the future. The General AI System Description is a mandatory documentation element derived from Art. 11 and Annex IV of the AI Act. The following steps outline the application of the proposed documentation approach described in clause 6.7.

Step 1: Understand and identify the purpose of the documentation artifacts

Purpose: To provide a clear and comprehensive overview of the AI system, including its intended use, functionality, and limitations. It explains how the system operates within its deployment context and outlines key aspects of its life cycle, such as data handling, integration, and safety relevance. This enables stakeholders, including researchers, industry partners, and regulatory authorities, to develop a shared understanding of the system's design, purpose, and implications for safe deployment in construction environments.

Step 2: Identify the selected documentation aspects per document

Documentation Item: AI System
Documentation Stakeholder: Provider (AI Developer)
Phase of Documentation: Monitoring & Maintenance (living document, updated as required)
Documentation Technique (clause 6.5): Structured tabular format

Step 3: Identify the document contents (information elements) and create/assemble the document

Information items are provided when such a model exists.
It is recommended to structure the information as done in Table A.1.

A.2.2.3 Example 2: Data Documentation

The data documentation describes a curated and fully annotated visual dataset developed to support the development of AI systems for person detection around construction machinery in real-world construction site environments. Commissioned by the Federal Institute for Occupational Safety and Health (BAuA), the dataset was created as part of a research project carried out in collaboration with Construction Future Lab. Its purpose is to enable the evaluation and future implementation of AI-based person recognition systems tailored to the construction domain.

The dataset includes over 100 GB of video data, covering approximately 100 videos and 10 000 labelled images. These were collected using Zed 2i stereo cameras in diverse construction site conditions, encompassing scenarios such as wheel loader reversing, swivel and reversing area monitoring on excavators, and static field views using tripod-mounted cameras. The primary object class is "person", with data captured under varied lighting, weather, and body posture conditions to ensure representativeness.

With data collection and annotation now complete, the dataset is ready for use in training and evaluating AI models. Although no operational AI system has yet been built within this project, the dataset provides a robust and application-specific foundation for the future development and validation of safety-related AI solutions. It is intended for internal research, prototyping, and testing purposes, not for direct deployment or commercial use, and supports the transparent, standards-aligned advancement of high-risk AI systems focused on occupational safety.
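Dataset characteristics of the kind documented in this example can also be maintained in machine-readable form alongside the human-readable table, which eases versioning and automated completeness checks. The sketch below is illustrative only: the field names and the `Datasheet` class are assumptions modelled on the data documentation fields used here, not a normative schema.

```python
# Illustrative machine-readable datasheet mirroring the documented fields.
# Field names and the Datasheet class are assumptions, not a normative schema.
from dataclasses import dataclass, field, asdict

@dataclass
class Datasheet:
    name: str
    provider: str
    version: str
    intended_purpose: str
    object_classes: list = field(default_factory=list)
    collection_methods: str = ""
    annotation_process: str = ""

    def missing_fields(self) -> list:
        """Completeness check: names of fields left empty."""
        return [k for k, v in asdict(self).items() if v in ("", [])]

sheet = Datasheet(
    name="Annotated Dataset for Person Detection in Construction Machinery Environments",
    provider="Construction Future Lab GmbH",
    version="1.0",
    intended_purpose="Training/testing AI models for person detection on construction sites",
    object_classes=["person"],
    collection_methods="Zed 2i stereo video under varying lighting and weather conditions",
)

# 'annotation_process' was left empty, so the completeness check flags it.
print(sheet.missing_fields())  # → ['annotation_process']
```

A check like `missing_fields()` can be wired into version control (e.g. a pre-commit hook) so that a datasheet release with empty mandatory fields is caught before the dataset is shared.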
Step 1: Understand and identify the purpose of the documentation artifacts

Purpose: To explain how data is collected, processed, and managed across its life cycle, providing stakeholders, including researchers, industry partners, and regulatory authorities, with a clear understanding of the system's capabilities, deployment context, and safety implications within construction environments.

Step 2: Identify the selected documentation aspects per document

Documentation Item: Conducted data collection and pre-processing steps
Documentation Stakeholder: Provider (AI Developer who uses the data for model training, etc.)
Phase of Documentation: Data Preparation & Processing (see clause 6.4)
Documentation Technique: Structured tabular format

Step 3: Identify the document contents (information elements) and create/assemble the document

Information items are presented in the following structured table.

Table A.3: Data Documentation

Purpose
Name: Annotated Dataset for Person Detection in Construction Machinery Environments
Provider Name: Construction Future Lab GmbH
Version: 1.0
Intended Purpose: To provide a curated dataset for training and evaluating AI-based person detection systems aimed at improving occupational safety on construction sites.
Use Case: Supports the development and validation of person recognition systems for use in reversing and swivel operations of mobile machinery such as excavators and wheel loaders.
Target Audience: Researchers, AI system developers, occupational safety experts, and regulatory auditors.
Scope of the application: The dataset is intended for internal research, prototyping, and validation phases of AI development. It is not designed for direct integration or commercial deployment, but as a foundational resource to support the safe and transparent development of high-risk AI systems.

Dataset Description
Intended Use: Collection of annotated video/image data for training/testing AI models for person detection.
Content Volume: Over 100 GB of video data, approx. 100 video files, and 10 000 annotated images representing real-world construction site scenarios.
Relevant Object Classes: Person.
Collection Methods: Stereo video recordings with Zed 2i cameras under varying lighting, weather, and operational conditions (e.g. standing, walking, reversing zones).
Annotation Process: Pre-annotation using YOLOv8 models; manual verification and refinement to ensure labelling quality.
Scale of deployment: Experimental setup with stereo cameras; data stored on local devices and shared via secure academic platforms.
Interaction with external systems: Use of the TU Dresden cloud, GitLab for versioning, and local storage (HDDs).
Regulatory Context: The dataset is intended to support the future development of AI systems that align with applicable regulatory and industry standards. These include:
• AI Act [i.2] (classifying person detection as a high-risk application)
• GDPR [i.85] (ensuring lawful and transparent data use)
• ISO/IEC 42001:2023 [i.10] (AI management systems)
• ISO/IEC 23894:2023 [i.10] (AI risk management)
• ISO/IEC TR 5469:2024 [i.9] (AI and functional safety)
• ISO 13849-1 [i.16] (safety-related control system performance)
• DIN EN ISO 16001 [i.18] (object detection in earth-moving machinery)
• ISO 21815-1:2022 [i.17] (collision avoidance and interface protocols)
• Machine Regulation 2023/1230 [i.88] (mandating third-party testing for machine-learning-based safety components)
Software and firmware details: Label Studio, YOLOv8 variants (n-x), Python scripts for evaluation.
Deployment formats: Scripts, labelled datasets, and HTML/PDF reports for internal validation.
Update and maintenance: Manual updates by developers; version control via GitLab.
Hardware requirements: Zed 2i stereo camera, GPU-enabled workstation, minimum 1 TB HDD per session.
User Interface: Web-based annotation interface (Label Studio); role-based access for annotators and reviewers.
Data Privacy and Ethical Concerns: All individuals recorded in images/videos provided informed consent for data use, application in AI systems, and possible publication. No biometric or identifying data is used (in accordance with the GDPR).

Annex B: Trustworthy AI: Definition and core characteristics

B.1 Definition of Trustworthy AI

This annex provides a concise overview of Trustworthy Artificial Intelligence (AI), outlining foundational concepts, operational requirements, core characteristics, essential frameworks, and explicit alignment with the European Union Artificial Intelligence Act (EU AI Act). Trustworthy AI, as defined by the High-Level Expert Group on AI (AI HLEG) in its Ethics Guidelines for Trustworthy AI, is built upon three pillars, indicated in Figure B.1, that necessitate adherence throughout the entire AI system life cycle:

• Lawful: AI systems rigorously comply with all applicable legal and regulatory frameworks. This encompasses adherence to national, international, and European Union legislation, including but not limited to the General Data Protection Regulation (GDPR) and relevant sector-specific directives. This adherence ensures AI operations remain within established legal parameters, safeguarding fundamental rights and societal values.

• Ethical: Beyond strict legality, AI systems are required to embody and uphold established ethical principles and values. This component is instantiated through four core ethical principles:
- Respect for Human Autonomy: AI systems should augment human capabilities, facilitate informed decision-making, and preserve human control.
- Prevention of Harm: AI systems are designed to preclude the infliction of physical, psychological, or economic detriment.
Proactive identification and mitigation of potential negative impacts are imperative.
- Fairness: AI systems operate equitably, actively mitigating unjustifiable bias and discrimination, thereby ensuring impartial treatment across individuals and groups.
- Explicability: The processes, functionalities, and decision-making mechanisms of AI systems exhibit transparency, interpretability, and comprehensibility to relevant stakeholders, thereby enabling scrutiny and accountability.

• Robust: AI systems are required to possess both technical and societal robustness. This necessitates that they be reliable, secure, and resilient, capable of consistent and safe operation within diverse real-world environments, while also adapting responsibly to evolving societal contexts. Technical robustness pertains to attributes such as accuracy, dependability, and cybersecurity, whereas societal robustness encompasses broader ethical considerations and societal impact.

For the operationalization of these three fundamental pillars, the AI HLEG introduces seven key requirements. Through a detailed analysis, the characteristics needed for each of these requirements, as proposed by the AI HLEG, have been identified and are indicated in Figure B.1.

Figure B.1: Trustworthy AI pillars, requirements and characteristics (adapted from [i.84])

B.2 Relevant frameworks and guidelines

A variety of international and European frameworks offer prescriptive guidelines and principles for the development and deployment of trustworthy AI. These frameworks collectively underscore the global consensus on the ethical and practical considerations necessary for trustworthy AI. The AI HLEG framework serves as a foundational basis, and elements from other prominent frameworks demonstrate significant alignment, mapping onto its core structure. Table B.1 summarizes salient aspects, principles, or qualities derived from selected prominent frameworks.
Table B.1: Relevant frameworks and guidelines for AI trustworthiness

Ethics Guidelines for Trustworthy AI [i.5]: Defines seven key requirements for Trustworthy AI derived from four ethical principles: Human Agency and Oversight; Technical Robustness and Safety; Privacy and Data Governance; Transparency; Diversity, Non-discrimination and Fairness; Societal and Environmental Well-being; Accountability.

OECD Principles on AI [i.3]: Articulates five value-based principles for responsible AI stewardship: Inclusive Growth, Sustainable Development and Well-being; Human-centred Values and Fairness; Transparency and Explainability; Robustness, Security and Safety; Accountability.

ISO/IEC TR 24028:2020 [i.1]: Surveys methods for establishing and assessing AI trustworthiness, covering: transparency, explainability, controllability, engineering risks, mitigation techniques, and qualities such as availability, resiliency, reliability, safety, security, and privacy.

AI4People [i.52]: Proposes five ethical principles (Beneficence, Non-maleficence, Autonomy, Justice, Explicability) for ethical AI, focusing on opportunities and risks.

Requirements (SQuaRE) - Quality model for AI systems [i.6]: Extends the SQuaRE framework for AI systems, focusing on AI-specific characteristics: user controllability, functional adaptability, robustness, and societal/ethical risk mitigation.

B.3 Operationalization of Trustworthiness in the EU AI Act

The EU AI Act directly operationalizes the principles of Trustworthy AI, particularly for high-risk AI systems, by translating ethical and robustness considerations into concrete legal obligations. The Act reflects the "Lawful", "Ethical", and "Robust" components as follows:

• Lawful: The EU AI Act itself constitutes a foundational legal framework. It mandates compliance with extant legislation (e.g. GDPR) through provisions such as data governance requirements (Art. 10) and comprehensive documentation obligations (Art. 11).

• Ethical:
- Human Agency and Oversight (AI HLEG): Addressed by the Act's mandate for human oversight mechanisms (Art. 14), ensuring continued human control and intervention capabilities.
- Privacy and Data Governance (AI HLEG): Directly paralleled in the stringent requirements for data quality and robust data governance practices (Art. 10) to prevent discriminatory outcomes and safeguard privacy.
- Transparency (AI HLEG): Ensured through obligations pertaining to transparent operation, clear instructions for use (Art. 13), and meticulous record-keeping (Art. 12).
- Diversity, Non-discrimination, and Fairness (AI HLEG): Addressed through the emphasis on preventing bias within training datasets and model outputs (Art. 10), thereby promoting equitable outcomes.
- Societal and Environmental Well-being (AI HLEG) & Accountability (AI HLEG): Supported by requirements for robust risk management systems (Art. 9) and post-market monitoring (Art. 61), which collectively aim to identify, assess, and mitigate broader societal impacts and assign responsibility.

• Robust:
- Technical Robustness and Safety (AI HLEG): Explicitly encompassed by the Act's provisions concerning accuracy, robustness, and cybersecurity (Art. 15), mandating resilience against errors, faults, and malicious interventions to ensure safe operational performance.

Annex C: Risk Mitigation by Documentation

As stated in the publication Best Practices in AI Documentation [i.31], building trustworthy AI-based systems requires considering a variety of risks associated with the availability of poor documentation about their structure and building methodology. In line with this, some researchers claim that the potential of such AI-based systems can be largely overestimated when there is virtually no documentation demonstrating actual trustworthiness.
Others have raised concerns regarding potential adverse consequences of such systems, including personal harm, technical risks, and socio-ethical risks [i.60], [i.80], [i.81], [i.82] and [i.83]. These works paved the way to the awareness that low quality, or absence, of documentation can lead to seven categories of risks:
• Human harm due to AI errors.
• Misuse of AI tools.
• Risk of bias in AI and perpetuation of inequities.
• Lack of transparency.
• Privacy and security issues.
• Gaps in AI accountability.
• Obstacles to implementation in real-world scenarios.
These risks could result in harm to individuals, which in turn reduces society's trust in AI-based systems at large. Therefore, the development, review, and deployment stages of an AI-enabled system should include risk assessment and management as core components for establishing trustworthiness.
Human harm due to AI errors
• Why documentation is important to avoid human harm.
• Impact of low-quality documentation on stakeholders.
• Main stakeholders affected: AI Customer, AI Subject.
• Main quality aspects: technical robustness/safety.
AI systems are sometimes linked to malfunctions that might ultimately lead to safety issues for their users, despite ongoing advancements in data accessibility and machine learning. The effects of such issues with AI tools in sensitive domains (e.g. healthcare) include, among others: (i) false negatives, i.e. missed classifications of life-threatening conditions; (ii) excessively optimistic or pessimistic behaviour because of erroneous false positives (i.e. healthy people mistakenly regarded as ill by the AI algorithm); and (iii) inappropriate interventions due to imprecise classification (e.g. inaccurate prioritization of interventions in emergency rooms). Hence, to avoid harm to end users, AI engineers should document errors and adjustments during AI deployment to support transparency.
Furthermore, AI solutions should be dynamic; as such, they should include features that continuously learn from new scenarios and from errors detected in actual use. Still, in order to detect issues as they arise, some human management and oversight is needed, which may consequently lead to higher expenses and a loss of the early advantages of AI.
Misuse of AI tools
• The importance of having high-quality documentation to improve the appropriate usage of AI systems and, at the same time, to increase trust among end users.
• Main stakeholders: AI Customer, AI Subject.
• Main concern (quality aspect): reliability.
There is always a danger of human mistakes and human misuse in the context of AI systems' usage. As a matter of fact, even when AI systems are accurate and robust, the efficacy and reliability of such tools depend on how the end users utilize them in practice. AI technologies are vulnerable to incorrect usage or human mistakes due to a variety of issues. They have commonly been created and developed by computer/data scientists with little input from end users, which can lead to complex and unnatural interactions that require the users to become accustomed to the new technology in order to learn how to use it. To decrease human mistakes and improper usage of AI systems, an effective documentation strategy should be used: complete and effective documentation of AI systems, established and broadly distributed throughout society at large, improves the knowledge and abilities of AI users and thereby decreases human mistakes.
Risk of bias in AI and perpetuation of inequities
• Data catalogues used to build an AI model may contain biases.
• Sometimes biases cannot be avoided; in such cases, the documentation may provide details about known biases, mitigation actions, and/or the motivation for their presence.
• Main stakeholders: AI Customer, AI Subject, Relevant Authorities.
• Main quality aspect: reliability.
Although there are constant advancements in the research and treatment of data biases within AI systems, significant inequities and prejudice still exist throughout the majority of the world's countries, and they inherently influence how AI technologies function. Sex and gender, age, ethnicity, wealth, education, and geography are the primary causes of these disparities. Additionally, even though some of these injustices are institutional, owing to factors like socioeconomic disparities and discrimination, personal biases still play a significant part. For instance, if the medical domain is considered as a test bed for this type of analysis, research surveys in the United States have shown that doctors do not treat Black patients' complaints of pain as seriously or as promptly as they do White patients' [i.64], [i.53], [i.57] and [i.61]. Gender-based bias is another illustration of a widespread prejudice that is prevalent, to varying degrees, in healthcare systems throughout most nations of the world. Therefore, there exists the fear that, if not appropriately developed, assessed, and controlled, future AI-enabled systems might entrench and even magnify the widespread imbalances and human biases that lead to general disparities.
Lack of transparency
• Low quality, or absence, of documentation affects the overall transparency of the AI system.
• Main stakeholders: AI Customer, AI Subject, Relevant Authorities.
• Main quality aspects: transparency, explainability, accountability.
Despite ongoing developments in AI-powered solutions, people as well as professionals still see existing algorithms as intricate and obscure technologies that are challenging to completely understand, trust, and accept. Lack of transparency is frequently cited as a significant problem with the creation and application of AI solutions. Such issues particularly affect high-stakes fields like healthcare and finance.
This may lead to a serious lack of trustworthiness in AI, particularly in delicate fields like medicine, finance, and transportation, which are concerned with human lives. Likewise, a low level of trustworthiness will undoubtedly affect how extensively stakeholders embrace new AI algorithms. A crucial component of trustworthy AI is traceability, which refers to the comprehensive documentation of the complete AI development process and the monitoring of how the AI model performs in actual use after deployment [i.68], [i.56] and [i.59]. Whereas traceability focuses on the transparency of the AI algorithm, explainability is crucial for ensuring transparency for each prediction and decision made by an AI system [i.74] and [i.79]. Thus, a lack of explainability makes it challenging to determine the cause of AI failures and establish accountability when things go wrong. Therefore, lack of transparency hinders stakeholders from applying AI solutions to their everyday jobs since, in order to employ a given AI solution, a user should be able to comprehend the underlying ideas behind each choice and/or prediction, even if the algorithm itself has the potential to increase the user's productivity [i.66].
Privacy and security issues
• Gaps in documentation may cause issues in the management of both the privacy and the security aspects of an AI system.
• Main stakeholders: AI Customer, AI Subject, Relevant Authorities.
• Main quality aspects: security, privacy, confidentiality.
The creation of AI-based solutions has raised significant hazards concerning data privacy, confidentiality, and protection, which might result in serious repercussions, including the release and use of private information in ways that violate people's rights, or the reuse of personal data for purposes other than the ones for which the AI solution was developed.
These problems are connected to informed consent, i.e. the provision of sufficient information to users allowing them to make informed decisions, such as whether to share personal data. With the advent of digital technology into daily life and the formalization of informed consent in the Helsinki Declaration, informed consent has become an increasingly important and fundamental aspect of the users' experience [i.71]. Moreover, according to [i.72], informed consent is related to a number of ethical concerns, such as safeguarding against damage, upholding autonomy, protecting privacy, and preserving property rights over data and tissue. The degree of autonomy and the potential for collaborative stakeholder decision-making are nonetheless constrained by the introduction of obscure AI algorithms and confusing informed consent procedures [i.77]. Users are finding it ever more challenging to comprehend the decision-making process, the many uses to which their data may be put, and the precise procedures for choosing not to share their data. The interested reader can find more details and several examples in the literature [i.54], [i.60], [i.62], [i.63], [i.65], [i.67] and [i.70].
Gaps in AI accountability
• High-quality documentation is essential to trace the accountability of information sources used to build the AI models.
• Researchers and groups working to address the legal implications of the introduction and use of AI algorithms in various facets of human life have given the term "algorithmic accountability" greater attention.
• Main stakeholders: AI Customer, AI Subject.
• Main quality aspect: Accountability.
The expression "algorithmic accountability" may seem to refer to the attempt to hold the algorithm itself responsible, but it means the exact opposite.
Indeed, it highlights that algorithms are developed using a combination of machine learning and human configuration, and that errors in algorithms are caused by the people who develop, implement, or use the machines, particularly considering that AI systems cannot be held morally or legally accountable by themselves [i.73]. Accountability is crucial in AI for several fields since it will help the technology gain acceptance, credibility, and eventual adoption in society [i.62] and [i.76]. AI developers and engineers typically operate within ethical guidelines, whereas the end users need to be accountable for their actions, according to regulatory obligations, as a necessary part of their professional activity [i.78]. Additionally, the ethical codes and accountability standards that several private corporations employ have frequently come under fire for being ambiguous and challenging to implement in reality [i.73]. As a result, end users who are unable to explain their actions and decision process risk losing the ability to practise their profession, whereas, in the same circumstances, the repercussions for a technician are far less severe. Also, even if an AI developer is determined to be at fault, it can be challenging to place the responsibility for the error on a single individual, because numerous different engineers and researchers collaborate on any single AI system.
Obstacles to implementation in real-world scenarios
• Low quality, or absence, of documentation can impede the deployment of AI-based solutions.
• Poor documentation affects integration with existing systems, limiting practical applicability.
• Main stakeholders: AI Customer, AI Subject.
• Main quality aspects: technical robustness, reliability.
Over the past 10 years, several AI algorithms have been created and proposed for use in a variety of applications [i.58] and [i.69].
Nevertheless, the deployment, integration, and adoption of AI technologies still face unique challenges in practice, even when the technologies have gone through the validation process and have been found to be reliable and secure, morally upright and compliant [i.75], and interoperable [i.55].
Annex D: Documentation Schemes and Gap Analysis to the EU AI Act
D.1 Data-Focused Documentation Approaches
D.1.1 Datasheet for Datasets
In 2018, Gebru et al. [i.19] proposed Datasheets for Datasets, which was designed to document the creation and use of datasets, making them a valuable resource for the following groups of stakeholders:
• Dataset creators.
• Dataset consumers.
• Policymakers.
• Consumer advocates.
• Investigative journalists.
• Individuals whose data are included in datasets.
• Individuals impacted by models trained or evaluated using datasets.
This documentation approach spans the following key stages of the dataset life cycle:
• Motivation.
• Composition.
• Collection process.
• Processing/cleaning/labelling.
• Uses.
• Distribution.
• Maintenance.
It is produced using a questionnaire, with the aim of enhancing transparency and accountability in dataset handling. However, while Datasheets for Datasets provide in-depth documentation, they can be resource-intensive to create and maintain, especially for large and evolving datasets. Also, the approach focuses exclusively on documenting datasets, which limits its scope of application.
D.1.2 DescribeML
DescribeML was proposed by Giner-Miguelez et al. [i.20] for documenting the structure, data provenance and social concerns of ML datasets. It intends to meet the needs of the following stakeholders:
• Dataset creators.
• Dataset consumers.
This proposed approach spans the following stages of data creation:
• Gathering.
• Labelling.
• Design.
This documentation approach employs a Domain Specific Language for documenting datasets.
While DescribeML emphasizes the ethical and social dimensions of data usage, it is also limited in focus, which makes it inapplicable for documenting the technical performance aspects of an AI system.
D.1.3 Dataset Nutrition Label
The Dataset Nutrition Label framework was proposed by Holland et al. [i.32] to enhance data quality standards by providing a clear and standardized way to describe datasets. Inspired by nutritional labels on food, these labels offer detailed information about datasets, including their provenance, composition, and any potential biases. This framework is intended to help researchers and practitioners make more informed decisions about the datasets they use, ultimately leading to more reliable and ethical AI systems. The methodology is aimed at the following stakeholders:
• Data specialists.
• Dataset builders and publishers.
Prior to model development, the Dataset Nutrition Label is used to document dataset 'ingredients' at the following stages of the ML development pipeline:
• Dataset collection.
• Dataset preprocessing.
It uses a web-based application as its documentation approach. Despite the Dataset Nutrition Label's comprehensive documentation of dataset 'ingredients', it may be difficult to apply this approach to build a label for sensitive or proprietary data, as such data might be accessible only to those who created the dataset and not to the public.
D.1.4 Data Cards
Data Cards [i.35] are introduced as a documentation tool to promote transparency and responsibility in AI dataset usage. They provide detailed descriptions of datasets, including their creation, intended use, and potential biases. The purpose is to help users understand the data's characteristics and limitations, ensuring more ethical and effective application of AI technologies. The methodology is aimed at the following stakeholders:
• Producers (dataset creators).
• Agents (stakeholders who read the transparency report and have the authority to use datasets or decide how they will be used).
• End users.
This documentation approach documents key information about an ML dataset across the dataset's life cycle, employing Google Docs as its documentation template. While using Google Docs facilitates collaboration among multiple stakeholders, it limits the way input can be provided and may also cause template fragmentation as multiple changes are made to an individual field.
D.1.5 Dataset Development Life Cycle Documentation Framework
This paper [i.34] explores methods to improve accountability in Machine Learning (ML) datasets by drawing parallels with practices from software engineering and infrastructure. The authors propose frameworks and guidelines to document the provenance, characteristics, and usage of datasets, emphasizing the importance of version control, issue tracking, and Continuous Integration/Deployment (CI/CD) pipelines. The goal is to enhance transparency, reproducibility, and accountability in ML dataset management. For convenience, their proposed approach is referred to here as the Dataset Development Life Cycle Documentation Framework (a term introduced by the present document to capture their documentation-based methodology). The methodology is aimed at the following stakeholders:
• Domain experts.
• Data creators/labellers.
• Data scientists.
• Adversarial testers.
This documentation approach is applied at each stage of the dataset life cycle:
• Requirement analysis.
• Design.
• Implementation.
• Testing.
• Maintenance.
It is created using an information sheet. Although it offers detailed documentation for each stage of the dataset development life cycle, its focus is limited, similar to other data-focused documentation methods, and it does not apply to the entire ML development life cycle.
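To make the questionnaire-style, data-focused approaches of clause D.1 concrete, the sections of a datasheet can be captured in a minimal machine-readable record. The sketch below is purely illustrative: the field names echo the Datasheets for Datasets life-cycle stages of clause D.1.1 but are not prescribed by any of the cited approaches, and all example values are invented. It also shows a trivial completeness check of the kind such tooling could offer.

```python
# Hedged sketch: a minimal dataset documentation record whose sections mirror
# the Datasheets for Datasets stages (clause D.1.1). Field names and the
# example values are illustrative, not mandated by the cited approach.
from dataclasses import dataclass, field

@dataclass
class DatasetDatasheet:
    motivation: str                                 # why the dataset was created
    composition: str                                # what the instances represent
    collection_process: str                         # how the data was acquired
    preprocessing: str                              # cleaning/labelling applied
    uses: list = field(default_factory=list)        # intended uses
    distribution: str = ""                          # how the dataset is shared
    maintenance: str = ""                           # who maintains it, update policy

    def unanswered(self):
        """Sections still left empty -- a simple completeness check."""
        return [name for name, value in vars(self).items() if not value]

sheet = DatasetDatasheet(
    motivation="Benchmark for sepsis prediction research",        # invented example
    composition="De-identified ICU patient records",
    collection_process="Exported from hospital EHR under ethics approval",
    preprocessing="Removed direct identifiers; normalized lab units",
)
print(sheet.unanswered())  # remaining gaps: uses, distribution, maintenance
```

Such a record would not replace the narrative answers a full datasheet questionnaire elicits; it merely makes the section structure, and any gaps in it, machine-checkable.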
D.2 Model-And-Method-Focused Documentation Approaches
D.2.1 Model Cards
Researchers at Google® published Model Cards for Model Reporting [i.40], which focuses on documenting the characteristics of trained models, including their performance, intended use cases, and any relevant attributes for which performance may vary. This documentation approach serves a diverse group of stakeholders:
• ML and AI practitioners.
• Model developers.
• Software developers.
• Policy makers.
• Organizations.
• ML-knowledgeable individuals.
• Impacted individuals.
Model Cards ensure that key information about a model is documented across the following stages of the AI-system life cycle (see clause 6.4):
• Development.
• Deployment.
The approach employs an information sheet as its documentation technique. While Model Cards provide detailed documentation of ML models, they do not cover the broader context of data provenance and life cycle management as comprehensively as Datasheets for Datasets.
D.2.2 Method Card
In 2022, Method Cards were proposed by Adkins et al. [i.21] to support robust auditing and evaluation of ML systems through the documentation of both ML models and non-ML components like data acquisition and human-in-the-loop interfaces. These cards are primarily intended for expert stakeholders such as:
• Model developers (engineers).
• External model reviewers (auditors).
Their documentation process spans various stages of ML development, such as:
• Training.
• Testing.
• Debugging.
It is produced using information sheets. Method Cards can be highly technical and may not be beneficial to non-expert stakeholders.
D.3 System-Focused Documentation Approaches
D.3.1 FactSheets
In 2019, Arnold et al. [i.22] introduced a documentation approach called FactSheets for documenting AI services. An AI service, according to [i.22], can be defined as an amalgam of many models trained on many datasets.
This documentation approach targets the needs of multiple stakeholders:
• AI Service suppliers.
• AI Service consumers (developers).
• Standard bodies.
• Civil society.
• Professional organizations.
FactSheets cover the entire AI-service life cycle, specifically:
• Service development.
• Testing.
• Deployment.
• Maintenance.
FactSheets use an information sheet as their documentation technique and offer a broader perspective by providing documentation coverage for both models and datasets within a service. Furthermore, they play a vital role in providing a structured documentation framework that facilitates transparency and helps in regulatory compliance. Although FactSheets inform consumers about AI service intent and construction, they cannot prevent unintended or malicious uses of AI services.
D.3.2 System Cards
In 2022, researchers at Meta AI investigated the importance of system-level transparency in ML systems [i.33]. They proposed the System Card as a documentation approach to document and communicate various aspects of ML systems, including data, models, and decision-making processes. The aim is to enhance user trust and understanding by providing clear and accessible information about how ML systems work and their potential impacts. The methodology is aimed at the following stakeholders:
• Model developers.
• Reviewers.
• Users of ML systems.
System Card documentation spans the entire AI-system life cycle. However, it is more focused on providing insight into the system architecture of an ML-based system. Despite the benefits of system-level transparency, creating System Cards may be tedious, as it relies heavily on manual work, including crafting system diagrams and user interfaces, and requires substantial expertise to simplify technical information effectively.
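As an illustration of the model- and system-focused approaches in clauses D.2 and D.3, a minimal model-card-like record with a completeness check might look as follows. This is a hedged sketch: the field names and all values, including the metrics, are invented for illustration and are not taken from the published Model Card, FactSheet, or System Card templates.

```python
# Hedged sketch: a minimal model-card-like record (cf. clause D.2.1).
# Every field name and value below is illustrative; the published Model Card
# template is considerably richer.
model_card = {
    "model_name": "sepsis-risk-v1",                 # hypothetical model
    "intended_use": "Decision support for ICU triage, not standalone diagnosis",
    "training_data": "De-identified ICU records, 2015-2020",
    "metrics": {"AUROC": 0.83, "sensitivity": 0.78},  # invented numbers
    "performance_caveats": ["Not validated on paediatric patients"],
    "ethical_considerations": ["Potential under-triage of under-represented groups"],
}

def missing_fields(card, required):
    """Flag required model-card fields that are absent or empty."""
    return [f for f in required if not card.get(f)]

print(missing_fields(model_card, ("model_name", "intended_use", "license")))
# the hypothetical card above omits "license"
```

A check like this is the structured-template analogue of the information sheets the clause describes: it cannot judge the quality of an answer, but it can flag documentation elements that are missing entirely.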
D.4 Domain Specific Documentation Approaches
D.4.1 Model Facts Label
The Model Facts Label [i.38] was proposed in 2020 to specifically document a sepsis prediction model for clinical settings, highlighting the model name, performance and uses. This documentation approach is designed by an interdisciplinary team of:
• Developers.
• Clinicians.
• Regulatory experts.
However, the target stakeholders are:
• Clinicians.
According to the Model Facts Label approach, the documentation is created when a system with an integrated ML model is brought into operation in a clinical environment. It employs an information sheet as a documentation technique, to ensure that critical model information is accurately conveyed to the end users in the healthcare domain. As Model Facts Labels are highly specialized and tailored for clinical use, their narrow focus on a specific type of model limits their generalizability to other domains. There also remain many unanswered questions about their design and how to ensure they are accessible, intelligible, and assessable to clinicians.
D.4.2 Risk Cards
In 2023, Risk Cards [i.36] were proposed by Derczynski et al. to focus on the structured assessment and documentation of risks associated with language model applications. They address the needs of:
• Inspection organizations such as auditors.
• AI trainers.
• Researchers.
• Policy makers.
• End users.
This documentation is carried out during the development and deployment phases of language models, using an information sheet. Risk Cards are instrumental in identifying and mitigating potential risks, enhancing the transparency of language model usage. However, they rely on manual evaluation for detailed risk assessment, and this process is costly and may hinder adoption, especially by low-resource teams and organizations.
D.4.3 Datasheet for Subjective and Objective Quality Assessment Datasets
Barman et al.
[i.37] also proposed a datasheet template to document the Quality of Experience (QoE) for 2D video streaming, addressing both subjective and objective assessments. The primary stakeholders are:
• Dataset creators.
• End users.
The documentation is facilitated through multiple formats, such as Google Sheets and PDFs, across the dataset life cycle. This approach ensures that QoE parameters are transparently reported, aiding in the evaluation and improvement of video streaming services. Nonetheless, its applicability is limited to multimedia contexts. Table D.1 lists other existing documentation approaches that were not covered in this clause.
Table D.1: Documentation Approaches and Focus
• Data Statements [i.44] - Data
• Data Card and Model Card for NLP [i.45] - Model and Data
• Dataset Development Lifecycle Documentation Framework [i.34] and [i.46] - Data
• CrowdWorkSheets [i.46] - Data
• Value Cards [i.49] - Model and Method
• Consumer Labels for ML Models [i.50] - Model and Method
• Reward Reports for Reinforcement Learning [i.41] and [i.47] - System
• Robustness Gym [i.48] - System
• ABOUT ML [i.48] - System
D.4.4 Assurance Cases to document the reasoning behind other documented artifacts
Assurance Cases [i.30] are a framework for providing a structured argumentation of why a selection of evidence is considered appropriate to imply that a system is good enough to be used. The framework can address any requirement and is especially suitable for addressing non-functional requirements that are difficult to operationalize. Currently, Assurance Cases are frequently used in the automotive domain to provide sufficient evidence for safety claims, but the framework is applicable to any soft requirements, like fairness or even ethics in general. A main claim, for example that a given system is fair, is decomposed into sub-claims that are either themselves based on the fulfilment of hierarchically structured sub-claims or can be directly induced from evidence.
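The hierarchical claim decomposition just described can be sketched as a small tree structure. This is an illustrative sketch only: it does not reproduce the notation of [i.30], all claim statements and evidence names are invented, and real Assurance Case notations such as GSN distinguish more node types than this.

```python
# Hedged sketch of an Assurance Case claim tree (clause D.4.4): a claim is
# supported either by sub-claims (via an explicit argument and assumptions)
# or, at the leaves, directly by evidence. All names below are invented.
from dataclasses import dataclass, field

@dataclass
class Claim:
    statement: str
    argument: str = ""                                # reasoning behind the decomposition
    assumptions: list = field(default_factory=list)   # made explicit, per the framework
    sub_claims: list = field(default_factory=list)
    evidence: list = field(default_factory=list)

    def supported(self):
        """A leaf claim needs evidence; an inner claim needs all sub-claims supported."""
        if self.sub_claims:
            return all(c.supported() for c in self.sub_claims)
        return bool(self.evidence)

fairness = Claim(
    statement="The system is fair",
    argument="Fairness follows from unbiased data and comparable error rates",
    assumptions=["Deployment population matches the evaluation population"],
    sub_claims=[
        Claim("Training data bias was assessed", evidence=["Bias audit report v2"]),
        Claim("Error rates are comparable across groups"),  # no evidence attached yet
    ],
)
print(fairness.supported())  # False until every leaf claim has evidence
```

Traversing such a tree is what makes the argumentation reviewable: an auditor can locate exactly which sub-claim lacks evidence, rather than assessing the top-level claim monolithically.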
Each decomposition of a claim is made explicit by an argument or reasoning step that explains the idea behind the decomposition. Furthermore, all relevant assumptions for concluding that the sub-claims imply the claim are made explicit and connected to the argument. To ease the understanding of an argument, contextual information can be attached to it as well. In its details, the framework can be used as a pragmatic approach to arrive at a well-documented argument about when, and under which assumptions, a system is deemed good enough to be used in any terms of interest. An Assurance Case can address:
• Auditors/Reviewers.
• Public authorities.
• Compliance managers.
By modelling the argumentation about why the evidence confirms the achievement of the objectives as an Assurance Case, the argumentation for decisions can be documented and disclosed for an external review or audit. By employing the approach before development, and by automating the tests and documenting the results, this process yields the potential to provide long-term protection against unwanted changes, for example through further training or errors introduced when changing the code. Additionally, if similar applications have to be audited again and again, for example in the context of banking audits, best practices can develop over time with the help of Assurance Cases, helping to make well-reasoned decisions in the context of AI-based applications.
D.5 Gap Analysis to EU AI Act
This clause discusses a comprehensive gap analysis of widely recognized AI documentation approaches with respect to the documentation requirements outlined in the EU AI Act (refer to clause 7.3.8). The purpose of this analysis is to assess the extent to which each documentation approach addresses the specific documentation needs prescribed by the EU AI Act, with a particular focus on coverage gaps.
Selection of Documentation Approaches: The twelve documentation approaches were selected based on their prominence and usage within the AI community, as well as their relevance to AI governance and accountability. These approaches include well-established frameworks like Datasheets, DescribeML, Model Card, Factsheets, and others, ensuring a diverse representation of documentation practices across the AI landscape.
Mapping Information Elements: The core of the methodology involved mapping the information elements stipulated in the EU AI Act to the documentation template of each of the twelve documentation approaches. To achieve this, the relevant documentation templates associated with each approach were compiled. These templates were either sourced directly from academic literature or retrieved from publicly available GitHub repositories (if applicable). Where templates were unavailable, the official documentation provided by the creators of the respective approaches was referenced.
Documentation Coverage Evaluation: Once the templates were gathered, they were systematically analysed by evaluating the inclusion or omission of each specific information element defined by the EU AI Act. The evaluation focused on three main categories of documentation requirements:
1) Data-related documentation.
2) System and model-related documentation.
3) Control-related documentation.
For each documentation approach, a binary indicator system was used in the analysis tables:
• An "X" denotes that the approach either fully or partially addresses the corresponding information element.
• A "-" indicates that the element is not addressed by the approach at all.
This binary classification makes it possible to clearly differentiate between covered and entirely uncovered requirements, providing a straightforward overview of how well each approach aligns with the EU AI Act.
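As an illustration of the binary indicator system, the mapping can be evaluated programmatically. The sketch below is not part of the methodology itself; it simply encodes two columns of Table D.2 ("Datasheet for Datasets" and "Risk Cards") and reports, per approach, the fraction of information elements covered and the remaining gaps.

```python
# Hedged sketch: computing coverage from a binary mapping in the style of
# Tables D.2 to D.4. Element and approach names follow Table D.2;
# "x" = fully or partially addressed, "-" = not addressed.
coverage = {
    "Datasheet for Datasets": {
        "Provenance": "x", "Scope": "x", "Characteristics": "x",
        "Collection": "x", "Preprocessing": "x",
        "Validation procedures": "-", "Impact assessment": "x",
    },
    "Risk Cards": {
        "Provenance": "-", "Scope": "-", "Characteristics": "-",
        "Collection": "-", "Preprocessing": "-",
        "Validation procedures": "-", "Impact assessment": "-",
    },
}

def coverage_ratio(approach):
    """Fraction of information elements the approach addresses at least partially."""
    return sum(v == "x" for v in approach.values()) / len(approach)

for name, elements in coverage.items():
    gaps = [e for e, v in elements.items() if v == "-"]
    print(f"{name}: {coverage_ratio(elements):.0%} covered; gaps: {gaps}")
```

Note that a ratio of this kind only summarizes the binary table; it says nothing about the depth with which a covered element is actually documented, which is why the analysis in the present clause treats full and partial coverage identically.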
Table D.2: Assessment of State-of-the-Art Documentation Approaches in Relation to the Information Elements Defined by the EU AI Act (Datasets)
Approaches, in column order for Tables D.2 to D.4: Datasheet for Datasets | Dataset Nutrition Label | Data Cards | DescribeML | Model Cards | Method Card | Factsheet | System Card | Dataset Development Life Cycle Documentation Framework | Model Facts Label | Risk Cards | QoE Datasheet
Provenance: x | x | x | x | x | x | x | x | x | x | - | x
Scope: x | x | x | x | x | x | x | x | x | x | - | x
Characteristics: x | x | x | x | x | x | x | x | x | x | - | x
Collection: x | x | x | x | - | - | x | - | x | - | - | x
Preprocessing: x | - | x | x | x | x | x | x | x | - | - | x
Validation procedures: - | - | x | x | - | x | x | - | x | - | - | -
Impact assessment: x | - | x | x | - | - | x | - | x | - | - | x
Table D.3: Assessment of State-of-the-Art Documentation Approaches in Relation to the Information Elements Defined by the EU AI Act (AI system)
Intended purpose: - | - | - | - | x | x | x | x | - | x | - | -
Risks: - | - | - | - | x | x | x | x | - | x | x | -
Version history: - | - | - | - | x | x | - | x | - | x | - | -
Interaction details: - | - | - | - | - | - | - | - | - | - | - | -
Version and version update requirements: - | - | - | - | - | - | x | - | - | - | - | -
Hardware: - | - | - | - | - | - | - | - | - | - | - | -
User interface: - | - | - | - | - | - | - | - | - | - | - | -
Instruction for use: - | - | - | - | x | x | x | x | - | x | - | -
Development process: - | - | - | - | - | - | x | x | - | x | - | -
Design specifications: - | - | - | - | x | x | x | x | - | x | - | -
System architecture: - | - | - | - | - | x | x | x | - | x | - | -
Life cycle changes: - | - | - | - | - | - | x | - | - | - | - | -
Table D.4: Assessment of State-of-the-Art Documentation Approaches in Relation to the Information Elements Defined by the EU AI Act (Controls)
Human oversight: - | - | - | - | - | - | x | - | - | - | - | -
Monitoring and control: - | - | - | - | - | - | x | - | - | - | - | -
Accuracy: - | - | - | - | x | - | x | x | x | x | - | -
Robustness: - | - | - | - | x | x | x | x | x | x | - | -
Cybersecurity measures: - | - | - | - | - | - | - | - | - | - | - | -
Performance: - | - | x | - | x | x | x | x | x | x | - | -
Risk management: x | - | x | - | x | x | x | x | x | x | x | x
Post-market evaluation: - | - | - | - | - | - | x | - | - | - | - | -
Testing: - | - | - | - | x | x | x | x | x | x | - | -
Privacy: x | - | - | x | - | x | x | - | - | - | - | x
History
Document history
V1.1.1 September 2025 Publication
|
5b2fbb7f9f34ec451504359adb0cbf12
|
104 097
|
1 Scope
|
The present document describes the structure of the Indoor Fibre Distribution Network (IFDN) Hybrid Cabling System, main functional elements and their characteristics, deployment details and acceptance items.
|
5b2fbb7f9f34ec451504359adb0cbf12
|
104 097
|
2 References
| |
5b2fbb7f9f34ec451504359adb0cbf12
|
104 097
|
2.1 Normative references
|
Normative references are not applicable in the present document.
|
5b2fbb7f9f34ec451504359adb0cbf12
|
104 097
|
2.2 Informative references
|
References are either specific (identified by date of publication and/or edition number or version number) or non-specific. For specific references, only the cited version applies. For non-specific references, the latest version of the referenced document (including any amendments) applies. NOTE: While any hyperlinks included in this clause were valid at the time of publication ETSI cannot guarantee their long-term validity. The following referenced documents may be useful in implementing an ETSI deliverable or add to the reader's understanding, but are not required for conformance to the present document. [i.1] IEC 60228: "Performance characteristics and calibration methods for digital data acquisition systems and relevant software". [i.2] IEC 60332-1-2: "Tests on electric and optical fibre cables under fire conditions - Part 1-2: Test for vertical flame propagation for a single insulated wire or cable - Procedure for 1 kW pre-mixed flame". [i.3] IEC 60512-1-1: "Connectors for electronic equipment - Tests and measurements - Part 1-1: General examination - Test 1a: Visual examination". [i.4] IEC 60512-1-2: "Connectors for electronic equipment - Tests and measurements - Part 1-2: General examination - Test 1b: Examination of dimension and mass". [i.5] IEC 60512-2-1: "Connectors for electronic equipment - Tests and measurements - Part 2-1: Electrical continuity and contact resistance tests - Test 2a: Contact resistance - Millivolt level method". [i.6] IEC 60512-2-5: "Connectors for electronic equipment - Tests and measurements - Part 2-5: Electrical continuity and contact resistance tests - Test 2e: Contact disturbance". [i.7] IEC 60512-3-1: "Connectors for electronic equipment - Tests and measurements - Part 3-1: Insulation tests - Test 3a: Insulation resistance ". [i.8] IEC 60512-4-1: "Connectors for electronic equipment - Tests and measurements - Part 4-1: Voltage stress tests - Test 4a: Voltage proof". 
[i.9] IEC 60512-6-3: "Connectors for electronic equipment - Tests and measurements - Part 6-3: Dynamic stress tests - Test 6c: Shock".
[i.10] IEC 60512-6-4: "Connectors for electronic equipment - Tests and measurements - Part 6-4: Dynamic stress tests - Test 6d: Vibration (sinusoidal)".
[i.11] IEC 60512-11-4: "Connectors for electronic equipment - Tests and measurements - Part 11-4: Climatic tests - Test 11d: Rapid change of temperature".
[i.12] IEC 60512-11-9: "Connectors for electronic equipment - Tests and measurements - Part 11-9: Climatic tests - Test 11i: Dry heat".
ETSI TR 104 097 V1.1.1 (2025-10)
[i.13] IEC 60512-11-10: "Connectors for electronic equipment - Tests and measurements - Part 11-10: Climatic tests - Test 11j: Cold".
[i.14] IEC 60512-11-12: "Connectors for electronic equipment - Tests and measurements - Part 11-12: Climatic tests - Test 11m: Damp heat, cyclic".
[i.15] IEC 60512-13-2: "Connectors for electronic equipment - Tests and measurements - Part 13-2: Mechanical operation tests - Test 13b: Insertion and withdrawal forces".
[i.16] IEC 60754-2: "Test on gases evolved during combustion of materials from cables - Part 2: Determination of acidity (by pH measurement) and conductivity".
[i.17] IEC 60793-1-40: "Optical fibres - Part 1-40: Attenuation measurement methods".
[i.18] IEC 60794-1-101: "Optical fibre cables - Part 1-101: Generic specification - Basic optical cable test procedures - Mechanical tests methods - Tensile, method E1".
[i.19] IEC 60794-1-104: "Optical fibre cables - Part 1-104: Generic specification - Basic optical cable test procedures - Mechanical tests methods - Impact, method E4".
[i.20] IEC 60794-1-111: "Optical fibre cables - Part 1-111: Generic specification - Basic optical cable test procedures - Mechanical tests methods - Bend, method E11".
[i.21] IEC 60794-1-21: "Optical fibre cables - Part 1-21: Generic specification - Basic optical cable test procedures - Mechanical tests methods".
[i.22] IEC 60794-1-201: "Optical fibre cables - Part 1-201: Generic specification - Basic optical cable test procedures - Environmental test methods - Temperature cycling, method F1".
[i.23] IEC 60874-14-5: "Connectors for optical fibres and cables - Part 14-5: Detail specification for fibre optic connector type SC-PC untuned terminated to single-mode fibre type B1".
[i.24] IEC 61034-2: "Measurement of smoke density of cables burning under defined conditions - Part 2: Test procedure and requirements".
[i.25] IEC 61300-2-1: "Fibre optic interconnecting devices and passive components - Basic test and measurement procedures - Part 2-1: Tests - Vibration (sinusoidal)".
[i.26] IEC 61300-2-2: "Fibre optic interconnecting devices and passive components - Basic test and measurement procedures - Part 2-2: Tests - Mating durability".
[i.27] IEC 61300-2-4: "Fibre optic interconnecting devices and passive components - Basic test and measurement procedures - Part 2-4: Tests - Fibre or cable retention".
[i.28] IEC 61300-2-5: "Fibre optic interconnecting devices and passive components - Basic test and measurement procedures - Part 2-5: Tests - Torsion".
[i.29] IEC 61300-2-6: "Fibre optic interconnecting devices and passive components - Basic test and measurement procedures - Part 2-6: Tests - Tensile strength of coupling mechanism".
[i.30] IEC 61300-2-9: "Fibre optic interconnecting devices and passive components - Basic test and measurement procedures - Part 2-9: Tests - Shock".
[i.31] IEC 61300-2-17: "Fibre optic interconnecting devices and passive components - Basic test and measurement procedures - Part 2-17: Tests - Cold".
[i.32] IEC 61300-2-18: "Fibre optic interconnecting devices and passive components - Basic test and measurement procedures - Part 2-18: Tests - Dry heat".
[i.33] IEC 61300-2-19: "Fibre optic interconnecting devices and passive components - Basic test and measurement procedures - Part 2-19: Tests - Damp heat (steady state)".
[i.34] IEC 61300-2-22: "Fibre optic interconnecting devices and passive components - Basic test and measurement procedures - Part 2-22: Tests - Change of temperature".
[i.35] IEC 61300-2-26: "Fibre optic interconnecting devices and passive components - Basic test and measurement procedures - Part 2-26: Tests - Salt mist".
[i.36] IEC 61300-2-44: "Fibre optic interconnecting devices and passive components - Basic test and measurement procedures - Part 2-44: Tests - Flexing of the strain relief of fibre optic devices".
[i.37] IEC 61300-3-3: "Fibre optic interconnecting devices and passive components - Basic test and measurement procedures - Part 3-3: Examinations and measurements - Active monitoring of changes in attenuation and return loss".
[i.38] IEC 61300-3-4: "Fibre optic interconnecting devices and passive components - Basic test and measurement procedures - Part 3-4: Examinations and measurements - Attenuation".
[i.39] IEC 61300-3-6: "Fibre optic interconnecting devices and passive components - Basic test and measurement procedures - Part 3-6: Examinations and measurements - Return loss".
[i.40] IEC 61300-3-28: "Fibre optic interconnecting devices and passive components - Basic test and measurement procedures - Part 3-28: Examinations and measurements - Transient loss".
[i.41] IEC 61300-3-34: "Fibre optic interconnecting devices and passive components - Basic test and measurement procedures - Part 3-34: Examinations and measurements - Attenuation of random mated connectors".
[i.42] IEC 61753-1: "Fibre optic interconnecting devices and passive components - Performance standard - Part 1: General and guidance".
[i.43] IEC 61754-4:2022: "Fibre optic interconnecting devices and passive components - Fibre optic connector interfaces - Part 4: Type SC connector family".
[i.44] IEC 62368-1:2023 RLV: "Audio/video, information and communication technology equipment - Part 1: Safety requirements".
[i.45] IEC 63294:2021: "Test methods for electric cables with rated voltages up to and including 450/750 V".
[i.48] IEEE 802.3af™: "IEEE Standard for Information Technology - Telecommunications and Information Exchange Between Systems - Local and Metropolitan Area Networks - Specific Requirements - Part 3: Carrier Sense Multiple Access with Collision Detection (CSMA/CD) Access Method and Physical Layer Specifications - Data Terminal Equipment (DTE) Power Via Media Dependent Interface (MDI)".
[i.49] IEEE 802.3at™: "IEEE Standard for Information technology - Local and metropolitan area networks - Specific requirements - Part 3: CSMA/CD Access Method and Physical Layer Specifications Amendment 3: Data Terminal Equipment (DTE) Power via the Media Dependent Interface (MDI) Enhancements".
[i.50] IEEE 802.3bt™: "IEEE Standard for Ethernet Amendment 2: Physical Layer and Management Parameters for Power over Ethernet over 4 pairs".
[i.51] Recommendation ITU-T G.9940 (ex G.fin-SA): "High speed fibre-based in-premises transceivers - system architecture".
[i.52] UL 94: "Test for Flammability of Plastic Materials for Parts in Devices and Appliances".
[i.53] EN 55032: "Electromagnetic compatibility of multimedia equipment - Emission Requirements", (produced by CENELEC).
[i.54] EN 55035: "Electromagnetic Compatibility of Multimedia equipment - Immunity Requirements", (produced by CENELEC).
|
5b2fbb7f9f34ec451504359adb0cbf12
|
104 097
|
3 Definition of terms, symbols and abbreviations
| |
5b2fbb7f9f34ec451504359adb0cbf12
|
104 097
|
3.1 Terms
|
For the purposes of the present document, the terms given in Recommendation ITU-T G.9940 [i.51] (ex G.fin-SA) and the following apply:

hybrid cabling system: system that supports the connection of information technology equipment and the transmission of optical signals and power supply, usually consisting of active distribution units (optical & electrical splitters), optical & electrical hybrid cables, and optical & electrical hybrid connectors

Indoor Fibre Distribution Network (IFDN): point-to-multipoint optical fibre infrastructure for a fibre-based in-premises network

NOTE: An IFDN can be entirely passive, constructed from one or more interconnected optical splitters together with other passive optical components such as combiners and filters. The IFDN can also provide remote power feed functionality to a Sub FTTR Unit (SFU) by using optical and electrical hybrid cables.
|
5b2fbb7f9f34ec451504359adb0cbf12
|
104 097
|
3.2 Symbols
|
Void.
|
5b2fbb7f9f34ec451504359adb0cbf12
|
104 097
|
3.3 Abbreviations
|
For the purposes of the present document, the following abbreviations apply:
ADU Active Distribution Unit
CE Conducted Emission
CS Injected currents
EMC ElectroMagnetic Compatibility
ESD Electro-Static Discharge
FDT Fibre Distribution Terminal
IFDN Indoor Fibre Distribution Network
LSZH Low Smoke Zero Halogen material
MFU Main FTTR Unit
PoHC Power over Hybrid Cable
RE Radiated Emission
RS RF electromagnetic field
SFU Sub FTTR Unit
|
5b2fbb7f9f34ec451504359adb0cbf12
|
104 097
|
4 Structure of IFDN Hybrid Cabling System
| |
5b2fbb7f9f34ec451504359adb0cbf12
|
104 097
|
4.1 General
|
An Indoor Fibre Distribution Network (IFDN) is mainly composed of optical splitters or optical & electrical splitters, optical cables or optical & electrical hybrid cables, and fibre connectors or optical & electrical hybrid connectors. The position of the IFDN cabling system is shown in Figure 4.1. Traditional optical fibre networking uses optical cables and corresponding power supply facilities. A hybrid cabling system can effectively obtain local power, which maximizes the utilization of existing infrastructure and optimizes the cost of deployment and maintenance. The present document focuses on the IFDN hybrid cabling system.

Figure 4.1: Position of IFDN cabling system

Based on the user density, the IFDN hybrid cabling system can be divided into four typical network structures with different splitting ratios and splitter layouts: one-level even splitting structure, one-level uneven splitting structure, multi-level uneven splitting cascade structure and multi-level even splitting cascade structure. The IFDN cabling structure is advisable to be determined based on the use scenarios.

In low-density user scenarios, one-level even ratio or uneven ratio networking structures as shown in Figure 4.2 are usually deployed. The even splitting ratio can be 1:4, 1:5, 1:8 or 1:16, and the uneven splitting ratio can be 1:9.

Figure 4.2: Structure of one-level splitting IFDN hybrid cabling system

In medium-density user scenarios, the two-level uneven ratio optical cascade network can be used to increase the number of Sub FTTR Units (SFUs). The uneven splitting ratio can be 1:5, as shown in Figure 4.3.

Figure 4.3: Structure of 1:5 uneven ratio cascading IFDN hybrid cabling system

In high-density user scenarios, both the multi-level uneven ratio and even ratio optical cascade networks can be used to increase the number of SFUs. For the uneven ratio optical cascade network, the uneven ratio can be 1:9, as shown in Figure 4.4. For the even ratio optical cascade network, as shown in Figure 4.5, the first-level even ratio can be 1:4, and the second-level even ratio can be 1:16.

Figure 4.4: Structure of 1:9 uneven ratio cascading IFDN hybrid cabling system

Figure 4.5: Structure of even ratio cascading IFDN hybrid cabling system
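For illustration only (not a provision of the present document), the SFU capacity of these cascade structures can be computed with a short Python sketch; the assumption that each uneven ADU dedicates exactly one branch to the cascading port is illustrative:

```python
from math import prod

def even_cascade_sfus(ratios):
    """Capacity of an even-ratio cascade (e.g. 1:4 then 1:16):
    each level's 1:N splitter feeds N next-level units, so the
    total SFU capacity is the product of the split ratios."""
    return prod(ratios)

def uneven_chain_sfus(ratio, num_levels):
    """SFU count for a chain of identical 1:ratio uneven ADUs,
    assuming one branch per ADU serves as the cascading port and
    the last ADU in the chain uses all branches for SFUs."""
    return (num_levels - 1) * (ratio - 1) + ratio
```

For example, the even cascade of Figure 4.5 (1:4 followed by 1:16) yields a capacity of 64 SFUs.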
|
5b2fbb7f9f34ec451504359adb0cbf12
|
104 097
|
4.2 Power over Hybrid Cable (PoHC)
|
Power over Hybrid Cable (PoHC) is a method of providing both power and data to a device over a hybrid cable.

Figure 4.6: PoHC system

Figure 4.7: Common power and optical cabling system

To solve the problem of compatibility between power supplying and power receiving devices from different manufacturers and to provide safety of use, the ADU and SFU should support the IEEE 802.3af [i.48], IEEE 802.3at [i.49] or IEEE 802.3bt [i.50] standard.

4.3 Function elements

In the IFDN hybrid cabling system, the key function units mainly include active distribution units (hybrid splitters), hybrid cables and hybrid connectors.

1) Active Distribution Unit: The Active Distribution Unit (ADU), an active optical device containing a splitter, is the key part connecting the MFU and the SFU; it provides optical signals and power input for the SFU. It can be installed in an indoor information box or on a wall. Its functions include: a photoelectric composite interface; an internal integrated optical splitter; an electrical security protocol on each port.

2) Hybrid cable: A composite cable that integrates optical fibre and power transmission copper wire. It can provide data transmission and remote power supply for terminal equipment, and the data transmission rate is no less than 1 Gbit/s.

3) Hybrid connector: A double-ended prefabricated photoelectric composite connector. Typical products include the type XC and type SC hybrid connectors. The type XC hybrid connector has a small outer diameter, light weight and small occupied space. The type SC hybrid connector has good compatibility, with the same interface as the type SC optical connector.
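The choice among the referenced PoE standards is driven by the load power; as an illustrative helper (the rated-power ceilings are the per-class values listed later in Table 7.2, and cable losses are ignored in this sketch):

```python
# PSE rated-power ceilings of the referenced IEEE standards,
# as listed in Table 7.2 of the present document.
POE_LIMITS = (
    ("IEEE 802.3af (PoE)", 15.4),
    ("IEEE 802.3at (PoE+)", 30.0),
    ("IEEE 802.3bt (PoE++)", 90.0),
)

def required_poe_standard(load_w):
    """Lowest PoE standard whose rated power covers the given load."""
    for name, limit_w in POE_LIMITS:
        if load_w <= limit_w:
            return name
    raise ValueError(f"{load_w} W exceeds IEEE 802.3bt capability")
```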
|
5b2fbb7f9f34ec451504359adb0cbf12
|
104 097
|
5 Hybrid cable
| |
5b2fbb7f9f34ec451504359adb0cbf12
|
104 097
|
5.1 Structure
|
The hybrid cable is typically composed of optical fibre elements and current carrying elements, optionally together with a strength member, filler, yarn, tape and ripcord, plus a sheath. The hybrid cable should meet the requirements of the application and operating environment. Typical FTTR (IFDN) hybrid cables include the round-type, bow-type, and flat-type. Sectional views of typical structures are shown in Figures 5.1 to 5.3.

Figure 5.1: Bow-type hybrid cable

Figure 5.2: Flat-type hybrid cable

Figure 5.3: Round-type hybrid cable
|
5b2fbb7f9f34ec451504359adb0cbf12
|
104 097
|
5.2 Optical fibre elements
|
The optical fibre elements can be composed of one or more optical fibres, tight or semi-tight buffered fibres, fibre ribbons, buffer tubes, or other optical core structures, or independent optical fibre cables (such as loose tube cable). The optical fibre elements should be in accordance with the following:

a) For ease of identification, all the optical fibres and optical fibre elements should be identified by colour coding, ring marking, printing or any other way agreed between the customer and the supplier. If the primary coated fibres are coloured for identification, the coloured coating needs to remain readily identifiable throughout the lifetime of the cable.

b) The material of the optical elements' sheath or loose tube can be polyethylene, polypropylene, Polybutylene Terephthalate (PBT), Low Smoke Zero Halogen (LSZH) material, polyvinyl chloride or other materials suitable to the application.

5.3 Current carrying elements

The design of the conductor cross-sections needs to be in accordance with the rated voltage, transmission distance and consumed power of the powered device. Current carrying elements can be copper or another conductive material. The conductor needs to be continuous, without joints, through the length of the hybrid cable.

5.4 Strength member

Strength members should be made of aramid yarn or other material, and be placed in a suitable position according to the structure of the hybrid cable.

5.5 Outer sheath

Polyethylene, polypropylene, PVC, polyurethane, flame retardant low smoke polyolefin and other suitable materials can be used.
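The cross-section design rule for the current carrying elements can be illustrated with a simple DC voltage-drop estimate (a sketch, assuming a plain copper conductor at 20 °C; the resistivity constant is a textbook value, not a requirement of the present document):

```python
RHO_CU = 0.0172  # Ω·mm²/m, resistivity of annealed copper at 20 °C

def loop_voltage_drop(length_m, cross_section_mm2, current_a):
    """DC voltage drop over both conductors (feed and return) of a
    hybrid cable run - the quantity that drives the choice of
    conductor cross-section for a given rated voltage, distance
    and load. Illustrative sizing check only."""
    loop_resistance = RHO_CU * 2 * length_m / cross_section_mm2
    return current_a * loop_resistance
```

For example, a 50 m run of 0,5 mm² conductors carrying 0,3 A drops about 1 V over the loop.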
|
5b2fbb7f9f34ec451504359adb0cbf12
|
104 097
|
5.3 Optical transmission performance
|
Optical transmission performance for cabled optical fibre elements needs to conform to Table 5.1.

Table 5.1: Optical transmission performance for cabled optical fibre elements
Attenuation coefficient at 1 550 nm - IEC 60793-1-40 [i.17]: ≤ 0,30 dB/km for B-652.D; ≤ 0,30 dB/km for B-657.
Attenuation coefficient at 1 300 nm - IEC 60793-1-40 [i.17]: ≤ 1,5 dB/km for A1-OM1 to A1-OM5.
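A compliance check against the Table 5.1 ceilings can be sketched as follows (the dictionary keys are illustrative labels, not normative identifiers):

```python
# Attenuation-coefficient ceilings from Table 5.1, in dB/km.
CABLED_FIBRE_LIMITS = {
    ("B-652.D", 1550): 0.30,
    ("B-657", 1550): 0.30,
    ("A1-OM", 1300): 1.5,  # A1-OM1 to A1-OM5 share one limit
}

def cabled_fibre_ok(fibre_category, wavelength_nm, measured_db_per_km):
    """Check a measured attenuation coefficient (IEC 60793-1-40
    method) against the Table 5.1 requirement."""
    limit = CABLED_FIBRE_LIMITS[(fibre_category, wavelength_nm)]
    return measured_db_per_km <= limit
```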
|
5b2fbb7f9f34ec451504359adb0cbf12
|
104 097
|
5.4 Electrical performance
|
Electrical performance for current carrying elements needs to conform to Table 5.2.

Table 5.2: Electrical performance for current carrying elements

1 Conductor DC resistance - IEC 60228 [i.1], Annex A. Test equipment: a current source in conjunction with a voltmeter. Sample length under test: not less than 1 m. Requirement: the measured value needs to be recalculated to the standard temperature of 20 °C and needs to conform to the relevant specification.

2 Dielectric withstand voltage - IEC 63294:2021 [i.45], clause 5.3. Severity: 1,5 kV (AC), 1 min. Requirement: no breakdown of the insulation.

3 Insulation resistance - IEC 63294:2021 [i.45], clause 5.4. Severity: test at ambient temperature 20 °C and at the higher temperature 70 °C, or according to the relevant detail specifications; 80 V DC to 500 V DC. Requirement: the measured value needs to conform to the relevant specification.

5.5 Mechanical performance

The mechanical tests will affect all the elements of the cable to some degree. The cable should be tested as a whole, rather than as discrete elements. Tests on single mode fibre cables should be carried out at 1 550 nm. Multimode fibre cables should be tested at 1 300 nm. Measurements at other wavelengths or wavelength ranges may be agreed upon between the customer and the supplier. Mechanical performance tests should conform to Table 5.3.

Table 5.3: Mechanical performance tests

1 Tensile performance - IEC 60794-1-101 [i.18]. Length of the cable under tension: ≥ 50 m. Diameter of test pulleys: ≥ 20 D (D is the diameter of the finished cable). Long-term tensile load (TL): 80 N for cables with strength member (e.g. aramid yarn), 60 N for cables without strength member. Short-term tensile load (TM): 150 N for cables with strength member (e.g. aramid yarn), 120 N for cables without strength member. Duration: 10 min for long-term tensile force, 1 min for short-term tensile force. Rate of tension increase: 100 mm/min. Criteria: O (see note 3), E (see note 4), V. The axial fibre strain should be less than 60 % of the fibre proof strain while the cable is under the short-term tensile load. While the cable is under the long-term tensile load, the axial fibre strain should be less than 20 % of the fibre proof strain, for fibre proof tested to ≤ 1 % strain (e.g. 0,69 GPa, 0,2 % absolute strain). See note 1.

2 Crush - IEC 60794-1-21 [i.21], Method E3A. Not less than 3 pieces of the sample, each separated by 500 mm. The load needs to be applied on the wider side of the cable. Long-term load: 1,1 kN. Short-term load: 2,2 kN. Duration: 10 min for long-term load, 1 min for short-term load. Criteria: O (see note 5), E, V.

3 Impact - IEC 60794-1-104 [i.19]. Impact energy: 1 J or as agreed between customer and supplier. 3 impact points, one impact per point, points spaced not less than 500 mm apart. Radius of striking surface: 12,5 mm. Criteria: O, E, V (see note 5).

4 Repeated bending - IEC 60794-1-21 [i.21], Method E6. Mass of the tensile-load weight: adequate to assure uniform contact of the specimen with the mandrel. Bending radius: 20 D (D is the diameter of a round cable, or the short-axis length of a bow-type or flat-type cable). Number of cycles: 25. Criteria: O, E, V.

5 Torsion - IEC 60794-1-21 [i.21], Method E7. Tensile load: adequate to keep the specimen straight. Length under test: 1 m. Rotating angle (see note 6): ±180°. Number of cycles: 10. Criteria: O, E, V.

6 Bend - IEC 60794-1-111:2023 [i.20]. Diameter of mandrel: 20 D (D is the diameter of the finished cable). Number of cycles: 10. Number of turns: 6. Test temperature: ambient (unless specifically requested otherwise). Criteria: O, E, V.

NOTE 1: For fibres proof tested at levels above 1 % strain, the safe long-term load will not scale linearly with proof strain, so a lower percentage of the proof strain is applicable.
NOTE 2: 'O' means no change in attenuation as defined in IEC 60794-1-21 [i.21] after the test; 'E' means the dielectric withstand voltage needs to conform to Table 5.2 after the test; 'V' means visual examination, with no damage to the sheath or to the cable elements.
NOTE 3: The change in attenuation during the test needs to be no more than 0,1 dB at 1 550 nm.
NOTE 4: The maximum increase in attenuation during the test with a long-term force needs to be specified in the product specification.
NOTE 5: The imprint of the striking surface on the sheath is not considered mechanical damage.
NOTE 6: If the specified twist angle applied to the cable results in a high torsional torque that is not suitable for the cable type, then the rotating angle should be lowered as specified by the manufacturer.
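The recalculation of a conductor DC-resistance reading to 20 °C (Table 5.2, row 1) follows the usual linear temperature model; a sketch, with the copper temperature coefficient taken as an assumed textbook value rather than a figure from IEC 60228 [i.1]:

```python
ALPHA_CU = 0.00393  # 1/°C, temperature coefficient of copper resistance

def resistance_at_20c(r_measured_ohm, t_measured_c):
    """Normalize a conductor DC-resistance reading to the 20 °C
    reference temperature required by Table 5.2 (linear model)."""
    return r_measured_ohm / (1 + ALPHA_CU * (t_measured_c - 20))
```

A reading taken at 30 °C is thus divided by roughly 1,04 before comparison with the specified value.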
|
5b2fbb7f9f34ec451504359adb0cbf12
|
104 097
|
5.6 Environmental performance
|
Environmental performance tests should conform to Table 5.4.

Table 5.4: Environmental performance tests

1 Temperature cycling - IEC 60794-1-201 [i.22]. Length under test: finished cable length, not less than 1 000 m. Temperature range: -10 °C to +60 °C. Duration at extreme temperatures: 8 h. Rate of temperature change: 1 °C/min. Number of cycles: 2. Criteria: attenuation measurements should be taken at 1 550 nm for single-mode fibre and at 1 300 nm for multimode fibre after the test; during the test, the change in attenuation should be no more than 0,4 dB/km at 1 550 nm; the dielectric withstand voltage should comply with Table 5.2; under visual examination without magnification, no damage to the sheath or to the cable elements.

2 Flame test:
• Flame propagation - IEC 60332-1-2 [i.2] or other methods agreed between customer and supplier. Criteria: pass the IEC 60332-1-2 [i.2] single cable vertical flame propagation test; the distance between the lower edge of the top support and the onset of charring is greater than 50 mm; the distance from the lower edge of the top support to the lower onset of charring is less than 540 mm.
• Emission of smoke (for cables with LSZH material) - IEC 61034-2 [i.24]. Criterion: transmittance ≥ 60 %.
• Emission of corrosive gases (for cables with LSZH material) - IEC 60754-2 [i.16]. Criteria: acidity pH ≥ 4,3; conductivity ≤ 10 μS/mm.
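The Table 5.4 acceptance criteria can be expressed as simple pass/fail predicates (illustrative only; the function and parameter names are not taken from the standard):

```python
def temperature_cycling_pass(delta_atten_db_per_km_1550,
                             withstand_ok, visual_ok):
    """Table 5.4 row 1: ≤ 0,4 dB/km attenuation change at 1 550 nm,
    dielectric withstand per Table 5.2, and no visible damage."""
    return delta_atten_db_per_km_1550 <= 0.4 and withstand_ok and visual_ok

def lszh_combustion_pass(transmittance_pct, acidity_ph,
                         conductivity_us_per_mm):
    """Table 5.4 smoke and corrosive-gas criteria for LSZH cables."""
    return (transmittance_pct >= 60
            and acidity_ph >= 4.3
            and conductivity_us_per_mm <= 10)
```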
|
5b2fbb7f9f34ec451504359adb0cbf12
|
104 097
|
6 Hybrid Connector
| |
5b2fbb7f9f34ec451504359adb0cbf12
|
104 097
|
6.1 Structure
|
A hybrid connector in the IFDN hybrid cabling system is a miniaturized plug-in connector that consists of a single-core plug and an adapter, with optical and electrical connection characteristics. It is used to transmit signals and remotely supply power to low-power terminals such as WLAN devices and cameras. At present, single mode hybrid connectors are mainly used in the IFDN hybrid cabling system, and the most widely used hybrid connectors in IFDN scenarios are the type XC hybrid connector and the type SC hybrid connector.

The dimensions should be measured according to IEC 60512-1-2 [i.4]. The plug interface of the type XC hybrid connector is shown in Figure 6.1, and the dimensions should comply with Table 6.1. Figure 6.2 and Figure 6.3 show the interface for type XC hybrid connector adapters mounted on printed circuit boards and connection adapters respectively; the dimensions should comply with Table 6.2. The plug interface of the type SC hybrid connector is shown in Figure 6.4, and the dimensions should comply with Table 6.3. Figure 6.5 and Figure 6.6 show the interface for type SC hybrid connector adapters mounted on printed circuit boards and connection adapters respectively; the dimensions should comply with Table 6.4. Other optical interface dimensions and requirements should comply with IEC 60874-14-5 [i.23].

Figure 6.1: Interface for type XC hybrid plug connector

Table 6.1: Dimensions of type XC hybrid plug connector interface
A: 2,4985 mm to 2,4995 mm
B: 5,95 mm to 6,15 mm
C: 6,25 mm to 6,45 mm
D: 11,3 mm to 11,6 mm (see note)
E: 0,6 mm to 0,8 mm
F: 1,45 mm to 1,65 mm
G: 1,2 mm to 1,4 mm (radius)
H: 2 mm to 2,2 mm (radius)
I: 0,6 mm to 0,8 mm
J: 0,9 mm to 1,2 mm
L: 5,95 mm to 6,05 mm
M: 6,15 mm to 6,25 mm
N: 3,6 mm to 3,7 mm
O: 2,6 mm to 2,7 mm
P: 2,05 mm to 2,25 mm
Q: 5° to 8°
NOTE: Dimension D is based on the plug connector end face when unmated. The ferrule is moved in the direction of the contact face by the central axial pressure, so dimension D is variable.
Figure 6.2: Interface for type XC hybrid connector adapters mounted on printed circuit boards

Figure 6.3: Interface for type XC hybrid connection adapters

Table 6.2: Dimensions of type XC hybrid connector adapters and connection adapters
A: see note
AI: 2,7 mm to 2,8 mm
B: 2,3 mm to 2,5 mm
C: 3,1 mm to 3,3 mm
D: 38° to 45°
E: 35° to 48°
F: 2,8 mm to 3,2 mm
G: 1,2 mm to 1,5 mm
H: 11 mm to 11,2 mm
I: 3,6 mm to 3,8 mm
J: 2,6 mm to 2,7 mm
K: 6,5 mm to 6,7 mm
L: 2,35 mm to 2,45 mm
M: 6,1 mm to 6,2 mm
N: 3,8 mm to 4,2 mm
NOTE: See table 6 in IEC 61754-4:2022 [i.43].

Figure 6.4: Type SC hybrid plug connector interface

Table 6.3: Dimensions of type SC hybrid plug connector interface
W: 7,29 mm to 7,39 mm (height dimension of the contact area after the installation of electric contacts)
X: 2 mm maximum (electric contact front end relative to the mechanical datum plane)
Y: 6,55 mm minimum (electric contact back end relative to the mechanical datum plane)
YB: 0,8 mm minimum (width of the electric contact)
YD: 4,2 mm to 4,3 mm (centre distance between the two electric contacts)
Z: 1,3 mm minimum (width of the through hole)
NOTE: Other interface dimensions and requirements should comply with the requirements of IEC 61754-4 [i.43].
Figure 6.5: Interface for type SC hybrid adapters mounted on printed circuit boards

Figure 6.6: Interface for type SC hybrid connector adapters

Table 6.4: Dimensions of type SC hybrid adapters
E: 4,2 mm to 4,3 mm (centre distance between the two electric contacts)
F: 0,8 mm to 1,2 mm (width of the electric contact)
W: 12,4 mm minimum (contact area front end relative to the mechanical datum plane)
X: 10,6 mm maximum (contact area back end relative to the mechanical datum plane)
Y: 3,3 mm to 3,4 mm (height dimension of the contact area plane relative to the optical datum axis)
YA: 10° to 15° (bending angle)

6.2 Optical transmission performance

Optical transmission performance for hybrid connectors should conform to Table 6.5.

Table 6.5: Optical transmission performance of hybrid connectors

Attenuation (with reference connector), IEC 61300-3-4 [i.38]: ≤ 0,50 dB.

Return loss (with reference connector), IEC 61300-3-6 [i.39]: ≥ 50 dB for XC/UPC and SC/UPC hybrid connectors; ≥ 60 dB for the SC/APC hybrid connector.

Attenuation of random mated connectors, IEC 61300-3-34 [i.41] for single fibre connectors, at 1 310 nm, 1 550 nm and 1 625 nm:
• Grade A: not specified
• Grade B: ≤ 0,12 dB mean, ≤ 0,25 dB max. for ≥ 97 % of the connections
• Grade C: ≤ 0,25 dB mean, ≤ 0,50 dB max. for ≥ 97 % of the connections
• Grade D: ≤ 0,50 dB mean, ≤ 1,0 dB max. for ≥ 97 % of the connections

Random mated return loss, IEC 61300-3-34 [i.41], at 1 310 nm, 1 550 nm and 1 625 nm:
• Grade 1: ≥ 60 dB (mated) and ≥ 55 dB (unmated)
• Grade 2: ≥ 45 dB
• Grade 3: ≥ 35 dB
• Grade 4: ≥ 26 dB

Active monitoring of changes in attenuation and in return loss (multiple path), IEC 61300-3-3 [i.37]:
• Change in attenuation during test: δ ≤ 0,2 dB at 1 310 nm and 1 550 nm and δ ≤ 0,3 dB at 1 625 nm for pigtails (1 connection); δ ≤ 0,5 dB at 1 310 nm, δ ≤ 0,6 dB at 1 550 nm and δ ≤ 0,8 dB at 1 625 nm for patch cords (2 connections)
• Change in attenuation after test: δ ≤ 0,2 dB at 1 310 nm, 1 550 nm and 1 625 nm for pigtails (1 connection); δ ≤ 0,4 dB at 1 310 nm, 1 550 nm and 1 625 nm for patch cords (2 connections)

Transient loss, IEC 61300-3-28 [i.40]:
• Change in attenuation during test: δ ≤ 0,5 dB at 1 550 nm per connection; δ ≤ 1,0 dB at 1 625 nm per connection
• Change in attenuation after test: δ ≤ 0,2 dB at 1 550 nm and 1 625 nm per connection
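The random-mated attenuation grading amounts to a threshold classification over the mean and the 97th-percentile maximum; a sketch (illustrative only):

```python
# (grade, mean limit, 97th-percentile max limit) in dB, per Table 6.5.
RANDOM_MATED_GRADES = (("B", 0.12, 0.25), ("C", 0.25, 0.50), ("D", 0.50, 1.0))

def random_mated_grade(mean_db, p97_max_db):
    """Classify random-mated attenuation results (IEC 61300-3-34)
    into the Table 6.5 grades; Grade A is 'not specified'."""
    for grade, mean_limit, max_limit in RANDOM_MATED_GRADES:
        if mean_db <= mean_limit and p97_max_db <= max_limit:
            return grade
    return None  # outside the tabulated grades
```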
|
5b2fbb7f9f34ec451504359adb0cbf12
|
104 097
|
6.3 Electrical performance
|
Electrical performance for hybrid connectors needs to conform to Table 6.6.

Table 6.6: Electrical performance of hybrid connectors
Insulation resistance - IEC 60512-3-1 [i.7]: ≥ 500 MΩ.
Voltage proof - IEC 60512-4-1 [i.8]: withstand voltage between contacts: DC 1 000 V; between contacts and housing: DC 1 500 V.
Contact resistance - IEC 60512-2-1 [i.5]: initial ≤ 30 mΩ; after test, rise in relation to the initial value 20 mΩ max.
Active monitoring of the duration of discontinuities during test - IEC 60512-2-5 [i.6]: ≤ 1 μs.

6.4 Mechanical performance

Mechanical performance for hybrid connectors needs to conform to Table 6.7.

Table 6.7: Mechanical performance of hybrid connectors

1 Insertion and withdrawal force - IEC 60512-13-2 [i.15] (for the SC hybrid connector). Total insertion force: ≤ 30 N; total withdrawal force: ≤ 30 N.
2 Fibre/cable retention - IEC 61300-2-4 [i.27]. Load: 50 N for 60 s. Requirement: O (see note 2), V.
3 Tensile strength of coupling mechanism - IEC 61300-2-6 [i.29]. Load: 40 N for 60 s. Requirement: O (see note 2), V.
4 Flexing of the strain relief of fibre optic devices - IEC 61300-2-44 [i.36]. Load: 2 N on the cable. Cycle: ±90°. Number of cycles: 50. Requirement: O (see note 2), V.
5 Torsion - IEC 61300-2-5 [i.28]. Load: 10 N. 25 cycles, ±180°. Fibre/cable clamping distance: 25 cm ± 5 cm. Requirement: O (see note 2), V.
6 Mating durability - IEC 61300-2-2 [i.26]. 200 cycles, no less than 3 s between engagements. Requirement: O (see note 2), V.

NOTE 1: 'O' includes the change of attenuation and return loss after the test; 'E' represents the change of contact resistance after the test; 'V' represents visual examination according to IEC 60512-1-1 [i.3].
NOTE 2: Active monitoring of the change of attenuation during the test, after the load has reached its maximum level and become stable.
NOTE 3: Active monitoring of the duration of discontinuities during the test, after the load has reached its maximum level and become stable.
NOTE 4: Active monitoring of transient loss during the test.
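The Table 6.6 contact-resistance criterion combines an absolute limit with an allowed rise over the initial value; as an illustrative predicate:

```python
def contact_resistance_pass(initial_mohm, after_test_mohm):
    """Table 6.6 contact-resistance criterion: ≤ 30 mΩ initially
    and at most a 20 mΩ rise over the initial value after the
    test sequence."""
    return initial_mohm <= 30 and (after_test_mohm - initial_mohm) <= 20
```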
|
5b2fbb7f9f34ec451504359adb0cbf12
|
104 097
|
6.5 Environmental performance
|
Environmental performance for hybrid connectors needs to conform to Table 6.8.

Table 6.8: Environmental performance of hybrid connectors

1 Cold - IEC 60512-11-10 [i.13]. Temperature: -10 °C. Duration: 96 h. Criteria: O, E, V.
2 Dry heat - high temperature endurance - IEC 60512-11-9 [i.12]. Temperature: +60 °C. Duration: 96 h. Criteria: O, E, V.
3 Change of temperature - IEC 60512-11-4 [i.11]. Temperature: -10 °C to +60 °C. Duration: 60 min at extremes. Rate of temperature change: 1 °C/min. 5 cycles. Criteria: O (see note 2), E, V.
4 Damp heat (steady state) - IEC 60512-11-12 [i.14]. Temperature: +40 °C. Humidity: 93 % RH. Duration: 96 h. Criteria: O, E, V.
5 Salt mist - IEC 61300-2-26 [i.35]. Salt solution: 5 % NaCl (pH between 6,5 and 7,2). Temperature: 35 °C. Duration: 48 h. Criteria: O, E, V.
6 Vibration (sinusoidal) - IEC 60512-6-4 [i.10]. Frequency range: 10 Hz to 55 Hz. Number of sweeps: 15 sweeps, (10 - 55 - 10) Hz per axis. Rate of frequency change: 1 octave/min. Number of axes: 3 mutually perpendicular axes. Amplitude: 0,75 mm. Criteria: O (see note 2), E, V.
7 Shock - IEC 60512-6-3 [i.9]. Wave form: half sine. Duration: 11 ms. Acceleration: 150 m/s². Axes: 3 mutually perpendicular axes. Number of shocks: 3 shocks per axis and per direction of axis, 18 shocks in total. Criteria: O (see note 2), E, V.

NOTE 1: 'O' includes change of attenuation and return loss measurements after the test; 'E' includes insulation resistance, voltage proof and change of contact resistance after the test; 'V' represents visual examination according to IEC 60512-1-1 [i.3].
NOTE 2: Active monitoring of the change of attenuation during the test.
|
5b2fbb7f9f34ec451504359adb0cbf12
|
104 097
|
7 ADU
| |
5b2fbb7f9f34ec451504359adb0cbf12
|
104 097
|
7.1 Structure
|
The structure of the ADU usually consists of the active electrical port (for power supply), the optical input port (receiving optical signals from the upper level), the hybrid output ports (transmitting the optical signal and electrical power together to the connected SFU), and the optional cascading optical port (transmitting optical signals to the next-level ADU). The optical port and the hybrid port are connected with type SC or XC hybrid connectors. According to the demands of the application scenario, 1:4, 1:8 and 1:16 ADUs can be used in even splitting networks, while 1:5 and 1:9 ADUs can be used in uneven splitting networks. Typically, the ADU for uneven splitting networks contains a cascading optical port to which the next-level ADU can be connected, but the ADU for even splitting networks only contains hybrid output ports, without a cascading optical port.
7.2 Operating Environment
An ADU for an indoor cabling system should be able to operate in the following environment, according to IEC 61753-1 [i.42] category OP (Outdoor protected environment):
• Operating temperature: -25 °C to +70 °C;
• Relative humidity: 5 % to 95 %;
• Atmospheric pressure: 86 kPa to 106 kPa.
7.3 Optical Transmission Performance
Optical transmission performance for the ADU is advised to conform to Table 7.1.

Table 7.1: Optical transmission performance of ADU
Wavelength (all split ratios): 1 310 nm and 1 550 nm.
Insertion loss:
• 1:5 (uneven): cascading optical port ≤ 11 dB; hybrid output port ≤ 11 dB;
• 1:9 (uneven): cascading optical port ≤ 2,4 dB; hybrid output port ≤ 16,3 dB;
• 1:4 (even): ≤ 8,2 dB;
• 1:8 (even): ≤ 11,1 dB;
• 1:16 (even): ≤ 14,1 dB.
Return loss (all split ratios): ≥ 50 dB.
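The even-split limits of Table 7.1 can be sanity-checked against the ideal splitting loss of an even 1:N splitter, 10·log10(N) dB, plus an allowance for excess loss. The sketch below is illustrative only; the ideal-loss formula is standard optics, while the limits are taken from Table 7.1:

```python
import math

# Even-split insertion-loss limits from Table 7.1 (dB).
MAX_INSERTION_LOSS_DB = {4: 8.2, 8: 11.1, 16: 14.1}

def ideal_split_loss_db(n: int) -> float:
    """Ideal power-splitting loss of an even 1:N optical splitter."""
    return 10 * math.log10(n)

for n, limit in MAX_INSERTION_LOSS_DB.items():
    margin = limit - ideal_split_loss_db(n)
    print(f"1:{n}: ideal {ideal_split_loss_db(n):.1f} dB, "
          f"limit {limit} dB, excess-loss margin {margin:.1f} dB")
```

Each Table 7.1 limit leaves roughly a 2 dB margin above the ideal splitting loss, which covers connector and excess losses.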
7.4 Electrical Performance
7.4.1 Input power
The ADU operates properly with a DC input voltage of 45 V to 57 V.

7.4.2 Output power
The ADU output power should comply with the following requirements according to the IEEE 802.3af [i.48], IEEE 802.3at [i.49] or IEEE 802.3bt [i.50] standard:
• The output voltage should be 44 V to 57 V DC.
• The maximum output power depends on the number of output ports and the power consumption of the SFUs.
• The rated current of each output port should not be less than 0,25 A, and the rated power should not be less than 14 W.

Table 7.2: Output power performance of ADU
• POE (IEEE 802.3af [i.48]): rated power P ≤ 15,4 W;
• POE+ (IEEE 802.3at [i.49]): rated power P ≤ 30 W;
• POE++ (IEEE 802.3bt [i.50]): rated power P ≤ 90 W.

7.5 Power source protection
The ADU should support the following power source protection functions:
• The ADU should have an indicator lamp displaying the working status of each port.
• The ADU should support an under-voltage protection function: when the input voltage falls below the voltage threshold value, the ADU should turn off the output power; when the input voltage returns to the operating range, the ADU should resume the working state automatically.
• The ADU should support an over-voltage protection function: when the input voltage exceeds the voltage threshold value, the ADU should turn off the output power; when the input voltage returns to the operating range, the ADU should resume the working state automatically.
• Each output port of the ADU should have individual short-circuit current protection and over-current protection: when the ADU detects a short circuit or over current on one of the output ports, it should turn off the power supply on that single output port; the other ports should maintain the power supply. After troubleshooting, the affected output port should resume the working state automatically.
• The ADU should be equipped with a total power protection function: when the total power consumption of the connected SFUs exceeds the product's maximum power capacity, the system powers off lower-priority ports based on their pre-configured port priorities, ensuring that higher-priority ports and their connected SFUs continue to operate normally.
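The priority-based total power protection described above amounts to shedding the lowest-priority ports until the remaining load fits within the power capacity. The following is a minimal illustrative sketch, not an implementation requirement of the present document; the `Port` structure, the priority convention (lower number = higher priority) and the example figures are assumptions:

```python
from dataclasses import dataclass

@dataclass
class Port:
    priority: int      # lower number = higher priority (assumed convention)
    power_w: float     # measured power drawn by the connected SFU
    enabled: bool = True

def enforce_total_power(ports: list[Port], max_power_w: float) -> None:
    """Shed lowest-priority ports until the total draw fits the power capacity."""
    # Walk the enabled ports from lowest priority (largest number) upwards.
    for port in sorted(ports, key=lambda p: p.priority, reverse=True):
        total = sum(p.power_w for p in ports if p.enabled)
        if total <= max_power_w:
            break
        if port.enabled:
            port.enabled = False  # power off this lower-priority port

ports = [Port(priority=1, power_w=30.0), Port(priority=2, power_w=30.0),
         Port(priority=3, power_w=30.0)]
enforce_total_power(ports, max_power_w=65.0)
print([p.enabled for p in ports])  # → [True, True, False]
```

Only the lowest-priority port is shed; the two higher-priority ports and their SFUs keep operating, matching the behaviour described in the bullet above.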
7.6 Electrical performance
Electrical performance for the ADU needs to conform to Table 7.3.

Table 7.3: Electrical performance for current carrying elements
1 Temperature rise test, IEC 62368-1 [i.44]: ambient temperature 40 °C. Requirement: ≤ 30 °C.
2 Insulation resistance, IEC 62368-1 [i.44]: between contacts, withstand voltage 500 V DC, duration 60 s, requirement ≥ 500 MΩ; between contacts and housing, 500 V DC, duration 60 s.
7.7 Mechanical performance
Mechanical performance for the ADU should conform to Table 7.4.

Table 7.4: Mechanical performance for ADU
1 Shock, IEC 61300-2-9 [i.30]. Severity:
• mass of the sample ≤ 0,125 kg: 5 000 m/s²;
• 0,125 kg < mass of the sample ≤ 0,225 kg: 2 000 m/s²;
• 0,225 kg < mass of the sample ≤ 1 kg: 500 m/s²;
half sine waveform; duration 1 ms; three mutually perpendicular directions; twice per direction, 12 shocks in total.
2 Vibration (sinusoidal), IEC 61300-2-1 [i.25]. Severity: frequency 10 Hz to 55 Hz; frequency scanning speed 45 times per minute; vibration amplitude 0,75 mm; duration 0 min for the X, Y and Z directions.
Requirement (both tests): no mechanical damage to the appearance, such as deformation, cracks or slackness. The optical fibre needs not to be broken, pulled out, faulty at the end, or damaged at the sealing. The insertion loss variation before and after the test needs to be no more than 0,5 dB, and the other optical performances need to conform to Table 7.1 after the test. After the test, the electrical performance of the ADU needs to conform to Table 7.3 when powered on.
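The mass-dependent shock severity of Table 7.4 can be expressed as a simple lookup. This is only an illustrative sketch of how a test plan might select the acceleration; the function name is an assumption, and the thresholds come directly from Table 7.4:

```python
def shock_acceleration_m_s2(mass_kg: float) -> int:
    """Shock severity per Table 7.4, selected by the mass of the sample."""
    if mass_kg <= 0.125:
        return 5000
    if mass_kg <= 0.225:
        return 2000
    if mass_kg <= 1.0:
        return 500
    raise ValueError("Table 7.4 defines severities only up to 1 kg")

print(shock_acceleration_m_s2(0.2))  # → 2000
```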
7.8 Environmental performance
Environmental performance for the ADU needs to conform to Table 7.5.

Table 7.5: Environmental performance for ADU
1 Dry heat - high temperature, IEC 61300-2-18 [i.32]: 85 °C (±2 °C); duration 96 h.
2 Cold, IEC 61300-2-17 [i.31]: -40 °C (±2 °C); duration 96 h.
3 Damp heat (steady state), IEC 61300-2-19 [i.33]: 85 °C (±2 °C), 85 % (±5 %) RH; duration 96 h.
4 Salt mist, IEC 61300-2-26 [i.35]: salt solution 5 % NaCl (pH between 6,5 and 7,2); temperature 35 °C; duration 96 h.
5 Temperature cycling, IEC 61300-2-22 [i.34]: -40 °C to +85 °C; 8 h per cycle, holding the highest and lowest temperature for 1 h each; 21 cycles in total.
Requirement (all tests): no corrosion of the metal parts. No mechanical damage to the appearance, such as deformation, cracks or slackness. The optical fibre needs not to be broken, pulled out, faulty at the end, or damaged at the sealing. The insertion loss variation before and after the test should be no more than 0,5 dB, and the other optical performances should conform to Table 7.1 after the test. After the test, the electrical performance of the ADU should conform to Table 7.3 when powered on.

7.9 Electromagnetic Compatibility (EMC)
The ADU should support basic ElectroMagnetic Compatibility (EMC) functions, including Radiated Emission (RE), Conducted Emission (CE), Electro-Static Discharge (ESD), RF electromagnetic field immunity (RS), Injected Currents (CS), voltage dips, electrical surge, and electrical fast transient/burst immunity. The ADU should comply with the minimum EMC requirements applicable to its intended usage scenarios according to EN 55032 [i.53] and EN 55035 [i.54].
7.10 Fire safety performance
For a shell made of plastic materials, the material should comply with the V0 requirements of UL 94 [i.52]. Fire safety tests are conducted to confirm whether the product poses fire hazards or equipment burnout risks under abnormal conditions such as abnormally high voltage or water dripping. Fire safety performance for the ADU should conform to Table 7.6.

Table 7.6: Fire safety performance for ADU

1 Water leakage test. The following tests are conducted on the ADU's power input port, output ports, heat dissipation vents, and other openings susceptible to foreign object intrusion:
1. Connect the ADU to the mains power, with all output ports connected to the SFUs via hybrid cables, ensuring the equipment operates under normal working conditions.
2. Using a syringe, slowly drip 3 ml of liquid (tested separately with pure water, tap water, 5 % saline solution, and saturated saline solution) into the accessible openings. Continuously monitor each location for 10 minutes to observe whether the SFU powers down, or exhibits visible flames, melting of the shell, etc. If such phenomena occur, halt testing; if not, proceed to the next step.
3. Drip 1 to 3 drops of liquid onto the test area and continue observation for another 10 minutes. Check again for an SFU power down, visible flames, melting of the shell, etc. If such phenomena occur, halt testing; if not, proceed to the next step.
4. Repeat step 3 until a total testing duration of 40 minutes is completed.
Requirement: no flame out of the shell; the shell does not melt; the product shell does not exhibit carbonization.

2 Abnormal high voltage test for the external power adapter. Test procedure:
1. Connect the ADU and the external power adapter to a step-up transformer, with an initial voltage of 230 V. All output ports are connected to SFUs via hybrid cables, ensuring the equipment operates under normal working conditions.
2. Adjust the step-up transformer to gradually increase the input voltage from 230 V to 253 V (the upper limit of mains voltage fluctuations). Check whether the ADU continues to function properly.
3. At 253 V, simulate poor wiring in the household distribution box (live wire not securely connected) by repeatedly reconnecting at a frequency of once per second. Observe the external power adapter for shell melting, shell carbonization, or flames emitting from the shell.
4. Further adjust the step-up transformer to gradually increase the input voltage to 438 V (simulating an incorrect connection to three-phase power). Power on and run the device. Check whether the ADU continues to function properly.
5. At 438 V, repeat the test described in step 3 to observe shell melting, shell carbonization, or flames emitting from the shell under poor wiring conditions.
Termination conditions. The test stops if:
1. the housing temperature stabilizes without further increase, and no abnormalities such as shell melting occur; or
2. abnormal phenomena such as shell melting, shell carbonization, or flames are observed.
8 Deployment
8.1 General
The IFDN hybrid cabling system features full pre-connectorization, integrated optical and electrical ports, plug-and-play operation, quick deployment and reliable connections. Before cabling, the following tools need to be prepared: an optical power meter, a voltage detection meter, and other auxiliary materials for construction and testing. The cable routing personnel need to attend training and follow the operating regulations, especially the safety regulations.
8.2 Details to be noted
It is advisable to pay attention to the following items when routing cables:
• Before routing a cable, mark the labels at both ends for easy management and maintenance.
• When routing the branch cable, protect the cable and do not pull it with excessive force, to avoid damaging the cable. Do not wind, twist, or step on the cable.
• Protect the connectors of branch hybrid cables against collision.
• Ensure that the cables are routed neatly and do not occupy the unused space, to facilitate capacity expansion and maintenance.
• In practice, cables can be routed in the direction from the ADU to the MFU or from the MFU to the ADU, based on the site environment, to ensure that optical cables are easy to coil.
• When routing and securing cables, straighten the cables every 5 m. The bending radius of the cables should be no less than 24 mm.
• Strong-current and weak-current cables are advised to be routed separately and far away from heat sources. If there are mice, PVC pipes should be used to protect the cables, and rodent-proofing measures should be taken in the surrounding environment.
• When passing through concealed pipes, pay attention to old cables or network cables that already take up too much space in the pipes, and avoid excessive pulling force during wiring so that the connectors are not pulled off.
• Before threading the tube, remove sharp and hard objects in the tube to avoid friction and cracking of the cable sheath. The environment inside the tube should also be clean and dry, to prevent corrosion damage to the cables.
• When threading the pipe, apply the pulling force to the cable rather than the connector.
• Keep the cable laid out in an environment with good ventilation and heat dissipation; do not place the remaining coiled length directly near a heat source during deployment.
8.3 Acceptance items
8.3.1 Electrical performance acceptance
During cable routing, the ADU might fail to work, affecting the remote power supply. The following method can be used to verify the issue:
1) Route the cables, and connect the other end of the hybrid connector to the voltage detection meter as described in Figure 8.1.
2) Turn on the ADU and check whether the normal-work status light is on.
3) Test the voltage between the positive and negative PIN poles to verify that it is within the normal voltage range.

Figure 8.1: Schematic diagram of the electrical performance acceptance test

8.3.2 Optical power acceptance
During cable routing, the cable might be broken or severely bent. As a result, the optical power loss is severe and the insertion loss increases. The following method can be used to verify the issue:
a) Route the cables, and connect an adapter, a patch cord and an optical power meter as described in Figure 8.2.
b) Measure and record the optical power of the ADU and the hybrid cable.
c) Check whether the optical power meets the receiving sensitivity of the SFU.

Figure 8.2: Schematic diagram of the optical power acceptance test

Annex A: Bibliography
• IEC 61076-3-127: "Connectors for Electrical and Electronic Equipment - Product Requirements - Part 3-127: Rectangular connectors - Detail specification for hybrid connectors with 2-pole 2,0 A max, 60 V DC electric portion for power supply and type XC fibre optic portion for data transmission".
• IEC 61076-3: "Connectors for Electrical and Electronic Equipment - Product Requirements - Part 3-123: Rectangular connectors - Detail specification for hybrid connectors with 2-pole 2,0 A max, 60 V DC electric portion for power supply and type SC fibre optic portion for data transmission, with push-pull locking".

History
Version Date Status
V1.1.1 October 2025 Publication
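The two acceptance checks above reduce to simple range comparisons. A minimal sketch follows; the 44 V to 57 V window is the DC output range from clause 7.4.2, while the SFU receiver-sensitivity figure of -27 dBm is a placeholder assumption for illustration, not a value from the present document:

```python
def voltage_acceptance(measured_v: float,
                       v_min: float = 44.0, v_max: float = 57.0) -> bool:
    """Electrical acceptance: voltage between the PIN poles within the DC output range."""
    return v_min <= measured_v <= v_max

def optical_acceptance(received_dbm: float, sfu_sensitivity_dbm: float) -> bool:
    """Optical acceptance: received power meets the SFU receiving sensitivity."""
    return received_dbm >= sfu_sensitivity_dbm

# Example with an assumed sensitivity of -27 dBm (illustrative only).
print(voltage_acceptance(52.3), optical_acceptance(-21.5, sfu_sensitivity_dbm=-27.0))  # → True True
```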
1 Scope
The present document presents an analysis of user expectations with respect to data-driven technologies (Artificial Intelligence (AI), deep learning, Machine Learning (ML)) and sets out the definition and concept of the User Information System (UIS), which enables Smart Customized Services (SCS) from both the user and the provider side. These services aim to provide personalization, adaptability, and intelligent decision support within the digital ecosystem.
NOTE: The UIS and SCS are designed to serve a broad spectrum of users. Their objective is to empower and protect all citizens. By integrating smart and assistive technologies, the system seeks to enhance participation in public, social, and economic activities, while also offering advanced users more autonomy and self-management capabilities.
2 References
2.1 Normative references
Normative references are not applicable in the present document.
2.2 Informative references
References are either specific (identified by date of publication and/or edition number or version number) or non-specific. For specific references, only the cited version applies. For non-specific references, the latest version of the referenced document (including any amendments) applies.
NOTE: While any hyperlinks included in this clause were valid at the time of publication, ETSI cannot guarantee their long-term validity.
The following referenced documents may be useful in implementing an ETSI deliverable or add to the reader's understanding, but are not required for conformance to the present document.
[i.1] ETSI TR 103 438: "User Group; User centric approach in Digital Ecosystem".
[i.2] ETSI EG 203 602: "User Group; User Centric Approach: Guidance for users; Best practices to interact in the Digital Ecosystem".
[i.3] ETSI TR 103 603: "User Group; User Centric Approach; Guidance for providers and standardization makers".
[i.4] ETSI TR 103 604: "User Group; User centric approach; Qualification of the interaction with the digital ecosystem".
[i.5] Directive (EU) 2019/882 of the European Parliament and of the Council of 17 April 2019 on the accessibility requirements for products and services (Text with EEA relevance).
[i.6] EN 301 549 (V3.2.1) (2021-03): "Accessibility requirements for ICT products and services".
[i.7] ISO 9241-210:2019: "Ergonomics of human-system interaction; Part 210: Human-centred design for interactive systems", Edition 2, 2019.
[i.8] Interaction Design Foundation: "Design for All".
[i.9] Centre for Excellence in Universal Design: "The 7 Principles".
[i.10] ETSI EG 202 116 (V1.2.2) (2009-03): "Human Factors (HF); Guidelines for ICT products and services; "Design for All"".
[i.11] ETSI TS 102 747 (V1.1.1) (2009-12): "Human Factors (HF); Personalization and User Profile Management; Architectural Framework".
ETSI TR 104 027 V1.1.1 (2025-10)
[i.12] ETSI ES 202 746 (V1.1.1) (2010-02): "Human Factors (HF); Personalization and User Profile Management; User Profile Preferences and Information".
[i.13] University of East London: "Exploring the Ethical Implications of AI-Powered Personalization in Digital Marketing".
[i.14] EU TAi Guidelines.
[i.15] ETSI TR 104 221: "Securing Artificial Intelligence (SAI); Problem Statement".
[i.16] ETSI TS 104 224: "Securing Artificial Intelligence (SAI); Explicability and transparency of AI processing".
[i.17] ETSI TS 104 102: "Cyber Security (CYBER); Encrypted Traffic Integration (ETI); ZT-Kipling methodology".
[i.18] ETSI TR 103 477: "eHEALTH; Standardization use cases for eHealth".
[i.19] Assist-IoT project report D3.2: "Use Cases Manual & Requirements and Business Analysis - Initial".
[i.20] Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation).
[i.21] The EU project i-Tour.
[i.22] Lazarotto, B.: "The right to data portability: A holistic analysis of GDPR, DMA and the Data Act".
[i.23] ETSI EN 303 760 (V1.1.1) (2024-10): "SmartM2M; SAREF Guidelines for IoT Semantic Interoperability; Develop, apply and evolve Smart Applications ontologies".
[i.24] ETSI TR 103 875-2: "User Centric approach in Digital Ecosystem; The Smart Interface; Part 2: Smart Identity: A Proof of Concept".
[i.25] ETSI TR 103 437: "USER; Quality of ICT services; New QoS approach in a digital ecosystem".
[i.26] OMG UML®: "OMG® Unified Modeling Language®", Version 2.5.1.
[i.27] Regulation (EU) No 910/2014 of the European Parliament and of the Council of 23 July 2014 on electronic identification and trust services for electronic transactions in the internal market and repealing Directive 1999/93/EC (eIDAS).
[i.28] Regulation (EU) 2024/1183 of the European Parliament and of the Council of 11 April 2024 amending Regulation (EU) No 910/2014 as regards establishing the European Digital Identity Framework (eIDAS2).
[i.29] ETSI EN 319 401: "Electronic Signatures and Trust Infrastructures (ESI); General Policy Requirements for Trust Service Providers".
[i.30] Brandt Dainow: "Digital Alienation as the Foundation of Online Privacy Concerns".
[i.31] ETSI TR 119 476: "Electronic Signatures and Trust Infrastructures (ESI); Analysis of selective disclosure and zero-knowledge proofs applied to Electronic Attestation of Attributes".
[i.32] ETSI TS 103 486: "CYBER; Identity Management and Discovery for IoT".
[i.33] ISO/IEC 7498-1: "Information technology -- Open Systems Interconnection - Basic Reference Model: The Basic Model".
[i.34] Foureaux, Simon & Daum, Thomas (2025): ""But don't think it is a game": Agricultural videogames and "good farming"", Journal of Rural Studies, 117, 10.1016/j.jrurstud.2025.103686.
[i.35] ETSI Directives.
[i.36] Regulation (EU) 2024/2847 of the European Parliament and of the Council of 23 October 2024 on horizontal cybersecurity requirements for products with digital elements and amending Regulations (EU) No 168/2013 and (EU) 2019/1020 and Directive (EU) 2020/1828 (Cyber Resilience Act).
[i.37] Directive (EU) 2022/2555 of the European Parliament and of the Council of 14 December 2022 on measures for a high common level of cybersecurity across the Union, amending Regulation (EU) No 910/2014 and Directive (EU) 2018/1972, and repealing Directive (EU) 2016/1148 (directive SRI 2).
[i.38] Sasan Rostambeik, Noemi Simoni, Antoine Boutignon: "Userware: A framework for next generation personalized services", Computer Communications, Volume 30, Issue 3, 2007, Pages 619-629, ISSN 0140-3664.
[i.39] "OGC City Geography Markup Language (CityGML); Part 1: Conceptual Model Standard".
[i.40] "OGC City Geography Markup Language (CityGML); Part 2: GML Encoding Standard".
[i.41] The EU project iLocate.
[i.42] The EU project Assist-IoT.
[i.43] Waze.
[i.44] Regulation (EU) 2022/1925 of the European Parliament and of the Council of 14 September 2022 on contestable and fair markets in the digital sector and amending Directives (EU) 2019/1937 and (EU) 2020/1828 (EU Digital Markets Act (DMA)).
[i.45] ETSI TS 102 165-2: "Telecommunications and Internet converged Services and Protocols for Advanced Networking (TISPAN); Methods and protocols; Part 2: Protocol Framework Definition; Security Counter Measures".
NOTE: An update is in preparation to a CYBER document at the time of preparation of the present document.
3 Definition of terms, symbols and abbreviations
3.1 Terms
For the purposes of the present document, the following terms apply:
Architecture Communication Information Function Organization (ACIFO) model: framework for interpreting and analysing a complex system as a whole through the five dimensions that characterize it
artificial intelligence: ability of a system to handle representations, both explicit and implicit, and procedures to perform tasks that would be considered intelligent if performed by a human
avatar: representation of the user in digital form
digital ecosystem: network of interconnected digital technologies, platforms, and services that interact with each other to create value for businesses and consumers and facilitate access to digital technology for everyone
machine learning: branch of artificial intelligence concerned with algorithms that learn how to perform tasks by analysing data, rather than being explicitly programmed
reinforcement learning: form of machine learning where a policy defining how to act is learned by agents through experience to maximize their reward; agents gain experience by interacting in an environment through state transitions
semi-supervised learning: form of machine learning where the data set is partially labelled; in this case, even the unlabelled data can be used to improve the quality of the model
supervised learning: form of machine learning where all the training data is labelled and the model can be trained to predict the output based on a new set of inputs
unsupervised learning: form of machine learning where the data set is unlabelled, and the model looks for structure in the data, including grouping and clustering
User Platform as a Service (UPaaS): userware developed according to the "aaS" model
3.2 Symbols
Void.
3.3 Abbreviations
For the purposes of the present document, the following abbreviations apply:
aaS as a Service
ACIFO Architecture Communication Information Function Organization
AI Artificial Intelligence
API Application Programming Interface
AR Augmented Reality
DaaS Device as a Service
DAC Discretionary Access Control
DMA Digital Market Act
DTP Data Transfer Project
E2E End-to-End
eIDAS electronic IDentification, Authentication and trust Services
EUDI EU Digital Identity Wallet
GDPR General Data Protection Regulation
GIS Geographic Information Systems
HMI Human Machine Interface
IaC Infrastructure as Code
ICT Information & Communications Technology
IoT Internet of Things
MAC Mandatory Access Control
ML Machine Learning
NaaS Network as a Service
ODA Open Distributed Architecture
OSH Occupational Safety and Health
PaaS Platform as a Service
PPE Personal Protective Equipment
QoE Quality of Experience
QoS Quality of Service
SaaS Software as a Service
SAREF Smart Applications REFerence ontology
SCS Smart Customized Service
SOA Service-Oriented Architecture
SUMA Smart Urban Mobility Assistant
UDR User Digital Representation
UIS User Information System
UML Unified Modelling Language
UPaaS User Platform as a Service
4 Smart Customized Services for UIS
4.1 Identification of the problem to be solved
Users of digital services have historically had limited ability to control how a digital service uses their personal data and how it shares that data with other services. Nor is there usually fine-grained control over what personal data is released: there is often a "share all" approach. Whilst users are protected by a number of legislative instruments (see clause 6 and Annex D for a broad summary of these instruments), control is often passed to the service provider, and the user is often not directly involved in the way their data is used or in how a service is composed from component services. Whilst there are a number of technical means to restrict the personal data given to a service, e.g. by selective disclosure as outlined for eIDAS [i.27], [i.28] and the EU Digital Identity Wallet (EUDI) in ETSI TR 119 476 [i.31], or by application of specific permutations of a user profile following the approach of ETSI TS 103 486 [i.32], there are wider implications and requirements, considered in the present document, that seek to empower the user's control of personal data.
NOTE 1: In the eIDAS2 regulation [i.28] it is stated in Recital 59 that "Selective disclosure is a concept empowering the owner of data to disclose only certain parts of a larger data set, in order for the receiving entity to obtain only such information as is necessary for the provision of a service requested by a user. The European Digital Identity Wallet should technically enable the selective disclosure of attributes to relying parties. It should be technically possible for the user to selectively disclose attributes, including from multiple, distinct electronic attestations, and to combine and present them seamlessly to relying parties. This feature should become a basic design feature of European Digital Identity Wallets, thereby reinforcing convenience and the protection of personal data, including data minimization".
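The selective-disclosure principle quoted in NOTE 1 — release only the attributes needed for the requested service — can be illustrated with a minimal sketch. The attribute names and the request shape are assumptions for illustration only, not part of any wallet specification:

```python
def selectively_disclose(profile: dict, requested: set) -> dict:
    """Return only the requested attributes, keeping the rest undisclosed."""
    return {k: v for k, v in profile.items() if k in requested}

profile = {"name": "A. User", "birth_date": "1990-01-01",
           "address": "...", "over_18": True}
# A relying party asks only for an age attestation: data minimization.
print(selectively_disclose(profile, {"over_18"}))  # → {'over_18': True}
```

The name, birth date and address never leave the user's side; only the single attribute the service actually needs is released.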
Addressing only the user's data and its selective disclosure is not sufficient: the services that are offered to, or built by, the user also have to be cognizant of the problems users face in obtaining assurances of the protection of personal data, including data minimization, whilst maximizing the conveniences of the digital ecosystem. To this end the present document expands the idea of the user by considering the user as an information system (the User Information System (UIS)) in the context of Smart Customized Services (SCS). In this respect it is noted that the concept of user centric design is well established in many industries and describes an approach wherein products and services are explicitly developed around the user. This does not imply bespoke design and manufacture; rather, it allows the user to choose aspects of the way the service is presented and accessed, particularly in the ICT domain by personalization of user interfaces. By allowing greater control of how data is used and how services are constructed using the UIS/SCS model, the user is afforded control, and maintenance, of their personal autonomy. The userware (see [i.38]) is then a means of allowing the user to explicitly control their autonomy within the provision of services. The role and purpose of SCS is to place the user at the centre of their own digital ecosystem as the UIS (i.e. allowing users to have control of their autonomy), being a virtual representation of the user's preferences as an information element and active entity. The UIS in SCS is therefore a persistent digital object in the service domain, representing an intelligent agent of the user (i.e. an AI-enabled avatar acting as the user).
SCS and UIS together extend and develop prior concepts of users being represented as information elements, in order to allow users to maintain control over their data and, more generally, their own information system in the way that they present themselves to services and online. In order to support SCS/UIS a number of system pre-requisites have to be met. The primary pre-requisite is that service components are considered as always available and are able to semantically and contextually identify themselves. It is also expected that service components exhibit the following characteristics (these are expanded upon in clause 7 of the present document):
• Statelessness: Each service should be able to process requests without retaining any request-specific or contextual information. Operations should function independently of prior invocations.
• Autonomy: Services should execute their functionalities independently of each other.
• Loose coupling: Connections between services should be flexible rather than rigid and not require functional dependency on any other service.
• Cohesive: Services should be logically coherent and self-contained (see also autonomy).
• Abstract: The internal service logic should remain abstracted from (i.e. independent of) external environments.
NOTE 2: The term user in the present document is not intended to refer only to a human user but may include a service using other services.
NOTE 3: Services can have multiple characteristics; they may be information services or interactive services, and if a composition of services results in a new service then it is the composition that is referred to as the service.
In addition to the technical pre-requisites identified above, there is an attestation in the present document that there is consumer demand for more control of services (demand chain), and a matching willingness on the part of providers to meet that demand (supply chain).
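The service-component characteristics listed above (statelessness, loose coupling, autonomy, and composition into new services) can be sketched in code. This is an illustrative design sketch only; the class and function names are assumptions, not an API defined by the present document:

```python
from typing import Protocol

class ServiceComponent(Protocol):
    """Loose coupling: callers depend only on this interface, not on a concrete service."""
    def handle(self, request: dict) -> dict: ...

class GreetingService:
    """Stateless and cohesive: each call is processed without retained request context."""
    def handle(self, request: dict) -> dict:
        # No instance state is read or written: the output depends only on the input.
        return {"greeting": f"Hello, {request.get('name', 'user')}"}

def compose(services: list[ServiceComponent], request: dict) -> dict:
    """A composition of services is itself treated as a service (see NOTE 3)."""
    result: dict = {}
    for service in services:  # autonomy: each service executes independently
        result.update(service.handle(request))
    return result

print(compose([GreetingService()], {"name": "UIS"}))  # → {'greeting': 'Hello, UIS'}
```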
The smart component is identified as an AI element and applies intelligence to ensure that services are configured and personalized only where the required data from the UIS is appropriately acquired and curated (see ETSI TR 104 221 [i.15] for a wider examination of the role of data in machine learning). The UIS model, and its realization in the management of services with SCS, expands the models of ETSI TS 102 747 [i.11] and ETSI ES 202 746 [i.12] to a persistent user profile able to interact with multiple services. This is shown in Figure 1 as a Venn diagram where SCS lies at the intersection of these three technological design paradigms:
• User centric design: addressing Quality of Experience (QoE) aligned to Quality of Service (QoS).
• Societal digitization: addressing the increasingly important role of digital devices and their use to connect to services for business, entertainment and governance, representing a digital ecosystem.
• Automation: including the evolution of smart systems and the application of AI in various forms.
Figure 1: Intersection of domains that identifies the role of SCS (user centric design, automation and societal digitization, with UIS/SCS at their intersection)
Whilst one outcome of loosely controlled release of personal data is digital alienation, it is also clear that current best practices, such as those for the EUDI [i.31], are often not as widely implemented as would be required to give the same quality of experience across all user interactions with the digital world with regard to the use of personal data, which would mitigate threats such as digital alienation [i.30].
4.2 Application of the ACIFO model in SCS
|
The Venn diagram of Figure 1 is expanded first into the model given in Figure 2, which illustrates the role of elements of each domain on SCS. In the context of Smart Customized Services (SCS), personalization is not limited to the adaptation of content or functionality to an individual. Analysis through the ACIFO model shows that it also results from organizational choices regarding data processing and governance, which dynamically adapt to the evolving user context and preferences. The layering that results is: Serviceware, where the Platform as a Service providers exist; Networkware, which provides the necessary connectivity; and Userware, where user-centric services exist (see Figure 2).

Figure 2: User Centric Approach: Personalization

SCS can be further developed using the 5 dimensions of Architecture, Communication, Information, Function and Organization (ACIFO) in the ACIFO model described in ETSI TR 103 438 [i.1], ETSI EG 203 602 [i.2], ETSI TR 103 603 [i.3] and ETSI TR 103 604 [i.4], which is examined in detail for the UIS/SCS environment in clause 7. In particular, recommendations for application in a service centric environment are addressed in clause 7.1, and recommendations for application in a user centric environment (e.g. userware) are addressed in clause 7.2 of the present document:

• Architectural Model: defines the global structure, including semantics, and is optimized for the stated objectives.

• Communication (Relational) Model: defines the exchange protocols, including HMI (user) and API (provider) exchange and management protocols, over three planes: (1) Management (Monitoring), (2) Control, and (3) Usage.

• Information Model: defines the different profiles (user, device, service). The information covers the whole ecosystem (equipment, network, applications, services, HMIs, user, etc.) from the offer to the resources' availability for users, providers and any other partners. It is a knowledge base representing the whole ecosystem.
EXAMPLE 1: In the present document the information model is the UIS, which includes all of the user preferences and contextual knowledge.

• Functional Model: defines services and service composition, i.e. the functionalities (the process) needed to compose any service based on "micro-services".

EXAMPLE 2: In the present document the functional model requires that functions are available using the "as a Service" model.

• Organization Model: defines the role of each actor and which actor is responsible for each action ("Who is doing what?" in terms of responsibility for processing and data governance).

The particular use of ACIFO in the present document is augmented by application of the ZT-Kipling criteria from ETSI TS 104 102 [i.17], which gather knowledge of every interaction of the user (as UIS) with the system components (via the SCS) by requiring answers to the following questions on each use: What?, Why?, When?, How?, Where?, and Who? Whilst the application of [i.17] primarily impacts the Communication and Information elements of the ACIFO model, the consequence is that each of the other elements of the ACIFO model (i.e. the Architectural, Functional and Organizational elements) has to be designed in such a way that the ZT-Kipling criteria can be fulfilled.

NOTE: The ACIFO approach does not imply a specific order of addressing the dimensions, but for the purposes of the present document they are presented in the order of the acronym.
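The ZT-Kipling requirement that every interaction answer the six questions (What, Why, When, How, Where, Who) lends itself to a simple record structure. The following Python sketch is illustrative only, under the assumption that each answer is captured as a short text value; the class and field names are not taken from ETSI TS 104 102 [i.17]:

```python
from dataclasses import dataclass, asdict


@dataclass(frozen=True)
class KiplingRecord:
    """One UIS/SCS interaction, answering the six ZT-Kipling questions."""
    what: str    # the resource or action involved
    why: str     # stated purpose of the interaction
    when: str    # time of the interaction (ISO 8601 string for simplicity)
    how: str     # mechanism or protocol used
    where: str   # location or service endpoint
    who: str     # actor (user or service) performing the action

    def is_complete(self) -> bool:
        # Zero-trust posture: every one of the six questions
        # must have a non-empty answer before the interaction
        # can be considered properly recorded.
        return all(value.strip() for value in asdict(self).values())
```

A record with any unanswered question is incomplete, reflecting the consequence noted above: the Architectural, Functional and Organizational elements have to be designed so that all six answers can actually be supplied.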
5 Use cases for User Information Systems
|
5.1 Introduction to use cases for UIS and their service composition

The present document adopts the model given in ETSI TR 103 477 [i.18], where it is stated that use cases are developed to examine problem statements, i.e. concise descriptions of issues that need to be solved in the context of the use case. The purpose of the use case is to clearly describe:

• What the problem is.

• Who has that problem, i.e. who will benefit when it is solved.

• What the consequences of the problem are.

• What a possible solution would be; this sets the expectations and the scope of the solution (is it a new process, an application, etc.).

In the context of standardization the problem is multi-fold but is primarily concerned with determination of interoperability. This may be at the application level, where syntactic and semantic coherence is critical, or at any of the layers of the OSI stack (see ISO/IEC 7498-1 [i.33]). For communications interoperability the main concerns are to give assurance of connectivity, of routing (i.e. the ability of devices to connect in order to provide reliable transport of information from source to sink), and of mutuality of transfer rates (i.e. to ensure that data produced at a given rate can be consumed at the same rate).

The purpose of the use cases given in the present document is to identify common requirements of UIS and SCS. The use cases combine multiple functions to build relatively complex systems. Although it is recognized that such systems (e.g. the urban mobility use case) are extensions of how users typically interact with the transport systems of their local environment, the potential of UIS/SCS to accelerate interventions and to act "in the loop" is identified by the use cases that follow. The use cases are presented to show how they impact different forms of user (in the form of actors in the use cases), and what information is required from and between actors to enable the use cases.
It is noted that for most use cases there is a rational decomposition into multiple use cases. Each use case identifies the actors in the use case, the principal interactions and the expected output in terms of the role of UIS and SCS. The use cases are drawn using the conventions of the Unified Modelling Language (UML) [i.26].

NOTE: The UIS/SCS is modelled as a UML Class for the present document but may be modelled in other ways.

The generalized use case model of UIS/SCS is given in Figure 3, where each actor manages their preference set as UIS and creates SCSs which combine the data from the UIS of the actors with the available microservices.

Figure 3: Generalized use case model of UIS/SCS

The services used in SCS via the Compose Service use case are micro-services that have the characteristics outlined in clause 4.1 above. In all cases it is assumed that the user controls which elements of the UIS are released, and that, critically, the UIS has a means of selective disclosure available natively.
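The assumption that the user controls which UIS elements are released, with selective disclosure available natively, can be sketched as a release policy mapping each service to the preference keys the user has approved. This is an illustrative Python sketch only; the names `UserInformationSystem`, `release_policy` and `transport-scs` are hypothetical and not drawn from the present document:

```python
from dataclasses import dataclass, field


@dataclass
class UserInformationSystem:
    """Hypothetical UIS: user preferences plus a per-service release policy."""
    preferences: dict = field(default_factory=dict)
    release_policy: dict = field(default_factory=dict)  # service -> allowed keys

    def disclose(self, service: str) -> dict:
        """Selective disclosure: release only the preference elements
        the user has approved for this specific service."""
        allowed = self.release_policy.get(service, set())
        return {k: v for k, v in self.preferences.items() if k in allowed}


# Example: the user shares language and home city with a transport SCS,
# but the payment detail is never released because no policy allows it.
uis = UserInformationSystem(
    preferences={"language": "en", "home_city": "Nice", "payment_card": "XXXX"},
    release_policy={"transport-scs": {"language", "home_city"}},
)
```

A service with no entry in the policy receives nothing, i.e. disclosure is opt-in per service rather than opt-out, which is the posture the use case model above assumes.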