Official Journal of the European Union
REGULATION (EU) 2024/1689 OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL
laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act)
(Text with EEA relevance)
THE EUROPEAN PARLIAMENT AND THE COUNCIL OF THE EUROPEAN UNION,
Having regard to the Treaty on the Functioning of the European Union, and in particular Articles 16 and 114 thereof,
Having regard to the proposal from the European Commission,
After transmission of the draft legislative act to the national parliaments,
Having regard to the opinion of the European Economic and Social Committee (1),
Having regard to the opinion of the European Central Bank (2),
Having regard to the opinion of the Committee of the Regions (3),
Acting in accordance with the ordinary legislative procedure (4),
Whereas:
(1)
The purpose of this Regulation is to improve the functioning of the internal market by laying down a uniform legal framework in particular for the development, the placing on the market, the putting into service and the use of artificial intelligence systems (AI systems) in the Union, in accordance with Union values, to promote the uptake of human-centric and trustworthy artificial intelligence (AI) while ensuring a high level of protection of health, safety, fundamental rights as enshrined in the Charter of Fundamental Rights of the European Union (the ‘Charter’), including democracy, the rule of law and environmental protection, to protect against the harmful effects of AI systems in the Union, and to support innovation. This Regulation ensures the free movement, cross-border, of AI-based goods and services, thus preventing Member States from imposing restrictions on the development, marketing and use of AI systems, unless explicitly authorised by this Regulation.
(2)
This Regulation should be applied in accordance with the values of the Union as enshrined in the Charter, facilitating the protection of natural persons, undertakings, democracy, the rule of law and environmental protection, while boosting innovation and employment and making the Union a leader in the uptake of trustworthy AI.
(3)
AI systems can be easily deployed in a large variety of sectors of the economy and many parts of society, including across borders, and can easily circulate throughout the Union. Certain Member States have already explored the adoption of national rules to ensure that AI is trustworthy and safe and is developed and used in accordance with fundamental rights obligations. Diverging national rules may lead to the fragmentation of the internal market and may decrease legal certainty for operators that develop, import or use AI systems. A consistent and high level of protection throughout the Union should therefore be ensured in order to achieve trustworthy AI, while divergences hampering the free circulation, innovation, deployment and the uptake of AI systems and related products and services within the internal market should be prevented by laying down uniform obligations for operators and guaranteeing the uniform protection of overriding reasons of public interest and of rights of persons throughout the internal market on the basis of Article 114 of the Treaty on the Functioning of the European Union (TFEU). To the extent that this Regulation contains specific rules on the protection of individuals with regard to the processing of personal data concerning restrictions of the use of AI systems for remote biometric identification for the purpose of law enforcement, of the use of AI systems for risk assessments of natural persons for the purpose of law enforcement and of the use of AI systems of biometric categorisation for the purpose of law enforcement, it is appropriate to base this Regulation, in so far as those specific rules are concerned, on Article 16 TFEU. In light of those specific rules and the recourse to Article 16 TFEU, it is appropriate to consult the European Data Protection Board.
(4)
AI is a fast evolving family of technologies that contributes to a wide array of economic, environmental and societal benefits across the entire spectrum of industries and social activities. By improving prediction, optimising operations and resource allocation, and personalising digital solutions available for individuals and organisations, the use of AI can provide key competitive advantages to undertakings and support socially and environmentally beneficial outcomes, for example in healthcare, agriculture, food safety, education and training, media, sports, culture, infrastructure management, energy, transport and logistics, public services, security, justice, resource and energy efficiency, environmental monitoring, the conservation and restoration of biodiversity and ecosystems and climate change mitigation and adaptation.
(5)
At the same time, depending on the circumstances regarding its specific application, use, and level of technological development, AI may generate risks and cause harm to public interests and fundamental rights that are protected by Union law. Such harm might be material or immaterial, including physical, psychological, societal or economic harm.
(6)
Given the major impact that AI can have on society and the need to build trust, it is vital for AI and its regulatory framework to be developed in accordance with Union values as enshrined in Article 2 of the Treaty on European Union (TEU), the fundamental rights and freedoms enshrined in the Treaties and, pursuant to Article 6 TEU, the Charter. As a prerequisite, AI should be a human-centric technology. It should serve as a tool for people, with the ultimate aim of increasing human well-being.
(7)
In order to ensure a consistent and high level of protection of public interests as regards health, safety and fundamental rights, common rules for high-risk AI systems should be established. Those rules should be consistent with the Charter, non-discriminatory and in line with the Union’s international trade commitments. They should also take into account the European Declaration on Digital Rights and Principles for the Digital Decade and the Ethics guidelines for trustworthy AI of the High-Level Expert Group on Artificial Intelligence (AI HLEG).
(8)
A Union legal framework laying down harmonised rules on AI is therefore needed to foster the development, use and uptake of AI in the internal market that at the same time meets a high level of protection of public interests, such as health and safety and the protection of fundamental rights, including democracy, the rule of law and environmental protection as recognised and protected by Union law. To achieve that objective, rules regulating the placing on the market, the putting into service and the use of certain AI systems should be laid down, thus ensuring the smooth functioning of the internal market and allowing those systems to benefit from the principle of free movement of goods and services. Those rules should be clear and robust in protecting fundamental rights, supportive of new innovative solutions, enabling a European ecosystem of public and private actors creating AI systems in line with Union values and unlocking the potential of the digital transformation across all regions of the Union. By laying down those rules as well as measures in support of innovation with a particular focus on small and medium enterprises (SMEs), including startups, this Regulation supports the objective of promoting the European human-centric approach to AI and being a global leader in the development of secure, trustworthy and ethical AI as stated by the European Council (5), and it ensures the protection of ethical principles, as specifically requested by the European Parliament (6).
(9)
Harmonised rules applicable to the placing on the market, the putting into service and the use of high-risk AI systems should be laid down consistently with Regulation (EC) No 765/2008 of the European Parliament and of the Council (7), Decision No 768/2008/EC of the European Parliament and of the Council (8) and Regulation (EU) 2019/1020 of the European Parliament and of the Council (9) (New Legislative Framework). The harmonised rules laid down in this Regulation should apply across sectors and, in line with the New Legislative Framework, should be without prejudice to existing Union law, in particular on data protection, consumer protection, fundamental rights, employment, and protection of workers, and product safety, to which this Regulation is complementary. As a consequence, all rights and remedies provided for by such Union law to consumers, and other persons on whom AI systems may have a negative impact, including as regards the compensation of possible damages pursuant to Council Directive 85/374/EEC (10) remain unaffected and fully applicable. Furthermore, in the context of employment and protection of workers, this Regulation should therefore not affect Union law on social policy and national labour law, in compliance with Union law, concerning employment and working conditions, including health and safety at work and the relationship between employers and workers. This Regulation should also not affect the exercise of fundamental rights as recognised in the Member States and at Union level, including the right or freedom to strike or to take other action covered by the specific industrial relations systems in Member States as well as the right to negotiate, to conclude and enforce collective agreements or to take collective action in accordance with national law. This Regulation should not affect the provisions aiming to improve working conditions in platform work laid down in a Directive of the European Parliament and of the Council on improving working conditions in platform work. Moreover, this Regulation aims to strengthen the effectiveness of such existing rights and remedies by establishing specific requirements and obligations, including in respect of the transparency, technical documentation and record-keeping of AI systems. Furthermore, the obligations placed on various operators involved in the AI value chain under this Regulation should apply without prejudice to national law, in compliance with Union law, having the effect of limiting the use of certain AI systems where such law falls outside the scope of this Regulation or pursues legitimate public interest objectives other than those pursued by this Regulation. For example, national labour law and law on the protection of minors, namely persons below the age of 18, taking into account the UNCRC General Comment No 25 (2021) on children’s rights in relation to the digital environment, insofar as they are not specific to AI systems and pursue other legitimate public interest objectives, should not be affected by this Regulation.
(10)
The fundamental right to the protection of personal data is safeguarded in particular by Regulations (EU) 2016/679 (11) and (EU) 2018/1725 (12) of the European Parliament and of the Council and Directive (EU) 2016/680 of the European Parliament and of the Council (13). Directive 2002/58/EC of the European Parliament and of the Council (14) additionally protects private life and the confidentiality of communications, including by way of providing conditions for any storing of personal and non-personal data in, and access from, terminal equipment. Those Union legal acts provide the basis for sustainable and responsible data processing, including where data sets include a mix of personal and non-personal data. This Regulation does not seek to affect the application of existing Union law governing the processing of personal data, including the tasks and powers of the independent supervisory authorities competent to monitor compliance with those instruments. It also does not affect the obligations of providers and deployers of AI systems in their role as data controllers or processors stemming from Union or national law on the protection of personal data in so far as the design, the development or the use of AI systems involves the processing of personal data. It is also appropriate to clarify that data subjects continue to enjoy all the rights and guarantees awarded to them by such Union law, including the rights related to solely automated individual decision-making, including profiling. Harmonised rules for the placing on the market, the putting into service and the use of AI systems established under this Regulation should facilitate the effective implementation and enable the exercise of the data subjects’ rights and other remedies guaranteed under Union law on the protection of personal data and of other fundamental rights.
(11)
This Regulation should be without prejudice to the provisions regarding the liability of providers of intermediary services as set out in Regulation (EU) 2022/2065 of the European Parliament and of the Council (15).
(12)
The notion of ‘AI system’ in this Regulation should be clearly defined and should be closely aligned with the work of international organisations working on AI to ensure legal certainty, facilitate international convergence and wide acceptance, while providing the flexibility to accommodate the rapid technological developments in this field. Moreover, the definition should be based on key characteristics of AI systems that distinguish them from simpler traditional software systems or programming approaches and should not cover systems that are based on the rules defined solely by natural persons to automatically execute operations. A key characteristic of AI systems is their capability to infer. This capability to infer refers to the process of obtaining the outputs, such as predictions, content, recommendations, or decisions, which can influence physical and virtual environments, and to a capability of AI systems to derive models or algorithms, or both, from inputs or data. The techniques that enable inference while building an AI system include machine learning approaches that learn from data how to achieve certain objectives, and logic- and knowledge-based approaches that infer from encoded knowledge or symbolic representation of the task to be solved. The capacity of an AI system to infer transcends basic data processing by enabling learning, reasoning or modelling. The term ‘machine-based’ refers to the fact that AI systems run on machines. The reference to explicit or implicit objectives underscores that AI systems can operate according to explicitly defined objectives or to implicit objectives. The objectives of the AI system may be different from the intended purpose of the AI system in a specific context. For the purposes of this Regulation, environments should be understood to be the contexts in which the AI systems operate, whereas outputs generated by the AI system reflect different functions performed by AI systems and include predictions, content, recommendations or decisions. AI systems are designed to operate with varying levels of autonomy, meaning that they have some degree of independence of actions from human involvement and of capabilities to operate without human intervention. The adaptiveness that an AI system could exhibit after deployment refers to self-learning capabilities, allowing the system to change while in use. AI systems can be used on a stand-alone basis or as a component of a product, irrespective of whether the system is physically integrated into the product (embedded) or serves the functionality of the product without being integrated therein (non-embedded).
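By way of a purely illustrative, non-normative sketch of the distinction this recital draws, the following minimal Python example contrasts a system that merely executes rules defined solely by natural persons with a system that infers, i.e. derives a simple model from data and then produces outputs (predictions) for new inputs. The delivery-pricing task, the function names and the example figures are hypothetical assumptions chosen only for illustration and are not drawn from the Regulation.

```python
# Illustrative sketch only: rule execution vs. a capability to infer (derive a model from data).

def rule_based_price(weight_kg: float) -> float:
    """Executes a fixed, human-authored rule; no capability to infer."""
    return 5.0 if weight_kg <= 2.0 else 5.0 + 1.5 * (weight_kg - 2.0)

class LearnedPricer:
    """Derives a simple linear model from example data (ordinary least squares),
    then generates outputs (predictions) for inputs it has not seen before."""

    def fit(self, weights: list[float], prices: list[float]) -> None:
        n = len(weights)
        mean_w = sum(weights) / n
        mean_p = sum(prices) / n
        cov = sum((w - mean_w) * (p - mean_p) for w, p in zip(weights, prices))
        var = sum((w - mean_w) ** 2 for w in weights)
        self.slope = cov / var
        self.intercept = mean_p - self.slope * mean_w

    def predict(self, weight_kg: float) -> float:
        return self.intercept + self.slope * weight_kg

pricer = LearnedPricer()
pricer.fit([1.0, 2.0, 3.0, 4.0], [6.5, 8.0, 9.5, 11.0])  # invented example data
print(rule_based_price(3.0))          # output of the purely rule-based system
print(round(pricer.predict(3.0), 2))  # output inferred from the learned model
```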
(13)
The notion of ‘deployer’ referred to in this Regulation should be interpreted as any natural or legal person, including a public authority, agency or other body, using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity. Depending on the type of AI system, the use of the system may affect persons other than the deployer.
(14)
The notion of ‘biometric data’ used in this Regulation should be interpreted in light of the notion of biometric data as defined in Article 4, point (14) of Regulation (EU) 2016/679, Article 3, point (18) of Regulation (EU) 2018/1725 and Article 3, point (13) of Directive (EU) 2016/680. Biometric data can allow for the authentication, identification or categorisation of natural persons and for the recognition of emotions of natural persons.
(15)
The notion of ‘biometric identification’ referred to in this Regulation should be defined as the automated recognition of physical, physiological and behavioural human features such as the face, eye movement, body shape, voice, prosody, gait, posture, heart rate, blood pressure, odour, keystroke characteristics, for the purpose of establishing an individual’s identity by comparing biometric data of that individual to stored biometric data of individuals in a reference database, irrespective of whether the individual has given his or her consent or not. This excludes AI systems intended to be used for biometric verification, which includes authentication, whose sole purpose is to confirm that a specific natural person is the person he or she claims to be and to confirm the identity of a natural person for the sole purpose of having access to a service, unlocking a device or having security access to premises.
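To illustrate the distinction this recital relies on between identification (a one-to-many comparison of an individual’s biometric data against templates stored in a reference database) and verification (a one-to-one confirmation of a claimed identity), a minimal sketch follows, assuming hypothetical feature vectors, a Euclidean distance measure and an invented matching threshold; none of these specifics come from the Regulation.

```python
# Illustrative sketch only: one-to-many identification vs. one-to-one verification.
import math

ReferenceDB = dict[str, list[float]]  # person identifier -> stored biometric template

def distance(a: list[float], b: list[float]) -> float:
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(sample: list[float], db: ReferenceDB, threshold: float) -> str | None:
    """Identification: search the whole reference database for the closest template."""
    best_id, best_d = None, float("inf")
    for person_id, template in db.items():
        d = distance(sample, template)
        if d < best_d:
            best_id, best_d = person_id, d
    return best_id if best_d <= threshold else None

def verify(sample: list[float], claimed_id: str, db: ReferenceDB, threshold: float) -> bool:
    """Verification: compare only against the single template of the claimed identity."""
    template = db.get(claimed_id)
    return template is not None and distance(sample, template) <= threshold

db = {"alice": [0.1, 0.9, 0.4], "bob": [0.8, 0.2, 0.5]}  # invented templates
probe = [0.12, 0.88, 0.41]
print(identify(probe, db, threshold=0.1))       # one-to-many search -> 'alice'
print(verify(probe, "bob", db, threshold=0.1))  # one-to-one check   -> False
```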
(16)
The notion of ‘biometric categorisation’ referred to in this Regulation should be defined as assigning natural persons to specific categories on the basis of their biometric data. Such specific categories can relate to aspects such as sex, age, hair colour, eye colour, tattoos, behavioural or personality traits, language, religion, membership of a national minority, sexual or political orientation. This does not include biometric categorisation systems that are a purely ancillary feature intrinsically linked to another commercial service, meaning that the feature cannot, for objective technical reasons, be used without the principal service, and the integration of that feature or functionality is not a means to circumvent the applicability of the rules of this Regulation. For example, filters categorising facial or body features used on online marketplaces could constitute such an ancillary feature as they can be used only in relation to the principal service which consists in selling a product by allowing the consumer to preview the display of the product on him or herself and help the consumer to make a purchase decision. Filters used on online social network services which categorise facial or body features to allow users to add or modify pictures or videos could also be considered to be an ancillary feature, as such a filter cannot be used without the principal service of the social network services consisting in the sharing of content online.
(17)
The notion of ‘remote biometric identification system’ referred to in this Regulation should be defined functionally, as an AI system intended for the identification of natural persons without their active involvement, typically at a distance, through the comparison of a person’s biometric data with the biometric data contained in a reference database, irrespective of the particular technology, processes or types of biometric data used. Such remote biometric identification systems are typically used to perceive multiple persons or their behaviour simultaneously in order to facilitate significantly the identification of natural persons without their active involvement. This excludes AI systems intended to be used for biometric verification, which includes authentication, the sole purpose of which is to confirm that a specific natural person is the person he or she claims to be and to confirm the identity of a natural person for the sole purpose of having access to a service, unlocking a device or having security access to premises. That exclusion is justified by the fact that such systems are likely to have a minor impact on fundamental rights of natural persons compared to the remote biometric identification systems which may be used for the processing of the biometric data of a large number of persons without their active involvement. In the case of ‘real-time’ systems, the capturing of the biometric data, the comparison and the identification all occur instantaneously, near-instantaneously or in any event without a significant delay. In this regard, there should be no scope for circumventing the rules of this Regulation on the ‘real-time’ use of the AI systems concerned by providing for minor delays. ‘Real-time’ systems involve the use of ‘live’ or ‘near-live’ material, such as video footage, generated by a camera or other device with similar functionality. In the case of ‘post’ systems, in contrast, the biometric data has already been captured and the comparison and identification occur only after a significant delay. This involves material, such as pictures or video footage generated by closed circuit television cameras or private devices, which has been generated before the use of the system in respect of the natural persons concerned.
(18)
The notion of ‘emotion recognition system’ referred to in this Regulation should be defined as an AI system for the purpose of identifying or inferring emotions or intentions of natural persons on the basis of their biometric data. The notion refers to emotions or intentions such as happiness, sadness, anger, surprise, disgust, embarrassment, excitement, shame, contempt, satisfaction and amusement. It does not include physical states, such as pain or fatigue, including, for example, systems used in detecting the state of fatigue of professional pilots or drivers for the purpose of preventing accidents. This does also not include the mere detection of readily apparent expressions, gestures or movements, unless they are used for identifying or inferring emotions. Those expressions can be basic facial expressions, such as a frown or a smile, or gestures such as the movement of hands, arms or head, or characteristics of a person’s voice, such as a raised voice or whispering.
(19)
For the purposes of this Regulation, the notion of ‘publicly accessible space’ should be understood as referring to any physical space that is accessible to an undetermined number of natural persons, and irrespective of whether the space in question is privately or publicly owned, irrespective of the activity for which the space may be used, such as for commerce, for example, shops, restaurants, cafés; for services, for example, banks, professional activities, hospitality; for sport, for example, swimming pools, gyms, stadiums; for transport, for example, bus, metro and railway stations, airports, means of transport; for entertainment, for example, cinemas, theatres, museums, concert and conference halls; or for leisure or otherwise, for example, public roads and squares, parks, forests, playgrounds. A space should also be classified as being publicly accessible if, regardless of potential capacity or security restrictions, access is subject to certain predetermined conditions which can be fulfilled by an undetermined number of persons, such as the purchase of a ticket or title of transport, prior registration or having a certain age. In contrast, a space should not be considered to be publicly accessible if access is limited to specific and defined natural persons through either Union or national law directly related to public safety or security or through the clear manifestation of will by the person having the relevant authority over the space. The factual possibility of access alone, such as an unlocked door or an open gate in a fence, does not imply that the space is publicly accessible in the presence of indications or circumstances suggesting the contrary, such as signs prohibiting or restricting access. Company and factory premises, as well as offices and workplaces that are intended to be accessed only by relevant employees and service providers, are spaces that are not publicly accessible. Publicly accessible spaces should not include prisons or border control. Some other spaces may comprise both publicly accessible and non-publicly accessible spaces, such as the hallway of a private residential building necessary to access a doctor’s office or an airport. Online spaces are not covered, as they are not physical spaces. Whether a given space is accessible to the public should however be determined on a case-by-case basis, having regard to the specificities of the individual situation at hand.
(20)
In order to obtain the greatest benefits from AI systems while protecting fundamental rights, health and safety and to enable democratic control, AI literacy should equip providers, deployers and affected persons with the necessary notions to make informed decisions regarding AI systems. Those notions may vary with regard to the relevant context and can include understanding the correct application of technical elements during the AI system’s development phase, the measures to be applied during its use, the suitable ways in which to interpret the AI system’s output, and, in the case of affected persons, the knowledge necessary to understand how decisions taken with the assistance of AI will have an impact on them. In the context of the application of this Regulation, AI literacy should provide all relevant actors in the AI value chain with the insights required to ensure the appropriate compliance and its correct enforcement. Furthermore, the wide implementation of AI literacy measures and the introduction of appropriate follow-up actions could contribute to improving working conditions and ultimately sustain the consolidation and innovation path of trustworthy AI in the Union. The European Artificial Intelligence Board (the ‘Board’) should support the Commission in promoting AI literacy tools, public awareness and understanding of the benefits, risks, safeguards, rights and obligations in relation to the use of AI systems. In cooperation with the relevant stakeholders, the Commission and the Member States should facilitate the drawing up of voluntary codes of conduct to advance AI literacy among persons dealing with the development, operation and use of AI.
(21)
In order to ensure a level playing field and an effective protection of rights and freedoms of individuals across the Union, the rules established by this Regulation should apply to providers of AI systems in a non-discriminatory manner, irrespective of whether they are established within the Union or in a third country, and to deployers of AI systems established within the Union.
(21)
Chun cothrom na Féinne agus cosaint éifeachtach cearta agus saoirsí daoine aonair ar fud an Aontais a áirithiú, ba cheart feidhm a bheith ag na rialacha a bhunaítear leis an Rialachán seo maidir le soláthraithe córas intleachta saorga ar bhonn neamh-idirdhealaitheach, gan beann ar iad a bheith bunaithe laistigh den Aontas nó i dtríú tír, agus maidir le húsáideoirí gairmiúla córas intleachta saorga atá bunaithe laistigh den Aontas.
(22)
In light of their digital nature, certain AI systems should fall within the scope of this Regulation even when they are not placed on the market, put into service, or used in the Union. This is the case, for example, where an operator established in the Union contracts certain services to an operator established in a third country in relation to an activity to be performed by an AI system that would qualify as high-risk. In those circumstances, the AI system used in a third country by the operator could process data lawfully collected in and transferred from the Union, and provide to the contracting operator in the Union the output of that AI system resulting from that processing, without that AI system being placed on the market, put into service or used in the Union. To prevent the circumvention of this Regulation and to ensure an effective protection of natural persons located in the Union, this Regulation should also apply to providers and deployers of AI systems that are established in a third country, to the extent the output produced by those systems is intended to be used in the Union. Nonetheless, to take into account existing arrangements and special needs for future cooperation with foreign partners with whom information and evidence are exchanged, this Regulation should not apply to public authorities of a third country and international organisations when acting in the framework of cooperation or international agreements concluded at Union or national level for law enforcement and judicial cooperation with the Union or the Member States, provided that the relevant third country or international organisation provides adequate safeguards with respect to the protection of fundamental rights and freedoms of individuals. Where relevant, this may cover activities of entities entrusted by the third countries to carry out specific tasks in support of such law enforcement and judicial cooperation. Such frameworks for cooperation or agreements have been established bilaterally between Member States and third countries or between the European Union, Europol and other Union agencies and third countries and international organisations. The authorities competent for supervision of the law enforcement and judicial authorities under this Regulation should assess whether those frameworks for cooperation or international agreements include adequate safeguards with respect to the protection of fundamental rights and freedoms of individuals. Recipient national authorities and Union institutions, bodies, offices and agencies making use of such outputs in the Union remain accountable for ensuring that their use complies with Union law. When those international agreements are revised or new ones are concluded in the future, the contracting parties should make utmost efforts to align those agreements with the requirements of this Regulation.
(22)
I bhfianaise a nádúr digiteach, ba cheart córais intleachta saorga áirithe a theacht faoi raon feidhme an Rialacháin seo fiú nuair nach gcuirtear ar an margadh iad, nach gcuirtear i seirbhís iad, nó nuair nach n-úsáidtear san Aontas iad. Is amhlaidh atá, mar shampla, i gcás oibreoir atá bunaithe san Aontas a dhéanann conradh maidir le seirbhísí áirithe le hoibreoir atá bunaithe i dtríú tír i ndáil le gníomhaíocht atá le déanamh ag córas intleachta saorga a cháileodh mar ghníomhaíocht ardriosca. Sna himthosca sin, maidir leis an gcóras intleachta saorga a úsáideann an t-oibreoir i dtríú tír, d’fhéadfadh sé sonraí a bhailítear san Aontas agus a aistrítear amach as an Aontas go dlíthiúil a phróiseáil, agus aschur an chórais intleachta saorga sin atá mar thoradh ar an bpróiseáil sin a sholáthar don oibreoir conarthach san Aontas, gan an córas intleachta saorga sin a chur ar an margadh, a chur i seirbhís nó a úsáid san Aontas. Chun dul timpeall ar an Rialachán seo a chosc agus chun cosaint éifeachtach daoine nádúrtha atá lonnaithe san Aontas a áirithiú, ba cheart feidhm a bheith ag an Rialachán seo freisin maidir le soláthraithe agus úsáideoirí gairmiúla córas intleachta saorga atá bunaithe i dtríú tír, sa mhéid atá sé beartaithe an t-aschur arna tháirgeadh ag na córais sin a úsáid san Aontas. Mar sin féin, chun socruithe atá ann cheana a chur san áireamh chomh maith le riachtanais speisialta i gcás comhar le comhpháirtithe eachtracha amach anseo a ndéantar malartú faisnéise agus fianaise leo, níor cheart feidhm a bheith ag an Rialachán seo maidir le húdaráis phoiblí tríú tír agus eagraíochtaí idirnáisiúnta agus iad ag gníomhú faoi chuimsiú an chomhair nó an chomhaontuithe idirnáisiúnta a tugadh i gcrích ar leibhéal an Aontais nó ar an leibhéal náisiúnta le haghaidh fhorfheidhmiú an dlí agus comhar breithiúnach leis an Aontas nó leis na Ballstáit, ar choinníoll go gcuireann an tríú tír ábhartha nó na heagraíochtaí idirnáisiúnta ábhartha coimircí leordhóthanacha ar fáil i ndáil le cearta agus saoirsí bunúsacha daoine aonair a chosaint. I gcás inarb ábhartha, féadfar a chumhdach leis sin gníomhaíochtaí na n-eintiteas ar chuir na tríú tíortha de chúram orthu cúraimí sonracha a dhéanamh chun tacú le forfheidhmiú an dlí agus leis an gcomhar breithiúnach sin. Bunaíodh an creat comhair sin nó na comhaontuithe sin go déthaobhach idir na Ballstáit agus tríú tíortha nó idir an tAontas Eorpach, Europol agus gníomhaireachtaí eile de chuid an Aontais agus tríú tíortha agus eagraíochtaí idirnáisiúnta. Ba cheart do na húdaráis atá inniúil ar mhaoirseacht a dhéanamh ar na húdaráis forfheidhmithe dlí agus ar na húdaráis bhreithiúnacha faoin Rialachán seo a mheas cé acu atá nó nach bhfuil coimircí leordhóthanacha i ndáil le cearta agus saoirsí bunúsacha daoine aonair a chosaint san áireamh sna creataí comhair nó sna comhaontuithe idirnáisiúnta sin. Údaráis náisiúnta is faighteoirí agus institiúidí, comhlachtaí, oifigí agus gníomhaireachtaí an Aontais a bhaineann úsáid as an aschur sin san Aontas, leanann siadsan de bheith cuntasach chun a áirithiú go gcomhlíonann siad dlí an Aontais. Nuair a dhéanfar na comhaontuithe idirnáisiúnta sin a athbhreithniú nó nuair a thabharfar comhaontuithe nua i gcrích amach anseo, ba cheart do na páirtithe conarthacha a ndícheall a dhéanamh na comhaontuithe sin a ailíniú le ceanglais an Rialacháin seo.
(23)
This Regulation should also apply to Union institutions, bodies, offices and agencies when acting as a provider or deployer of an AI system.
(23)
Ba cheart feidhm a bheith ag an Rialachán seo freisin maidir le hinstitiúidí, comhlachtaí, oifigí agus gníomhaireachtaí an Aontais agus iad ag gníomhú mar sholáthraí nó úsáideoir gairmiúil córais intleachta saorga.
(24)
If, and insofar as, AI systems are placed on the market, put into service, or used with or without modification of such systems for military, defence or national security purposes, those should be excluded from the scope of this Regulation regardless of which type of entity is carrying out those activities, whether it is a public or a private entity. As regards military and defence purposes, such exclusion is justified both by Article 4(2) TEU and by the specificities of the Member States’ and the common Union defence policy covered by Chapter 2 of Title V TEU that are subject to public international law, which is therefore the more appropriate legal framework for the regulation of AI systems in the context of the use of lethal force and other AI systems in the context of military and defence activities. As regards national security purposes, the exclusion is justified both by the fact that national security remains the sole responsibility of Member States in accordance with Article 4(2) TEU and by the specific nature and operational needs of national security activities and specific national rules applicable to those activities. Nonetheless, if an AI system developed, placed on the market, put into service or used for military, defence or national security purposes is used, temporarily or permanently, for other purposes, for example civilian or humanitarian purposes, law enforcement or public security purposes, such a system would fall within the scope of this Regulation. In that case, the entity using the AI system for purposes other than military, defence or national security should ensure the compliance of the AI system with this Regulation, unless the system is already compliant with this Regulation. AI systems placed on the market or put into service for an excluded purpose, namely military, defence or national security, and for one or more non-excluded purposes, such as civilian purposes or law enforcement, fall within the scope of this Regulation and providers of those systems should ensure compliance with this Regulation. In those cases, the fact that an AI system may fall within the scope of this Regulation should not affect the possibility for entities carrying out national security, defence and military activities, regardless of the type of entity carrying out those activities, to use AI systems for national security, military and defence purposes, the use of which is excluded from the scope of this Regulation. An AI system placed on the market for civilian or law enforcement purposes which is used with or without modification for military, defence or national security purposes should not fall within the scope of this Regulation, regardless of the type of entity carrying out those activities.
(24)
Má dhéantar, agus a mhéid a dhéantar, córais intleachta saorga a chur ar an margadh, a chur i mbun seirbhíse, nó a úsáid le modhnú nó gan modhnú na gcóras sin chun críocha míleata, cosanta nó slándála náisiúnta, ba cheart iad sin a eisiamh ó raon feidhme an Rialacháin seo gan beann ar an gcineál eintitis atá i mbun na ngníomhaíochtaí sin, amhail más eintiteas poiblí nó príobháideach é. A mhéid a bhaineann le críocha míleata agus cosanta, tá údar leis an eisiamh sin de bharr Airteagal 4(2) CAE agus shonraíochtaí na mBallstát agus chomhbheartas cosanta an Aontais a chumhdaítear le Caibidil 2 de Theideal V CAE atá faoi réir an dlí idirnáisiúnta phoiblí, arb é, dá bhrí sin, an creat dlíthiúil is iomchuí chun córais intleachta saorga a rialáil i gcomhthéacs úsáid fornirt mharfaigh agus córas intleachta saorga eile i gcomhthéacs gníomhaíochtaí míleata agus cosanta. A mhéid a bhaineann le cuspóirí slándála náisiúnta, tá údar leis an eisiamh toisc gurb iad na Ballstáit amháin atá freagrach as an tslándáil náisiúnta i gcomhréir le hAirteagal 4(2) CAE agus mar gheall ar chineál sonrach agus riachtanais oibríochtúla na ngníomhaíochtaí slándála náisiúnta agus rialacha náisiúnta sonracha is infheidhme maidir leis na gníomhaíochtaí sin. Mar sin féin, maidir le córas intleachta saorga a fhorbraítear, a chuirtear ar an margadh, a chuirtear i mbun seirbhíse nó a úsáidtear chun críocha míleata, cosanta nó slándála náisiúnta, má úsáidtear an córas intleachta saorga sin lasmuigh de na críocha sin go sealadach nó go buan chun críocha eile, mar shampla chun críocha sibhialta nó daonnúla, chun críocha fhorfheidhmiú an dlí nó na slándála poiblí, thiocfadh an córas sin faoi raon feidhme an Rialacháin seo. Sa chás sin, ba cheart don eintiteas a úsáideann an córas intleachta saorga chun críocha eile seachas críocha míleata, cosanta nó slándála náisiúnta a áirithiú go gcomhlíonann an córas an Rialachán seo, ach amháin i gcás go gcomhlíonann an córas intleachta saorga sin an Rialachán seo cheana féin. Maidir le córais intleachta saorga a chuirtear ar an margadh nó i seirbhís chun críocha eisiata, eadhon chun críocha míleata, cosanta nó slándála náisiúnta agus chun críche amháin nó níos mó nach bhfuil eisiata, amhail críocha sibhialta nó forfheidhmiú an dlí, tagann siad faoi raon feidhme an Rialacháin seo agus ba cheart do sholáthraithe na gcóras sin comhlíonadh an Rialacháin seo a áirithiú. Sna cásanna sin, ós rud é go bhféadfadh córas intleachta saorga teacht faoi raon feidhme an Rialacháin seo, níor cheart dó difear a dhéanamh don fhéidearthacht go bhféadfadh eintitis gníomhaíochtaí slándála náisiúnta, cosanta agus míleata a dhéanamh, gan beann ar an gcineál eintitis a dhéanann na gníomhaíochtaí sin, córais intleachta saorga a úsáid chun críocha slándála náisiúnta, míleata agus cosanta, a bhfuil a n-úsáid eisiata ó raon feidhme an Rialacháin seo. Maidir le córas intleachta saorga a chuirtear ar an margadh chun críocha sibhialta nó fhorfheidhmiú an dlí agus a úsáidtear le modhnú nó gan mhodhnú chun críocha míleata, cosanta nó slándála náisiúnta, níor cheart dó teacht faoi raon feidhme an Rialacháin seo, gan beann ar an gcineál eintitis a dhéanann na gníomhaíochtaí sin.
(25)
This Regulation should support innovation, should respect freedom of science, and should not undermine research and development activity. It is therefore necessary to exclude from its scope AI systems and models specifically developed and put into service for the sole purpose of scientific research and development. Moreover, it is necessary to ensure that this Regulation does not otherwise affect scientific research and development activity on AI systems or models prior to being placed on the market or put into service. As regards product-oriented research, testing and development activity regarding AI systems or models, the provisions of this Regulation should also not apply prior to those systems and models being put into service or placed on the market. That exclusion is without prejudice to the obligation to comply with this Regulation where an AI system falling into the scope of this Regulation is placed on the market or put into service as a result of such research and development activity and to the application of provisions on AI regulatory sandboxes and testing in real world conditions. Furthermore, without prejudice to the exclusion of AI systems specifically developed and put into service for the sole purpose of scientific research and development, any other AI system that may be used for the conduct of any research and development activity should remain subject to the provisions of this Regulation. In any event, any research and development activity should be carried out in accordance with recognised ethical and professional standards for scientific research and should be conducted in accordance with applicable Union law.
(25)
Ba cheart don Rialachán seo tacú leis an nuálaíocht, ba cheart dó saoirse na heolaíochta a urramú, agus níor cheart dó an bonn a bhaint de ghníomhaíocht taighde agus forbartha. Is gá, dá bhrí sin, córais agus samhlacha intleachta saorga a fhorbraítear agus a chuirtear i mbun seirbhíse go sonrach chun críche taighde agus forbartha eolaíche, agus chun na críche sin amháin, a eisiamh óna raon feidhme. Thairis sin, is gá a áirithiú nach ndéanfaidh an Rialachán seo difear ar shlí eile do thaighde eolaíoch agus do ghníomhaíocht forbartha ar chórais intleachta saorga nó ar shamhlacha intleachta saorga sula gcuirfear ar an margadh nó i seirbhís iad. A mhéid a bhaineann le gníomhaíocht taighde, tástála agus forbartha atá dírithe ar tháirgí maidir le córais nó samhlacha intleachta saorga, níor cheart feidhm a bheith ag forálacha an Rialacháin seo sula gcuirfear na córais agus na samhlacha sin i seirbhís nó sula gcuirfear ar an margadh iad. Tá an t-eisiamh sin gan dochar don oibleagáid maidir leis an Rialachán seo a chomhlíonadh i gcás ina gcuirtear córas intleachta saorga a thagann faoi raon feidhme an Rialacháin seo ar an margadh nó i seirbhís mar thoradh ar an ngníomhaíocht taighde agus forbartha sin agus do chur i bhfeidhm na bhforálacha maidir le boscaí gainimh rialála intleachta saorga agus tástáil i bhfíordhálaí. Thairis sin, gan dochar d’eisiamh na gcóras intleachta saorga a fhorbraítear agus a chuirtear i seirbhís go sonrach chun críche taighde agus forbartha eolaíche agus chun na críche sin amháin, ba cheart d’aon chóras intleachta saorga eile a d’fhéadfaí a úsáid chun aon ghníomhaíocht taighde agus forbartha a dhéanamh leanúint de bheith faoi réir fhorálacha an Rialacháin seo. In aon chás, ba cheart aon ghníomhaíocht taighde agus forbartha a dhéanamh i gcomhréir le caighdeáin eiticiúla proifisiúnta aitheanta don taighde eolaíoch agus ba cheart í a dhéanamh i gcomhréir le dlí an Aontais is infheidhme.
(26)
In order to introduce a proportionate and effective set of binding rules for AI systems, a clearly defined risk-based approach should be followed. That approach should tailor the type and content of such rules to the intensity and scope of the risks that AI systems can generate. It is therefore necessary to prohibit certain unacceptable AI practices, to lay down requirements for high-risk AI systems and obligations for the relevant operators, and to lay down transparency obligations for certain AI systems.
(26)
Chun tacar rialacha ceangailteacha atá comhréireach agus éifeachtach a thabhairt isteach maidir le córais intleachta saorga, ba cheart cur chuige rioscabhunaithe atá sainithe go soiléir a leanúint. Leis an gcur chuige sin, ba cheart cineál agus inneachar na rialacha sin a chur in oiriúint do dhéine agus raon feidhme na rioscaí a d’fhéadfadh a bheith ag baint le córais intleachta saorga. Is gá, dá bhrí sin, toirmeasc a chur ar chleachtais dho-ghlactha intleachta saorga áirithe, ceanglais a leagan síos maidir le córais intleachta saorga ardriosca agus oibleagáidí a leagan síos le haghaidh na n-oibreoirí ábhartha, agus oibleagáidí trédhearcachta a leagan síos maidir le córais intleachta saorga áirithe.
(27)
While the risk-based approach is the basis for a proportionate and effective set of binding rules, it is important to recall the 2019 Ethics guidelines for trustworthy AI developed by the independent AI HLEG appointed by the Commission. In those guidelines, the AI HLEG developed seven non-binding ethical principles for AI, which are intended to help ensure that AI is trustworthy and ethically sound. The seven principles include human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being; and accountability. Without prejudice to the legally binding requirements of this Regulation and any other applicable Union law, those guidelines contribute to the design of coherent, trustworthy and human-centric AI, in line with the Charter and with the values on which the Union is founded. According to the guidelines of the AI HLEG, human agency and oversight means that AI systems are developed and used as a tool that serves people, respects human dignity and personal autonomy, and that functions in a way that can be appropriately controlled and overseen by humans. Technical robustness and safety means that AI systems are developed and used in a way that allows robustness in the case of problems and resilience against attempts to alter the use or performance of the AI system so as to allow unlawful use by third parties, and that minimises unintended harm. Privacy and data governance means that AI systems are developed and used in accordance with privacy and data protection rules, while processing data that meets high standards in terms of quality and integrity. Transparency means that AI systems are developed and used in a way that allows appropriate traceability and explainability, while making humans aware that they communicate or interact with an AI system, as well as duly informing deployers of the capabilities and limitations of that AI system and affected persons about their rights. Diversity, non-discrimination and fairness means that AI systems are developed and used in a way that includes diverse actors and promotes equal access, gender equality and cultural diversity, while avoiding discriminatory impacts and unfair biases that are prohibited by Union or national law. Societal and environmental well-being means that AI systems are developed and used in a sustainable and environmentally friendly manner as well as in a way that benefits all human beings, while monitoring and assessing the long-term impacts on the individual, society and democracy. The application of those principles should be translated, when possible, into the design and use of AI models. They should in any case serve as a basis for the drafting of codes of conduct under this Regulation. All stakeholders, including industry, academia, civil society and standardisation organisations, are encouraged to take into account, as appropriate, the ethical principles for the development of voluntary best practices and standards.
(27)
Cé go bhfuil an cur chuige rioscabhunaithe mar bhonn do thacar comhréireach agus éifeachtach rialacha ceangailteacha, tá sé tábhachtach cuimhneamh ar threoirlínte Eitice 2019 le haghaidh intleacht shaorga iontaofa a d’fhorbair an Grúpa Ardleibhéil Saineolaithe um an Intleacht Shaorga, ar grúpa neamhspleách é arna cheapadh ag an gCoimisiún. Sna treoirlínte sin, d’fhorbair an Grúpa Ardleibhéil Saineolaithe um an Intleacht Shaorga seacht bprionsabal eiticiúla neamhcheangailteacha le haghaidh intleacht shaorga atá beartaithe chun a áirithiú go mbeidh an intleacht shaorga iontaofa agus fónta ó thaobh eitice de. Áirítear ar na seacht bprionsabal gníomhú daonna agus formhaoirseacht dhaonna; stóinseacht theicniúil agus sábháilteacht; príobháideachas agus rialachas sonraí; trédhearcacht; éagsúlacht, neamh-idirdhealú agus cothroime; dea-bhail shóisialta agus chomhshaoil agus cuntasacht. Gan dochar do cheanglais an Rialacháin seo atá ceangailteach ó thaobh dlí agus d’aon dlí eile is infheidhme de chuid an Aontais, rannchuidíonn na treoirlínte sin le dearadh intleachta saorga atá comhleanúnach, iontaofa agus duinelárnach, i gcomhréir leis an gCairt agus leis na luachanna ar a bhfuil an tAontas fothaithe. De réir threoirlínte an Ghrúpa Ardleibhéil Saineolaithe um an Intleacht Shaorga, ciallaíonn gníomhú daonna agus formhaoirseacht dhaonna go bhforbraítear agus go n-úsáidtear córais intleachta saorga mar uirlis a fhónann do dhaoine, a urramaíonn dínit an duine agus neamhspleáchas pearsanta, agus a fheidhmíonn ar bhealach ar féidir le daoine í a rialú agus a mhaoirsiú go hiomchuí. Ciallaíonn stóinseacht theicniúil agus sábháilteacht go ndéantar córais intleachta saorga a fhorbairt agus a úsáid ar bhealach lena gceadaítear stóinseacht i gcás fadhbanna agus athléimneacht i gcoinne iarrachtaí chun úsáid nó feidhmíocht an chórais intleachta saorga a athrú chun ligean do thríú páirtithe úsáid neamhdhleathach a bhaint as, agus díobháil neamhbheartaithe a íoslaghdú. Ciallaíonn príobháideachas agus rialachas sonraí go bhforbraítear agus go n-úsáidtear córais intleachta saorga i gcomhréir leis na rialacha príobháideachais agus cosanta sonraí, agus próiseáil á déanamh ar shonraí a chomhlíonann ardchaighdeáin ó thaobh cáilíochta agus sláine de. Ciallaíonn trédhearcacht go bhforbraítear agus go n-úsáidtear córais intleachta saorga ar bhealach lenar féidir inrianaitheacht agus inmhínitheacht iomchuí a dhéanamh, agus daoine á gcur ar an eolas ag an am céanna go ndéanann siad cumarsáid nó idirghníomhú le córas intleachta saorga, chomh maith le húsáideoirí gairmiúla á gcur ar an eolas go cuí faoi chumais agus teorainneacha an chórais intleachta saorga sin agus daoine dá ndéantar difear á gcur ar eolas faoina gcearta. Ciallaíonn éagsúlacht, neamh-idirdhealú agus cothroime go bhforbraítear agus go n-úsáidtear córais intleachta saorga ar bhealach ina n-áirítear gníomhaithe éagsúla agus ina gcuirtear rochtain chomhionann, comhionannas inscne agus éagsúlacht chultúrtha chun cinn, agus tionchair idirdhealaitheacha agus claontachtaí éagóracha a thoirmisctear le dlí an Aontais nó leis an dlí náisiúnta á seachaint ag an am céanna. Ciallaíonn dea-bhail shóisialta agus chomhshaoil go bhforbraítear agus go n-úsáidtear córais intleachta saorga ar bhealach atá inbhuanaithe agus neamhdhíobhálach don chomhshaol agus ar bhealach a théann chun tairbhe do gach duine, agus faireachán agus measúnú á ndéanamh ag an am céanna ar na tionchair fhadtéarmacha ar an duine aonair, ar an tsochaí agus ar an daonlathas. 
Ba cheart cur i bhfeidhm na bprionsabal sin a léiriú, nuair is féidir, i ndearadh agus in úsáid samhlacha intleachta saorga. Ba cheart dóibh fónamh, ar aon chuma, mar bhunús chun cóid iompair a dhréachtú faoin Rialachán seo. Moltar do na geallsealbhóirí uile, lena n-áirítear lucht tionscail, an saol acadúil, an tsochaí shibhialta agus eagraíochtaí um chaighdeánú, na prionsabail eiticiúla a chur san áireamh, de réir mar is iomchuí, chun dea-chleachtais agus caighdeáin dheonacha a fhorbairt.
(28)
Aside from the many beneficial uses of AI, it can also be misused and provide novel and powerful tools for manipulative, exploitative and social control practices. Such practices are particularly harmful and abusive and should be prohibited because they contradict Union values of respect for human dignity, freedom, equality, democracy and the rule of law and fundamental rights enshrined in the Charter, including the right to non-discrimination, to data protection and to privacy and the rights of the child.
(28)
Cé is moite de na húsáidí tairbhiúla éagsúla a bhaineann leis an intleacht shaorga, d’fhéadfaí í a mhí-úsáid freisin agus uirlisí úra cumhachtacha a sholáthar le haghaidh cleachtais ionramhálacha dhúshaothraithe agus cleachtais smachta sóisialta. D’fhéadfaí dochar ar leith a dhéanamh agus droch-íde ar leith a thabhairt, leis na cleachtais sin, agus ba cheart iad a thoirmeasc toisc go dtagann siad salach ar luachanna an Aontais maidir le meas ar dhínit an duine, an tsaoirse, an comhionannas, an daonlathas, an smacht reachta agus cearta bunúsacha arna gcumhdach sa Chairt, lena n-áirítear an ceart chun neamh-idirdhealaithe, an ceart chun na cosanta sonraí, an ceart chun príobháideachais agus cearta an linbh.
(29)
AI-enabled manipulative techniques can be used to persuade persons to engage in unwanted behaviours, or to deceive them by nudging them into decisions in a way that subverts and impairs their autonomy, decision-making and free choices. The placing on the market, the putting into service or the use of certain AI systems with the objective or the effect of materially distorting human behaviour, whereby significant harms, in particular harms having sufficiently important adverse impacts on physical or psychological health or on financial interests, are likely to occur, are particularly dangerous and should therefore be prohibited. Such AI systems deploy subliminal components such as audio, image or video stimuli that persons cannot perceive, as those stimuli are beyond human perception, or other manipulative or deceptive techniques that subvert or impair a person’s autonomy, decision-making or free choice in ways of which people are not consciously aware or, where they are aware of those techniques, by which they can still be deceived or which they are not able to control or resist. This could be facilitated, for example, by machine-brain interfaces or virtual reality as they allow for a higher degree of control of what stimuli are presented to persons, insofar as they may materially distort their behaviour in a significantly harmful manner. In addition, AI systems may also otherwise exploit the vulnerabilities of a person or a specific group of persons due to their age, disability within the meaning of Directive (EU) 2019/882 of the European Parliament and of the Council (16), or a specific social or economic situation that is likely to make those persons more vulnerable to exploitation, such as persons living in extreme poverty or ethnic or religious minorities. Such AI systems can be placed on the market, put into service or used with the objective or the effect of materially distorting the behaviour of a person and in a manner that causes or is reasonably likely to cause significant harm to that or another person or groups of persons, including harms that may be accumulated over time, and should therefore be prohibited. It may not be possible to assume that there is an intention to distort behaviour where the distortion results from factors external to the AI system which are outside the control of the provider or the deployer, namely factors that may not be reasonably foreseeable and therefore not possible for the provider or the deployer of the AI system to mitigate. In any case, it is not necessary for the provider or the deployer to have the intention to cause significant harm, provided that such harm results from the manipulative or exploitative AI-enabled practices. The prohibitions of such AI practices are complementary to the provisions contained in Directive 2005/29/EC of the European Parliament and of the Council (17), under which, in particular, unfair commercial practices leading to economic or financial harm to consumers are prohibited under all circumstances, irrespective of whether they are put in place through AI systems or otherwise. The prohibitions of manipulative and exploitative practices in this Regulation should not affect lawful practices in the context of medical treatment, such as the psychological treatment of a mental disease or physical rehabilitation, when those practices are carried out in accordance with the applicable law and medical standards, for example with the explicit consent of the individuals or their legal representatives.
In addition, common and legitimate commercial practices, for example in the field of advertising, that comply with the applicable law should not, in themselves, be regarded as constituting harmful manipulative AI-enabled practices.
(29)
Is féidir teicnící ionramhálacha atá cumasaithe ag intleacht shaorga a úsáid chun a chur ina luí ar dhaoine dul i mbun iompraíocht neamh-inmhianaithe, nó chun iad a mheabhlú trí thathant orthu cinntí a dhéanamh ar bhealach a threascraíonn agus a chuireann isteach ar a neamhspleáchas, a gcinnteoireacht agus a saor-roghanna. Maidir le córais intleachta saorga áirithe a chur ar an margadh, a chur i mbun seirbhíse nó a úsáid a bhfuil sé mar chuspóir leo iompraíocht an duine a shaobhadh go hábhartha, nó a bhfuil d’éifeacht acu iompraíocht an duine a shaobhadh go hábhartha, sa chaoi is gur dócha go ndéanfaí díobháil shuntasach, a dhéanfadh go leor dochair, go háirithe, go fisiciúil, go síceolaíoch nó do leasanna airgeadais, tá siad thar a bheith contúirteach agus ba cheart, dá bhrí sin, iad a thoirmeasc. Imscarann na córais intleachta saorga sin comhpháirteanna fothairseachúla, amhail spreagthaigh fuaime, íomhá, físe nach féidir le daoine a bhrath ós rud é go bhfuil siad lasmuigh de dhearcadh an duine nó teicnící eile ionramhála nó meabhlacha a threascraíonn nó a chuireann isteach ar neamhspleáchas, ar chinnteoireacht nó ar shaor-rogha daoine ar bhealaí nach bhfuil daoine ar an eolas faoi na teicnící sin go comhfhiosach, nó, i gcás ina mbíonn siad ar an eolas fúthu, gur féidir an dallamullóg a chur orthu mar sin féin nó nach bhfuil smacht acu orthu nó nach bhfuil siad in ann diúltú dóibh. D’fhéadfaí é sin a éascú, mar shampla, trí chomhéadain meaisín-inchinn nó réaltacht fhíorúil toisc go gceadaíonn siad leibhéal níos airde smachta ar an spreagadh a thugtar do dhaoine, sa mhéid go bhféadfaidís a n-iompar a shaobhadh go hábhartha ar bhealach atá an-díobhálach. Ina theannta sin, féadfaidh córais intleachta saorga leas a bhaint as leochaileachtaí duine nó grúpa shonraigh daoine mar gheall ar a n-aois, a míchumas de réir bhrí Threoir (AE) 2019/882 ó Pharlaimint na hEorpa agus ón gComhairle (16), nó staid shóisialta nó eacnamaíoch shonrach ar dócha go mbeidh na daoine sin níos leochailí i leith an dúshaothraithe dá barr, amhail daoine atá ag maireachtáil faoi fhíorbhochtaineacht, mionlaigh eitneacha nó reiligiúnacha. Is féidir na córais intleachta saorga sin a chur ar an margadh, a chur i mbun seirbhíse nó a úsáid leis an gcuspóir nó an éifeacht iompar duine a shaobhadh go hábhartha agus ar bhealach a ndéantar díobháil shuntasach nó ar dócha go ndéanfar díobháil shuntasach don duine sin nó do dhuine eile nó do ghrúpaí daoine, lena n-áirítear díobháil a d’fhéadfaí a charnadh le himeacht ama agus ba cheart, dá bhrí sin, iad a thoirmeasc. Ní gá go bhféadfaí glacadh leis go bhfuil sé beartaithe iompraíocht a shaobhadh i gcás ina n-eascraíonn an saobhadh as fachtóirí lasmuigh den chóras intleachta saorga nach bhfuil smacht ag an soláthraí ná ag an úsáideoir gairmiúil orthu, eadhon fachtóirí nach bhféadfaí a thuar go réasúnta agus, dá bhrí sin, nach bhféadfadh soláthraí ná úsáideoir gairmiúil an chórais intleachta saorga iad a mhaolú. I gcás ar bith, ní gá go mbeadh sé ar intinn ag an soláthraí ná ag an úsáideoir gairmiúil díobháil fhisiciúil nó shíceolaíoch a dhéanamh, ar choinníoll gur as na cleachtais ionramhála nó dhúshaothraithe atá cumasaithe ag intleacht shaorga a eascraíonn an díobháil sin. 
Tá na toirmisc ar na cleachtais intleachta saorga sin comhlántach leis na forálacha atá i dTreoir 2005/29/CE ó Pharlaimint na hEorpa agus ón gComhairle (17), go háirithe go dtoirmisctear i ngach cás cleachtais tráchtála éagóracha as a n-eascraíonn díobháil eacnamaíoch nó airgeadais do thomhaltóirí, gan beann ar iad a bheith curtha i bhfeidhm trí chórais intleachta saorga nó ar bhealach eile. Níor cheart leis na toirmisc ar chleachtais ionramhála agus dhúshaothraithe sa Rialachán seo difear a dhéanamh do chleachtais dhleathacha i gcomhthéacs cóireála leighis, amhail cóireáil shíceolaíoch do ghalar meabhrach nó athshlánú fisiciúil, nuair a dhéantar na cleachtais sin i gcomhréir leis an dlí is infheidhme agus leis na caighdeáin leighis is infheidhme, mar shampla toiliú sainráite ó na daoine aonair nó óna n-ionadaithe dlíthiúla. Ina theannta sin, níor cheart cleachtais tráchtála choiteanna agus dhlisteanacha, mar shampla i réimse na fógraíochta, atá i gcomhréir leis an dlí is infheidhme a mheas, iontu féin, mar chleachtais ionramhála dhíobhálacha atá cumasaithe ag intleacht shaorga.
(30)
Biometric categorisation systems that are based on natural persons’ biometric data, such as an individual person’s face or fingerprint, to deduce or infer an individual’s political opinions, trade union membership, religious or philosophical beliefs, race, sex life or sexual orientation should be prohibited. That prohibition should not cover the lawful labelling, filtering or categorisation of biometric data sets acquired in line with Union or national law according to biometric data, such as the sorting of images according to hair colour or eye colour, which can, for example, be used in the area of law enforcement.
(30)
Ba cheart toirmeasc a chur ar chórais chatagóirithe bhithmhéadracha atá bunaithe ar shonraí bithmhéadracha daoine nádúrtha, amhail aghaidh nó méarlorg duine aonair, chun tuairimí polaitiúla, ballraíocht i gceardchumann, creideamh reiligiúnach nó fealsúnach, cine, saol gnéis nó gnéaschlaonadh duine aonair a aimsiú nó a chur in iúl. Níor cheart a chumhdach, leis an toirmeasc sin, lipéadú, scagadh nó catagóiriú dleathach tacar sonraí bithmhéadracha a fhaightear i gcomhréir le dlí an Aontais nó leis an dlí náisiúnta de réir sonraí bithmhéadracha, amhail sórtáil íomhánna de réir dath gruaige nó dhath na súl, ar féidir iad a úsáid, mar shampla, i réimse fhorfheidhmiú an dlí.
(31)
AI systems providing social scoring of natural persons by public or private actors may lead to discriminatory outcomes and the exclusion of certain groups. They may violate the right to dignity and non-discrimination and the values of equality and justice. Such AI systems evaluate or classify natural persons or groups thereof on the basis of multiple data points related to their social behaviour in multiple contexts or known, inferred or predicted personal or personality characteristics over certain periods of time. The social score obtained from such AI systems may lead to the detrimental or unfavourable treatment of natural persons or whole groups thereof in social contexts that are unrelated to the context in which the data was originally generated or collected, or to detrimental treatment that is disproportionate to or unjustified by the gravity of their social behaviour. AI systems entailing such unacceptable scoring practices and leading to such detrimental or unfavourable outcomes should therefore be prohibited. That prohibition should not affect lawful evaluation practices of natural persons that are carried out for a specific purpose in accordance with Union and national law.
(31)
D’fhéadfadh sé go mbeadh torthaí idirdhealaitheacha agus eisiamh grúpaí áirithe ann de thoradh scóráil shóisialta daoine nádúrtha ag gníomhaithe poiblí nó príobháideacha, rud a sholáthraítear le córais intleachta saorga áirithe. D’fhéadfadh sé go sárófaí leo an ceart chun dínite agus an ceart chun neamh-idirdhealaithe agus luachanna maidir le comhionannas agus an ceartas. Déanann na córais intleachta saorga sin meastóireacht nó aicmiú ar dhaoine nádúrtha nó ar ghrúpaí díobh bunaithe ar iliomad pointí sonraí a bhaineann lena n-iompraíocht shóisialta i gcomhthéacsanna éagsúla nó saintréithe pearsanta nó pearsantachta intuigthe nó tuartha thar thréimhsí áirithe ama. De thoradh an scóir shóisialta arna fháil ó na córais intleachta saorga sin, d’fhéadfadh sé go gcaithfí go neamhfhabhrach nó go díobhálach le daoine nádúrtha nó grúpaí iomlána daoine nádúrtha i gcomhthéacsanna sóisialta nach bhfuil aon bhaint acu leis an gcomhthéacs inar gineadh nó inar bailíodh na sonraí i dtosach báire nó d’fhéadfadh sé go gcaithfí leo go díobhálach ar bhealach atá gan údar nó díréireach le tromchúis a n-iompraíochta sóisialta. Ba cheart, dá bhrí sin, córais intleachta saorga lena mbaineann na cleachtais scórála neamh-inghlactha sin a thoirmeasc, ar córais iad a mbíonn torthaí díobhálacha nó neamhfhabhracha den sórt sin mar thoradh orthu. Níor cheart don toirmeasc sin difear a dhéanamh do chleachtais mheastóireachta dhleathacha daoine nádúrtha a dhéantar chun críche sonraí i gcomhréir le dlí an Aontais agus leis an dlí náisiúnta.
(32)
The use of AI systems for ‘real-time’ remote biometric identification of natural persons in publicly accessible spaces for the purpose of law enforcement is particularly intrusive to the rights and freedoms of the concerned persons, to the extent that it may affect the private life of a large part of the population, evoke a feeling of constant surveillance and indirectly dissuade the exercise of the freedom of assembly and other fundamental rights. Technical inaccuracies of AI systems intended for the remote biometric identification of natural persons can lead to biased results and entail discriminatory effects. Such possible biased results and discriminatory effects are particularly relevant with regard to age, ethnicity, race, sex or disabilities. In addition, the immediacy of the impact and the limited opportunities for further checks or corrections in relation to the use of such systems operating in real-time carry heightened risks for the rights and freedoms of the persons concerned in the context of, or impacted by, law enforcement activities.
(32)
Maidir le húsáid córas intleachta saorga le haghaidh cian-sainaithint bhithmhéadrach ‘fíor-ama’ daoine nádúrtha i spásanna atá inrochtana don phobal chun críoch fhorfheidhmiú an dlí, cuireann sí isteach go mór ar chearta agus saoirsí na ndaoine lena mbaineann, sa mhéid go bhfuil féidearthacht ann go ndéanfaí difear do shaol príobháideach cuid mhór den phobal, go nginfí mothúchán go bhfuiltear faoi fhaireachas leanúnach agus go ndéanfaí feidhmiú saoirse tionóil agus cearta bunúsacha eile a dhíspreagadh. Maidir le míchruinnis theicniúla i gcórais intleachta saorga atá beartaithe le haghaidh cian-sainaithint bhithmhéadrach daoine nádúrtha, d’fhéadfadh siad a bheith ina gcúis le torthaí claonta agus éifeachtaí idirdhealaithe. Na torthaí claonta agus na héifeachtaí idirdhealaitheacha a d’fhéadfadh a bheith ann, tá siad ábhartha go háirithe maidir le haois, eitneacht, cine, gnéas nó míchumais. Thairis sin, i bhfianaise neasacht an tionchair agus na ndeiseanna teoranta atá ann maidir le seiceálacha agus ceartuithe breise i ndáil le húsáid na gcóras sin a oibríonn i bhfíor-am, bíonn rioscaí níos mó ann i leith chearta agus shaoirsí na ndaoine lena mbaineann i gcomhthéacs gníomhaíochtaí forfheidhmithe dlí, nó a mbíonn tionchar ag gníomhaíochtaí forfheidhmithe dlí orthu.
(33)
The use of those systems for the purpose of law enforcement should therefore be prohibited, except in exhaustively listed and narrowly defined situations, where the use is strictly necessary to achieve a substantial public interest, the importance of which outweighs the risks. Those situations involve the search for certain victims of crime, including missing persons; certain threats to the life or to the physical safety of natural persons, or the threat of a terrorist attack; and the localisation or identification of perpetrators or suspects of the criminal offences listed in an annex to this Regulation, where those criminal offences are punishable in the Member State concerned by a custodial sentence or a detention order for a maximum period of at least four years and as they are defined in the law of that Member State. Such a threshold for the custodial sentence or detention order in accordance with national law contributes to ensuring that the offence should be serious enough to potentially justify the use of ‘real-time’ remote biometric identification systems. Moreover, the list of criminal offences provided in an annex to this Regulation is based on the 32 criminal offences listed in the Council Framework Decision 2002/584/JHA (18), taking into account that some of those offences are, in practice, likely to be more relevant than others, in that the recourse to ‘real-time’ remote biometric identification could, foreseeably, be necessary and proportionate to highly varying degrees for the practical pursuit of the localisation or identification of a perpetrator or suspect of the different criminal offences listed, and having regard to the likely differences in the seriousness, probability and scale of the harm or possible negative consequences. An imminent threat to life or the physical safety of natural persons could also result from a serious disruption of critical infrastructure, as defined in Article 2, point (4) of Directive (EU) 2022/2557 of the European Parliament and of the Council (19), where the disruption or destruction of such critical infrastructure would result in an imminent threat to life or the physical safety of a person, including through serious harm to the provision of basic supplies to the population or to the exercise of the core function of the State. In addition, this Regulation should preserve the ability for law enforcement, border control, immigration or asylum authorities to carry out identity checks in the presence of the person concerned in accordance with the conditions set out in Union and national law for such checks. In particular, law enforcement, border control, immigration or asylum authorities should be able to use information systems, in accordance with Union or national law, to identify persons who, during an identity check, either refuse to be identified or are unable to state or prove their identity, without being required by this Regulation to obtain prior authorisation. This could be, for example, a person involved in a crime who is unwilling, or unable due to an accident or a medical condition, to disclose their identity to law enforcement authorities.
(33)
Ba cheart, dá bhrí sin, toirmeasc a chur ar úsáid na gcóras sin chun críoch fhorfheidhmiú an dlí, ach amháin maidir le cásanna atá liostaithe go cuimsitheach agus sainithe go cúng, cásanna lena bhfuil dianghá le húsáid na gcóras sin chun freastal ar leas substaintiúil maidir leis an bpobal, ina meastar gur mó a dtábhacht ná na rioscaí. Baineann na cásanna sin le cuardach íospartach áirithe coireachta, lena n-áirítear daoine ar iarraidh; bagairtí áirithe ar bheatha nó ar shábháilteacht fhisiciúil daoine nádúrtha nó ionsaí sceimhlitheoireachta; agus logánú nó sainaithint dhéantóirí nó amhrastach na gcionta coiriúla a liostaítear in iarscríbhinn a ghabhann leis an Rialachán seo, i gcás ina bhfuil na cionta coiriúla sin inphionóis sa Bhallstát lena mbaineann le pianbhreith choimeádta nó le hordú coinneála go ceann uastréimhse ceithre bliana ar a laghad agus mar a shainítear iad i ndlí an Bhallstáit sin. An tairseach sin maidir leis an bpianbhreith choimeádta nó ordú coinneála i gcomhréir leis an dlí náisiúnta, rannchuidíonn sí chun a áirithiú gur cheart an cion a bheith tromchúiseach go leor le húdar a thabhairt le húsáid córas cian-sainaitheanta bithmhéadraí ‘fíor-ama’. Thairis sin, tá liosta na gcionta coiriúla dá bhforáiltear in iarscríbhinn a ghabhann leis an Rialachán seo bunaithe ar na 32 chion choiriúla a liostaítear i gCinneadh Réime 2002/584/CGB ón gComhairle (18), á chur san áireamh gur dócha go mbeidh cuid de na cionta sin níos ábhartha ná a chéile, i gcleachtas, sa mhéid is go bhféadfadh sé go mbeadh gá ar leibhéil éagsúla go hintuartha le hiontaoibh cian-sainaitheanta bithmhéadraí ‘fíor-ama’ agus go mbeadh sé comhréireach ar leibhéil éagsúla den chur chun cinn praiticiúil maidir le logánú nó sainaithint duine faoi dhrochamhras nó dhéantóirí na gcionta coiriúla éagsúla a liostaítear agus ag féachaint do na héagsúlachtaí dóchúla maidir le tromchúis, dóchúlacht agus leibhéal na díobhála nó na hiarmhairtí diúltacha a d’fhéadfadh a bheith ann. D’fhéadfadh garbhagairt ar bheatha nó ar shábháilteacht fhisiciúil daoine nádúrtha a bheith mar thoradh freisin ar shuaitheadh tromchúiseach ar bhonneagar criticiúil, mar a shainmhínítear in Airteagal 2, pointe (4) de Threoir (AE) 2022/2557 ó Pharlaimint na hEorpa agus ón gComhairle (19), i gcás ina mbeadh garbhagairt ar bheatha nó ar shábháilteacht fhisiciúil duine mar thoradh ar shuaitheadh nó scriosadh an bhonneagair chriticiúil sin, lena n-áirítear trí dhíobháil thromchúiseach do sholáthar soláthairtí bunúsacha don daonra nó d’fheidhmiú phríomhfheidhm an Stáit. Ina theannta sin, ba cheart go gcaomhnófaí leis an Rialachán seo cumas na n-údarás forfheidhmithe dlí, rialaithe teorann, inimirce nó na n-údarás tearmainn seiceálacha aitheantais a dhéanamh i láthair an duine lena mbaineann i gcomhréir leis na coinníollacha a leagtar amach i ndlí an Aontais agus sa dlí náisiúnta maidir leis na seiceálacha sin. Go háirithe, ba cheart d’údaráis forfheidhmithe dlí, rialaithe teorann, inimirce nó tearmainn a bheith in ann úsáid a bhaint as córais faisnéise, i gcomhréir le dlí an Aontais nó leis an dlí náisiúnta, chun daoine a shainaithint a dhiúltaíonn a bheith sainaitheanta nó nach bhfuil in ann a gcéannacht a shonrú nó a chruthú, gan ceangal a bheith orthu leis an Rialachán seo réamhúdarú a fháil. D’fhéadfadh a bheith i gceist leis sin, mar shampla, duine a raibh baint aige le coir, nó duine nach bhfuil toilteanach, nó nach bhfuil ar a chumas, mar gheall ar thimpiste nó riocht sláinte, a aitheantas a nochtadh d’údaráis forfheidhmithe dlí.
(34)
In order to ensure that those systems are used in a responsible and proportionate manner, it is also important to establish that, in each of those exhaustively listed and narrowly defined situations, certain elements should be taken into account, in particular as regards the nature of the situation giving rise to the request and the consequences of the use for the rights and freedoms of all persons concerned and the safeguards and conditions provided for with the use. In addition, the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement should be deployed only to confirm the specifically targeted individual’s identity and should be limited to what is strictly necessary concerning the period of time, as well as the geographic and personal scope, having regard in particular to the evidence or indications regarding the threats, the victims or perpetrator. The use of the real-time remote biometric identification system in publicly accessible spaces should be authorised only if the relevant law enforcement authority has completed a fundamental rights impact assessment and, unless provided otherwise in this Regulation, has registered the system in the database as set out in this Regulation. The reference database of persons should be appropriate for each use case in each of the situations mentioned above.
(34)
Chun a áirithiú go n-úsáidtear na córais sin ar bhealach freagrach comhréireach, tá sé tábhachtach freisin a shuí gur cheart eilimintí áirithe a chur san áireamh i ngach ceann de na cásanna a liostaítear go cuimsitheach agus a sainítear go cúng, go háirithe a mhéid a bhaineann le nádúr an cháis is cúis leis an iarraidh agus iarmhairtí na húsáide i leith chearta agus shaoirsí na ndaoine uile lena mbaineann agus na coimircí agus coinníollacha dá bhforáiltear leis an úsáid. Thairis sin, maidir le húsáid córas intleachta saorga le haghaidh córais cian-sainaitheanta bithmhéadraí ‘fíor-ama’ i spásanna atá inrochtana don phobal chun críoch fhorfheidhmiú an dlí, níor cheart sin a dhéanamh ach amháin chun céannacht an duine aonair spriocdhírithe a dheimhniú agus ba cheart an úsáid sin a bheith teoranta don mhéid a bhfuil fíorghá leis maidir leis an tréimhse ama, chomh maith leis an raon feidhme geografach agus pearsanta, ag féachaint go háirithe don fhianaise nó do na tásca maidir leis na bagairtí, na híospartaigh nó an déantóir. Níor cheart úsáid an chórais cian-sainaitheanta bithmhéadraí fíor-ama i spásanna atá inrochtana don phobal a údarú ach amháin má tá measúnú tionchair ar chearta bunúsacha curtha i gcrích ag an údarás ábhartha forfheidhmithe dlí agus, mura bhforáiltear a mhalairt sa Rialachán seo, má chláraigh sé an córas sa bhunachar sonraí mar a leagtar amach sa Rialachán seo. Ba cheart bunachar sonraí tagartha na ndaoine a bheith iomchuí lena úsáid i ngach ceann de na cásanna a luaitear thuas.
(35)
Each use of a ‘real-time’ remote biometric identification system in publicly accessible spaces for the purpose of law enforcement should be subject to an express and specific authorisation by a judicial authority or by an independent administrative authority of a Member State whose decision is binding. Such authorisation should, in principle, be obtained prior to the use of the AI system with a view to identifying a person or persons. Exceptions to that rule should be allowed in duly justified situations on grounds of urgency, namely in situations where the need to use the systems concerned is such as to make it effectively and objectively impossible to obtain an authorisation before commencing the use of the AI system. In such situations of urgency, the use of the AI system should be restricted to the absolute minimum necessary and should be subject to appropriate safeguards and conditions, as determined in national law and specified in the context of each individual urgent use case by the law enforcement authority itself. In addition, the law enforcement authority should in such situations request such authorisation while providing the reasons for not having been able to request it earlier, without undue delay and at the latest within 24 hours. If such an authorisation is rejected, the use of real-time biometric identification systems linked to that authorisation should cease with immediate effect and all the data related to such use should be discarded and deleted. Such data includes input data directly acquired by an AI system in the course of the use of such system as well as the results and outputs of the use linked to that authorisation. It should not include input that is legally acquired in accordance with another Union or national law. In any case, no decision producing an adverse legal effect on a person should be taken based solely on the output of the remote biometric identification system.
(35)
Gach úsáid a bhaintear as córas cian-sainaitheanta bithmhéadraí ‘fíor-ama’ i spásanna atá inrochtana don phobal chun críoch fhorfheidhmiú an dlí, ba cheart don úsáid sin a bheith faoi réir údarú sonrach sainráite ó údarás breithiúnach nó ó údarás riaracháin neamhspleách de chuid Ballstáit a bhfuil a chinneadh ceangailteach. Ba cheart, i bprionsabal, an t-údarú sin a fháil sula n-úsáidfear an córas intleachta saorga d’fhonn duine nó daoine a shainaithint. Ba cheart eisceachtaí ar an riail sin a cheadú i gcásanna a bhfuil údar cuí leo ar fhoras práinne, eadhon, cásanna ina bhfuil sé dodhéanta go héifeachtúil agus go hoibiachtúil, de thairbhe úsáid an chórais intleachta saorga, údarú a fháil sula n-úsáidfear an córas i dtrácht. Sna cásanna práinneacha sin, ba cheart úsáid an chórais intleachta saorga a theorannú don íosmhéid a bhfuil géarghá leis agus í a bheith faoi réir na gcoimircí agus na gcoinníollacha iomchuí, de réir mar a chinntear leis an dlí náisiúnta agus a shonraítear i gcomhthéacs gach cáis phráinnigh ar leith ag an údarás forfheidhmithe dlí féin. Ina theannta sin, ba cheart don údarás forfheidhmithe dlí an t-údarú sin a iarraidh sna cásanna sin agus na cúiseanna a thabhairt nach raibh sé in ann é a iarraidh níos luaithe, gan moill mhíchuí agus laistigh de 24 uair an chloig ar a dhéanaí. Má dhiúltaítear an t-údarú sin, ba cheart deireadh a chur, láithreach, le húsáid córas sainaitheanta bithmhéadraí fíor-ama atá nasctha leis an údarú sin agus ba cheart na sonraí uile a bhaineann leis an úsáid sin a chur i leataobh agus a scriosadh. Áirítear leis na sonraí sin sonraí ionchuir arna bhfáil go díreach ag córas intleachta saorga agus an córas sin á úsáid chomh maith le torthaí agus aschuir na húsáide atá nasctha leis an údarú sin. Níor cheart ionchur a fhaightear go dlíthiúil i gcomhréir le dlí eile de chuid an Aontais nó le dlí náisiúnta eile a áireamh ann. I gcás ar bith, níor cheart aon chinneadh a mbeidh éifeacht dhíobhálach dhlíthiúil aige ar dhuine a dhéanamh bunaithe ar aschur an chórais cian-sainaitheanta bithmhéadraí amháin.
(36)
In order to carry out their tasks in accordance with the requirements set out in this Regulation as well as in national rules, the relevant market surveillance authority and the national data protection authority should be notified of each use of the real-time biometric identification system. Market surveillance authorities and the national data protection authorities that have been notified should submit to the Commission an annual report on the use of real-time biometric identification systems.
(36)
Chun a gcúraimí a dhéanamh i gcomhréir leis na ceanglais a leagtar amach sa Rialachán seo agus sna rialacha náisiúnta, ba cheart fógra a thabhairt don údarás ábhartha faireachais margaidh agus don údarás náisiúnta cosanta sonraí maidir le gach úsáid a bhaintear as an gcóras sainaitheanta bithmhéadraí fíor-ama. Ba cheart do na húdaráis faireachais margaidh agus do na húdaráis náisiúnta cosanta sonraí dá dtugtar fógra tuarascáil bhliantúil maidir le húsáid córas sainaitheanta bithmhéadraí fíor-ama a chur faoi bhráid an Choimisiúin.
(37)
Furthermore, it is appropriate to provide, within the exhaustive framework set by this Regulation, that such use in the territory of a Member State in accordance with this Regulation should only be possible where and in as far as the Member State concerned has decided to expressly provide for the possibility to authorise such use in its detailed rules of national law. Consequently, Member States remain free under this Regulation not to provide for such a possibility at all or to provide for such a possibility only in respect of some of the objectives capable of justifying authorised use identified in this Regulation. Such national rules should be notified to the Commission within 30 days of their adoption.
(37)
Thairis sin, is iomchuí a fhoráil, laistigh de chreat uileghabhálach an Rialacháin seo, nár cheart an úsáid sin ar chríoch Bhallstáit i gcomhréir leis an Rialachán seo a bheith indéanta ach amháin i gcás inar chinn an Ballstát lena mbaineann, agus sa mhéid gur chinn an Ballstát lena mbaineann, foráil a dhéanamh go sainráite go bhféadfaí an úsáid sin a údarú ina rialacha mionsonraithe den dlí náisiúnta. Dá réir sin, beidh sé de shaoirse ag na Ballstáit faoin Rialachán seo gan foráil a dhéanamh in aon chor maidir le féidearthacht mar sin nó gan foráil a dhéanamh maidir le féidearthacht mar sin ach amháin i leith cuid de na cuspóirí a bhfuil sé de chumas acu údar a thabhairt le húsáid údaraithe a shainaithnítear sa Rialachán seo. Ba cheart fógra a thabhairt don Choimisiún faoi na rialacha náisiúnta sin laistigh de 30 lá tar éis a nglactha.
(38)
The use of AI systems for real-time remote biometric identification of natural persons in publicly accessible spaces for the purpose of law enforcement necessarily involves the processing of biometric data. The rules of this Regulation that prohibit, subject to certain exceptions, such use, which are based on Article 16 TFEU, should apply as lex specialis in respect of the rules on the processing of biometric data contained in Article 10 of Directive (EU) 2016/680, thus regulating such use and the processing of biometric data involved in an exhaustive manner. Therefore, such use and processing should be possible only in as far as it is compatible with the framework set by this Regulation, without there being scope, outside that framework, for the competent authorities, where they act for the purpose of law enforcement, to use such systems and process such data in connection thereto on the grounds listed in Article 10 of Directive (EU) 2016/680. In that context, this Regulation is not intended to provide the legal basis for the processing of personal data under Article 8 of Directive (EU) 2016/680. However, the use of real-time remote biometric identification systems in publicly accessible spaces for purposes other than law enforcement, including by competent authorities, should not be covered by the specific framework regarding such use for the purpose of law enforcement set by this Regulation. Such use for purposes other than law enforcement should therefore not be subject to the requirement of an authorisation under this Regulation and the applicable detailed rules of national law that may give effect to that authorisation.
(38)
Is gá sonraí bithmhéadracha a phróiseáil chun córas intleachta saorga a úsáid le haghaidh cian-sainaithint bhithmhéadrach fíor-ama daoine nádúrtha i spásanna atá inrochtana don phobal chun críoch fhorfheidhmiú an dlí. Na rialacha sa Rialachán seo maidir le toirmeasc a chur ar an úsáid sin, faoi réir eisceachtaí áirithe, atá bunaithe ar Airteagal 16 CFAE, ba cheart feidhm a bheith acu mar lex specialis i leith na rialacha atá in Airteagal 10 de Threoir (AE) 2016/680 maidir le próiseáil sonraí bithmhéadracha, agus dá bhrí sin an úsáid sin agus an phróiseáil sonraí bithmhéadracha lena mbaineann a rialáil go cuimsitheach. Dá bhrí sin, níor cheart an úsáid agus an phróiseáil sin a bheith indéanta ach a mhéid atá siad comhoiriúnach leis an gcreat a leagtar amach leis an Rialachán seo, gan féidearthacht a bheith ann, lasmuigh den chreat sin, do na húdaráis inniúla, agus iad ag gníomhú chun críche fhorfheidhmiú an dlí, na córais sin a úsáid agus na sonraí sin lena mbaineann a phróiseáil ar na forais a liostaítear in Airteagal 10 de Threoir (AE) 2016/680. Sa chomhthéacs sin, níl an Rialachán seo beartaithe a bheith ina bhunús dlí leis an bpróiseáil sonraí pearsanta faoi Airteagal 8 de Threoir (AE) 2016/680. Maidir le húsáid córais cian-sainaitheanta bithmhéadraí fíor-ama i spásanna atá inrochtana don phobal chun críoch eile cé is moite d’fhorfheidhmiú an dlí, lena n-áirítear úsáid ag údaráis inniúla, níor cheart í a chumhdach, áfach, leis an gcreat sonrach maidir leis an úsáid sin chun críoch fhorfheidhmiú an dlí a shocraítear leis an Rialachán seo. An úsáid sin chun críoch eile cé is moite d’fhorfheidhmiú an dlí, níor cheart, dá bhrí sin, í a bheith faoi réir an cheanglais údaraithe faoin Rialachán seo ná na rialacha mionsonraithe is infheidhme sa dlí náisiúnta lena dtabharfaí éifeacht don údarú sin.
(39)
Any processing of biometric data and other personal data involved in the use of AI systems for biometric identification, other than in connection to the use of real-time remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement as regulated by this Regulation, should continue to comply with all requirements resulting from Article 10 of Directive (EU) 2016/680. For purposes other than law enforcement, Article 9(1) of Regulation (EU) 2016/679 and Article 10(1) of Regulation (EU) 2018/1725 prohibit the processing of biometric data subject to limited exceptions as provided in those Articles. In the application of Article 9(1) of Regulation (EU) 2016/679, the use of remote biometric identification for purposes other than law enforcement has already been subject to prohibition decisions by national data protection authorities.
(39)
Maidir le próiseáil sonraí bithmhéadracha agus sonraí pearsanta eile a bhaineann le húsáid córas intleachta saorga le haghaidh sainaithint bhithmhéadrach, ach amháin i ndáil le húsáid córas cian-sainaitheanta bithmhéadraí fíor-ama i spásanna atá inrochtana don phobal chun críoch fhorfheidhmiú an dlí mar a rialáiltear leis an Rialachán seo, ba cheart don phróiseáil sin leanúint de bheith i gcomhréir leis na ceanglais atá mar thoradh ar Airteagal 10 de Threoir (AE) 2016/680. Chun críocha seachas forfheidhmiú an dlí, toirmisctear le hAirteagal 9(1) de Rialachán (AE) 2016/679 agus le hAirteagal 10(1) de Rialachán (AE) 2018/1725 próiseáil sonraí bithmhéadracha faoi réir eisceachtaí teoranta dá bhforáiltear sna hAirteagail sin. Agus Airteagal 9(1) de Rialachán (AE) 2016/679 á chur i bhfeidhm, bhí úsáid cian-sainaitheanta bithmhéadraí chun críocha seachas forfheidhmiú an dlí faoi réir cinntí toirmisc cheana féin arna ndéanamh ag na húdaráis náisiúnta cosanta sonraí.
(40)
In accordance with Article 6a of Protocol No 21 on the position of the United Kingdom and Ireland in respect of the area of freedom, security and justice, as annexed to the TEU and to the TFEU, Ireland is not bound by the rules laid down in Article 5(1), first subparagraph, point (g), to the extent it applies to the use of biometric categorisation systems for activities in the field of police cooperation and judicial cooperation in criminal matters, Article 5(1), first subparagraph, point (d), to the extent it applies to the use of AI systems covered by that provision, Article 5(1), first subparagraph, point (h), Article 5(2) to (6) and Article 26(10) of this Regulation adopted on the basis of Article 16 TFEU which relate to the processing of personal data by the Member States when carrying out activities falling within the scope of Chapter 4 or Chapter 5 of Title V of Part Three of the TFEU, where Ireland is not bound by the rules governing the forms of judicial cooperation in criminal matters or police cooperation which require compliance with the provisions laid down on the basis of Article 16 TFEU.
(40)
I gcomhréir le hAirteagal 6a de Phrótacal Uimh. 21 maidir le seasamh na Ríochta Aontaithe agus na hÉireann i dtaca leis an limistéar saoirse, slándála agus ceartais, atá i gceangal leis an gConradh ar an Aontas Eorpach agus leis an gConradh ar Fheidhmiú an Aontais Eorpaigh, na rialacha a leagtar amach in Airteagal 5(1), an chéad fhomhír, pointe (g), a mhéid a bhfuil feidhm aige maidir le húsáid córas catagóirithe bhithmhéadraigh le haghaidh gníomhaíochtaí i réimse an chomhair póilíneachta agus an chomhair bhreithiúnaigh in ábhair choiriúla, in Airteagal 5(1), an chéad fhomhír, pointe (d), a mhéid a bhfuil feidhm aige maidir le húsáid na gcóras intleachta saorga arna gcumhdach leis an bhforáil sin, Airteagal 5(1), an chéad fhomhír, pointe (h), Airteagal 5(2) go (6) agus Airteagal 26(10) den Rialachán seo arna ghlacadh ar bhonn Airteagal 16 CFAE a bhaineann le sonraí pearsanta a bheith á bpróiseáil ag na Ballstáit agus gníomhaíochtaí á gcur i gcrích acu a thagann faoi raon feidhme Chaibidil 4 nó Chaibidil 5 de Theideal V de Chuid a Trí de CFAE, níl siad ina gceangal ar Éirinn nuair nach mbeidh na rialacha ina gceangal uirthi lena rialaítear cineálacha an chomhair bhreithiúnaigh in ábhair choiriúla nó cineálacha an chomhair phóilíneachta a cheanglaíonn na forálacha a leagtar síos ar bhonn Airteagal 16 CFAE a chomhlíonadh.
(41)
In accordance with Articles 2 and 2a of Protocol No 22 on the position of Denmark, annexed to the TEU and to the TFEU, Denmark is not bound by rules laid down in Article 5(1), first subparagraph, point (g), to the extent it applies to the use of biometric categorisation systems for activities in the field of police cooperation and judicial cooperation in criminal matters, Article 5(1), first subparagraph, point (d), to the extent it applies to the use of AI systems covered by that provision, Article 5(1), first subparagraph, point (h), (2) to (6) and Article 26(10) of this Regulation adopted on the basis of Article 16 TFEU, or subject to their application, which relate to the processing of personal data by the Member States when carrying out activities falling within the scope of Chapter 4 or Chapter 5 of Title V of Part Three of the TFEU.
(41)
I gcomhréir le hAirteagal 2 agus 2a de Phrótacal 22 maidir le seasamh na Danmhairge, atá i gceangal leis an gConradh ar an Aontas Eorpach agus leis an gConradh ar Fheidhmiú an Aontais Eorpaigh, na rialacha a leagtar síos in Airteagal 5(1), an chéad fhomhír, pointe (g), a mhéid a bhfuil feidhm aige maidir le húsáid córas catagóirithe bhithmhéadraigh le haghaidh gníomhaíochtaí i réimse an chomhair póilíneachta agus an chomhair bhreithiúnaigh in ábhair choiriúla, in Airteagal 5(1), an chéad fhomhír, pointe (d), a mhéid a bhfuil feidhm aige maidir le húsáid na gcóras intleachta saorga arna gcumhdach leis an bhforáil sin, Airteagal 5(1), an chéad fhomhír, pointe (h), (2) go (6) agus Airteagal 26(10) den Rialachán seo arna nglacadh ar bhonn Airteagal 16 CFAE a bhaineann le sonraí pearsanta a bheith á bpróiseáil ag na Ballstáit agus gníomhaíochtaí á gcur i gcrích acu a thagann faoi raon feidhme Chaibidil 4 nó Chaibidil 5 de Theideal V de Chuid a Trí de CFAE, níl siad ina gceangal ar an Danmhairg.
(42)
In line with the presumption of innocence, natural persons in the Union should always be judged on their actual behaviour. Natural persons should never be judged on AI-predicted behaviour based solely on their profiling, personality traits or characteristics, such as nationality, place of birth, place of residence, number of children, level of debt or type of car, without a reasonable suspicion of that person being involved in a criminal activity based on objective verifiable facts and without human assessment thereof. Therefore, risk assessments carried out with regard to natural persons in order to assess the likelihood of their offending or to predict the occurrence of an actual or potential criminal offence based solely on profiling them or on assessing their personality traits and characteristics should be prohibited. In any case, that prohibition does not refer to or touch upon risk analytics that are not based on the profiling of individuals or on the personality traits and characteristics of individuals, such as AI systems using risk analytics to assess the likelihood of financial fraud by undertakings on the basis of suspicious transactions or risk analytic tools to predict the likelihood of the localisation of narcotics or illicit goods by customs authorities, for example on the basis of known trafficking routes.
(42)
I gcomhréir le toimhde na neamhchiontachta, is de réir a n-iompraíochta iarbhír ba cheart daoine nádúrtha san Aontas a mheas i gcónaí. Níor cheart daoine nádúrtha a mheas riamh de réir iompraíocht atá tuartha ag an intleacht shaorga bunaithe ar phróifíliú, ar a dtréithe nó ar a saintréithe pearsantachta, amhail náisiúntacht, áit bhreithe, áit chónaithe, líon leanaí, leibhéal fiachais nó cineál gluaisteáin, agus orthu sin amháin, gan amhras réasúnach go bhfuil an duine sin rannpháirteach i ngníomhaíocht choiriúil bunaithe ar fhíorais infhíoraithe oibiachtúla agus gan measúnú daonna a dhéanamh ar na nithe sin. Dá bhrí sin, ba cheart measúnuithe riosca a thoirmeasc a dhéantar maidir le daoine nádúrtha i dtaobh an dóchúlacht atá ann go ndéanfaidís cion a mheas nó lena thuar go ndéanfaí cion coiriúil iarbhír nó féideartha, bunaithe ar phróifíliú nó ar mheasúnú ar a dtréithe agus a saintréithe pearsantachta, agus orthu sin amháin. I gcás ar bith, ní thagraíonn an toirmeasc sin d’anailísíocht riosca ná ní bhaineann sé le hanailísíocht riosca nach bhfuil bunaithe ar phróifíliú daoine aonair ná ar thréithe ná saintréithe pearsantachta daoine aonair, amhail córais intleachta saorga a úsáideann anailísíocht riosca chun an dóchúlacht go ndéanfadh gnóthais calaois airgeadais a mheas bunaithe ar idirbhearta amhrasacha nó a úsáideann uirlisí anailíseacha riosca chun an dóchúlacht go bhfuil támhshuanaigh nó earraí aindleathacha á logánú ag údaráis chustaim a thuar, mar shampla bunaithe ar bhealaí aitheanta gáinneála.
(43)
The placing on the market, the putting into service for that specific purpose, or the use of AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage, should be prohibited because that practice adds to the feeling of mass surveillance and can lead to gross violations of fundamental rights, including the right to privacy.
(43)
Ba cheart toirmeasc a chur ar chórais intleachta saorga a chur ar an margadh, a chur i mbun seirbhíse chun na críche sonraí sin, nó a úsáid, ar córais iad lena gcruthaítear nó lena leathnaítear bunachair sonraí aghaidh-aitheanta trí íomhánna den aghaidh a chnuaschóipeáil gan spriocdhíriú ón idirlíon nó ó thaifid TCI, toisc go gcuireann an cleachtas sin le braistint ollfhaireachais agus go bhféadfadh mórsháruithe ar chearta bunúsacha, lena n-áirítear an ceart chun príobháideachais, a bheith mar thoradh air.
(44)
There are serious concerns about the scientific basis of AI systems aiming to identify or infer emotions, particularly as expressions of emotions vary considerably across cultures and situations, and even within a single individual. Among the key shortcomings of such systems are the limited reliability, the lack of specificity and the limited generalisability. Therefore, AI systems identifying or inferring emotions or intentions of natural persons on the basis of their biometric data may lead to discriminatory outcomes and can be intrusive to the rights and freedoms of the concerned persons. Considering the imbalance of power in the context of work or education, combined with the intrusive nature of these systems, such systems could lead to detrimental or unfavourable treatment of certain natural persons or whole groups thereof. Therefore, the placing on the market, the putting into service, or the use of AI systems intended to be used to detect the emotional state of individuals in situations related to the workplace and education should be prohibited. That prohibition should not cover AI systems placed on the market strictly for medical or safety reasons, such as systems intended for therapeutic use.
(44)
Tá imní mhór ann faoi bhunús eolaíoch na gcóras intleachta saorga arb é is aidhm dóibh mothúcháin a shainaithint nó a chur in iúl, go háirithe ós rud é go bhfuil éagsúlachtaí móra ann i measc cultúir agus cásanna, agus fiú laistigh de dhuine aonair, maidir le mothúcháin a léiriú. I measc phríomheasnaimh na gcóras sin tá an iontaofacht theoranta, an easpa sainiúlachta agus an inghinearálaitheacht theoranta. Dá bhrí sin, maidir le córais intleachta saorga lena ndéantar mothúcháin nó rúin daoine nádúrtha a shainaithint nó a bhaint as a gcuid sonraí bithmhéadracha, d’fhéadfadh torthaí idirdhealaitheacha eascairt astu agus d’fhéadfaidís cur isteach ar chearta agus ar shaoirsí na ndaoine lena mbaineann. I bhfianaise éagothroime na cumhachta i gcomhthéacs na hoibre nó an oideachais, in éineacht le cineál ionrach na gcóras sin, d’fhéadfadh sé go gcaithfí go díobhálach nó go neamhfhabhrach le daoine nádúrtha áirithe nó le grúpaí iomlána díobh mar thoradh ar na córais sin. Dá bhrí sin, ba cheart toirmeasc a chur ar chórais intleachta saorga a chur ar an margadh, a chur i mbun seirbhíse, nó a úsáid, ar córais iad atá beartaithe lena n-úsáid chun staid mhothúchánach daoine aonair a bhrath i gcásanna a bhaineann leis an ionad oibre agus leis an oideachas. Níor cheart córais intleachta saorga a chuirtear ar an margadh ar chúiseanna leighis nó sábháilteachta, agus ar na cúiseanna sin amháin, a chumhdach leis an toirmeasc sin, amhail córais atá beartaithe le haghaidh úsáid theiripeach.
(45)
Practices that are prohibited by Union law, including data protection law, non-discrimination law, consumer protection law, and competition law, should not be affected by this Regulation.
(45)
Leis an Rialachán seo níor cheart difear a dhéanamh do chleachtais a thoirmisctear le dlí an Aontais, lena n-áirítear an dlí cosanta sonraí, an dlí neamh-idirdhealaitheach, an dlí maidir le cosaint tomhaltóirí agus dlí na hiomaíochta.
(46)
High-risk AI systems should only be placed on the Union market, put into service or used if they comply with certain mandatory requirements. Those requirements should ensure that high-risk AI systems available in the Union or whose output is otherwise used in the Union do not pose unacceptable risks to important Union public interests as recognised and protected by Union law. On the basis of the New Legislative Framework, as clarified in the Commission notice ‘The “Blue Guide” on the implementation of EU product rules 2022’ (20), the general rule is that more than one legal act of Union harmonisation legislation, such as Regulations (EU) 2017/745 (21) and (EU) 2017/746 (22) of the European Parliament and of the Council or Directive 2006/42/EC of the European Parliament and of the Council (23), may be applicable to one product, since the making available or putting into service can take place only when the product complies with all applicable Union harmonisation legislation. To ensure consistency and avoid unnecessary administrative burdens or costs, providers of a product that contains one or more high-risk AI systems, to which the requirements of this Regulation and of the Union harmonisation legislation listed in an annex to this Regulation apply, should have flexibility with regard to operational decisions on how to ensure compliance of a product that contains one or more AI systems with all applicable requirements of the Union harmonisation legislation in an optimal manner. AI systems identified as high-risk should be limited to those that have a significant harmful impact on the health, safety and fundamental rights of persons in the Union and such limitation should minimise any potential restriction to international trade.
(46)
Níor cheart córais intleachta saorga ardriosca a chur ar mhargadh an Aontais, a chur i mbun seirbhíse ná a úsáid ach amháin má chomhlíonann siad ceanglais shainordaitheacha áirithe. Leis na ceanglais sin, ba cheart a áirithiú, maidir le córais intleachta saorga ardriosca atá ar fáil san Aontas nó a n-úsáidtear a n-aschur san Aontas ar shlite eile, nach mbeadh rioscaí neamh-inghlactha ag baint leo do leasanna tábhachtacha poiblí an Aontais mar atá aitheanta agus faoi chosaint ag dlí an Aontais. Ar bhonn an Chreata Reachtaigh Nua, mar a soiléiríodh san fhógra ón gCoimisiún ‘An “Treoir Ghorm” maidir le cur chun feidhme rialacha an Aontais maidir le táirgí 2022’ (20), is é an riail ghinearálta go bhféadfadh feidhm a bheith ag níos mó ná gníomh dlí amháin de reachtaíocht chomhchuibhithe an Aontais, amhail Rialacháin (AE) 2017/745 (21) agus (AE) 2017/746 (22) ó Pharlaimint na hEorpa agus ón gComhairle nó Treoir 2006/42/CE ó Pharlaimint na hEorpa agus ón gComhairle (23), maidir le táirge amháin, ós rud é nach féidir an cur ar fáil nó cur i mbun seirbhíse a dhéanamh ach amháin nuair a chomhlíonann an táirge an reachtaíocht chomhchuibhithe uile is infheidhme. Chun comhsheasmhacht a áirithiú agus chun ualaí riaracháin neamhriachtanacha nó costais neamhriachtanacha a sheachaint, ba cheart do sholáthraithe táirge ina bhfuil córais intleachta saorga ardriosca amháin nó níos mó, a bhfuil feidhm ag ceanglais an Rialacháin seo agus ag ceanglais reachtaíocht chomhchuibhithe an Aontais a liostaítear in iarscríbhinn a ghabhann leis an Rialachán seo maidir leo, ba cheart solúbthacht a bheith acu maidir le cinntí oibríochtúla maidir le conas a áirithiú go gcomhlíonfaidh táirge ina bhfuil córas intleachta saorga amháin nó níos mó ceanglais uile is infheidhme reachtaíocht chomhchuibhithe an Aontais ar bhealach optamach. Ba cheart córais intleachta saorga a shainaithnítear mar chórais ardriosca a theorannú dóibh siúd a bhfuil tionchar suntasach díobhálach acu ar shláinte, ar shábháilteacht agus ar chearta bunúsacha daoine san Aontas agus, ba cheart, leis an teorannú sin, aon srian a d’fhéadfadh a bheith ann ar thrádáil idirnáisiúnta a íoslaghdú.
(47)
AI systems could have an adverse impact on the health and safety of persons, in particular when such systems operate as safety components of products. Consistent with the objectives of Union harmonisation legislation to facilitate the free movement of products in the internal market and to ensure that only safe and otherwise compliant products find their way into the market, it is important that the safety risks that may be generated by a product as a whole due to its digital components, including AI systems, are duly prevented and mitigated. For instance, increasingly autonomous robots, whether in the context of manufacturing or personal assistance and care, should be able to safely operate and perform their functions in complex environments. Similarly, in the health sector where the stakes for life and health are particularly high, increasingly sophisticated diagnostics systems and systems supporting human decisions should be reliable and accurate.
(47)
D’fhéadfadh sé go mbeadh tionchar neamhfhabhrach ag córais intleachta saorga ar shláinte agus sábháilteacht daoine, go háirithe i gcás ina n-oibríonn na córais sin mar chomhpháirteanna sábháilteachta táirgí. I gcomhréir le cuspóirí reachtaíocht chomhchuibhithe an Aontais, chun saorghluaiseacht táirgí sa mhargadh inmheánach a éascú agus chun a áirithiú nach gcuirfear ach táirgí atá sábháilte agus comhlíontach ar bhealach eile ar an margadh, tá sé tábhachtach go ndéanfar na rioscaí sábháilteachta a sheachaint nó a mhaolú go hiomchuí, ar rioscaí iad a d’fhéadfadh a bheith ann mar gheall ar tháirge ina iomláine de thoradh a chomhpháirteanna digiteacha, lena n-áirítear córais intleachta saorga. Cuir i gcás, róbait uathrialaitheacha a bhfuil a n-uathriail ag dul i méid de réir a chéile, cé acu i gcomhthéacs monaraíochta nó cúnaimh phearsanta agus cúraim atá sé, ba cheart iad a bheith in ann oibriú agus a bhfeidhmeanna a dhéanamh go sábháilte i dtimpeallachtaí casta. Ar an gcuma chéanna, san earnáil sláinte, áit a bhfuil go leor i ngeall léi maidir le beatha agus sláinte, ba cheart córais diagnóisice agus córais a thacaíonn le cinntí daonna, ar córais iad atá ag éirí níos sofaisticiúla, a bheith iontaofa agus cruinn.
(48)
The extent of the adverse impact caused by the AI system on the fundamental rights protected by the Charter is of particular relevance when classifying an AI system as high risk. Those rights include the right to human dignity, respect for private and family life, protection of personal data, freedom of expression and information, freedom of assembly and of association, the right to non-discrimination, the right to education, consumer protection, workers’ rights, the rights of persons with disabilities, gender equality, intellectual property rights, the right to an effective remedy and to a fair trial, the right of defence and the presumption of innocence, and the right to good administration. In addition to those rights, it is important to highlight the fact that children have specific rights as enshrined in Article 24 of the Charter and in the United Nations Convention on the Rights of the Child, further developed in the UNCRC General Comment No 25 as regards the digital environment, both of which require consideration of the children’s vulnerabilities and provision of such protection and care as necessary for their well-being. The fundamental right to a high level of environmental protection enshrined in the Charter and implemented in Union policies should also be considered when assessing the severity of the harm that an AI system can cause, including in relation to the health and safety of persons.
(48)
Tá ábharthacht ar leith ag baint le fairsinge thionchar díobhálach an chórais intleachta saorga ar na cearta bunúsacha atá faoi chosaint ag an gCairt, go háirithe agus córas intleachta saorga á aicmiú mar chóras ardriosca. Áirítear ar na cearta sin an ceart chun dínit an duine, maidir le meas ar an saol príobháideach agus ar shaol an teaghlaigh, ar chosaint sonraí pearsanta, chun tuairimí a nochtadh agus faisnéis a fháil, chun saoirse comhthionóil agus comhlachais, an ceart chun neamh-idirdhealaithe, an ceart chun oideachais, cearta cosanta tomhaltóirí, cearta oibrithe, cearta daoine faoi mhíchumas, comhionannas inscne, cearta maoine intleachtúla, an ceart chun leighis éifeachtaigh agus chun trialach córa, ceart na cosanta agus toimhde na neamhchiontachta, agus an ceart chun dea-riaracháin. I dteannta na gceart sin, tá sé tábhachtach a chur i dtábhacht go bhfuil cearta sonracha ag leanaí mar a chumhdaítear in Airteagal 24 den Chairt agus i gCoinbhinsiún na Náisiún Aontaithe um Chearta an Linbh, a forbraíodh a thuilleadh i mBarúil Ghinearálta Uimh. 25 ó UNCRC a mhéid a bhaineann leis an timpeallacht dhigiteach, ós rud é go gceanglaítear leis an dá cheann breithniú ar leochaileachtaí na leanaí agus an chosaint agus an cúram is gá a sholáthar ar mhaithe lena ndea-bhail. An ceart chun ardleibhéil cosanta don chomhshaol a chumhdaítear sa Chairt agus a chuirtear chun feidhme i mbeartais an Aontais, ba cheart é a chur san áireamh freisin agus measúnú á dhéanamh ar dhéine na díobhála ar féidir le córas intleachta saorga a dhéanamh, lena n-áirítear i ndáil le sláinte agus sábháilteacht daoine.
(49)
As regards high-risk AI systems that are safety components of products or systems, or which are themselves products or systems falling within the scope of Regulation (EC) No 300/2008 of the European Parliament and of the Council (24), Regulation (EU) No 167/2013 of the European Parliament and of the Council (25), Regulation (EU) No 168/2013 of the European Parliament and of the Council (26), Directive 2014/90/EU of the European Parliament and of the Council (27), Directive (EU) 2016/797 of the European Parliament and of the Council (28), Regulation (EU) 2018/858 of the European Parliament and of the Council (29), Regulation (EU) 2018/1139 of the European Parliament and of the Council (30), and Regulation (EU) 2019/2144 of the European Parliament and of the Council (31), it is appropriate to amend those acts to ensure that the Commission takes into account, on the basis of the technical and regulatory specificities of each sector, and without interfering with existing governance, conformity assessment and enforcement mechanisms and authorities established therein, the mandatory requirements for high-risk AI systems laid down in this Regulation when adopting any relevant delegated or implementing acts on the basis of those acts.
(49)
A mhéid a bhaineann le córais intleachta saorga ardriosca is comhpháirteanna sábháilteachta táirgí nó córas iad, nó is táirgí nó córais iad féin, a thagann faoi raon feidhme Rialachán (CE) Uimh. 300/2008 ó Pharlaimint na hEorpa agus ón gComhairle (24), Rialachán (AE) Uimh. 167/2013 ó Pharlaimint na hEorpa agus ón gComhairle (25), Rialachán (AE) Uimh. 168/2013 ó Pharlaimint na hEorpa agus ón gComhairle (26), Treoir 2014/90/AE ó Pharlaimint na hEorpa agus ón gComhairle (27), Treoir (AE) 2016/797 ó Pharlaimint na hEorpa agus ón gComhairle (28), Rialachán (AE) 2018/858 ó Pharlaimint na hEorpa agus ón gComhairle (29), Rialachán (AE) 2018/1139 ó Pharlaimint na hEorpa agus ón gComhairle (30), agus Rialachán (AE) 2019/2144 ó Pharlaimint na hEorpa agus ón gComhairle (31), is iomchuí na gníomhartha sin a leasú chun a áirithiú, ar bhonn sainiúlachtaí teicniúla agus rialála gach earnála, agus gan cur isteach ar shásraí rialachais, measúnaithe comhréireachta agus forfheidhmithe atá ann cheana agus na húdaráis arna mbunú iontu, go gcuirfidh an Coimisiún san áireamh na ceanglais shainordaitheacha le haghaidh córais intleachta saorga ardriosca a leagtar síos sa Rialachán seo agus aon ghníomh ábhartha tarmligthe nó cur chun feidhme á ghlacadh aige amach anseo ar bhonn na ngníomhartha sin.
(50)
As regards AI systems that are safety components of products, or which are themselves products, falling within the scope of certain Union harmonisation legislation listed in an annex to this Regulation, it is appropriate to classify them as high-risk under this Regulation if the product concerned undergoes the conformity assessment procedure with a third-party conformity assessment body pursuant to that relevant Union harmonisation legislation. In particular, such products are machinery, toys, lifts, equipment and protective systems intended for use in potentially explosive atmospheres, radio equipment, pressure equipment, recreational craft equipment, cableway installations, appliances burning gaseous fuels, medical devices, in vitro diagnostic medical devices, automotive and aviation.
(50)
Maidir le córais intleachta saorga ardriosca is comhpháirteanna sábháilteachta táirgí nó córas iad, nó is táirgí nó córais iad féin, a thagann faoi raon feidhme reachtaíocht chomhchuibhithe áirithe de chuid an Aontais a liostaítear in iarscríbhinn a ghabhann leis an Rialachán seo, is iomchuí iad a aicmiú mar chórais ardriosca faoin Rialachán seo má chuirtear an nós imeachta um measúnú comhréireachta i bhfeidhm ar an táirge lena mbaineann le comhlacht tríú páirtí um measúnú comhréireachta de bhun na reachtaíochta comhchuibhithe ábhartha sin de chuid an Aontais. Go sonrach, is éard atá i gceist leis na táirgí sin innealra, bréagáin, ardaitheoirí, trealamh agus córais chosanta atá ceaptha lena n-úsáid in atmaisféir a d’fhéadfadh a bheith pléascach, trealamh raidió, brú-threalamh, trealamh le haghaidh árthaí áineasa, suiteálacha cábla-bhealaigh, fearais a dhónn breoslaí gásacha, feistí leighis agus feistí leighis diagnóiseacha in vitro, mótarfheithiclí agus eitlíocht.
(51)
The classification of an AI system as high-risk pursuant to this Regulation should not necessarily mean that the product whose safety component is the AI system, or the AI system itself as a product, is considered to be high-risk under the criteria established in the relevant Union harmonisation legislation that applies to the product. This is, in particular, the case for Regulations (EU) 2017/745 and (EU) 2017/746, where a third-party conformity assessment is provided for medium-risk and high-risk products.
(51)
Maidir le haicmiú córas intleachta saorga mar chóras ardriosca de bhun an Rialacháin seo, níor cheart gá a bheith ann go gciallódh sin go measfar an táirge a bhfuil an córas intleachta saorga ann mar chomhpháirt sábháilteachta, nó an córas intleachta saorga féin mar tháirge, a bheith ina tháirge ardriosca faoi na critéir a bhunaítear sa reachtaíocht chomhchuibhithe ábhartha de chuid an Aontais a bhfuil feidhm aici maidir leis an táirge. Is amhlaidh atá, go háirithe, i gcás Rialacháin (AE) 2017/745 agus (AE) 2017/746, ina ndéantar foráil do mheasúnú comhréireachta tríú páirtí le haghaidh táirgí meánriosca agus ardriosca.
(52)
As regards stand-alone AI systems, namely high-risk AI systems other than those that are safety components of products, or that are themselves products, it is appropriate to classify them as high-risk if, in light of their intended purpose, they pose a high risk of harm to the health and safety or the fundamental rights of persons, taking into account both the severity of the possible harm and its probability of occurrence and they are used in a number of specifically pre-defined areas specified in this Regulation. The identification of those systems is based on the same methodology and criteria envisaged also for any future amendments of the list of high-risk AI systems that the Commission should be empowered to adopt, via delegated acts, to take into account the rapid pace of technological development, as well as the potential changes in the use of AI systems.
(52)
A mhéid a bhaineann le córais intleachta saorga neamhspleácha, eadhon córais intleachta saorga ardriosca cé is moite de na córais is comhpháirteanna sábháilteachta táirgí iad, nó is táirgí iad féin, is iomchuí iad a aicmiú mar chórais ardriosca más rud é, i bhfianaise na críche atá beartaithe dóibh, go bhfuil ardriosca díobhála ag baint leo do shláinte, sábháilteacht nó cearta bunúsacha daoine, agus déine na díobhála féideartha agus an dóchúlacht go dtarlódh an díobháil á gcur san áireamh agus más rud é go n-úsáidtear iad i roinnt limistéir réamhshainithe shonracha a shonraítear sa Rialachán seo. Tá sainaithint na gcóras sin bunaithe ar an modheolaíocht chéanna agus ar na critéir chéanna atá beartaithe freisin maidir le haon leasú a dhéanfar amach anseo ar liosta na gcóras intleachta saorga ardriosca ar cheart an Coimisiún a chumhachtú chun iad a ghlacadh, trí bhíthin gníomhartha tarmligthe, chun mearluas na forbartha teicneolaíche a chur san áireamh, chomh maith leis na hathruithe ionchasacha ar úsáid na gcóras intleachta saorga.
(53)
It is also important to clarify that there may be specific cases in which AI systems referred to in pre-defined areas specified in this Regulation do not lead to a significant risk of harm to the legal interests protected under those areas because they do not materially influence the decision-making or do not harm those interests substantially. For the purposes of this Regulation, an AI system that does not materially influence the outcome of decision-making should be understood to be an AI system that does not have an impact on the substance, and thereby the outcome, of decision-making, whether human or automated. An AI system that does not materially influence the outcome of decision-making could include situations in which one or more of the following conditions are fulfilled. The first such condition should be that the AI system is intended to perform a narrow procedural task, such as an AI system that transforms unstructured data into structured data, an AI system that classifies incoming documents into categories or an AI system that is used to detect duplicates among a large number of applications. Those tasks are of such narrow and limited nature that they pose only limited risks which are not increased through the use of an AI system in a context that is listed as a high-risk use in an annex to this Regulation. The second condition should be that the task performed by the AI system is intended to improve the result of a previously completed human activity that may be relevant for the purposes of the high-risk uses listed in an annex to this Regulation. Considering those characteristics, the AI system provides only an additional layer to a human activity with consequently lowered risk. That condition would, for example, apply to AI systems that are intended to improve the language used in previously drafted documents, for example in relation to professional tone, academic style of language or by aligning text to a certain brand messaging. The third condition should be that the AI system is intended to detect decision-making patterns or deviations from prior decision-making patterns. The risk would be lowered because the use of the AI system follows a previously completed human assessment which it is not meant to replace or influence, without proper human review. Such AI systems include for instance those that, given a certain grading pattern of a teacher, can be used to check ex post whether the teacher may have deviated from the grading pattern so as to flag potential inconsistencies or anomalies. The fourth condition should be that the AI system is intended to perform a task that is only preparatory to an assessment relevant for the purposes of the AI systems listed in an annex to this Regulation, thus making the possible impact of the output of the system very low in terms of representing a risk for the assessment to follow. That condition covers, inter alia, smart solutions for file handling, which include various functions from indexing, searching, text and speech processing or linking data to other data sources, or AI systems used for translation of initial documents. In any case, AI systems used in high-risk use-cases listed in an annex to this Regulation should be considered to pose significant risks of harm to the health, safety or fundamental rights if the AI system implies profiling within the meaning of Article 4, point (4) of Regulation (EU) 2016/679 or Article 3, point (4) of Directive (EU) 2016/680 or Article 3, point (5) of Regulation (EU) 2018/1725. 
To ensure traceability and transparency, a provider who considers that an AI system is not high-risk on the basis of the conditions referred to above should draw up documentation of the assessment before that system is placed on the market or put into service and should provide that documentation to national competent authorities upon request. Such a provider should be obliged to register the AI system in the EU database established under this Regulation. With a view to providing further guidance for the practical implementation of the conditions under which the AI systems listed in an annex to this Regulation are, on an exceptional basis, non-high-risk, the Commission should, after consulting the Board, provide guidelines specifying that practical implementation, completed by a comprehensive list of practical examples of use cases of AI systems that are high-risk and use cases that are not.
(53)
Tá sé tábhachtach a shoiléiriú freisin go bhféadfadh cásanna sonracha a bheith ann nach mbeidh na córais intleachta saorga dá dtagraítear i réimsí réamhshainithe a shonraítear sa Rialachán seo ina gcúis le riosca suntasach díobhála do na leasanna dlíthiúla a chosnaítear faoi na limistéir sin toisc nach n-imríonn siad tionchar ábhartha ar an gcinnteoireacht nó nach ndéanann siad dochar suntasach do na leasanna sin. Chun críocha an Rialacháin seo, maidir le córas intleachta saorga nach bhfuil tionchar ábhartha aige ar thoradh na cinnteoireachta, ba cheart a thuiscint gur córas intleachta saorga é nach bhfuil tionchar aige ar shubstaint na cinnteoireachta, agus ar an gcaoi sin ar thoradh na cinnteoireachta, bíodh sé sin ina chóras daonna nó uathoibrithe. D’fhéadfaí a áireamh ar chóras intleachta saorga nach bhfuil tionchar ábhartha aige ar thoradh na cinnteoireachta cásanna ina gcomhlíontar ceann amháin nó níos mó de na coinníollacha seo a leanas. Is é ba cheart a bheith ar an gcéad choinníoll sin go bhfuil sé beartaithe go ndéanfaidh an córas intleachta saorga cúram cúng nós imeachta, amhail córas intleachta saorga lena gclaochlaítear sonraí neamhstruchtúrtha ina sonraí struchtúrtha, córas intleachta saorga lena n-aicmítear doiciméid inteachta ina gcatagóirí nó córas intleachta saorga a úsáidtear chun dúbláil as measc líon mór feidhmchlár a bhrath. Is cúraimí chomh cúng agus chomh teoranta sin iad nach mbaineann ach rioscaí teoranta leo nach méadaítear trí úsáid a bhaint as córas intleachta saorga i gcomhthéacs a liostaítear mar úsáid ardriosca in iarscríbhinn a ghabhann leis an Rialachán seo. Is é ba cheart a bheith ar an dara coinníoll gurb é atá beartaithe leis an gcúram a dhéanann an córas intleachta saorga feabhas a chur ar an toradh a bhí ar ghníomhaíocht a rinne duine roimhe sin a d’fhéadfadh a bheith ábhartha chun críocha na n-úsáidí ardriosca a liostaítear in iarscríbhinn a ghabhann leis an Rialachán seo. I bhfianaise na saintréithe sin, ní dhéanann an córas intleachta saorga ach sraith bhreise a chur le gníomhaíocht dhaonna agus riosca laghdaithe ag baint leis dá bhrí sin. Bheadh feidhm ag an gcoinníoll sin, mar shampla, maidir le córais intleachta saorga atá ceaptha feabhas a chur ar an teanga a úsáideadh i ndoiciméid a dréachtaíodh roimhe seo, mar shampla maidir le ton gairmiúil, stíl acadúil teanga nó trí théacs a ailíniú le teachtaireachtaí áirithe branda. Is é ba cheart a bheith ar an tríú coinníoll go bhfuil sé beartaithe leis an gcóras intleachta saorga patrúin chinnteoireachta nó diallais ó phatrúin chinnteoireachta roimhe seo a bhrath. Laghdófaí an riosca toisc go leanann úsáid an chórais intleachta saorga measúnú daonna a cuireadh i gcrích roimhe seo agus nach bhfuil sé i gceist é a ionadú ná tionchar a imirt air, gan athbhreithniú cuí daonna. Áirítear ar na córais intleachta saorga sin, mar shampla, na córais sin is féidir a úsáid, i bhfianaise patrún áirithe grádaithe múinteora, lena sheiceáil ex post cibé acu a d’imigh an múinteoir ón bpatrún grádaithe d’fhonn neamhréireachtaí nó aimhrialtachtaí a d’fhéadfadh a bheith ann a chur in iúl.
Is é ba cheart a bheith ar an gceathrú coinníoll go bhfuil sé beartaithe leis an gcóras intleachta saorga cúram a dhéanamh nach bhfuil ann ach cúram ullmhúcháin do mheasúnú atá ábhartha chun críocha na gcóras intleachta saorga a liostaítear in iarscríbhinn a ghabhann leis an Rialachán seo, rud a fhágann go bhféadfadh tionchar an-íseal a bheith ag aschur an chórais ó thaobh riosca a bheith ag gabháil leis an measúnú a leanúint. Cumhdaítear leis an gcoinníoll sin, inter alia, réitigh chliste chun comhaid a láimhseáil, lena n-áirítear feidhmeanna éagsúla ó innéacsú, cuardach, téacs agus próiseáil urlabhra nó sonraí a nascadh le foinsí sonraí eile, nó córais intleachta saorga a úsáidtear chun doiciméid tosaigh a aistriú. In aon chás, córais intleachta saorga a úsáidtear i gcásanna úsáide ardriosca a liostaítear in iarscríbhinn a ghabhann leis an Rialachán seo, ba cheart a mheas go bhfuil rioscaí suntasacha díobhála ag baint leis na córais intleachta saorga ardriosca sin do shláinte, do shábháilteacht nó do chearta bunúsacha má tá próifíliú intuigthe leis an gcóras intleachta saorga de réir bhrí Airteagal 4, pointe (4) de Rialachán (AE) 2016/679 nó Airteagal 3, pointe (4) de Threoir (AE) 2016/680 nó Airteagal 3, pointe (5) de Rialachán (AE) 2018/1725. Chun inrianaitheacht agus trédhearcacht a áirithiú, ba cheart do sholáthraí a mheasann nach mbaineann ardriosca le córas intleachta saorga ar bhonn na gcoinníollacha dá dtagraítear thuas doiciméadacht an mheasúnaithe a tharraingt suas sula gcuirfear an córas sin ar an margadh nó i mbun seirbhíse agus ba cheart dó an doiciméadacht sin a sholáthar d’údaráis inniúla náisiúnta arna iarraidh sin dóibh. Ba cheart oibleagáid a bheith ar an soláthraí sin an córas intleachta saorga a chlárú i mbunachar sonraí an Aontais arna bhunú faoin Rialachán seo. D’fhonn tuilleadh treorach a chur ar fáil maidir le cur chun feidhme praiticiúil na gcoinníollacha faoinar córais neamh-ardriosca, ar bhonn eisceachtúil, iad na córais intleachta saorga a liostaítear in iarscríbhinn a ghabhann leis an Rialachán seo, ba cheart don Choimisiún, tar éis dó dul i gcomhairle leis an mBord, treoirlínte a sholáthar lena sonraítear go bhfuil an cur chun feidhme praiticiúil sin curtha i gcrích le liosta cuimsitheach de shamplaí praiticiúla de chásanna úsáide córas intleachta saorga atá ardriosca agus úsáide nach bhfuil.
(54)
As biometric data constitutes a special category of personal data, it is appropriate to classify as high-risk several critical-use cases of biometric systems, insofar as their use is permitted under relevant Union and national law. Technical inaccuracies of AI systems intended for the remote biometric identification of natural persons can lead to biased results and entail discriminatory effects. The risk of such biased results and discriminatory effects is particularly relevant with regard to age, ethnicity, race, sex or disabilities. Remote biometric identification systems should therefore be classified as high-risk in view of the risks that they pose. Such a classification excludes AI systems intended to be used for biometric verification, including authentication, the sole purpose of which is to confirm that a specific natural person is who that person claims to be and to confirm the identity of a natural person for the sole purpose of having access to a service, unlocking a device or having secure access to premises. In addition, AI systems intended to be used for biometric categorisation according to sensitive attributes or characteristics protected under Article 9(1) of Regulation (EU) 2016/679 on the basis of biometric data, in so far as these are not prohibited under this Regulation, and emotion recognition systems that are not prohibited under this Regulation, should be classified as high-risk. Biometric systems which are intended to be used solely for the purpose of enabling cybersecurity and personal data protection measures should not be considered to be high-risk AI systems.
(54)
Ós rud é gur catagóir speisialta sonraí pearsanta iad sonraí bithmhéadracha, is iomchuí roinnt cásanna úsáide criticiúla de chórais bhithmhéadracha a aicmiú mar chásanna lena mbaineann ardriosca, a mhéid a cheadaítear iad a úsáid faoi dhlí ábhartha an Aontais agus faoin dlí ábhartha náisiúnta. Maidir le míchruinnis theicniúla i gcórais intleachta saorga atá beartaithe le haghaidh cian-sainaithint bhithmhéadrach daoine nádúrtha, d’fhéadfadh siad a bheith ina gcúis le torthaí claonta agus éifeachtaí idirdhealaithe. An riosca go mbeadh torthaí claonta agus éifeachtaí idirdhealaitheacha ann, tá sé ábhartha go háirithe maidir le haois, eitneacht, cine, gnéas nó míchumais. Dá bhrí sin, ba cheart córais cian-sainaitheanta bithmhéadraí a aicmiú mar chórais ardriosca i bhfianaise na rioscaí a bhaineann leo. Ní áirítear le haicmiú den sórt sin córais intleachta saorga atá beartaithe a úsáid le haghaidh fíorú bithmhéadrach, lena n-áirítear fíordheimhniú, arb é an t-aon chuspóir atá leis a dheimhniú gur duine nádúrtha ar leith an duine a mhaíonn an duine sin a bheith ann agus chun céannacht duine nádúrtha a dheimhniú chun rochtain a fháil ar sheirbhís, feiste a dhíghlasáil nó rochtain slándála a bheith aige ar áitreabh, agus an méid sin amháin. Ina theannta sin, córais intleachta saorga atá beartaithe lena n-úsáid le haghaidh catagóiriú bithmhéadrach de réir airíonna íogaire nó saintréithe íogaire dá dtugtar cosaint faoi Airteagal 9(1) de Rialachán (AE) 2016/679 ar bhonn sonraí bithmhéadracha, sa mhéid nach dtoirmisctear iad sin faoin Rialachán seo, agus córais aitheanta mothúcháin nach dtoirmisctear faoin Rialachán seo, ba cheart iad a aicmiú mar chórais ardriosca. Córais bhithmhéadracha atá beartaithe lena n-úsáid chun críche bearta cibearshlándála agus cosanta sonraí pearsanta a chumasú, agus chun na críche sin amháin, níor cheart a mheas gur córais intleachta saorga ardriosca iad.
(55)
As regards the management and operation of critical infrastructure, it is appropriate to classify as high-risk the AI systems intended to be used as safety components in the management and operation of critical digital infrastructure as listed in point (8) of the Annex to Directive (EU) 2022/2557, road traffic and the supply of water, gas, heating and electricity, since their failure or malfunctioning may put at risk the life and health of persons at large scale and lead to appreciable disruptions in the ordinary conduct of social and economic activities. Safety components of critical infrastructure, including critical digital infrastructure, are systems used to directly protect the physical integrity of critical infrastructure or the health and safety of persons and property but which are not necessary in order for the system to function. The failure or malfunctioning of such components might directly lead to risks to the physical integrity of critical infrastructure and thus to risks to health and safety of persons and property. Components intended to be used solely for cybersecurity purposes should not qualify as safety components. Examples of safety components of such critical infrastructure may include systems for monitoring water pressure or fire alarm controlling systems in cloud computing centres.
(55)
A mhéid a bhaineann le bonneagar criticiúil a bhainistiú agus a oibriú, is iomchuí aicme ardriosca a thabhairt ar na córais intleachta saorga atá beartaithe lena n-úsáid mar chomhpháirteanna sábháilteachta i mbainistíocht agus oibriú bonneagair dhigitigh chriticiúil mar a liostaítear i bpointe (8) den Iarscríbhinn a ghabhann le Treoir (AE) 2022/2557, trácht bóthair agus soláthar uisce, gáis, téimh agus leictreachais, ós rud é go bhféadfadh a dteip nó a mífheidhmiú beatha agus sláinte daoine a chur i mbaol agus go bhféadfadh cur isteach suntasach a bheith ann i ngnáthchúrsa na ngníomhaíochtaí sóisialta agus eacnamaíocha de bharr na gcóras sin. Is éard atá i gcomhpháirteanna sábháilteachta bonneagair chriticiúil, lena n-áirítear bonneagar digiteach criticiúil, córais a úsáidtear chun sláine fhisiceach an bhonneagair chriticiúil nó sláinte agus sábháilteacht daoine agus maoine a chosaint go díreach ach nach bhfuil riachtanach chun go bhfeidhmeoidh an córas. D’fhéadfadh rioscaí do shláine fhisiciúil an bhonneagair chriticiúil a bheith mar thoradh ar chliseadh nó mífheidhmiú na gcomhpháirteanna sin agus, dá bhrí sin, d’fhéadfadh rioscaí do shláinte agus sábháilteacht daoine agus maoine a bheith mar thoradh air sin. Níor cheart comhpháirteanna atá beartaithe lena n-úsáid chun críoch cibearshlándála amháin a cháiliú mar chomhpháirteanna sábháilteachta. D’fhéadfaí a áireamh ar shamplaí de chomhpháirteanna sábháilteachta an bhonneagair chriticiúil sin córais chun faireachán a dhéanamh ar bhrú uisce nó ar chórais rialaithe aláraim dóiteáin in ionaid néalríomhaireachta.
(56)
The deployment of AI systems in education is important to promote high-quality digital education and training and to allow all learners and teachers to acquire and share the necessary digital skills and competences, including media literacy, and critical thinking, to take an active part in the economy, society, and in democratic processes. However, AI systems used in education or vocational training, in particular for determining access or admission, for assigning persons to educational and vocational training institutions or programmes at all levels, for evaluating learning outcomes of persons, for assessing the appropriate level of education for an individual and materially influencing the level of education and training that individuals will receive or will be able to access or for monitoring and detecting prohibited behaviour of students during tests should be classified as high-risk AI systems, since they may determine the educational and professional course of a person’s life and therefore may affect that person’s ability to secure a livelihood. When improperly designed and used, such systems may be particularly intrusive and may violate the right to education and training as well as the right not to be discriminated against and perpetuate historical patterns of discrimination, for example against women, certain age groups, persons with disabilities, or persons of certain racial or ethnic origins or sexual orientation.
(56)
Tá sé tábhachtach córais intleachta saorga a úsáid san oideachas chun oideachas agus oiliúint dhigiteach ardcháilíochta a chur chun cinn agus chun go mbeidh gach foghlaimeoir agus múinteoir in ann na scileanna agus na hinniúlachtaí digiteacha is gá a fháil agus a chomhroinnt, lena n-áirítear litearthacht sna meáin, agus smaointeoireacht chriticiúil, chun páirt ghníomhach a ghlacadh sa gheilleagar, sa tsochaí agus i bpróisis dhaonlathacha. Maidir le córais intleachta saorga a úsáidtear san oideachas nó sa ghairmoiliúint, áfach, go háirithe chun rochtain nó ligean isteach a chinneadh, chun daoine a shannadh d’institiúidí oideachais agus gairmoiliúna nó cláir ag gach leibhéal, chun meastóireacht a dhéanamh ar thorthaí foghlama daoine, chun measúnú a dhéanamh ar leibhéal iomchuí oideachais do dhuine aonair agus tionchar ábhartha a imirt ar an leibhéal oideachais agus oiliúna a gheobhaidh daoine aonair nó ar a mbeidh siad in ann rochtain a fháil nó chun faireachán a dhéanamh ar iompar toirmiscthe mac léinn le linn tástálacha agus é a bhrath, ba cheart na córais intleachta saorga sin a aicmiú mar chórais intleachta saorga ardriosca, ós rud é go bhféadfaidh siad cúrsa oideachais agus gairmiúil saoil duine a chinneadh agus, dá bhrí sin, d’fhéadfaí difear a dhéanamh dá gcumas a slí bheatha a áirithiú. Má cheaptar nó má úsáidtear iad go míchuí, d’fhéadfadh an-chur isteach a bheith i gceist leis na córais sin agus d’fhéadfadh siad an ceart chun oideachais agus oiliúna chomh maith leis an gceart nach ndéanfaí idirdhealú ina n-aghaidh a shárú agus patrúin idirdhealaitheacha stairiúla a bhuanú, mar shampla in aghaidh na mban, aoisghrúpaí áirithe, daoine faoi mhíchumas nó daoine de bhunadh ciníoch nó eitneach áirithe nó gnéaschlaonta áirithe.
(57)
AI systems used in employment, workers management and access to self-employment, in particular for the recruitment and selection of persons, for making decisions affecting terms of the work-related relationship, promotion and termination of work-related contractual relationships, for allocating tasks on the basis of individual behaviour, personal traits or characteristics and for monitoring or evaluation of persons in work-related contractual relationships, should also be classified as high-risk, since those systems may have an appreciable impact on future career prospects, livelihoods of those persons and workers’ rights. Relevant work-related contractual relationships should, in a meaningful manner, involve employees and persons providing services through platforms as referred to in the Commission Work Programme 2021. Throughout the recruitment process and in the evaluation, promotion, or retention of persons in work-related contractual relationships, such systems may perpetuate historical patterns of discrimination, for example against women, certain age groups, persons with disabilities, or persons of certain racial or ethnic origins or sexual orientation. AI systems used to monitor the performance and behaviour of such persons may also undermine their fundamental rights to data protection and privacy.
(57)
Ba cheart córais intleachta saorga a úsáidtear i bhfostaíocht, bainistíocht fostaithe agus rochtain ar an bhféinfhostaíocht, go sonrach chun daoine a earcú agus a roghnú, chun cinntí a dhéanamh a théann i bhfeidhm ar choinníollacha caidrimh a bhaineann leis an obair, ar ardú céime agus ar fhoirceannadh caidrimh chonarthacha a bhaineann leis an obair, chun cúraimí a shannadh bunaithe ar iompar aonair, tréithe nó saintréithe pearsanta agus chun faireachán agus meastóireacht a dhéanamh ar dhaoine i gcaidrimh chonarthacha a bhaineann leis an obair, ba cheart na córais intleachta saorga sin a aicmiú mar chórais ardriosca, ós rud é go bhféadfaidís mórthionchar a imirt ar ionchais ghairme, ar shlite beatha na ndaoine sin agus ar chearta na n-oibrithe amach anseo. Maidir le caidrimh chonarthacha a bhaineann leis an obair, ba cheart fostaithe agus daoine a sholáthraíonn seirbhísí trí ardáin dá dtagraítear i gClár Oibre an Choimisiúin le haghaidh 2021 a bheith rannpháirteach iontu ar bhealach fóinteach. Le linn an phróisis earcaíochta agus i dtaca le meastóireacht a dhéanamh ar dhaoine i gcaidrimh chonarthacha a bhaineann leis an obair, nó ardú céime a thabhairt dóibh nó iad a choinneáil, d’fhéadfadh na córais sin patrúin idirdhealaitheacha stairiúla a bhuanú, mar shampla, in aghaidh na mban, aoisghrúpaí áirithe, daoine faoi mhíchumas nó daoine de bhunadh ciníoch nó eitneach áirithe nó gnéaschlaonta áirithe. Córais intleachta saorga a úsáidtear chun monatóireacht a dhéanamh ar fheidhmíocht agus iompraíocht na ndaoine sin, d’fhéadfadh siad an bonn a bhaint óna gcearta bunúsacha chun cosaint sonraí agus príobháideachais.
(58)
Another area in which the use of AI systems deserves special consideration is the access to and enjoyment of certain essential private and public services and benefits necessary for people to fully participate in society or to improve one’s standard of living. In particular, natural persons applying for or receiving essential public assistance benefits and services from public authorities namely healthcare services, social security benefits, social services providing protection in cases such as maternity, illness, industrial accidents, dependency or old age and loss of employment and social and housing assistance, are typically dependent on those benefits and services and in a vulnerable position in relation to the responsible authorities. If AI systems are used for determining whether such benefits and services should be granted, denied, reduced, revoked or reclaimed by authorities, including whether beneficiaries are legitimately entitled to such benefits or services, those systems may have a significant impact on persons’ livelihood and may infringe their fundamental rights, such as the right to social protection, non-discrimination, human dignity or an effective remedy and should therefore be classified as high-risk. Nonetheless, this Regulation should not hamper the development and use of innovative approaches in the public administration, which would stand to benefit from a wider use of compliant and safe AI systems, provided that those systems do not entail a high risk to legal and natural persons. In addition, AI systems used to evaluate the credit score or creditworthiness of natural persons should be classified as high-risk AI systems, since they determine those persons’ access to financial resources or essential services such as housing, electricity, and telecommunication services. AI systems used for those purposes may lead to discrimination between persons or groups and may perpetuate historical patterns of discrimination, such as that based on racial or ethnic origins, gender, disabilities, age or sexual orientation, or may create new forms of discriminatory impacts. However, AI systems provided for by Union law for the purpose of detecting fraud in the offering of financial services and for prudential purposes to calculate credit institutions’ and insurance undertakings’ capital requirements should not be considered to be high-risk under this Regulation. Moreover, AI systems intended to be used for risk assessment and pricing in relation to natural persons for health and life insurance can also have a significant impact on persons’ livelihood and if not duly designed, developed and used, can infringe their fundamental rights and can lead to serious consequences for people’s life and health, including financial exclusion and discrimination. Finally, AI systems used to evaluate and classify emergency calls by natural persons or to dispatch or establish priority in the dispatching of emergency first response services, including by police, firefighters and medical aid, as well as of emergency healthcare patient triage systems, should also be classified as high-risk since they make decisions in very critical situations for the life and health of persons and their property.
(58)
Réimse eile ar cheart aird ar leith a thabhairt air is ea rochtain ar sheirbhísí agus sochair áirithe bhunriachtanacha phríobháideacha agus phoiblí agus tairbhiú díobh chun go mbeidh daoine in ann a bheith rannpháirteach go hiomlán sa tsochaí nó feabhas a chur ar a gcaighdeán maireachtála. Go háirithe, daoine nádúrtha a bhfuil iarratas á dhéanamh acu ar shochair chúnaimh phoiblí nó ar sheirbhísí cúnaimh phoiblí nó a fhaigheann na sochair nó na seirbhísí sin ó údaráis phoiblí, eadhon seirbhísí cúraim sláinte, sochair slándála sóisialta, seirbhísí sóisialta a sholáthraíonn cosaint i gcásanna amhail máithreachas, breoiteacht, tionóiscí tionsclaíocha, cleithiúnas nó seanaois agus i gcás ina gcailltear fostaíocht agus cúnamh sóisialta agus tithíochta, is iondúil a bhíonn na daoine sin ag brath ar na sochair agus seirbhísí sin agus is iondúil a bhíonn siad i riocht leochaileach i ndáil leis na húdaráis fhreagracha. Má úsáidtear córais intleachta saorga chun a chinneadh ar cheart do na húdaráis na sochair agus seirbhísí sin a dheonú, a dhiúltú, a laghdú, a chúlghairm nó a aiséileamh, lena n-áirítear cé acu atá tairbhithe i dteideal na dtairbhí nó na seirbhísí sin go dlisteanach nó nach bhfuil, d’fhéadfadh sé go mbeadh tionchar suntasach ag na córais sin ar shlí bheatha na ndaoine agus go ndéanfaí a gcearta bunúsacha a shárú, amhail an ceart chun cosanta sóisialta, an ceart chun neamh-idirdhealú, an ceart chun dínite daonna nó an ceart chun leighis éifeachtaigh, agus, dá bhrí sin, ba cheart iad a aicmiú mar chórais ardriosca. Mar sin féin, níor cheart don Rialachán seo bac a chur ar fhorbairt ná úsáid cur chuige nuálach sa riarachán poiblí, ar riarachán é a bhainfeadh leas as úsáid níos forleithne a bhaint as córais intleachta saorga chomhlíontacha shábháilte, ar choinníoll nach mbeadh ardriosca ag baint leo do dhaoine dlítheanacha agus nádúrtha. Ina theannta sin, córais intleachta saorga a úsáidtear chun meastóireacht a dhéanamh ar scór creidmheasa nó acmhainneacht chreidmheasa daoine nádúrtha, ba cheart iad a aicmiú mar chórais intleachta saorga ardriosca, ós rud é go gcinneann siad rochtain na ndaoine sin ar acmhainní airgeadais nó seirbhísí bunriachtanacha ar nós tithíochta, leictreachais agus seirbhísí teileachumarsáide. De thoradh na gcóras intleachta saorga a úsáidtear chun na gcríoch sin, d’fhéadfadh sé go ndéanfaí idirdhealú idir daoine nó grúpaí, agus go mbuanófaí patrúin idirdhealaitheacha stairiúla, amhail patrúin bunaithe ar bhunadh ciníoch nó eitneach, inscne, míchumais, aois nó gnéaschlaonadh, nó go gcruthófaí cineálacha nua tionchar idirdhealaithe. Mar sin féin, córais intleachta saorga dá bhforáiltear le dlí an Aontais chun calaois a bhrath i soláthar seirbhísí airgeadais agus chun críocha stuamachta chun ceanglais chaipitil institiúidí creidmheasa agus gnóthas árachais a ríomh, níor cheart na córais sin a mheas mar chórais ardriosca faoin Rialachán seo. Thairis sin, córais intleachta saorga atá beartaithe lena n-úsáid le haghaidh measúnú riosca agus praghsáil i ndáil le hárachas sláinte agus saoil do dhaoine nádúrtha, is féidir leo tionchar suntasach a imirt ar shlí bheatha daoine agus mura ndéantar iad a cheapadh, a fhorbairt agus a úsáid go cuí, is féidir leo a gcearta bunúsacha a shárú agus d’fhéadfadh iarmhairtí tromchúiseacha do shaol agus do shláinte daoine, lena n-áirítear eisiamh airgeadais agus idirdhealú, a bheith mar thoradh orthu. 
Ar deireadh, ba cheart córais intleachta saorga a úsáidtear chun glaonna éigeandála ó dhaoine nádúrtha a mheas agus a aicmiú nó chun seirbhísí céadfhreagartha éigeandála a sheoladh nó tosaíocht a thabhairt agus na seirbhísí sin á seoladh, lena n-áirítear seirbhísí arna soláthar ag póilíní, comhraiceoirí dóiteán agus cabhair leighis, chomh maith le córais triáise othar i réimse an chúraim sláinte éigeandála, ba cheart na córais intleachta saorga sin a aicmiú mar chórais ardriosca freisin, ós rud é go ndéanann siad cinntí i gcásanna atá criticiúil do bheatha agus sláinte daoine agus a maoin.
(59)
Given their role and responsibility, actions by law enforcement authorities involving certain uses of AI systems are characterised by a significant degree of power imbalance and may lead to surveillance, arrest or deprivation of a natural person’s liberty as well as other adverse impacts on fundamental rights guaranteed in the Charter. In particular, if the AI system is not trained with high-quality data, does not meet adequate requirements in terms of its performance, its accuracy or robustness, or is not properly designed and tested before being put on the market or otherwise put into service, it may single out people in a discriminatory or otherwise incorrect or unjust manner. Furthermore, the exercise of important procedural fundamental rights, such as the right to an effective remedy and to a fair trial as well as the right of defence and the presumption of innocence, could be hampered, in particular, where such AI systems are not sufficiently transparent, explainable and documented. It is therefore appropriate to classify as high-risk, insofar as their use is permitted under relevant Union and national law, a number of AI systems intended to be used in the law enforcement context where accuracy, reliability and transparency are particularly important to avoid adverse impacts, retain public trust and ensure accountability and effective redress. In view of the nature of the activities and the risks relating thereto, those high-risk AI systems should include in particular AI systems intended to be used by or on behalf of law enforcement authorities or by Union institutions, bodies, offices or agencies in support of law enforcement authorities for assessing the risk of a natural person becoming a victim of criminal offences, as polygraphs and similar tools, for the evaluation of the reliability of evidence in the course of investigation or prosecution of criminal offences, and, insofar as not prohibited under this Regulation, for assessing the risk of a natural person offending or reoffending not solely on the basis of the profiling of natural persons or the assessment of personality traits and characteristics or the past criminal behaviour of natural persons or groups, or for profiling in the course of detection, investigation or prosecution of criminal offences. AI systems specifically intended to be used for administrative proceedings by tax and customs authorities as well as by financial intelligence units carrying out administrative tasks analysing information pursuant to Union anti-money laundering law should not be classified as high-risk AI systems used by law enforcement authorities for the purpose of prevention, detection, investigation and prosecution of criminal offences. The use of AI tools by law enforcement and other relevant authorities should not become a factor of inequality or exclusion. The impact of the use of AI tools on the defence rights of suspects should not be ignored, in particular the difficulty in obtaining meaningful information on the functioning of those systems and the resulting difficulty in challenging their results in court, in particular by natural persons under investigation.
(59)
I bhfianaise a róil agus a bhfreagrachta, gníomhaíochtaí a dhéanann údaráis forfheidhmithe dlí a bhaineann le húsáidí áirithe córas intleachta saorga, is iad saintréithe na ngníomhaíochtaí sin éagothroime cumhachta shuntasach agus dá thoradh sin d’fhéadfadh sé go ndéanfaí faireachas ar dhuine nádúrtha, nó é a ghabháil nó saoirse a bhaint dó chomh maith le tionchair dhíobhálacha eile maidir leis na cearta bunúsacha a ráthaítear sa Chairt. Go sonrach, mura gcuirtear oiliúint ar an gcóras intleachta saorga le sonraí ardcháilíochta, mura gcomhlíonann an córas ceanglais leormhaithe i dtéarmaí feidhmíochta, cruinnis nó stóinseachta, nó mura ndéantar é a cheapadh agus a thástáil go hiomchuí sula gcuirfear é ar an margadh nó i mbun seirbhíse ar shlí eile, d’fhéadfadh sé go ndíreofaí ar dhaoine i mbealach idirdhealaithe nó ar shlí eile atá mícheart nó éagórach. Thairis sin, d’fhéadfadh sé go gcuirfí isteach ar fheidhmiú ceart nós imeachta bunúsach, amhail an ceart chun leighis éifeachtaigh agus chun trialach córa chomh maith le cearta na cosanta agus an ceart chun toimhde na neamhchiontachta, i gcás nach bhfuil na córais intleachta saorga sin trédhearcach, inmhínithe agus doiciméadaithe go leor. A mhéid a cheadaítear a n-úsáid faoi dhlí ábhartha an Aontais agus faoin dlí náisiúnta ábhartha, is iomchuí, dá bhrí sin, aicme ardriosca a thabhairt ar roinnt córais intleachta saorga atá beartaithe lena n-úsáid i gcomhthéacs fhorfheidhmiú an dlí, comhthéacs ina bhfuil tábhacht ar leith ag baint le cruinneas, iontaofacht agus trédhearcacht chun tionchair dhíobhálacha a sheachaint, muinín an phobail a chaomhnú agus cuntasacht agus cúiteamh éifeachtach a áirithiú. I bhfianaise chineál na ngníomhaíochtaí agus na rioscaí a bhaineann leo, ba cheart a áirithiú leis na córais intleachta saorga ardriosca sin, go háirithe, córais intleachta saorga atá beartaithe lena n-úsáid ag údaráis forfheidhmithe dlí nó ar a son, nó ag institiúidí, comhlachtaí, oifigí nó gníomhaireachtaí de chuid an Aontais mar thaca le húdaráis forfheidhmithe dlí chun measúnú a dhéanamh ar an riosca go bhféadfadh duine nádúrtha a bheith ina íospartach cionta coiriúla, le brathadóirí éithigh agus uirlisí comhchosúla, chun meastóireacht a dhéanamh ar iontaofacht fianaise in imscrúdú nó ionchúiseamh imeachtaí coiriúla, agus, a mhéid nach dtoirmeasctar faoin Rialachán seo, chun measúnú a dhéanamh ar an riosca go ndéanfaidh duine nádúrtha cion nó go ndéanfaidh sé cion athuair ní ar bhonn an phróifílithe ar dhaoine nádúrtha ná na meastóireachta ar thréithe agus saintréithe pearsantachta nó iompraíocht choiriúil daoine nádúrtha nó grúpaí san am atá thart amháin, chun próifíliú a dhéanamh le linn na cionta coiriúla a bhrath, a imscrúdú nó a ionchúiseamh. Córais intleachta saorga atá beartaithe go sonrach lena n-úsáid le haghaidh imeachtaí riaracháin ag na húdaráis chánach nó chustaim chomh maith le haonaid faisnéise airgeadais atá i mbun cúraimí riaracháin lena ndéantar anailís ar fhaisnéis de bhun dhlí an Aontais chun an sciúradh airgid a chomhrac, níor cheart na córais sin a aicmiú mar chórais intleachta saorga ardriosca a úsáideann údaráis forfheidhmithe dlí chun cionta coiriúla a chosc, a bhrath, a imscrúdú nó a ionchúiseamh. Níor cheart úsáid uirlisí intleachta saorga ag údaráis fhorfheidhmithe dlí agus údaráis ábhartha eile a bheith ina gné den neamhionannas ná den eisiamh. 
Níor cheart neamhaird a dhéanamh den tionchar atá ag úsáid uirlisí intleachta saorga ar chearta cosanta daoine atá faoi dhrochamhras, go háirithe an deacracht le faisnéis fhiúntach a fháil maidir le feidhmiú na gcóras sin agus an deacracht, dá bharr sin, le torthaí na n-uirlisí a cheistiú sa chúirt, go háirithe ag daoine nádúrtha atá faoi imscrúdú.
(60)
AI systems used in migration, asylum and border control management affect persons who are often in a particularly vulnerable position and who are dependent on the outcome of the actions of the competent public authorities. The accuracy, non-discriminatory nature and transparency of the AI systems used in those contexts are therefore particularly important to guarantee respect for the fundamental rights of the affected persons, in particular their rights to free movement, non-discrimination, protection of private life and personal data, international protection and good administration. It is therefore appropriate to classify as high-risk, insofar as their use is permitted under relevant Union and national law, AI systems intended to be used by or on behalf of competent public authorities or by Union institutions, bodies, offices or agencies charged with tasks in the fields of migration, asylum and border control management, as polygraphs and similar tools, for assessing certain risks posed by natural persons entering the territory of a Member State or applying for a visa or asylum, for assisting competent public authorities in the examination, including the related assessment of the reliability of evidence, of applications for asylum, visa and residence permits and associated complaints with regard to the objective of establishing the eligibility of the natural persons applying for a status, and for the purpose of detecting, recognising or identifying natural persons in the context of migration, asylum and border control management, with the exception of verification of travel documents. AI systems in the area of migration, asylum and border control management covered by this Regulation should comply with the relevant procedural requirements set by Regulation (EC) No 810/2009 of the European Parliament and of the Council (32), Directive 2013/32/EU of the European Parliament and of the Council (33) and other relevant Union law. AI systems in migration, asylum and border control management should in no circumstances be used by Member States or Union institutions, bodies, offices or agencies as a means to circumvent their international obligations under the UN Convention relating to the Status of Refugees done at Geneva on 28 July 1951 as amended by the Protocol of 31 January 1967. Nor should they be used in any way to infringe on the principle of non-refoulement, or to deny safe and effective legal avenues into the territory of the Union, including the right to international protection.
(60)
Le córais intleachta saorga a úsáidtear i mbainistíocht imirce, bainistíocht tearmainn agus bainistíocht rialaithe teorann, déantar difear do dhaoine a bhíonn i riocht soghonta go minic agus a bhíonn ag brath ar thorthaí ghníomhaíochtaí na n-údarás poiblí inniúil. Dá bhrí sin, tá tábhacht ar leith ag baint le cruinneas, neamh-idirdhealaitheacht agus trédhearcacht na gcóras intleachta saorga a úsáidtear sna comhthéacsanna sin chun urramú chearta bunúsacha na ndaoine a ndearnadh difear dóibh a áirithiú, go háirithe a gcearta chun saorghluaiseachta, chun neamh-idirdhealaithe, chun an saol príobháideach agus sonraí pearsanta a chosaint, chun cosaint idirnáisiúnta a fháil agus chun dea-riaracháin. A mhéid a cheadaítear a n-úsáid faoi dhlí ábhartha an Aontais agus faoin dlí náisiúnta ábhartha, is iomchuí, dá bhrí sin, aicme ardriosca a thabhairt ar chórais intleachta saorga atá beartaithe lena n-úsáid ag údaráis phoiblí inniúla nó ar a son nó ag institiúidí, comhlachtaí, oifigí nó gníomhaireachtaí de chuid an Aontais a bhfuil cúraimí orthu i réimse na himirce, an tearmainn agus an bhainistithe rialaithe teorann le brathadóirí éithigh agus uirlisí comhchosúla, chun measúnú a dhéanamh ar rioscaí áirithe a bhaineann le daoine nádúrtha a thagann isteach i gcríoch Ballstáit nó a dhéanann iarratas ar víosa nó ar thearmann, chun cúnamh a thabhairt d’údaráis phoiblí inniúla i dtaca le scrúdú a dhéanamh, lena n-áirítear an measúnú gaolmhar ar iontaofacht na fianaise, ar iarratais ar thearmann, ar víosa nó ar cheadanna cónaithe agus na gearáin lena mbaineann maidir leis an gcuspóir incháilitheacht na ndaoine nádúrtha a bhfuil iarratas ar stádas á dhéanamh acu a dheimhniú, chun daoine nádúrtha a bhrath, a aithint nó a shainaithint i gcomhthéacs na himirce, an tearmainn nó an bhainistithe rialaithe teorann cé is moite de dhoiciméid taistil a fhíorú. Maidir le córais intleachta saorga i réimse na himirce, an tearmainn agus an bhainistithe rialaithe teorann a chumhdaítear leis an Rialachán seo, ba cheart dóibh na ceanglais nós imeachta ábhartha a chomhlíonadh a leagtar síos le Rialachán (CE) Uimh. 810/2009 ó Pharlaimint na hEorpa agus ón gComhairle (32), Treoir 2013/32/AE ó Pharlaimint na hEorpa agus ón gComhairle (33) agus le dlí ábhartha eile de chuid an Aontais. Níor cheart do na Ballstáit ná d’institiúidí, comhlachtaí, oifigí ná gníomhaireachtaí an Aontais, i gcás ar bith, úsáid a bhaint as córais intleachta saorga i réimse na himirce, an tearmainn agus an bhainistithe rialaithe teorann chun teacht timpeall ar a n-oibleagáidí idirnáisiúnta faoi Choinbhinsiún na Náisiún Aontaithe i dtaobh Stádas Dídeanaithe arna dhéanamh sa Ghinéiv an 28 Iúil 1951 arna leasú le Prótacal an 31 Eanáir 1967. Ná níor cheart iad a úsáid chun prionsabal an non-refoulement a shárú ar bhealach ar bith, ná chun bealaí dlíthiúla sábháilte agus éifeachtacha a dhiúltú i gcríoch an Aontais, lena n-áirítear an ceart chun cosaint idirnáisiúnta a fháil.
(61)
Certain AI systems intended for the administration of justice and democratic processes should be classified as high-risk, considering their potentially significant impact on democracy, the rule of law, individual freedoms as well as the right to an effective remedy and to a fair trial. In particular, to address the risks of potential biases, errors and opacity, it is appropriate to qualify as high-risk AI systems intended to be used by a judicial authority or on its behalf to assist judicial authorities in researching and interpreting facts and the law and in applying the law to a concrete set of facts. AI systems intended to be used by alternative dispute resolution bodies for those purposes should also be considered to be high-risk when the outcomes of the alternative dispute resolution proceedings produce legal effects for the parties. The use of AI tools can support the decision-making power of judges or judicial independence, but should not replace it: the final decision-making must remain a human-driven activity. The classification of AI systems as high-risk should not, however, extend to AI systems intended for purely ancillary administrative activities that do not affect the actual administration of justice in individual cases, such as the anonymisation or pseudonymisation of judicial decisions, documents or data, communication between personnel, or administrative tasks.
(61)
Maidir le córais intleachta saorga áirithe atá beartaithe lena n-úsáid chun an ceartas agus na próisis dhaonlathacha a riar, ba cheart iad a aicmiú mar chórais ardriosca, mar gheall ar an tionchar suntasach a d’fhéadfaí a bheith acu ar an daonlathas, an smacht reachta, saoirsí aonair agus ar an gceart chun leighis éifeachtaigh agus chun trialach córa. Go sonrach, chun aghaidh a thabhairt ar rioscaí claonta, earráidí agus doiléireachta a d’fhéadfadh a bheith ann, is iomchuí aicme ardriosca a thabhairt ar chórais intleachta saorga atá beartaithe lena n-úsáid ag údarás breithiúnach nó ar a shon chun cúnamh a thabhairt d’údaráis bhreithiúnacha maidir le taighde agus léirmhíniú fíoras agus an dlí agus maidir leis an dlí a chur i bhfeidhm ar thacar coincréiteach fíoras. Córais intleachta saorga atá beartaithe lena n-úsáid ag comhlachtaí réiteach malartach díospóidí chun na críocha sin, ba cheart a mheas gur córais ardriosca iad freisin i gcás ina mbeidh éifeachtaí dlíthiúla ag torthaí na n-imeachtaí réiteach malartach díospóidí do na páirtithe. Is féidir le húsáid uirlisí intleachta saorga tacú le cumhacht cinnteoireachta na mbreithiúna nó le neamhspleáchas na mbreithiúna, ach níor cheart an úsáid sin a chur in ionad na cumhachta sin: ní mór don chinnteoireacht deiridh leanúint de bheith ina gníomhaíocht atá faoi stiúir daoine. Níor cheart aicmiú na gcóras intleachta saorga mar chórais ardriosca a leathnú, áfach, chun córais intleachta saorga a chumhdach atá beartaithe lena n-úsáid le haghaidh gníomhaíochtaí riaracháin atá coimhdeach amháin agus nach ndéanann difear do riaradh iarbhír an cheartais i gcásanna aonair, amhail anaithnidiú nó bréagainmniú cinntí breithiúnacha, doiciméad nó sonraí, cumarsáid idir pearsanra, cúraimí riaracháin.
(62)
Without prejudice to the rules provided for in Regulation (EU) 2024/900 of the European Parliament and of the Council (34), and in order to address the risks of undue external interference with the right to vote enshrined in Article 39 of the Charter, and of adverse effects on democracy and the rule of law, AI systems intended to be used to influence the outcome of an election or referendum or the voting behaviour of natural persons in the exercise of their vote in elections or referenda should be classified as high-risk AI systems with the exception of AI systems whose output natural persons are not directly exposed to, such as tools used to organise, optimise and structure political campaigns from an administrative and logistical point of view.
(62)
Gan dochar do na rialacha dá bhforáiltear i Rialachán (AE) 2024/900 ó Pharlaimint na hEorpa agus ón gComhairle (34), agus chun aghaidh a thabhairt ar na rioscaí a bhaineann le cur isteach seachtrach míchuí ar an gceart vótála a chumhdaítear in Airteagal 39 den Chairt, agus le héifeachtaí díobhálacha ar an daonlathas agus ar an smacht reachta, ba cheart córais intleachta saorga atá beartaithe lena n-úsáid chun tionchar a imirt ar thoradh toghcháin nó reifrinn nó ar iompraíocht vótála daoine nádúrtha i bhfeidhmiú vóta a chaitheamh i dtoghcháin nó reifrinn a aicmiú mar chórais intleachta saorga ardriosca, cé is moite de chórais intleachta saorga nach bhfuil daoine nádúrtha neamhchosanta go díreach ar a n-aschur, amhail uirlisí a úsáidtear chun feachtais pholaitiúla a eagrú, a bharrfheabhsú agus a struchtúrú ó thaobh riaracháin agus lóistíochta de.
(63)
The fact that an AI system is classified as a high-risk AI system under this Regulation should not be interpreted as indicating that the use of the system is lawful under other acts of Union law or under national law compatible with Union law, such as on the protection of personal data, on the use of polygraphs and similar tools or other systems to detect the emotional state of natural persons. Any such use should continue to occur solely in accordance with the applicable requirements resulting from the Charter and from the applicable acts of secondary Union law and national law. This Regulation should not be understood as providing for the legal ground for processing of personal data, including special categories of personal data, where relevant, unless it is specifically otherwise provided for in this Regulation.
(63)
Ós rud é go n-aicmítear córas intleachta saorga mar chóras intleachta saorga ardriosca faoin Rialachán seo, níor cheart a léirmhíniú leis sin go léirítear gur gá úsáid an chórais a bheith dleathach faoi ghníomhartha eile de dhlí an Aontais nó faoi dhlí náisiúnta atá comhoiriúnach le dlí an Aontais, amhail cosaint sonraí pearsanta, úsáid brathadóirí éithigh agus uirlisí comhchosúla nó córais eile chun staid mhothúchánach daoine nádúrtha a bhrath. Ba cheart leanúint den úsáid sin i gcomhréir leis na ceanglais is infheidhme atá mar thoradh ar an gCairt agus ar na gníomhartha is infheidhme de dhlí tánaisteach an Aontais agus den dlí náisiúnta agus i gcomhréir leis na ceanglais sin amháin. Níor cheart a thuiscint leis an Rialachán seo go bhforáiltear maidir leis an bhforas dlíthiúil le sonraí pearsanta a phróiseáil, lena n-áirítear catagóirí speisialta sonraí pearsanta, i gcás inarb ábhartha, mura bhforáiltear go sonrach dá mhalairt sa Rialachán seo.
(64)
To mitigate the risks from high-risk AI systems placed on the market or put into service and to ensure a high level of trustworthiness, certain mandatory requirements should apply to high-risk AI systems, taking into account the intended purpose and the context of use of the AI system and according to the risk-management system to be established by the provider. The measures adopted by the providers to comply with the mandatory requirements of this Regulation should take into account the generally acknowledged state of the art on AI, and be proportionate and effective in meeting the objectives of this Regulation. Based on the New Legislative Framework, as clarified in Commission notice ‘The “Blue Guide” on the implementation of EU product rules 2022’, the general rule is that more than one legal act of Union harmonisation legislation may be applicable to one product, since the making available or putting into service can take place only when the product complies with all applicable Union harmonisation legislation. The hazards of AI systems covered by the requirements of this Regulation concern different aspects than the existing Union harmonisation legislation and therefore the requirements of this Regulation would complement the existing body of the Union harmonisation legislation. For example, machinery or medical device products incorporating an AI system might present risks not addressed by the essential health and safety requirements set out in the relevant Union harmonisation legislation, as that sectoral law does not deal with risks specific to AI systems. This calls for a simultaneous and complementary application of the various legislative acts. To ensure consistency and to avoid an unnecessary administrative burden and unnecessary costs, providers of a product that contains one or more high-risk AI systems, to which the requirements of this Regulation and of the Union harmonisation legislation based on the New Legislative Framework and listed in an annex to this Regulation apply, should have flexibility with regard to operational decisions on how to ensure compliance of a product that contains one or more AI systems with all the applicable requirements of that Union harmonisation legislation in an optimal manner. That flexibility could mean, for example, a decision by the provider to integrate a part of the necessary testing and reporting processes, information and documentation required under this Regulation into already existing documentation and procedures required under existing Union harmonisation legislation based on the New Legislative Framework and listed in an annex to this Regulation. This should not, in any way, undermine the obligation of the provider to comply with all the applicable requirements.
(64)
Chun na rioscaí a bhaineann le córais intleachta saorga ardriosca a chuirtear ar an margadh nó a chuirtear i mbun seirbhíse a mhaolú agus chun ardleibhéal iontaofachta a áirithiú, ba cheart feidhm a bheith ag ceanglais shainordaitheacha áirithe maidir le córais intleachta saorga ardriosca, agus an chríoch atá beartaithe agus comhthéacs úsáid an chórais intleachta saorga á gcur san áireamh agus de réir an chórais bainistíochta riosca atá le bunú ag an soláthraí. Ba cheart a chur san áireamh, sna bearta arna nglacadh ag na soláthraithe chun ceanglais shainordaitheacha an Rialacháin seo a chomhlíonadh, staid reatha na teicneolaíochta a aithnítear go ginearálta maidir le hintleacht shaorga, ba cheart dóibh a bheith comhréireach agus éifeachtach chun cuspóirí an Rialacháin seo a chomhlíonadh. Bunaithe ar an gCreat Reachtach Nua, mar a soiléiríodh san fhógra ón gCoimisiún ‘An “Treoir Ghorm” maidir le cur chun feidhme rialacha an Aontais i ndáil le táirgí 2022’, is é an riail ghinearálta go bhféadfadh feidhm a bheith ag níos mó ná píosa reachtaíochta comhchuibhithe amháin de chuid an Aontais maidir le táirge amháin, ós rud é nach féidir an cur ar fáil nó cur i mbun seirbhíse a dhéanamh ach amháin nuair a chomhlíonann an táirge an reachtaíocht chomhchuibhithe uile is infheidhme de chuid an Aontais. Baineann guaiseacha na gcóras intleachta saorga a chumhdaítear le ceanglais an Rialacháin seo le gnéithe atá éagsúil le reachtaíocht chomhchuibhithe an Aontais atá ann cheana agus, dá bhrí sin, bheadh ceanglais an Rialacháin seo ina gcomhlánú ar chorpas reachtaíochta comhchuibhithe an Aontais atá ann cheana. Mar shampla, d’fhéadfadh rioscaí a bheith ag baint le hinnealra nó táirgí feistí leighis a chorpraíonn córas intleachta saorga ar rioscaí iad nach dtugtar aghaidh orthu leis na ceanglais riachtanacha sláinte agus sábháilteachta a leagtar amach i reachtaíocht chomhchuibhithe ábhartha an Aontais, toisc nach ndéileáiltear sa dlí earnála sin le rioscaí a bhaineann go sonrach le córais intleachta saorga. Chuige sin, ní mór na gníomhartha reachtacha éagsúla a chur i bhfeidhm go comhuaineach agus go comhlántach. Chun comhsheasmhacht a áirithiú agus chun ualach riaracháin neamhriachtanach agus costais neamhriachtanacha a sheachaint, ba cheart do sholáthraithe táirge ina bhfuil córas intleachta saorga ardriosca amháin nó níos mó, a bhfuil feidhm ag ceanglais an Rialacháin seo agus ag ceanglais reachtaíocht chomhchuibhithe an Aontais bunaithe ar an gCreat Reachtach Nua agus a liostaítear in iarscríbhinn a ghabhann leis an Rialachán seo maidir leo, ba cheart solúbthacht a bheith acu maidir le cinntí oibríochtúla maidir le conas a áirithiú go gcomhlíonfaidh táirge ina bhfuil córas intleachta saorga amháin nó níos mó na ceanglais uile is infheidhme reachtaíocht chomhchuibhithe an Aontais ar bhealach optamach. Leis an tsolúbthacht sin, mar shampla, d’fhéadfadh cinneadh ón soláthraí cuid de na próisis tástála agus tuairiscithe, den fhaisnéis agus den doiciméadacht is gá a cheanglaítear faoin Rialachán seo a chomhtháthú i ndoiciméadacht agus i nósanna imeachta atá ann cheana agus a cheanglaítear faoi reachtaíocht chomhchuibhithe an Aontais atá ann cheana bunaithe ar an gCreat Reachtach Nua agus a liostaítear in iarscríbhinn a ghabhann leis an Rialachán seo. Níor cheart dó sin, ar bhealach ar bith, an bonn a bhaint ón oibleagáid atá ar an soláthraí na ceanglais uile is infheidhme a chomhlíonadh.
(65)
The risk-management system should consist of a continuous, iterative process that is planned and run throughout the entire lifecycle of a high-risk AI system. That process should be aimed at identifying and mitigating the relevant risks of AI systems to health, safety and fundamental rights. The risk-management system should be regularly reviewed and updated to ensure its continuing effectiveness, as well as the justification and documentation of any significant decisions and actions taken subject to this Regulation. This process should ensure that the provider identifies risks or adverse impacts and implements mitigation measures for the known and reasonably foreseeable risks of AI systems to health, safety and fundamental rights in light of their intended purpose and reasonably foreseeable misuse, including the possible risks arising from the interaction between the AI system and the environment within which it operates. The risk-management system should adopt the most appropriate risk-management measures in light of the state of the art in AI. When identifying the most appropriate risk-management measures, the provider should document and explain the choices made and, when relevant, involve experts and external stakeholders. In identifying the reasonably foreseeable misuse of high-risk AI systems, the provider should cover uses of AI systems which, while not directly covered by the intended purpose and provided for in the instructions for use, may nevertheless be reasonably expected to result from readily predictable human behaviour in the context of the specific characteristics and use of a particular AI system. Any known or foreseeable circumstances related to the use of the high-risk AI system in accordance with its intended purpose or under conditions of reasonably foreseeable misuse, which may lead to risks to health and safety or fundamental rights, should be included in the instructions for use that are provided by the provider. This is to ensure that the deployer is aware of them and takes them into account when using the high-risk AI system. Identifying and implementing risk mitigation measures for foreseeable misuse under this Regulation should not require specific additional training for the high-risk AI system by the provider to address foreseeable misuse. The providers are, however, encouraged to consider such additional training measures to mitigate reasonably foreseeable misuse as necessary and appropriate.
(65)
Is éard ar cheart a bheith sa chóras bainistithe riosca próiseas atriallach leanúnach a phleanáiltear agus a reáchtáiltear ar feadh shaolré iomlán an chórais intleachta saorga ardriosca. Ba cheart an próiseas sin a bheith dírithe ar rioscaí ábhartha na gcóras intleachta saorga maidir le sláinte, sábháilteacht agus cearta bunúsacha a shainaithint agus a mhaolú. Ba cheart an córas bainistithe riosca a athbhreithniú agus a nuashonrú go tráthrialta chun a éifeachtacht leanúnach a áirithiú, chomh maith le bonn cirt agus doiciméadú aon chinntí suntasacha agus aon ghníomhaíochtaí arna ndéanamh faoi réir an Rialacháin seo. Leis an bpróiseas sin, ba cheart a áirithiú go sainaithníonn an soláthraí rioscaí nó drochthionchair agus go gcuirfidh sé bearta maolaithe chun feidhme maidir leis na rioscaí aitheanta agus measartha intuartha a bhaineann le córais intleachta saorga don tsláinte, don tsábháilteacht agus do na cearta bunúsacha i bhfianaise na críche atá beartaithe dóibh agus mí-úsáid atá measartha intuartha, lena n-áirítear na rioscaí a d’fhéadfadh eascairt as an idirghníomhaíocht idir an córas intleachta saorga agus an timpeallacht ina n-oibríonn sé. Ba cheart don chóras bainistithe riosca na bearta bainistithe riosca is iomchuí a ghlacadh i bhfianaise staid na teicneolaíochta san intleacht shaorga. Agus na bearta bainistithe riosca is iomchuí á sainaithint aige, ba cheart don soláthraí na cinntí a rinneadh a dhoiciméadú agus a mhíniú agus, nuair is ábhartha, saineolaithe agus geallsealbhóirí seachtracha a thabhairt isteach. Agus mí-úsáid measartha intuartha na gcóras intleachta saorga ardriosca á sainaithint, ba cheart don soláthraí úsáidí córas intleachta saorga a chumhdach ar córais iad nach gcumhdaítear go díreach leis an gcríoch atá beartaithe agus dá bhforáiltear sa treoir maidir le húsáid, ach a bhféadfaí a bheith ag súil leis go réasúnta mar sin féin go mbeidh siad mar thoradh ar iompraíocht an duine atá intuartha go héasca i gcomhthéacs saintréithe sonracha agus úsáid córais intleachta saorga ar leith. Aon imthosca atá ar eolas nó intuartha, a bhaineann le húsáid an chórais intleachta saorga ardriosca i gcomhréir leis an gcríoch atá beartaithe dó nó faoi dhálaí mí-úsáide atá measartha intuartha, a bhféadfadh rioscaí a bheith ann mar thoradh air don tsláinte agus don tsábháilteacht, nó do chearta bunúsacha, ba cheart na himthosca sin a áireamh sna treoracha úsáide arna soláthar ag an soláthraí. Déantar an méid sin chun a áirithiú go bhfuil an t-úsáideoir gairmiúil ar an eolas agus go gcuirfidh sé san áireamh iad agus an córas intleachta saorga ardriosca á úsáid aige. Maidir le bearta maolaithe riosca a shainaithint agus a chur chun feidhme le haghaidh mí-úsáid intuartha faoin Rialachán seo, níor cheart oiliúint bhreise shonrach don chóras intleachta saorga ardriosca a bheith ag teastáil ón soláthraí chun aghaidh a thabhairt orthu. Moltar do na soláthraithe, áfach, machnamh a dhéanamh ar na bearta breise oiliúna sin chun mí-úsáidí intuartha réasúnta a mhaolú de réir mar is gá agus is iomchuí.
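By way of illustration only, the continuous, iterative risk-management process described in recital (65) could be sketched roughly as below; the class names, fields and the 1-to-5 severity scale are assumptions of this sketch, not terms defined by this Regulation.

# Illustrative sketch only; names and the severity scale are assumptions.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Risk:
    description: str        # e.g. "biased outcomes for a particular group of persons"
    affected_interest: str  # health, safety or a fundamental right
    severity: int           # assumed scale: 1 (low) to 5 (high)
    mitigation: str = ""    # measure adopted to address the risk
    closed: bool = False

@dataclass
class RiskManagementSystem:
    risks: list = field(default_factory=list)
    decision_log: list = field(default_factory=list)  # documented decisions and actions

    def _document(self, decision: str) -> None:
        self.decision_log.append((datetime.now(timezone.utc).isoformat(), decision))

    def identify(self, risk: Risk) -> None:
        self.risks.append(risk)
        self._document(f"identified risk: {risk.description}")

    def mitigate(self, risk: Risk, measure: str) -> None:
        risk.mitigation = measure
        self._document(f"mitigation for '{risk.description}': {measure}")

    def review(self) -> list:
        # Periodic review step of the iterative process: which risks still lack a measure?
        self._document("periodic review performed")
        return [r for r in self.risks if not r.mitigation and not r.closed]

In such a sketch, the review step would be repeated throughout the system's lifecycle and the decision log would feed the documentation of significant decisions referred to above.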
(66)
Requirements should apply to high-risk AI systems as regards risk management, the quality and relevance of data sets used, technical documentation and record-keeping, transparency and the provision of information to deployers, human oversight, and robustness, accuracy and cybersecurity. Those requirements are necessary to effectively mitigate the risks to health, safety and fundamental rights. As no other less trade-restrictive measures are reasonably available, those requirements are not unjustified restrictions on trade.
(66)
Ba cheart feidhm a bheith ag na ceanglais ar chórais intleachta saorga ardriosca maidir le bainistiú rioscaí, cáilíocht agus ábharthacht na dtacar sonraí a úsáidtear, doiciméadacht theicniúil agus coimeád taifead, trédhearcacht agus soláthar faisnéise d’úsáideoirí gairmiúla, formhaoirseacht dhaonna agus stóinseacht, cruinneas agus cibearshlándáil. Tá gá leis na ceanglais sin chun na rioscaí i dtaca leis an tsláinte, an tsábháilteacht agus cearta bunúsacha a mhaolú go héifeachtach. Ós rud é nach bhfuil aon bheart trádála is lú srianadh ar fáil go réasúnta, ní srianta trádála nach bhfuil údar leo iad na ceanglais sin.
(67)
High-quality data and access to high-quality data play a vital role in providing structure and in ensuring the performance of many AI systems, especially when techniques involving the training of models are used, with a view to ensuring that the high-risk AI system performs as intended and safely and does not become a source of discrimination prohibited by Union law. High-quality data sets for training, validation and testing require the implementation of appropriate data governance and management practices. Data sets for training, validation and testing, including the labels, should be relevant, sufficiently representative, and to the best extent possible free of errors and complete in view of the intended purpose of the system. In order to facilitate compliance with Union data protection law, such as Regulation (EU) 2016/679, data governance and management practices should include, in the case of personal data, transparency about the original purpose of the data collection. The data sets should also have the appropriate statistical properties, including as regards the persons or groups of persons in relation to whom the high-risk AI system is intended to be used, with specific attention to the mitigation of possible biases in the data sets that are likely to affect the health and safety of persons, have a negative impact on fundamental rights or lead to discrimination prohibited under Union law, especially where data outputs influence inputs for future operations (feedback loops). Biases can, for example, be inherent in underlying data sets, especially when historical data is being used, or generated when the systems are implemented in real world settings. Results provided by AI systems could be influenced by such inherent biases, which tend to gradually increase and thereby perpetuate and amplify existing discrimination, in particular for persons belonging to certain vulnerable groups, including racial or ethnic groups. The requirement for the data sets to be to the best extent possible complete and free of errors should not affect the use of privacy-preserving techniques in the context of the development and testing of AI systems. In particular, data sets should take into account, to the extent required by their intended purpose, the features, characteristics or elements that are particular to the specific geographical, contextual, behavioural or functional setting within which the AI system is intended to be used. The requirements related to data governance can be complied with by having recourse to third parties that offer certified compliance services, including verification of data governance, data set integrity, and data training, validation and testing practices, as far as compliance with the data requirements of this Regulation is ensured.
(67)
Tá ról ríthábhachtach ag sonraí ardcháilíochta agus rochtain ar shonraí ardcháilíochta maidir le struchtúr a sholáthar agus feidhmíocht roinnt mhaith córas intleachta saorga a áirithiú, go háirithe i gcásanna ina n-úsáidtear teicnící a bhaineann le hoiliúint a chur ar shamhlacha, chun a áirithiú go bhfeidhmíonn an córas intleachta saorga ardriosca go sábháilte agus de réir mar atá beartaithe dó agus nach mbeidh sé ina fhoinse d’idirdhealú atá toirmiscthe faoi dhlí an Aontais. Chun tacair sonraí ardcháilíochta a fháil le haghaidh oiliúna, bailíochtaithe agus tástála, teastaíonn cleachtais iomchuí rialachais sonraí agus bainistíochta a chur chun feidhme. Maidir leis na tacair sonraí le haghaidh oiliúna, bailíochtaithe agus tástála, lena n-áirítear na lipéid, ba cheart dóibh a bheith ábhartha, sách ionadaíoch, agus a mhéid is mó is féidir, saor ó earráidí agus iomlán, i bhfianaise na críche atá beartaithe don chóras. Chun comhlíonadh dhlí an Aontais maidir le cosaint sonraí a éascú, amhail Rialachán (AE) 2016/679, ba cheart a áireamh i gcleachtais rialachais sonraí agus bainistíochta, i gcás sonraí pearsanta, trédhearcacht maidir le cuspóir bunaidh an bhailithe sonraí. Ba cheart na hairíonna staidrimh iomchuí a bheith ag na tacair sonraí freisin, lena n-áirítear a mhéid a bhaineann leis na daoine nó na grúpaí daoine a bhfuil sé beartaithe an córas intleachta saorga ardriosca a úsáid ina leith, agus aird ar leith á tabhairt ar aon chlaontacht a d’fhéadfadh a bheith sna tacair sonraí a mhaolú, ar claontacht í ar dócha go ndéanfadh sí difear do shláinte agus do shábháilteacht daoine, go mbeadh tionchar diúltach aici ar chearta bunúsacha nó go mbeadh idirdhealú a thoirmisctear faoi dhlí an Aontais mar thoradh uirthi go háirithe i gcás ina mbíonn tionchar ag aschuir sonraí ar ionchuir le haghaidh oibríochtaí amach anseo (lúba aiseolais). Mar shampla, d’fhéadfadh claontachtaí a bheith ina ndlúthchuid de bhuntacair sonraí, go háirithe i gcás ina mbaintear úsáid as sonraí stairiúla, nó a ghiniúint nuair a chuirtear na córais chun feidhme i bhfíordhálaí. D’fhéadfadh tionchar a bheith ag na claontachtaí bunúsacha sin, a bhfuil sé mar nós leo méadú de réir a chéile, ar thorthaí a sholáthraíonn córais intleachta saorga agus, ar an gcaoi sin, déantar idirdhealú atá ann cheana a bhuanú agus a mhéadú, go háirithe i gcás daoine ar de ghrúpaí leochaileacha áirithe iad, lena n-áirítear grúpaí ciníocha nó eitneacha. An ceanglas go mbeadh na tacair sonraí iomlán agus saor ó earráidí, a mhéid is mó is féidir, níor cheart dó difear a dhéanamh d’úsáid teicnící caomhnaithe príobháideachais i gcomhthéacs fhorbairt agus thástáil na gcóras intleachta saorga. Go háirithe, ba cheart a chur san áireamh sna tacair sonraí, a mhéid a cheanglaítear i bhfianaise na críche atá beartaithe dóibh, na saintréithe, tréithe nó eilimintí a bhaineann go sonrach leis an suíomh geografach, comhthéacsúil, iompraíochta nó feidhmiúil ina bhfuil sé beartaithe an córas intleachta saorga ardriosca a úsáid. Is féidir na ceanglais a bhaineann le rialachas sonraí a chomhlíonadh trí dhul ar iontaoibh tríú páirtithe a thairgeann seirbhísí comhlíontachta deimhnithe lena n-áirítear fíorú ar rialachas sonraí, sláine tacair sonraí, agus cleachtais oiliúna, bailíochtaithe agus tástála sonraí, a mhéid a áirithítear comhlíonadh cheanglais sonraí an Rialacháin seo.
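As a purely illustrative sketch of one way a provider might surface the kind of bias referred to in recital (67), the following compares error rates across groups in a validation set; the record format, the group labels and the 10 % disparity threshold are assumptions of the sketch, not requirements of this Regulation.

# Illustrative sketch only; the record format and the threshold are assumptions.
from collections import defaultdict

def group_error_rates(records):
    # records: iterable of (group, prediction, label) tuples (assumed format)
    totals, errors = defaultdict(int), defaultdict(int)
    for group, prediction, label in records:
        totals[group] += 1
        if prediction != label:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

def disparity_flag(rates, max_gap=0.10):
    # Flags when the gap between the best- and worst-served group exceeds the assumed threshold.
    gap = max(rates.values()) - min(rates.values())
    return gap > max_gap, gap

# Synthetic example: group "A" is misclassified more often than group "B".
validation_set = [("A", 1, 1), ("A", 0, 1), ("A", 0, 1), ("B", 1, 1), ("B", 1, 1)]
rates = group_error_rates(validation_set)
flagged, gap = disparity_flag(rates)  # flagged is True here, prompting further investigation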
(68)
For the development and assessment of high-risk AI systems, certain actors, such as providers, notified bodies and other relevant entities, such as European Digital Innovation Hubs, testing and experimentation facilities and researchers, should be able to access and use high-quality data sets within the fields of activities of those actors which are related to this Regulation. European common data spaces established by the Commission and the facilitation of data sharing between businesses and with government in the public interest will be instrumental in providing trusted, accountable and non-discriminatory access to high-quality data for the training, validation and testing of AI systems. For example, in health, the European health data space will facilitate non-discriminatory access to health data and the training of AI algorithms on those data sets, in a privacy-preserving, secure, timely, transparent and trustworthy manner, and with appropriate institutional governance. Relevant competent authorities, including sectoral ones, providing or supporting access to data may also support the provision of high-quality data for the training, validation and testing of AI systems.
(68)
Maidir le forbairt na gcóras intleachta saorga ardriosca agus measúnú a dhéanamh orthu, ba cheart do ghníomhaithe áirithe, amhail soláthraithe, comhlachtaí faoina dtugtar fógra agus eintitis ábhartha eile, amhail Moil Eorpacha maidir leis an Nuálaíocht Dhigiteach, saoráidí agus taighdeoirí tástála agus turgnamhaíochta, rochtain a bheith acu ar thacair sonraí ardcháilíochta agus a bheith in ann iad a úsáid laistigh dá réimsí gníomhaíochtaí faoi seach a bhaineann leis an Rialachán seo. Beidh tábhacht ar leith ag baint le spásanna coiteanna sonraí Eorpacha arna mbunú ag an gCoimisiún agus le héascú comhroinnte sonraí idir gnólachtaí agus leis an rialtas ar mhaithe le leas an phobail chun rochtain iontaofa, chuntasach agus neamh-idirdhealaitheach a sholáthar ar shonraí ardcháilíochta chun córais intleachta saorga a bhailíochtú, a thástáil agus oiliúint a chur orthu. Mar shampla, i gcás cúrsaí sláinte, leis an spás Eorpach sonraí sláinte, éascófar rochtain neamh-idirdhealaitheach ar shonraí sláinte agus oiliúint na n-algartam intleachta saorga maidir leis na tacair sonraí sin, ar bhealach slán tráthúil trédhearcach agus iontaofa a chosnaíonn an phríobháideacht, agus le rialachas iomchuí institiúideach. Údaráis inniúla ábhartha, lena n-áirítear údaráis earnála, a sholáthraíonn an rochtain ar shonraí nó a thacaíonn léi, féadfaidh siad tacú freisin leis an soláthar sonraí ardcháilíochta chun córais intleachta saorga a bhailíochtú, a thástáil agus oiliúint a chur orthu.
(69)
The right to privacy and to protection of personal data must be guaranteed throughout the entire lifecycle of the AI system. In this regard, the principles of data minimisation and data protection by design and by default, as set out in Union data protection law, are applicable when personal data are processed. Measures taken by providers to ensure compliance with those principles may include not only anonymisation and encryption, but also the use of technology that permits algorithms to be brought to the data and allows training of AI systems without the transmission between parties or copying of the raw or structured data themselves, without prejudice to the requirements on data governance provided for in this Regulation.
(69)
Ní mór an ceart chun príobháideachta agus chun cosaint sonraí pearsanta a ráthú ar feadh shaolré iomlán an chórais intleachta saorga. I ndáil leis an méid sin, tá prionsabail an íoslaghdaithe sonraí agus na cosanta sonraí trí dhearadh agus trí réamhshocrú, mar a leagtar amach i ndlí an Aontais maidir le cosaint sonraí iad, infheidhme nuair a dhéantar sonraí pearsanta a phróiseáil. Féadfar a áireamh ar na bearta arna nglacadh ag soláthraithe chun comhlíonadh na bprionsabal sin a áirithiú, ní hamháin anaithnidiú agus criptiú, ach úsáid teicneolaíochta freisin lena gceadaítear algartaim a thabhairt chuig na sonraí agus lenar féidir oiliúint a chur ar chórais intleachta saorga gan tarchur idir páirtithe ná cóipeáil na sonraí loma nó struchtúrtha iad féin, gan dochar do na ceanglais maidir le rialachas sonraí dá bhforáiltear sa Rialachán seo.
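Recital (69) refers to technology that permits algorithms to be brought to the data and allows training without transmission or copying of the raw data; federated averaging is one well-known family of such techniques. The following is a minimal, illustrative sketch under simplifying assumptions (a single shared parameter, one local gradient step per round, no secure aggregation or encryption), not a statement of what the Regulation requires.

# Illustrative sketch only: raw data never leaves a site; only parameters are exchanged.
def local_update(parameters, local_data, learning_rate=0.1):
    # One gradient step for a one-parameter mean-estimation model, computed where the data lives.
    gradient = sum(parameters[0] - x for x in local_data) / len(local_data)
    return [parameters[0] - learning_rate * gradient]

def federated_round(global_parameters, sites):
    # Each site updates locally; only the updated parameters are averaged centrally,
    # weighted by the number of local examples.
    updates = [(local_update(global_parameters, data), len(data)) for data in sites]
    total = sum(count for _, count in updates)
    return [sum(update[0] * count for update, count in updates) / total]

sites = [[1.0, 2.0, 3.0], [10.0, 11.0]]  # the raw data stays on each site
parameters = [0.0]
for _ in range(100):
    parameters = federated_round(parameters, sites)
# parameters[0] converges towards the pooled mean without the raw data ever being transmitted.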
(70)
In order to protect the right of others from the discrimination that might result from the bias in AI systems, the providers should, exceptionally, to the extent that it is strictly necessary for the purpose of ensuring bias detection and correction in relation to the high-risk AI systems, subject to appropriate safeguards for the fundamental rights and freedoms of natural persons and following the application of all applicable conditions laid down under this Regulation in addition to the conditions laid down in Regulations (EU) 2016/679 and (EU) 2018/1725 and Directive (EU) 2016/680, be able to process also special categories of personal data, as a matter of substantial public interest within the meaning of Article 9(2), point (g) of Regulation (EU) 2016/679 and Article 10(2), point (g) of Regulation (EU) 2018/1725.
(70)
Chun ceart daoine eile a chosaint ar an idirdhealú a d’fhéadfadh eascairt as an gclaontacht i gcórais intleachta saorga, ba cheart do na soláthraithe, go heisceachtúil, a mhéid a bhfuil géarghá leis chun brath agus ceartú claontachta a áirithiú maidir leis na córais intleachta saorga ardriosca, faoi réir coimircí iomchuí maidir le cearta bunúsacha agus saoirsí bunúsacha daoine nádúrtha agus tar éis chur i bhfeidhm na gcoinníollacha uile is infheidhme a leagtar síos faoin Rialachán seo i dteannta na gcoinníollacha a leagtar síos i Rialacháin (AE) 2016/679 agus (AE) 2018/1725 agus i dTreoir (AE) 2016/680, ba cheart do na soláthraithe a bheith in ann catagóirí speisialta sonraí pearsanta a phróiseáil freisin, mar ábhar leasa phoiblí shubstaintiúil de réir bhrí Airteagal 9(2), pointe (g) de Rialachán (AE) 2016/679 agus Airteagal 10(2), pointe (g) de Rialachán (AE) 2018/1725.
(71)
Having comprehensible information on how high-risk AI systems have been developed and how they perform throughout their lifetime is essential to enable traceability of those systems, verify compliance with the requirements under this Regulation, and monitor their operations and carry out post-market monitoring. This requires keeping records and the availability of technical documentation, containing information which is necessary to assess the compliance of the AI system with the relevant requirements and facilitate post-market monitoring. Such information should include the general characteristics, capabilities and limitations of the system, the algorithms, data, and training, testing and validation processes used, as well as documentation on the relevant risk-management system, and should be drawn up in a clear and comprehensive form. The technical documentation should be kept up to date, as appropriate, throughout the lifetime of the AI system. Furthermore, high-risk AI systems should technically allow for the automatic recording of events, by means of logs, over the duration of the lifetime of the system.
(71)
Tá sé ríthábhachtach go mbeadh faisnéis intuigthe ann maidir leis an gcaoi ar forbraíodh córais intleachta saorga ardriosca agus maidir leis an gcaoi a bhfeidhmíonn siad ar feadh a saolré chun inrianaitheacht na gcóras sin a éascú, chun comhlíonadh na gceanglas faoin Rialachán seo a fhíorú, agus chun faireachán a dhéanamh ar a n-oibríochtaí agus ar fhaireachán iarmhargaidh. Chuige sin, is gá taifid a choimeád agus doiciméadacht theicniúil a chur ar fáil, ina bhfuil an fhaisnéis is gá chun measúnú a dhéanamh ar chomhlíontacht an chórais intleachta saorga leis na ceanglais ábhartha agus chun faireachán iarmhargaidh a éascú. Ba cheart a chur san áireamh, leis an bhfaisnéis sin, na saintréithe, cumais agus teorainneacha ginearálta maidir leis na córais, na halgartaim, na sonraí, na nósanna imeachta tástála agus bailíochtaithe arna n-úsáid chomh maith le doiciméadacht i dtaca leis an gcóras ábhartha bainistíochta riosca agus ba cheart an fhaisnéis a leagan amach go soiléir agus go cuimsitheach. Ba cheart an doiciméadacht theicniúil a choinneáil cothrom le dáta go hiomchuí feadh shaolré an chórais intleachta saorga. Ina theannta sin, ba cheart, ó thaobh na teicneolaíochta de, go gceadódh le córais intleachta saorga ardriosca taifeadadh uathoibríoch a dhéanamh ar theagmhais, trí bhíthin logaí, feadh shaolré an chórais.
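A minimal illustration of the automatic recording of events ‘by means of logs’ mentioned in recital (71) could be an append-only, timestamped event log such as the sketch below; the file name and the event fields are assumptions chosen for illustration only.

# Illustrative sketch only; the file name and event fields are assumptions.
import json
from datetime import datetime, timezone

LOG_PATH = "ai_system_events.jsonl"  # assumed location of the append-only log

def record_event(event_type: str, details: dict) -> None:
    # Automatically appends a timestamped entry for every relevant event of the system.
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event": event_type,   # e.g. "inference", "model_update", "human_override"
        "details": details,
    }
    with open(LOG_PATH, "a", encoding="utf-8") as log_file:
        log_file.write(json.dumps(entry, ensure_ascii=False) + "\n")

record_event("inference", {"input_id": "doc-42", "output": "approved", "confidence": 0.93})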
(72)
To address concerns related to opacity and complexity of certain AI systems and help deployers to fulfil their obligations under this Regulation, transparency should be required for high-risk AI systems before they are placed on the market or put into service. High-risk AI systems should be designed in a manner that enables deployers to understand how the AI system works, evaluate its functionality, and comprehend its strengths and limitations. High-risk AI systems should be accompanied by appropriate information in the form of instructions for use. Such information should include the characteristics, capabilities and limitations of performance of the AI system. That information would cover possible known and foreseeable circumstances related to the use of the high-risk AI system, including deployer action that may influence system behaviour and performance, under which the AI system can lead to risks to health, safety and fundamental rights, the changes that have been pre-determined and assessed for conformity by the provider, and the relevant human oversight measures, including the measures to facilitate the interpretation of the outputs of the AI system by the deployers. Transparency, including the accompanying instructions for use, should assist deployers in the use of the system and support informed decision-making by them. Deployers should, inter alia, be in a better position to make the correct choice of the system that they intend to use in light of the obligations applicable to them, be educated about the intended and precluded uses, and use the AI system correctly and as appropriate. In order to enhance the legibility and accessibility of the information included in the instructions for use, where appropriate, illustrative examples, for instance on the limitations and on the intended and precluded uses of the AI system, should be included. Providers should ensure that all documentation, including the instructions for use, contains meaningful, comprehensive, accessible and understandable information, taking into account the needs and foreseeable knowledge of the target deployers. Instructions for use should be made available in a language which can be easily understood by target deployers, as determined by the Member State concerned.
(72)
Chun aghaidh a thabhairt ar ábhair imní a bhaineann le doiléireacht agus castacht córas intleachta saorga áirithe agus chun cuidiú le húsáideoirí gairmiúla a n-oibleagáidí faoin Rialachán seo a chomhlíonadh, ba cheart córais intleachta saorga ardriosca a bheith trédhearcach sula gcuirfear ar an margadh iad nó sula gcuirfear i mbun seirbhíse iad. Ba cheart córais intleachta saorga ardriosca a cheapadh ar bhealach a chuirfidh ar chumas úsáideoirí gairmiúla tuiscint a fháil ar an gcaoi a n-oibríonn an córas intleachta saorga, meastóireacht a dhéanamh ar a fheidhmiúlacht, agus a láidreachtaí agus a theorainneacha a thuiscint. Ba cheart faisnéis iomchuí i bhfoirm treoracha úsáide a bheith ag gabháil le córais intleachta saorga ardriosca. Ba cheart saintréithe, cumais agus teorainneacha feidhmíochta an chórais intleachta saorga a bheith san áireamh san fhaisnéis sin. Chumhdófaí leo faisnéis maidir le himthosca aitheanta agus intuartha a d’fhéadfadh a bheith ann a bhaineann le húsáid an chórais intleachta saorga ardriosca, lena n-áirítear gníomhaíocht de chuid an úsáideora ghairmiúil a d’fhéadfadh tionchar a imirt ar iompraíocht agus feidhmíocht an chórais, faoina bhféadfadh rioscaí don tsláinte, don tsábháilteacht agus do chearta bunúsacha a bheith mar thoradh ar an gcóras intleachta saorga, maidir leis na hathruithe a réamhchinntíodh agus a ndearnadh measúnú orthu le haghaidh comhréireachta ag an soláthraí agus maidir leis na bearta ábhartha formhaoirseachta daonna, lena n-áirítear na bearta chun léirmhíniú na n-úsáideoirí gairmiúla ar aschur an chórais intleachta saorga a éascú. Ba cheart go gcuideoidh trédhearcacht, lena n-áirítear na treoracha úsáide a ghabhann leis, le húsáideoirí gairmiúla an córas a úsáid agus cinnteoireacht eolasach a dhéanamh. Ba cheart gur fearr a bheidh na húsáideoirí gairmiúla, inter alia, in ann an córas a n-úsáidfidh siad é a roghnú i bhfianaise na n-oibleagáidí is infheidhme maidir leo, ba cheart oideachas a bheith faighte acu faoi na húsáidí atá beartaithe agus coiscthe, ba cheart dóibh a bheith in ann an córas intleachta saorga a úsáid i gceart agus de réir mar is iomchuí. Chun inléiteacht agus inrochtaineacht na faisnéise a áirítear sna treoracha úsáide a fheabhsú, i gcás inarb iomchuí, ba cheart samplaí léiritheacha a chur san áireamh, mar shampla maidir le teorainneacha agus úsáidí beartaithe agus coiscthe an chórais intleachta saorga. Ba cheart do sholáthraithe a áirithiú go mbeidh sa doiciméadacht uile, lena n-áirítear na treoracha úsáide, faisnéis fhóinteach, chuimsitheach, inrochtana agus intuigthe, agus riachtanais agus eolas intuartha na sprioc-úsáideoirí gairmiúla á gcur san áireamh. Ba cheart treoracha úsáide a chur ar fáil i dteanga is féidir leis na sprioc-úsáideoirí gairmiúla a thuiscint go héasca, de réir mar a chinnfidh an Ballstát lena mbaineann.
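For illustration, the items that recital (72) expects the instructions for use to convey could be captured in a structured record along the following lines; the field names and example values are assumptions of this sketch rather than a format prescribed by the Regulation.

# Illustrative sketch only; field names and example values are assumptions.
from dataclasses import dataclass, field

@dataclass
class InstructionsForUse:
    intended_purpose: str
    characteristics: list                 # general characteristics of the system
    capabilities: list
    performance_limitations: list         # known limitations of performance
    risk_circumstances: list              # circumstances that may lead to risks
    predetermined_changes: list           # changes pre-determined and assessed by the provider
    human_oversight_measures: list        # incl. measures helping deployers interpret outputs
    illustrative_examples: list = field(default_factory=list)
    language: str = "en"                  # a language easily understood by target deployers

card = InstructionsForUse(
    intended_purpose="prioritisation of incoming maintenance requests",
    characteristics=["text classification model"],
    capabilities=["assigns one of five priority levels"],
    performance_limitations=["not validated for languages other than English"],
    risk_circumstances=["inputs containing free-text personal data"],
    predetermined_changes=["quarterly retraining on the same data schema"],
    human_oversight_measures=["a confidence score is shown with every prediction"],
)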
(73)
High-risk AI systems should be designed and developed in such a way that natural persons can oversee their functioning, ensure that they are used as intended and that their impacts are addressed over the system’s lifecycle. To that end, appropriate human oversight measures should be identified by the provider of the system before its placing on the market or putting into service. In particular, where appropriate, such measures should guarantee that the system is subject to in-built operational constraints that cannot be overridden by the system itself and is responsive to the human operator, and that the natural persons to whom human oversight has been assigned have the necessary competence, training and authority to carry out that role. It is also essential, as appropriate, to ensure that high-risk AI systems include mechanisms to guide and inform a natural person to whom human oversight has been assigned to make informed decisions as to whether, when and how to intervene in order to avoid negative consequences or risks, or to stop the system if it does not perform as intended. Considering the significant consequences for persons in the case of an incorrect match by certain biometric identification systems, it is appropriate to provide for an enhanced human oversight requirement for those systems so that no action or decision may be taken by the deployer on the basis of the identification resulting from the system unless this has been separately verified and confirmed by at least two natural persons. Those persons could be from one or more entities and include the person operating or using the system. This requirement should not pose an unnecessary burden or delays and it could be sufficient that the separate verifications by the different persons are automatically recorded in the logs generated by the system. Given the specificities of the areas of law enforcement, migration, border control and asylum, this requirement should not apply where Union or national law considers the application of that requirement to be disproportionate.
(73)
Ba cheart córais intleachta saorga ardriosca a cheapadh agus a fhorbairt sa chaoi gur féidir le daoine nádúrtha formhaoirseacht a dhéanamh ar a bhfeidhmiú, a áirithiú go n-úsáidfear iad mar a bhí beartaithe agus go dtabharfar aghaidh ar a dtionchar thar shaolré an chórais. Chuige sin, ba cheart do sholáthraí an chórais bearta iomchuí maidir le formhaoirseacht dhaonna a shainaithint sula gcuirfear an córas ar an margadh nó i mbun seirbhíse. Go sonrach, i gcás inarb iomchuí, ba cheart a ráthú leis na bearta sin go bhfuil an córas faoi réir srianta oibríochtúla ionsuite nach féidir leis an gcóras é féin a shárú agus atá freagrúil don oibreoir daonna, agus go bhfuil an inniúlacht, oiliúint agus údarás is gá ag na daoine nádúrtha ar sannadh an fhormhaoirseacht dhaonna orthu chun an ról sin a dhéanamh. Tá sé ríthábhachtach freisin, de réir mar is iomchuí, a áirithiú go n-áireofar i gcórais intleachta saorga ardriosca sásraí chun duine nádúrtha ar sannadh formhaoirseacht dhaonna dó a threorú agus a chur ar an eolas chun cinntí eolasacha a dhéanamh ar cheart, cathain agus conas idirghabháil a dhéanamh chun iarmhairtí diúltacha nó rioscaí a sheachaint, nó chun stop a chur leis an gcóras mura bhfeidhmíonn sé mar a bhí beartaithe. I bhfianaise na n-iarmhairtí suntasacha do dhaoine i gcás meaitseáil mhícheart ó chórais sainaitheanta bithmhéadraí áirithe, is iomchuí foráil a dhéanamh maidir le ceanglas feabhsaithe formhaoirseachta daonna do na córais sin ionas nach bhféadfaidh an t-úsáideoir gairmiúil aon ghníomh ná cinneadh a dhéanamh ar bhonn an tsainaitheantais a d’eascair as an gcóras murar rud é go ndearna beirt daoine nádúrtha ar a laghad é sin a fhíorú agus a dhearbhú ar leithligh. D’fhéadfadh na daoine sin a bheith ó eintiteas amháin nó níos mó agus ba cheart dóibh an duine a oibríonn nó a úsáideann an córas a áireamh. Níor cheart ualach ná moilleanna nach bhfuil gá leo a chruthú leis an gceanglas sin agus d’fhéadfadh sé a bheith leordhóthanach go ndéanfaí na fíoruithe ar leithligh a dhéanann na daoine éagsúla a thaifeadadh go huathoibríoch sna logaí arna nginiúint ag an gcóras. I bhfianaise shainiúlachtaí réimsí fhorfheidhmiú an dlí, na himirce, an rialaithe teorann agus an tearmainn, níor cheart feidhm a bheith ag an gceanglas sin i gcás ina measann dlí an Aontais nó an dlí náisiúnta go bhfuil cur i bhfeidhm an cheanglais sin díréireach.
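Purely as an illustration of the enhanced human oversight requirement sketched in recital (73), the following Python fragment (all names are hypothetical; nothing here is prescribed by the Regulation) shows one way a deployer-side tool might refuse to act on a biometric match until at least two distinct natural persons have separately verified it, with each verification recorded automatically in the system-generated logs, as the recital suggests may suffice.

    import logging
    from dataclasses import dataclass, field

    logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
    log = logging.getLogger("biometric_oversight")


    @dataclass
    class MatchRecord:
        """A candidate identification produced by a biometric identification system."""
        match_id: str
        subject_ref: str
        verifier_ids: set = field(default_factory=set)

        def record_verification(self, verifier_id: str) -> None:
            # Each separate verification is logged automatically; recital (73)
            # notes that such log entries could be sufficient evidence of the check.
            self.verifier_ids.add(verifier_id)
            log.info("match %s separately verified by %s", self.match_id, verifier_id)

        def may_act(self) -> bool:
            # No action or decision on the identification until at least two
            # distinct natural persons have verified and confirmed it.
            return len(self.verifier_ids) >= 2


    record = MatchRecord(match_id="m-001", subject_ref="candidate-42")
    record.record_verification("officer.a")
    print(record.may_act())   # False: only one verification so far
    record.record_verification("officer.b")
    print(record.may_act())   # True: two distinct verifiers recorded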
(74)
High-risk AI systems should perform consistently throughout their lifecycle and meet an appropriate level of accuracy, robustness and cybersecurity, in light of their intended purpose and in accordance with the generally acknowledged state of the art. The Commission and relevant organisations and stakeholders are encouraged to take due consideration of the mitigation of risks and the negative impacts of the AI system. The expected level of performance metrics should be declared in the accompanying instructions of use. Providers are urged to communicate that information to deployers in a clear and easily understandable way, free of misunderstandings or misleading statements. Union law on legal metrology, including Directives 2014/31/EU (35) and 2014/32/EU (36) of the European Parliament and of the Council, aims to ensure the accuracy of measurements and to help the transparency and fairness of commercial transactions. In that context, in cooperation with relevant stakeholders and organisations, such as metrology and benchmarking authorities, the Commission should encourage, as appropriate, the development of benchmarks and measurement methodologies for AI systems. In doing so, the Commission should take note of, and collaborate with, international partners working on metrology and relevant measurement indicators relating to AI.
(74)
Ba cheart do chórais intleachta saorga ardriosca feidhmiú go comhsheasmhach ar feadh a saolré agus leibhéal iomchuí cruinnis, stóinseachta agus cibearshlándála a bhaint amach, i bhfianaise na críche atá beartaithe dóibh agus i gcomhréir leis an úrscothacht a nglactar léi i gcoitinne. Moltar don Choimisiún agus d’eagraíochtaí agus geallsealbhóirí ábhartha aird chuí a thabhairt ar mhaolú rioscaí agus ar thionchair dhiúltacha an chórais intleachta saorga. Ba cheart an leibhéal méadrachtaí feidhmíochta a bhfuiltear ag súil leis a dhearbhú sna treoracha úsáide a ghabhann leis. Iarrtar ar sholáthraithe an fhaisnéis sin a chur in iúl d’úsáideoirí gairmiúla ar bhealach soiléir sothuigthe a bheidh saor ó mhíthuiscintí nó ó ráitis mhíthreoracha. Is é is aidhm do dhlí an Aontais maidir leis an méadreolaíocht dhlíthiúil, lena n-áirítear Treoracha 2014/31/AE (35) agus 2014/32/AE (36) ó Pharlaimint na hEorpa agus ón gComhairle, cruinneas na dtomhas a áirithiú agus cuidiú le trédhearcacht agus cothroime idirbheart tráchtála. Sa chomhthéacs sin, ba cheart don Choimisiún, agus é ag obair i gcomhar le geallsealbhóirí agus eagraíochtaí ábhartha, amhail údaráis mhéadreolaíochta agus tagarmharcála, a mholadh go bhforbrófaí, de réir mar is iomchuí, tagarmharcanna agus modheolaíochtaí tomhais le haghaidh córais intleachta saorga. Agus é sin á dhéanamh aige, ba cheart don Choimisiún comhpháirtithe idirnáisiúnta atá ag obair ar an méadreolaíocht agus ar tháscairí tomhais ábhartha a bhaineann leis an intleacht shaorga a thabhairt dá aire agus comhoibriú leo.
(75)
Technical robustness is a key requirement for high-risk AI systems. They should be resilient in relation to harmful or otherwise undesirable behaviour that may result from limitations within the systems or the environment in which the systems operate (e.g. errors, faults, inconsistencies, unexpected situations). Therefore, technical and organisational measures should be taken to ensure the robustness of high-risk AI systems, for example by designing and developing appropriate technical solutions to prevent or minimise harmful or otherwise undesirable behaviour. Those technical solutions may include, for instance, mechanisms enabling the system to safely interrupt its operation (fail-safe plans) in the presence of certain anomalies or when operation takes place outside certain predetermined boundaries. Failure to protect against these risks could lead to safety impacts or negatively affect fundamental rights, for example due to erroneous decisions or wrong or biased outputs generated by the AI system.
(75)
Is ceanglas bunriachtanach é stóinseacht theicniúil le haghaidh córais intleachta saorga ardriosca. Ba cheart dóibh a bheith athléimneach i ndáil le hiompraíocht dhíobhálach nó neamh-inmhianaithe ar shlí eile a d’fhéadfadh a bheith mar thoradh ar shrianta laistigh de na córais nó den timpeallacht ina n-oibríonn na córais (e.g. earráidí, lochtanna, neamhréireachtaí, cásanna nach raibh coinne leo). Dá bhrí sin, ba cheart bearta teicniúla agus bearta eagraíochtúla a dhéanamh chun stóinseacht na gcóras intleachta saorga ardriosca a áirithiú, mar shampla trí réitigh theicniúla iomchuí a dhearadh agus a fhorbairt chun iompraíocht atá díobhálach nó atá neamh-inmhianaithe ar bhealach eile a chosc nó a íoslaghdú. D’fhéadfaí a áireamh ar na réitigh theicniúla sin, mar shampla, sásraí a chuireann ar chumas an chórais briseadh isteach go sábháilte ar a oibríocht (pleananna ‘slán i gcás teipe’) más ann d’aimhrialtachtaí áirithe nó nuair a sháraíonn an oibríocht teorainneacha réamhchinntithe áirithe. D’fhéadfadh sé go mbeadh tionchair ar an tsábháilteacht nó go mbeadh éifeachtaí diúltacha ar chearta bunúsacha mar thoradh ar mhainneachtain na rioscaí sin a chosaint, mar shampla, de bharr cinntí earráideacha nó aschuir mhíchearta nó chlaonta a ghin an córas intleachta saorga.
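As one hedged reading of the ‘fail-safe plans’ mentioned in recital (75), the short sketch below wraps model inference so that operation is safely interrupted when inputs are anomalous or fall outside predetermined boundaries; the bounds, class and function names are illustrative assumptions, not terms taken from the Regulation.

    class FailSafeInterrupt(Exception):
        """Raised when the system safely interrupts its own operation."""


    def fail_safe_predict(model, features, lower=0.0, upper=1.0):
        """Run inference only while inputs stay inside a predetermined envelope.

        `model` is any object exposing a predict(features) method; the numeric
        bounds stand in for whatever operating envelope the provider validated.
        """
        if any(x != x for x in features):  # NaN check: treat as an anomaly
            raise FailSafeInterrupt("anomalous input (NaN), operation interrupted")
        if any(not (lower <= x <= upper) for x in features):
            raise FailSafeInterrupt("input outside predetermined boundaries")
        return model.predict(features)


    class DummyModel:
        def predict(self, features):
            return sum(features) / len(features)


    model = DummyModel()
    print(fail_safe_predict(model, [0.2, 0.4, 0.9]))   # within boundaries
    try:
        fail_safe_predict(model, [0.2, 7.5])           # out of range -> interrupted
    except FailSafeInterrupt as exc:
        print("interrupted:", exc)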
(76)
Cybersecurity plays a crucial role in ensuring that AI systems are resilient against attempts by malicious third parties exploiting the system’s vulnerabilities to alter their use, behaviour or performance or to compromise their security properties. Cyberattacks against AI systems can leverage AI-specific assets, such as training data sets (e.g. data poisoning) or trained models (e.g. adversarial attacks or membership inference), or exploit vulnerabilities in the AI system’s digital assets or the underlying ICT infrastructure. To ensure a level of cybersecurity appropriate to the risks, suitable measures, such as security controls, should therefore be taken by the providers of high-risk AI systems, also taking into account, as appropriate, the underlying ICT infrastructure.
(76)
Tá ról ríthábhachtach ag an gcibearshlándáil chun a áirithiú go bhfuil na córais intleachta saorga athléimneach in aghaidh iarrachtaí a dhéanfadh tríú páirtithe mailíseacha úsáid, iompraíocht nó feidhmíocht na gcóras a athrú nó cur isteach ar a n-airíonna slándála agus leochaileachtaí an chórais á saothrú acu. Féadfaidh cibirionsaithe in aghaidh córais intleachta saorga sócmhainní sonracha intleachta saorga a ghiaráil, amhail tacair sonraí oiliúna (e.g. nimhiú sonraí) nó samhlacha oilte (e.g. ionsaithe sáraíochta nó infeireas ballraíochta), nó leochaileachtaí a shaothrú i sócmhainní digiteacha an chórais nó ina bhonneagar foluiteach TFC. Chun leibhéal cibearshlándála atá i gcomhréir leis na rioscaí a áirithiú, ba cheart do sholáthraithe na gcóras intleachta saorga ardriosca bearta oiriúnacha a dhéanamh, amhail rialuithe slándála, agus an bonneagar foluiteach TFC á cur san áireamh freisin de réir mar is iomchuí.
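Recital (76) names training-data poisoning among the AI-specific attack surfaces. One modest security control a provider might choose, among many others, is to pin cryptographic digests of approved training data sets and refuse to (re)train on files whose digest no longer matches; the file name and digest below are placeholders, not values from any real data set.

    import hashlib
    from pathlib import Path

    # Digests recorded when the data set was approved; the value below is a
    # placeholder (the SHA-256 of an empty file), not a real reference digest.
    APPROVED_DIGESTS = {
        "train_split.csv": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
    }


    def sha256_of(path: Path) -> str:
        digest = hashlib.sha256()
        with path.open("rb") as handle:
            for chunk in iter(lambda: handle.read(65536), b""):
                digest.update(chunk)
        return digest.hexdigest()


    def verify_training_data(directory: Path) -> None:
        """Refuse to proceed if any approved data set has been altered on disk."""
        for name, expected in APPROVED_DIGESTS.items():
            if sha256_of(directory / name) != expected:
                raise RuntimeError(f"{name}: digest mismatch, possible tampering")


    # verify_training_data(Path("data"))  # call before each (re)training run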
(77)
Without prejudice to the requirements related to robustness and accuracy set out in this Regulation, high-risk AI systems which fall within the scope of a regulation of the European Parliament and of the Council on horizontal cybersecurity requirements for products with digital elements may, in accordance with that regulation, demonstrate compliance with the cybersecurity requirements of this Regulation by fulfilling the essential cybersecurity requirements set out in that regulation. When high-risk AI systems fulfil the essential requirements of a regulation of the European Parliament and of the Council on horizontal cybersecurity requirements for products with digital elements, they should be deemed compliant with the cybersecurity requirements set out in this Regulation in so far as the achievement of those requirements is demonstrated in the EU declaration of conformity or parts thereof issued under that regulation. To that end, the assessment of the cybersecurity risks associated with a product with digital elements classified as a high-risk AI system according to this Regulation, carried out under a regulation of the European Parliament and of the Council on horizontal cybersecurity requirements for products with digital elements, should consider risks to the cyber resilience of an AI system as regards attempts by unauthorised third parties to alter its use, behaviour or performance, including AI-specific vulnerabilities such as data poisoning or adversarial attacks, as well as, as relevant, risks to fundamental rights as required by this Regulation.
(77)
Gan dochar do na ceanglais a bhaineann le stóinseacht agus cruinneas a leagtar amach sa Rialachán seo, maidir le córais intleachta saorga ardriosca a thagann faoi raon feidhme rialacháin ó Pharlaimint na hEorpa agus ón gComhairle maidir le ceanglais chothrománacha chibearshlándála le haghaidh táirgí ag a bhfuil eilimintí digiteacha, i gcomhréir leis an rialachán sin, féadfaidh siad comhlíonadh cheanglais chibearshlándála an Rialacháin seo a léiriú trí chomhlíonadh na gceanglas cibearshlándála fíor-riachtanach a leagtar amach sa rialachán sin. Nuair a chomhlíonann córais intleachta saorga ardriosca ceanglais fhíor-riachtanacha rialacháin ó Pharlaimint na hEorpa agus ón gComhairle maidir le ceanglais chothrománacha chibearshlándála le haghaidh táirgí ag a bhfuil eilimintí digiteacha, ba cheart a mheas go bhfuil siad i gcomhréir leis na ceanglais chibearshlándála a leagtar amach sa Rialachán seo a mhéid a léirítear gur baineadh amach na ceanglais sin sa dearbhú comhréireachta AE, nó i gcodanna de, arna eisiúint faoin rialachán sin. Chuige sin, sa mheasúnú ar na rioscaí cibearshlándála a bhaineann le táirge ag a bhfuil eilimintí digiteacha atá aicmithe, de réir an Rialacháin seo, mar chóras intleachta saorga ardriosca, ar measúnú é a dhéantar faoi rialachán ó Pharlaimint na hEorpa agus ón gComhairle maidir le ceanglais chothrománacha chibearshlándála le haghaidh táirgí ag a bhfuil eilimintí digiteacha, ba cheart rioscaí do chibear-athléimneacht córais intleachta saorga a mheas a mhéid a bhaineann le hiarrachtaí arna ndéanamh ag tríú páirtithe neamhúdaraithe úsáid, iompraíocht nó feidhmíocht an chórais a athrú, lena n-áirítear leochaileachtaí a bhaineann go sonrach leis an intleacht shaorga amhail nimhiú sonraí nó ionsaithe sáraíochta, chomh maith le, de réir mar is ábhartha, rioscaí do chearta bunúsacha mar a cheanglaítear leis an Rialachán seo.
(78)
The conformity assessment procedure provided by this Regulation should apply in relation to the essential cybersecurity requirements of a product with digital elements covered by a regulation of the European Parliament and of the Council on horizontal cybersecurity requirements for products with digital elements and classified as a high-risk AI system under this Regulation. However, this rule should not result in reducing the necessary level of assurance for critical products with digital elements covered by a regulation of the European Parliament and of the Council on horizontal cybersecurity requirements for products with digital elements. Therefore, by way of derogation from this rule, high-risk AI systems that fall within the scope of this Regulation and are also qualified as important and critical products with digital elements pursuant to a regulation of the European Parliament and of the Council on horizontal cybersecurity requirements for products with digital elements and to which the conformity assessment procedure based on internal control set out in an annex to this Regulation applies, are subject to the conformity assessment provisions of a regulation of the European Parliament and of the Council on horizontal cybersecurity requirements for products with digital elements insofar as the essential cybersecurity requirements of that regulation are concerned. In this case, for all the other aspects covered by this Regulation, the respective provisions on conformity assessment based on internal control set out in an annex to this Regulation should apply. Building on the knowledge and expertise of ENISA on cybersecurity policy and the tasks assigned to ENISA under Regulation (EU) 2019/881 of the European Parliament and of the Council (37), the Commission should cooperate with ENISA on issues related to the cybersecurity of AI systems.
(78)
Ba cheart feidhm a bheith ag an nós imeachta um measúnú comhréireachta dá bhforáiltear sa Rialachán seo maidir leis na ceanglais fhíor-riachtanacha chibearshlándála a bhaineann le táirge ag a bhfuil eilimintí digiteacha a chumhdaítear le rialachán ó Pharlaimint na hEorpa agus ón gComhairle maidir le ceanglais chothrománacha chibearshlándála le haghaidh táirgí ag a bhfuil eilimintí digiteacha agus atá aicmithe mar chóras intleachta saorga ardriosca faoin Rialachán seo. Níor cheart, áfach, laghdú ar an leibhéal dearbhaithe is gá le haghaidh táirgí criticiúla ag a bhfuil eilimintí digiteacha a chumhdaítear le rialachán ó Pharlaimint na hEorpa agus ón gComhairle maidir le ceanglais chothrománacha chibearshlándála le haghaidh táirgí ag a bhfuil eilimintí digiteacha a bheith mar thoradh ar an riail sin. Dá bhrí sin, de mhaolú ar an riail sin, maidir le córais intleachta saorga ardriosca a thagann faoi raon feidhme an Rialacháin seo agus atá cáilithe freisin mar tháirgí tábhachtacha criticiúla ag a bhfuil eilimintí digiteacha de bhun rialachán ó Pharlaimint na hEorpa agus ón gComhairle maidir le ceanglais chothrománacha chibearshlándála le haghaidh táirgí ag a bhfuil eilimintí digiteacha agus a bhfuil feidhm ag an nós imeachta um measúnú comhréireachta bunaithe ar rialú inmheánach maidir leo a leagtar amach in iarscríbhinn a ghabhann leis an Rialachán seo, tá siad faoi réir forálacha measúnaithe comhréireachta de rialachán ó Pharlaimint na hEorpa agus ón gComhairle maidir le ceanglais chothrománacha chibearshlándála le haghaidh táirgí ag a bhfuil eilimintí digiteacha a mhéid a bhaineann le ceanglais chibearshlándála fhíor-riachtanacha an rialacháin sin. Sa chás sin, maidir leis na gnéithe eile go léir a chumhdaítear leis an Rialachán seo, ba cheart feidhm a bheith ag na forálacha ábhartha maidir le measúnú comhréireachta bunaithe ar rialú inmheánach a leagtar amach in iarscríbhinn a ghabhann leis an Rialachán seo. Agus an t-eolas agus an saineolas atá ag ENISA maidir leis an mbeartas cibearshlándála agus na cúraimí cibearshlándála a shanntar dó faoi Rialachán (AE) 2019/881 ó Pharlaimint na hEorpa agus ón gComhairle (37), á chur san áireamh, ba cheart don Choimisiún comhoibriú le ENISA maidir le saincheisteanna a bhaineann le cibearshlándáil na gcóras intleachta saorga.
(79)
It is appropriate that a specific natural or legal person, defined as the provider, takes responsibility for the placing on the market or the putting into service of a high-risk AI system, regardless of whether that natural or legal person is the person who designed or developed the system.
(79)
Is iomchuí go nglacfadh duine sonrach nádúrtha nó dlítheanach, atá sainmhínithe mar an soláthraí, freagracht maidir le córas intleachta saorga ardriosca a chur ar an margadh nó a chur i mbun seirbhíse, gan beann ar cé acu arb é an duine nádúrtha nó dlítheanach sin an duine a dhear nó a d’fhorbair an córas nó nach ea.
(80)
As signatories to the United Nations Convention on the Rights of Persons with Disabilities, the Union and the Member States are legally obliged to protect persons with disabilities from discrimination and promote their equality, to ensure that persons with disabilities have access, on an equal basis with others, to information and communications technologies and systems, and to ensure respect for privacy for persons with disabilities. Given the growing importance and use of AI systems, the application of universal design principles to all new technologies and services should ensure full and equal access for everyone potentially affected by or using AI technologies, including persons with disabilities, in a way that takes full account of their inherent dignity and diversity. It is therefore essential that providers ensure full compliance with accessibility requirements, including Directive (EU) 2016/2102 of the European Parliament and of the Council (38) and Directive (EU) 2019/882. Providers should ensure compliance with these requirements by design. Therefore, the necessary measures should be integrated as much as possible into the design of the high-risk AI system.
(80)
Agus síniú curtha acu le Coinbhinsiún na Náisiún Aontaithe ar Chearta Daoine faoi Mhíchumas, tá oibleagáid dhlíthiúil ar an Aontas agus ar na Ballstáit daoine faoi mhíchumas a chosaint ar idirdhealú agus a gcomhionannas a chur chun cinn, chun a áirithiú go mbeidh rochtain ag daoine faoi mhíchumas, ar bhonn comhionann le daoine eile, ar theicneolaíochtaí agus córais faisnéise agus cumarsáide, agus chun a áirithiú go n-urramófar príobháideachas do dhaoine faoi mhíchumas. I bhfianaise thábhacht agus úsáid mhéadaitheach na gcóras intleachta saorga, le cur i bhfeidhm phrionsabail an deartha uilíoch maidir le gach teicneolaíocht agus seirbhís nua, ba cheart rochtain iomlán chothrom a áirithiú do gach duine a bhféadfadh teicneolaíochtaí intleachta saorga difear a dhéanamh dóibh nó a úsáideann teicneolaíochtaí intleachta saorga, lena n-áirítear daoine faoi mhíchumas, ar bhealach ina gcuirtear a ndínit bhunúsach agus a n-éagsúlacht bhunúsach san áireamh go hiomlán. Tá sé ríthábhachtach, dá bhrí sin, go n-áiritheoidh soláthraithe go gcomhlíonfaidh siad go hiomlán na ceanglais inrochtaineachta, lena n-áirítear Treoir (AE) 2016/2102 ó Pharlaimint na hEorpa agus ón gComhairle (38) agus Treoir (AE) 2019/882. Ba cheart do sholáthraithe comhlíonadh na gceanglas sin a áirithiú trí dhearadh. Dá bhrí sin, ba cheart na bearta is gá a chomhtháthú a mhéid is féidir i ndearadh an chórais intleachta saorga ardriosca.
(81)
The provider should establish a sound quality management system, ensure the accomplishment of the required conformity assessment procedure, draw up the relevant documentation and establish a robust post-market monitoring system. Providers of high-risk AI systems that are subject to obligations regarding quality management systems under relevant sectoral Union law should have the possibility to include the elements of the quality management system provided for in this Regulation as part of the existing quality management system provided for in that other sectoral Union law. The complementarity between this Regulation and existing sectoral Union law should also be taken into account in future standardisation activities or guidance adopted by the Commission. Public authorities which put into service high-risk AI systems for their own use may adopt and implement the rules for the quality management system as part of the quality management system adopted at a national or regional level, as appropriate, taking into account the specificities of the sector and the competences and organisation of the public authority concerned.
(81)
Ba cheart don soláthraí córas bainistíochta cáilíochta fónta a bhunú, a áirithiú go gcomhlíontar an nós imeachta um measúnú comhréireachta is gá, an doiciméadacht ábhartha a tharraingt suas agus córas faireacháin iarmhargaidh stóinseach a bhunú. Ba cheart an deis a bheith ag soláthraithe córas intleachta saorga ardriosca atá faoi réir oibleagáidí maidir le córais bainistíochta cáilíochta faoi dhlí earnálach ábhartha an Aontais eilimintí den chóras bainistíochta cáilíochta dá bhforáiltear sa Rialachán seo a áireamh mar chuid den chóras bainistíochta cáilíochta atá ann cheana dá bhforáiltear sa dlí earnálach eile sin de chuid an Aontais. Ba cheart an chomhlántacht idir an Rialachán seo agus dlí earnálach an Aontais atá ann cheana a chur san áireamh freisin i ngníomhaíochtaí caighdeánaithe nó i dtreoraíocht a ghlacfaidh an Coimisiún amach anseo. Féadfaidh na húdaráis phoiblí a chuireann córais intleachta saorga ardriosca i mbun seirbhíse le haghaidh a n-úsáide féin na rialacha maidir leis an gcóras bainistíochta cáilíochta a ghlacadh agus a chur chun feidhme mar chuid den chóras bainistíochta cáilíochta arna ghlacadh ar an leibhéal náisiúnta nó réigiúnach, de réir mar is iomchuí, agus sainiúlachtaí na hearnála agus inniúlachtaí agus eagrú an údaráis phoiblí lena mbaineann á gcur san áireamh.
(82)
To enable enforcement of this Regulation and create a level playing field for operators, and, taking into account the different forms of making available of digital products, it is important to ensure that, under all circumstances, a person established in the Union can provide authorities with all the necessary information on the compliance of an AI system. Therefore, prior to making their AI systems available in the Union, providers established in third countries should, by written mandate, appoint an authorised representative established in the Union. This authorised representative plays a pivotal role in ensuring the compliance of the high-risk AI systems placed on the market or put into service in the Union by those providers who are not established in the Union and in serving as their contact person established in the Union.
(82)
Chun gur féidir an Rialachán seo a fhorfheidhmiú agus chun cothrom na Féinne a chruthú le haghaidh oibreoirí, agus, ag cur san áireamh na bhfoirmeacha éagsúla chun táirgí digiteacha a chur ar fáil, tá sé tábhachtach a áirithiú, i ngach cás, gur féidir le duine atá bunaithe san Aontas an fhaisnéis uile is gá a sholáthar do na húdaráis maidir le comhlíontacht an chórais intleachta saorga. Dá bhrí sin, sula gcuirfidh siad a gcórais intleachta saorga ar fáil san Aontas, ba cheart soláthraithe atá bunaithe i dtríú tíortha ionadaí údaraithe atá bunaithe san Aontas a ainmniú, trí shainordú i scríbhinn. Tá ról ríthábhachtach ag an ionadaí údaraithe maidir le comhlíontacht a áirithiú i gcás na gcóras intleachta saorga ardriosca a chuireann na soláthraithe sin nach bhfuil bunaithe san Aontas ar an margadh nó i mbun seirbhíse san Aontas agus maidir le fónamh mar dhuine teagmhála atá bunaithe san Aontas.
(83)
In light of the nature and complexity of the value chain for AI systems and in line with the New Legislative Framework, it is essential to ensure legal certainty and facilitate the compliance with this Regulation. Therefore, it is necessary to clarify the role and the specific obligations of relevant operators along that value chain, such as importers and distributors who may contribute to the development of AI systems. In certain situations those operators could act in more than one role at the same time and should therefore fulfil cumulatively all relevant obligations associated with those roles. For example, an operator could act as a distributor and an importer at the same time.
(83)
I bhfianaise chineál agus chastacht an tslabhra luacha le haghaidh córais intleachta saorga agus i gcomhréir leis an gCreat Reachtach Nua, tá sé ríthábhachtach deimhneacht dhlíthiúil a áirithiú agus comhlíonadh an Rialacháin seo a éascú. Dá bhrí sin, is gá ról agus oibleagáidí sonracha na n-oibreoirí ábhartha feadh an tslabhra luacha sin a shoiléiriú, amhail allmhaireoirí agus dáileoirí a d’fhéadfadh rannchuidiú le forbairt na gcóras intleachta saorga. I gcásanna áirithe, d’fhéadfadh na hoibreoirí sin gníomhú i níos mó ná ról amháin ag an am céanna agus, dá bhrí sin, ba cheart dóibh na hoibleagáidí ábhartha uile a bhaineann leis na róil sin a chomhlíonadh go carnach. Mar shampla, d’fhéadfadh oibreoir gníomhú mar dháileoir agus mar allmhaireoir ag an am céanna.
(84)
To ensure legal certainty, it is necessary to clarify that, under certain specific conditions, any distributor, importer, deployer or other third party should be considered to be a provider of a high-risk AI system and therefore assume all the relevant obligations. This would be the case if that party puts its name or trademark on a high-risk AI system already placed on the market or put into service, without prejudice to contractual arrangements stipulating that the obligations are allocated otherwise. This would also be the case if that party makes a substantial modification to a high-risk AI system that has already been placed on the market or has already been put into service in such a way that it remains a high-risk AI system in accordance with this Regulation, or if it modifies the intended purpose of an AI system, including a general-purpose AI system, which has not been classified as high-risk and has already been placed on the market or put into service, in such a way that the AI system becomes a high-risk AI system in accordance with this Regulation. Those provisions should apply without prejudice to more specific provisions established in certain Union harmonisation legislation based on the New Legislative Framework, together with which this Regulation should apply. For example, Article 16(2) of Regulation (EU) 2017/745, establishing that certain changes should not be considered to be modifications of a device that could affect its compliance with the applicable requirements, should continue to apply to high-risk AI systems that are medical devices within the meaning of that Regulation.
(84)
Chun deimhneacht dhlíthiúil a áirithiú, is gá a shoiléiriú gur cheart, faoi choinníollacha sonracha áirithe, aon duine ar dáileoir, allmhaireoir, úsáideoir gairmiúil nó tríú páirtí eile é a mheas mar sholáthraí córais intleachta saorga ardriosca agus, dá bhrí sin, ba cheart dó na hoibleagáidí ábhartha uile a ghlacadh chuige féin. B’amhlaidh an cás dá gcuirfeadh an páirtí sin a ainm nó a thrádmharc ar chóras intleachta saorga ardriosca a cuireadh ar an margadh nó i mbun seirbhíse cheana féin, gan dochar do shocruithe conarthacha lena sonraítear go leithdháiltear na hoibleagáidí ar bhealach eile. Bheadh sé sin amhlaidh freisin dá ndéanfadh an páirtí sin modhnú substaintiúil ar chóras intleachta saorga ardriosca a cuireadh ar an margadh cheana féin nó a cuireadh i mbun seirbhíse cheana féin ar bhealach a fhágann go bhfuil sé fós ina chóras intleachta saorga ardriosca i gcomhréir leis an Rialachán seo, nó dá modhnódh sé an cuspóir atá beartaithe do chóras intleachta saorga, lena n-áirítear córas intleachta saorga ilchuspóireach, nár aicmíodh mar chóras ardriosca agus a cuireadh ar an margadh nó i mbun seirbhíse cheana féin, ar bhealach ina ndéanfaí córas ardriosca den chóras sin i gcomhréir leis an Rialachán seo. Ba cheart feidhm a bheith ag na forálacha sin gan dochar d’fhorálacha níos sonraí arna mbunú i gcuid áirithe de reachtaíocht an Aontais maidir le comhchuibhiú atá bunaithe ar an gCreat Nua Reachtach, ar in éineacht leo ba cheart feidhm a bheith ag an Rialachán seo. Mar shampla, bunaítear le hAirteagal 16(2) de Rialachán (AE) 2017/745 nár cheart a mheas, maidir le hathruithe áirithe, gur modhnuithe ar fheiste a d’fhéadfadh difear a dhéanamh do chomhlíonadh na gceanglas is infheidhme na hathruithe sin, agus ba cheart feidhm a bheith ag an Airteagal sin fós maidir le córais intleachta saorga ardriosca ar feistí leighis iad de réir bhrí an Rialacháin sin.
(85)
General-purpose AI systems may be used as high-risk AI systems by themselves or be components of other high-risk AI systems. Therefore, due to their particular nature and in order to ensure a fair sharing of responsibilities along the AI value chain, the providers of such systems should, irrespective of whether they may be used as high-risk AI systems as such by other providers or as components of high-risk AI systems and unless provided otherwise under this Regulation, closely cooperate with the providers of the relevant high-risk AI systems to enable their compliance with the relevant obligations under this Regulation and with the competent authorities established under this Regulation.
(85)
Féadfar córais intleachta saorga ilchuspóireacha a úsáid mar chórais intleachta saorga ardriosca leo féin nó mar chomhpháirteanna de chórais intleachta saorga ardriosca eile. Dá bhrí sin, mar gheall ar a gcineál ar leith agus chun comhroinnt chothrom na bhfreagrachtaí feadh an tslabhra luacha intleachta saorga a áirithiú, ba cheart do sholáthraithe na gcóras sin, gan beann ar cé acu is mar chórais ardriosca nó is mar chomhpháirteanna de chórais intleachta saorga ardriosca a d’fhéadfadh soláthraithe eile iad a úsáid, agus mura bhforáiltear a mhalairt faoin Rialachán seo, comhoibriú go dlúth le soláthraithe na gcóras intleachta saorga ardriosca ábhartha chun gur féidir leo na hoibleagáidí ábhartha faoin Rialachán seo a chomhlíonadh agus leis na húdaráis inniúla arna mbunú faoin Rialachán seo.
(86)
Where, under the conditions laid down in this Regulation, the provider that initially placed the AI system on the market or put it into service should no longer be considered to be the provider for the purposes of this Regulation, and when that provider has not expressly excluded the change of the AI system into a high-risk AI system, the former provider should nonetheless closely cooperate and make available the necessary information and provide the reasonably expected technical access and other assistance that are required for the fulfilment of the obligations set out in this Regulation, in particular regarding the compliance with the conformity assessment of high-risk AI systems.
(86)
I gcás, faoi na coinníollacha a leagtar síos sa Rialachán seo, nár cheart an soláthraí a chéadchuir an córas intleachta saorga ar an margadh nó i mbun seirbhíse a mheas a thuilleadh mar an soláthraí chun críocha an Rialacháin seo, agus i gcás nach bhfuil eisiata go sainráite ag an soláthraí sin go n-iompófaí an córas intleachta saorga ina chóras intleachta saorga ardriosca, mar sin féin, ba cheart don soláthraí mar a bhí comhoibriú go dlúth agus an fhaisnéis is gá a chur ar fáil agus an rochtain theicniúil agus an cúnamh eile a sholáthar a bhfuiltear ag súil léi go réasúnta agus is gá chun na hoibleagáidí a leagtar amach sa Rialachán seo a chomhlíonadh, go háirithe a mhéid a bhaineann leis an measúnú comhréireachta ar chórais intleachta saorga ardriosca a chomhlíonadh.
(87)
In addition, where a high-risk AI system that is a safety component of a product which falls within the scope of Union harmonisation legislation based on the New Legislative Framework is not placed on the market or put into service independently from the product, the product manufacturer defined in that legislation should comply with the obligations of the provider established in this Regulation and should, in particular, ensure that the AI system embedded in the final product complies with the requirements of this Regulation.
(87)
Ina theannta sin, i gcás ina bhfuil córas intleachta saorga ardriosca ina chomhpháirt sábháilteachta de tháirge a chumhdaítear faoi raon feidhme reachtaíocht chomhchuibhithe an Aontais atá bunaithe ar an gCreat Reachtach Nua agus nach gcuirtear an córas sin ar an margadh ná i mbun seirbhíse go neamhspleách ar an táirge, ba cheart do mhonaróir an táirge, mar a shainmhínítear sa reachtaíocht sin, oibleagáidí an tsoláthraí a bhunaítear sa Rialachán seo a chomhlíonadh agus ba cheart dó a áirithiú, go háirithe, go gcomhlíonann an córas intleachta saorga, atá leabaithe sa táirge deiridh, ceanglais an Rialacháin seo.
(88)
Along the AI value chain multiple parties often supply AI systems, tools and services but also components or processes that are incorporated by the provider into the AI system with various objectives, including the model training, model retraining, model testing and evaluation, integration into software, or other aspects of model development. Those parties have an important role to play in the value chain towards the provider of the high-risk AI system into which their AI systems, tools, services, components or processes are integrated, and should, by written agreement, provide that provider with the necessary information, capabilities, technical access and other assistance based on the generally acknowledged state of the art, in order to enable the provider to fully comply with the obligations set out in this Regulation, without compromising their own intellectual property rights or trade secrets.
(88)
Feadh an luachshlabhra intleachta saorga, is minic eintitis iomadúla ann a sholáthraíonn córais, uirlisí agus seirbhísí intleachta saorga ach freisin comhpháirteanna nó próisis a chorpraíonn an soláthraí sa chóras intleachta saorga agus cuspóirí éagsúla aige, lena n-áirítear oiliúint samhla, athoiliúint samhla, tástáil agus meastóireacht samhla, comhtháthú le bogearraí, nó gnéithe eile d’fhorbairt samhla. Tá ról tábhachtach ag na páirtithe sin sa slabhra luacha i leith sholáthraí an chórais intleachta saorga ardriosca ina ndéantar a gcórais, a n-uirlisí, a seirbhísí, a gcomhpháirteanna nó a bpróisis intleachta saorga a chomhtháthú, agus ba cheart dóibh, trí chomhaontú i scríbhinn, an fhaisnéis, na cumais, an rochtain theicniúil agus an cúnamh eile is gá a sholáthar don soláthraí sin bunaithe ar an úrscothacht a aithnítear i gcoitinne, chun a chur ar chumas an tsoláthraí sin na hoibleagáidí a leagtar amach sa Rialachán seo a chomhlíonadh go hiomlán, gan baint dá gcearta maoine intleachtúla ná dá rúin trádála féin.
(89)
Third parties making accessible to the public tools, services, processes, or AI components other than general-purpose AI models, should not be mandated to comply with requirements targeting the responsibilities along the AI value chain, in particular towards the provider that has used or integrated them, when those tools, services, processes, or AI components are made accessible under a free and open-source licence. Developers of free and open-source tools, services, processes, or AI components other than general-purpose AI models should be encouraged to implement widely adopted documentation practices, such as model cards and data sheets, as a way to accelerate information sharing along the AI value chain, allowing the promotion of trustworthy AI systems in the Union.
(89)
Maidir le tríú páirtithe a chuireann rochtain ar fáil don phobal d’uirlisí, seirbhísí, próisis, nó comhpháirteanna intleachta saorga cé is moite de shamhlacha ilchuspóireacha intleachta saorga, níor cheart sainordú a bheith orthu ceanglais a chomhlíonadh lena ndírítear ar na freagrachtaí feadh an tslabhra luacha intleachta saorga, go háirithe na cinn a dhírítear ar an soláthraí a d’úsáid nó a chomhtháthaigh na huirlisí, na seirbhísí, na próisis nó na comhpháirteanna intleachta saorga sin, i gcás ina gcuirfear ar fáil faoi cheadúnas saor agus oscailte iad. Ba cheart forbróirí uirlisí, seirbhísí, próiseas nó comhpháirteanna intleachta saorga atá saor agus ar nithe foinse oscailte iad ach nach samhlacha intleachta saorga ilchuspóireacha iad a spreagadh chun cleachtais doiciméadachta a bhfuil glacadh forleathan leo a chur chun feidhme, amhail cártaí samhla agus bileoga sonraí, mar bhealach chun dlús a chur le comhroinnt faisnéise feadh an tslabhra luacha intleachta saorga, rud a cheadódh cur chun cinn na gcóras intleachta saorga iontaofa san Aontas.
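Recital (89) encourages developers of freely shared AI components to adopt widely used documentation practices such as model cards and data sheets. A minimal sketch of such a card as structured data could look as follows; every field name and value is an illustrative assumption rather than a format required by the Regulation.

    from dataclasses import dataclass, field


    @dataclass
    class ModelCard:
        """A minimal, illustrative model-card record for a shared AI component."""
        name: str
        intended_uses: list
        out_of_scope_uses: list
        training_data_summary: str
        evaluation_metrics: dict
        known_limitations: list = field(default_factory=list)
        licence: str = "Apache-2.0"


    card = ModelCard(
        name="example-sentiment-classifier",
        intended_uses=["sentiment analysis of product reviews"],
        out_of_scope_uses=["medical triage", "credit scoring"],
        training_data_summary="public review corpora collected up to 2023",
        evaluation_metrics={"accuracy": 0.91, "f1": 0.89},
        known_limitations=["reduced accuracy on code-switched text"],
    )
    print(card.name, card.evaluation_metrics)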
(90)
The Commission could develop and recommend voluntary model contractual terms between providers of high-risk AI systems and third parties that supply tools, services, components or processes that are used or integrated in high-risk AI systems, to facilitate the cooperation along the value chain. When developing voluntary model contractual terms, the Commission should also take into account possible contractual requirements applicable in specific sectors or business cases.
(90)
D’fhéadfadh an Coimisiún téarmaí conarthacha eiseamláireacha deonacha a fhorbairt agus a mholadh idir soláthraithe córas intleachta saorga ardriosca agus tríú páirtithe a sholáthraíonn uirlisí, seirbhísí, comhpháirteanna nó próisis a úsáidtear nó a chomhtháthaítear i gcórais intleachta saorga ardriosca, chun an comhar feadh an tslabhra luacha a éascú. Nuair a bheidh na téarmaí conarthacha eiseamláireacha deonacha á bhforbairt aige, ba cheart don Choimisiún ceanglais chonarthacha a chur san áireamh a d’fhéadfadh a bheith infheidhme in earnálacha sonracha nó i gcásanna gnó sonracha.
(91)
Given the nature of AI systems and the risks to safety and fundamental rights possibly associated with their use, including as regards the need to ensure proper monitoring of the performance of an AI system in a real-life setting, it is appropriate to set specific responsibilities for deployers. Deployers should in particular take appropriate technical and organisational measures to ensure they use high-risk AI systems in accordance with the instructions of use and certain other obligations should be provided for with regard to monitoring of the functioning of the AI systems and with regard to record-keeping, as appropriate. Furthermore, deployers should ensure that the persons assigned to implement the instructions for use and human oversight as set out in this Regulation have the necessary competence, in particular an adequate level of AI literacy, training and authority to properly fulfil those tasks. Those obligations should be without prejudice to other deployer obligations in relation to high-risk AI systems under Union or national law.
(91)
I bhfianaise chineál na gcóras intleachta saorga agus na rioscaí maidir leis an tsábháilteacht agus cearta bunúsacha a d’fhéadfadh a bheith ag baint le húsáid na gcóras sin, lena n-áirítear a mhéid a bhaineann leis an ngá faireachán iomchuí ar fheidhmíocht córais intleachta saorga i suíomh fíorshaoil a áirithiú, is iomchuí freagrachtaí sonracha a shocrú le haghaidh úsáideoirí gairmiúla. Ba cheart, go háirithe, d’úsáideoirí gairmiúla bearta teicniúla agus eagrúcháin iomchuí a dhéanamh chun a áirithiú go n-úsáidfidh siad na córais intleachta saorga ardriosca i gcomhréir leis na treoracha maidir le húsáid agus ba cheart a fhoráil maidir le hoibleagáidí eile a mhéid a bhaineann le faireachán a dhéanamh ar fheidhmiú na gcóras intleachta saorga agus a mhéid a bhaineann le coimeád taifead, de réir mar is iomchuí. Thairis sin, ba cheart d’úsáideoirí gairmiúla a áirithiú go bhfuil an inniúlacht is gá ag na daoine a fhágtar orthu na treoracha úsáide agus formhaoirseachta daonna a chur chun feidhme mar a leagtar amach sa Rialachán seo, go háirithe leibhéal leordhóthanach litearthachta intleachta saorga, oiliúna agus údaráis chun na cúraimí sin a chomhlíonadh i gceart. Ba cheart na hoibleagáidí sin a bheith gan dochar d’oibleagáidí eile ar úsáideoirí gairmiúla i ndáil le córais intleachta saorga ardriosca faoi dhlí an Aontais nó faoin dlí náisiúnta.
(92)
This Regulation is without prejudice to obligations for employers to inform or to inform and consult workers or their representatives under Union or national law and practice, including Directive 2002/14/EC of the European Parliament and of the Council (39), on decisions to put into service or use AI systems. It remains necessary to ensure that workers and their representatives are informed of the planned deployment of high-risk AI systems at the workplace where the conditions for those information or information and consultation obligations in other legal instruments are not fulfilled. Moreover, such an information right is ancillary and necessary to the objective of protecting fundamental rights that underlies this Regulation. Therefore, an information requirement to that effect should be laid down in this Regulation, without affecting any existing rights of workers.
(92)
Tá an Rialachán seo gan dochar do na hoibleagáidí ar fhostóirí a n-oibrithe a chur ar an eolas maidir le cinntí chun córais intleachta saorga a chur i mbun seirbhíse nó a úsáid, nó dul i gcomhairle leo nó lena n-ionadaithe faoi sin, ar oibleagáidí iad siúd atá orthu faoi dhlí agus cleachtas an Aontais nó faoi dhlí agus cleachtas náisiúnta, lena n-áirítear Treoir 2002/14/CE ó Pharlaimint na hEorpa agus ón gComhairle (39). Tá sé riachtanach fós a áirithiú go gcuirfear oibrithe agus a n-ionadaithe ar an eolas maidir le cur in úsáid beartaithe córas intleachta saorga ardriosca san ionad oibre i gcás nach gcomhlíontar na coinníollacha maidir leis na hoibleagáidí faisnéise nó na hoibleagáidí faisnéise agus comhairliúcháin sin in ionstraimí dlí eile. Thairis sin, tá an ceart faisnéise sin coimhdeach agus riachtanach maidir leis an gcuspóir atá mar bhonn leis an Rialachán seo, is é sin cearta bunúsacha a chosaint. Dá bhrí sin, ba cheart ceanglas faisnéise chuige sin a leagan síos sa Rialachán seo, gan difear a dhéanamh d’aon chearta atá ag oibrithe cheana.
(93)
Whilst risks related to AI systems can result from the way such systems are designed, risks can also stem from how such AI systems are used. Deployers of high-risk AI systems therefore play a critical role in ensuring that fundamental rights are protected, complementing the obligations of the provider when developing the AI system. Deployers are best placed to understand how the high-risk AI system will be used concretely and can therefore identify potential significant risks that were not foreseen in the development phase, due to a more precise knowledge of the context of use and of the persons or groups of persons likely to be affected, including vulnerable groups. Deployers of high-risk AI systems listed in an annex to this Regulation also play a critical role in informing natural persons and should, when they make decisions or assist in making decisions related to natural persons, where applicable, inform the natural persons that they are subject to the use of the high-risk AI system. This information should include the intended purpose and the type of decisions the system makes. The deployer should also inform the natural persons about their right to an explanation provided under this Regulation. With regard to high-risk AI systems used for law enforcement purposes, that obligation should be implemented in accordance with Article 13 of Directive (EU) 2016/680.
(93)
Cé gur féidir rioscaí a bhaineann le córais intleachta saorga a bheith ann mar thoradh ar an gcaoi a ndeartar córais den sórt sin, is féidir rioscaí a bheith ann freisin ón gcaoi a n-úsáidtear na córais intleachta saorga sin. Dá bhrí sin, tá ról ríthábhachtach ag úsáideoirí gairmiúla córas intleachta saorga ardriosca i dtaobh a áirithiú go gcosnaítear cearta bunúsacha, rud a chomhlánaíonn oibleagáidí an tsoláthraí agus an córas intleachta saorga á fhorbairt. Is fearr is féidir leis na húsáideoirí gairmiúla tuiscint a fháil ar an gcaoi a n-úsáidfear an córas intleachta saorga ardriosca go nithiúil agus, dá bhrí sin, is féidir leo rioscaí suntasacha a d’fhéadfadh a bheith ann nár tuaradh sa chéim forbartha a shainaithint, mar gheall ar eolas níos beaichte ar chomhthéacs na húsáide, ar na daoine nó ar na grúpaí daoine ar dócha go ndéanfar difear dóibh, lena n-áirítear grúpaí leochaileacha. Tá ról ríthábhachtach acu siúd a úsáideann córais intleachta saorga ardriosca go gairmiúil agus a liostaítear in iarscríbhinn a ghabhann leis an Rialachán seo freisin maidir le daoine nádúrtha a chur ar an eolas agus ba cheart dóibh, nuair a dhéanann siad cinntí nó nuair a chuidíonn siad le cinntí a dhéanamh a bhaineann le daoine nádúrtha, i gcás inarb infheidhme, na daoine nádúrtha a chur ar an eolas go bhfuil córas intleachta saorga ardriosca á úsáid ina leith. Ba cheart a áireamh san fhaisnéis sin an chríoch atá beartaithe di agus an cineál cinntí a dhéanfaidh sí. Ba cheart don úsáideoir gairmiúil daoine nádúrtha a chur ar an eolas freisin maidir leis an gceart atá acu míniú a fháil faoin Rialachán seo. Maidir le córais intleachta saorga ardriosca a úsáidtear chun críocha fhorfheidhmiú an dlí, ba cheart an oibleagáid sin a chur chun feidhme i gcomhréir le hAirteagal 13 de Threoir (AE) 2016/680.
(94)
Any processing of biometric data involved in the use of AI systems for biometric identification for the purpose of law enforcement needs to comply with Article 10 of Directive (EU) 2016/680, which allows such processing only where strictly necessary, subject to appropriate safeguards for the rights and freedoms of the data subject, and where authorised by Union or Member State law. Such use, when authorised, also needs to respect the principles laid down in Article 4(1) of Directive (EU) 2016/680, including lawfulness, fairness and transparency, purpose limitation, accuracy and storage limitation.
(94)
Aon phróiseáil a dhéantar ar shonraí bithmhéadracha a bhaineann le húsáid córas intleachta saorga le haghaidh sainaithint bhithmhéadrach chun críoch fhorfheidhmiú an dlí, ní mór di Airteagal 10 de Threoir (AE) 2016/680 a chomhlíonadh, ar Airteagal é nach gceadaítear leis an phróiseáil sin ach amháin i gcás ina bhfuil géarghá leis, faoi réir coimircí iomchuí maidir le cearta agus saoirsí an ábhair sonraí, agus i gcás ina n-údaraítear í le dlí an Aontais nó le dlí Ballstáit. Maidir leis an úsáid sin, nuair a údaraítear í, ní mór di na prionsabail a leagtar síos in Airteagal 4(1) de Threoir (AE) 2016/680 a urramú freisin, lena n-áirítear dlíthiúlacht, cothroime agus trédhearcacht, teorannú de réir cuspóra, cruinneas agus teorannú stórála.
(95)
Without prejudice to applicable Union law, in particular Regulation (EU) 2016/679 and Directive (EU) 2016/680, considering the intrusive nature of post-remote biometric identification systems, the use of post-remote biometric identification systems should be subject to safeguards. Post-remote biometric identification systems should always be used in a way that is proportionate, legitimate and strictly necessary, and thus targeted, in terms of the individuals to be identified, the location and the temporal scope, and based on a closed data set of legally acquired video footage. In any case, post-remote biometric identification systems should not be used in the framework of law enforcement in a way that leads to indiscriminate surveillance. The conditions for post-remote biometric identification should in any case not provide a basis to circumvent the conditions of the prohibition and strict exceptions for real-time remote biometric identification.
(95)
Gan dochar do dhlí an Aontais is infheidhme, go háirithe Rialachán (AE) 2016/679 agus Treoir (AE) 2016/680, i bhfianaise chineál ionrach na gcóras cian-sainaitheanta bithmhéadraí iar-aimseartha, ba cheart úsáid na gcóras sin a bheith faoi réir coimircí. Ba cheart córais cian-sainaitheanta bithmhéadraí iar-aimseartha a úsáid i gcónaí ar bhealach atá comhréireach, dlisteanach agus fíor-riachtanach, agus, dá bhrí sin, ba cheart iad a dhíriú, i dtéarmaí na ndaoine aonair atá le sainaithint, ar an suíomh, ar an raon feidhme ama agus a bheith bunaithe ar thacar sonraí iata d’fhís-sleachta a fuarthas go dleathach. I gcás ar bith, níor cheart córais cian-sainaitheanta bithmhéadraí iar-aimseartha a úsáid faoi chuimsiú fhorfheidhmiú an dlí ar bhealach a n-eascródh faireachas neamh-idirdhealaitheach as. I gcás ar bith, níor cheart do na coinníollacha maidir le cian-sainaithint bhithmhéadrach iar-aimseartha bunús a sholáthar chun dul timpeall ar choinníollacha an toirmisc agus ar na heisceachtaí dochta maidir le cian-sainaithint bhithmhéadrach fíor-ama.
(96)
In order to efficiently ensure that fundamental rights are protected, deployers of high-risk AI systems that are bodies governed by public law, or private entities providing public services and deployers of certain high-risk AI systems listed in an annex to this Regulation, such as banking or insurance entities, should carry out a fundamental rights impact assessment prior to putting the system into use. Services important for individuals that are of a public nature may also be provided by private entities. Private entities providing such public services are linked to tasks in the public interest such as in the areas of education, healthcare, social services, housing and the administration of justice. The aim of the fundamental rights impact assessment is for the deployer to identify the specific risks to the rights of individuals or groups of individuals likely to be affected and to identify measures to be taken in the case of a materialisation of those risks. The impact assessment should be performed prior to deploying the high-risk AI system, and should be updated when the deployer considers that any of the relevant factors have changed. The impact assessment should identify the deployer’s relevant processes in which the high-risk AI system will be used in line with its intended purpose, and should include a description of the period of time and frequency in which the system is intended to be used as well as of specific categories of natural persons and groups who are likely to be affected in the specific context of use. The assessment should also include the identification of specific risks of harm likely to have an impact on the fundamental rights of those persons or groups. While performing this assessment, the deployer should take into account information relevant to a proper assessment of the impact, including but not limited to the information given by the provider of the high-risk AI system in the instructions for use. In light of the risks identified, deployers should determine measures to be taken in the case of a materialisation of those risks, including, for example, governance arrangements in that specific context of use, such as arrangements for human oversight according to the instructions of use, or complaint handling and redress procedures, as they could be instrumental in mitigating risks to fundamental rights in concrete use-cases. After performing that impact assessment, the deployer should notify the relevant market surveillance authority. Where appropriate, to collect relevant information necessary to perform the impact assessment, deployers of high-risk AI systems, in particular when AI systems are used in the public sector, could involve relevant stakeholders, including the representatives of groups of persons likely to be affected by the AI system, independent experts, and civil society organisations in conducting such impact assessments and designing measures to be taken in the case of materialisation of the risks. The European Artificial Intelligence Office (AI Office) should develop a template for a questionnaire in order to facilitate compliance and reduce the administrative burden for deployers.
(96)
Chun a áirithiú go héifeachtúil go gcosnófar cearta bunúsacha, ba cheart d’úsáideoirí gairmiúla córas intleachta saorga ardriosca ar comhlachtaí iad a rialaítear leis an dlí poiblí, nó eintitis phríobháideacha a sholáthraíonn seirbhísí poiblí agus úsáideoirí gairmiúla córas intleachta saorga ardriosca áirithe a liostaítear in iarscríbhinn a ghabhann leis an Rialachán seo, amhail eintitis bhaincéireachta nó árachais, measúnú tionchair ceart bunúsach a dhéanamh ar an gcóras sula mbainfear úsáid as. Féadfaidh sé gur eintitis phríobháideacha iad na heintitis a sholáthraíonn seirbhísí atá tábhachtach do dhaoine aonair agus ar de chineál poiblí iad. Is cúraimí a bhaineann le leas an phobail amhail i réimsí an oideachais, an chúraim sláinte, na seirbhísí sóisialta, na tithíochta agus riar an cheartais na cúraimí a bhíonn ar eintitis phríobháideacha a sholáthraíonn seirbhísí poiblí den sórt sin. Is é is aidhm don mheasúnú tionchair ceart bunúsach go sainaithneoidh an t-úsáideoir gairmiúil na rioscaí sonracha do chearta daoine aonair nó grúpaí daoine aonair ar dócha go ndéanfar difear dóibh agus bearta a shainaithint a bheidh le déanamh i gcás ina dtiocfaidh na rioscaí sin chun cinn. Ba cheart an measúnú tionchair a dhéanamh sula gcuirfear an córas intleachta saorga ardriosca in úsáid, agus ba cheart é a thabhairt cothrom le dáta nuair a mheasann an t-úsáideoir gairmiúil go bhfuil athrú tagtha ar aon cheann de na tosca ábhartha. Ba cheart a shainaithint sa mheasúnú tionchair próisis ábhartha an úsáideora ghairmiúil ina n-úsáidfear an córas intleachta saorga ardriosca i gcomhréir leis an gcríoch atá beartaithe dó, agus ba cheart a áireamh ann tuairisc ar an tréimhse ama agus ar an minicíocht ina bhfuil sé beartaithe an córas a úsáid mar aon le catagóirí sonracha de dhaoine nádúrtha agus de ghrúpaí ar dócha go ndéanfar difear dóibh i gcomhthéacs sonrach na húsáide. Ba cheart a áireamh sa mheasúnú freisin sainaithint na rioscaí sonracha díobhála ar dócha go mbeidh tionchar acu ar chearta bunúsacha na ndaoine nó na ngrúpaí sin. Agus an measúnú sin á dhéanamh, ba cheart don úsáideoir gairmiúil faisnéis atá ábhartha do mheasúnú tionchair cuí a chur san áireamh, lena n-áirítear an fhaisnéis a thugann soláthraí an chórais intleachta saorga ardriosca sna treoracha úsáide, ach gan a bheith teoranta don fhaisnéis sin. I bhfianaise na rioscaí a sainaithníodh, ba cheart don úsáideoir gairmiúil na bearta a chinneadh a bheidh le déanamh i gcás ina dtiocfaidh na rioscaí sin chun cinn, lena n-áirítear, mar shampla, socruithe rialachais sa chomhthéacs sonrach úsáide sin, amhail socruithe maidir le formhaoirseacht dhaonna de réir na dtreoracha úsáide nó nósanna imeachta láimhseála gearán agus sásaimh, toisc go bhféadfaidís a bheith ríthábhachtach chun rioscaí do chearta bunúsacha i gcásanna úsáide nithiúla a mhaolú. Tar éis dó an measúnú tionchair sin a dhéanamh, ba cheart don úsáideoir gairmiúil fógra a thabhairt don údarás faireachais margaidh ábhartha. I gcás inarb iomchuí, chun faisnéis ábhartha a bhailiú a theastaíonn chun an measúnú tionchair a dhéanamh, d’fhéadfadh úsáideoirí gairmiúla córais intleachta saorga ardriosca, go háirithe nuair a úsáidtear córais intleachta saorga san earnáil phoiblí, ról a thabhairt do gheallsealbhóirí ábhartha, lena n-áirítear ionadaithe grúpaí daoine ar dócha go ndéanfaidh an córas intleachta saorga difear dóibh, saineolaithe neamhspleácha, agus eagraíochtaí na sochaí sibhialta, is é sin ról maidir leis na measúnuithe tionchair sin a dhéanamh agus bearta a dhearadh a bheidh le déanamh i gcás ina dtiocfaidh na rioscaí chun cinn. Ba cheart don Oifig Eorpach um an Intleacht Shaorga teimpléad a fhorbairt le haghaidh ceistneora chun comhlíonadh a éascú agus chun an t-ualach riaracháin ar úsáideoirí gairmiúla a laghdú.
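To make the content of the fundamental rights impact assessment in recital (96) more concrete, the sketch below gathers the elements the recital mentions (the deployer's processes, period and frequency of use, categories of persons affected, identified risks and mitigating measures) into a single hypothetical record; it is only an illustration and in no way anticipates the questionnaire template the AI Office is to develop.

    from dataclasses import dataclass


    @dataclass
    class FundamentalRightsImpactAssessment:
        """Illustrative record of the elements recital (96) asks a deployer to cover."""
        system_name: str
        deployer_processes: list          # processes in which the system is used
        period_and_frequency: str         # intended period and frequency of use
        affected_groups: list             # categories of natural persons and groups affected
        identified_risks: list            # specific risks of harm to fundamental rights
        mitigation_measures: list         # measures if those risks materialise
        oversight_arrangements: str = "human review as per the instructions of use"
        notified_authority: str = ""      # market surveillance authority notified afterwards


    fria = FundamentalRightsImpactAssessment(
        system_name="benefit-eligibility-scoring",
        deployer_processes=["pre-screening of benefit applications"],
        period_and_frequency="continuous use, reviewed quarterly",
        affected_groups=["applicants", "applicants with disabilities"],
        identified_risks=["indirect discrimination in eligibility scores"],
        mitigation_measures=["human review of negative decisions",
                             "complaint handling and redress procedure"],
    )
    fria.notified_authority = "relevant market surveillance authority"
    print(fria.system_name, len(fria.identified_risks))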
(97)
The notion of general-purpose AI models should be clearly defined and set apart from the notion of AI systems to enable legal certainty. The definition should be based on the key functional characteristics of a general-purpose AI model, in particular the generality and the capability to competently perform a wide range of distinct tasks. These models are typically trained on large amounts of data, through various methods, such as self-supervised, unsupervised or reinforcement learning. General-purpose AI models may be placed on the market in various ways, including through libraries, application programming interfaces (APIs), as a direct download, or as a physical copy. These models may be further modified or fine-tuned into new models. Although AI models are essential components of AI systems, they do not constitute AI systems on their own. AI models require the addition of further components, such as, for example, a user interface, to become AI systems. AI models are typically integrated into and form part of AI systems. This Regulation provides specific rules for general-purpose AI models and for general-purpose AI models that pose systemic risks, which should apply also when these models are integrated or form part of an AI system. It should be understood that the obligations for the providers of general-purpose AI models should apply once the general-purpose AI models are placed on the market. When the provider of a general-purpose AI model integrates its own model into its own AI system that is made available on the market or put into service, that model should be considered to be placed on the market and, therefore, the obligations in this Regulation for models should continue to apply in addition to those for AI systems. The obligations laid down for models should in any case not apply when the provider’s own model is used for purely internal processes that are not essential for providing a product or a service to third parties and the rights of natural persons are not affected. Considering their potentially significant negative effects, the general-purpose AI models with systemic risk should always be subject to the relevant obligations under this Regulation. The definition should not cover AI models used before their placing on the market for the sole purpose of research, development and prototyping activities. This is without prejudice to the obligation to comply with this Regulation when, following such activities, a model is placed on the market.
(97)
Ba cheart coincheap na samhlacha intleachta saorga ilchuspóireacha a shainiú go soiléir agus a scaradh ó choincheap na gcóras intleachta saorga chun gur féidir deimhneacht dhlíthiúil a bheith ann. Ba cheart an sainmhíniú a bheith bunaithe ar na príomh-shaintréithe feidhmiúla a bhaineann le samhail intleachta saorga ilchuspóireach, go háirithe an ghinearáltacht agus an cumas raon leathan cúraimí leithleacha a dhéanamh go hinniúil. Is iondúil go gcuirtear oiliúint ar na samhlacha sin le méideanna móra sonraí, trí mhodhanna éagsúla, amhail foghlaim fhéinmhaoirsithe, foghlaim neamh-mhaoirsithe nó foghlaim atreisiúcháin. Féadfar samhlacha intleachta saorga ilchuspóireacha a chur ar an margadh ar bhealaí éagsúla, lena n-áirítear trí leabharlanna, comhéadain feidhmchláir (APInna), mar íoslódáil dhíreach, nó mar chóip fhisiciúil. Féadfar na samhlacha sin a mhodhnú nó a mhionchoigeartú a thuilleadh chun samhlacha nua a dhéanamh díobh. Cé gur comhpháirteanna bunriachtanacha de chórais intleachta saorga iad samhlacha intleachta saorga, ní córais intleachta saorga iad iontu féin. Ní mór comhpháirteanna breise a chur le samhlacha intleachta saorga, amhail comhéadan úsáideora mar shampla, sula mbeidh siad ina gcórais intleachta saorga. De ghnáth, comhtháthaítear samhlacha intleachta saorga i gcórais intleachta saorga agus is comhpháirteanna de na córais sin iad. Déantar foráil sa Rialachán seo maidir le rialacha sonracha le haghaidh samhlacha intleachta saorga ilchuspóireacha agus le haghaidh samhlacha intleachta saorga ilchuspóireacha lena mbaineann rioscaí sistéamacha, ar cheart feidhm a bheith acu freisin nuair is samhlacha comhtháite i gcóras intleachta saorga iad nó nuair is comhpháirt de chóras intleachta saorga iad. Ba cheart a thuiscint gur cheart feidhm a bheith ag na hoibleagáidí ar sholáthraithe samhlacha intleachta saorga ilchuspóireacha a luaithe a chuirfear na samhlacha intleachta saorga ilchuspóireacha ar an margadh. I gcás ina gcomhtháthaíonn soláthraí samhla intleachta saorga ilchuspóirí samhail dá chuid féin ina chóras intleachta saorga féin a chuirtear ar fáil ar an margadh nó a chuirtear i mbun seirbhíse, ba cheart a mheas go bhfuil an tsamhail sin curtha ar an margadh agus, dá bhrí sin, ba cheart feidhm a bheith fós ag na hoibleagáidí sa Rialachán seo ar shamhlacha, sa bhreis orthu sin le haghaidh córais intleachta saorga. I gcás ar bith, níor cheart feidhm a bheith ag na hoibleagáidí a leagtar síos maidir le samhlacha nuair nach n-úsáidtear samhail dhílis ach amháin le haghaidh próisis inmheánacha nach bhfuil fíor-riachtanach chun táirge nó seirbhís a sholáthar do thríú páirtithe agus nach ndéanann difear do chearta daoine nádúrtha. I bhfianaise na n-éifeachtaí diúltacha suntasacha a d’fhéadfadh a bheith acu, ba cheart na samhlacha intleachta saorga ilchuspóireacha lena mbaineann riosca sistéamach a bheith faoi réir na n-oibleagáidí ábhartha faoin Rialachán seo i gcónaí. Níor cheart samhlacha intleachta saorga a úsáidtear sula gcuirtear ar an margadh iad a chumhdach sa sainmhíniú más rud é nach n-úsáidtear ach chun críche gníomhaíochtaí taighde, forbartha agus fréamhshamhaltaithe iad. Tá an méid sin gan dochar don oibleagáid an Rialachán seo a chomhlíonadh nuair a chuirtear samhail ar an margadh tar éis na ngníomhaíochtaí sin.
(98)
Whereas the generality of a model could, inter alia, also be determined by the number of parameters, models with at least a billion parameters and trained on a large amount of data using self-supervision at scale should be considered to display significant generality and to competently perform a wide range of distinctive tasks.
(98)
Cé go bhféadfaí roinnt pharaiméadar, inter alia, a úsáid, freisin, chun ginearáltacht samhla a chinneadh, ba cheart a mheas go bhfuil ginearáltacht shuntasach á léiriú agus raon leathan cúraimí sainiúla á ndéanamh go hinniúil ag samhlacha ag a bhfuil ar a laghad billiún paraiméadar agus oiliúint le méid mór sonraí ar a ndéantar féinmhaoirseacht ar mórscála.
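Purely as an illustration of the indicative presumption described in recital (98), the short sketch below shows how a provider might apply the billion-parameter indicator as a first, non-binding screening step. The function name and arguments are hypothetical assumptions; the Regulation does not prescribe any such check in code.

```python
# Illustrative sketch only: recital (98) describes an indicative presumption,
# not a codified test. The function name and arguments are hypothetical.

def presumed_significant_generality(parameter_count: int,
                                    trained_on_large_data: bool,
                                    self_supervised_at_scale: bool) -> bool:
    """Mirror the indicative presumption of recital (98): at least a billion
    parameters and training on a large amount of data using self-supervision
    at scale suggest significant generality."""
    return (parameter_count >= 1_000_000_000
            and trained_on_large_data
            and self_supervised_at_scale)

# Example: a 3-billion-parameter model pre-trained with self-supervision at scale.
print(presumed_significant_generality(3_000_000_000, True, True))  # True
```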
(99)
Large generative AI models are a typical example of a general-purpose AI model, given that they allow for flexible generation of content, such as in the form of text, audio, images or video, that can readily accommodate a wide range of distinctive tasks.
(99)
Is sampla tipiciúil de shamhail intleachta saorga ilchuspóireach an tsamhail mhór giniúna intleachta saorga, ós rud é go gceadaítear léi ábhar a ghiniúint ar bhealach solúbtha, amhail i bhfoirm téacs, fuaime, íomhánna nó físeáin, ar féidir leo freastal go héasca ar raon leathan cúraimí leithleacha.
(100)
When a general-purpose AI model is integrated into or forms part of an AI system, this system should be considered to be a general-purpose AI system when, due to this integration, it has the capability to serve a variety of purposes. A general-purpose AI system can be used directly, or it may be integrated into other AI systems.
(100)
I gcás ina bhfuil samhail intleachta saorga ilchuspóireach comhtháite i gcóras intleachta saorga nó inar comhpháirt de chóras intleachta saorga í, ba cheart an córas sin a mheas mar chóras intleachta saorga ilchuspóireach más rud é, mar gheall ar an gcomhtháthú sin, go bhfuil sé de chumas ag an gcóras sin freastal ar chuspóirí éagsúla. Féadfar córas intleachta saorga ilchuspóireach a úsáid go díreach, nó is féidir é a bheith comhtháite i gcórais intleachta saorga eile.
(101)
Providers of general-purpose AI models have a particular role and responsibility along the AI value chain, as the models they provide may form the basis for a range of downstream systems, often provided by downstream providers who need a good understanding of the models and their capabilities, both to enable the integration of such models into their products and to fulfil their obligations under this Regulation or other regulations. Therefore, proportionate transparency measures should be laid down, including the drawing up and keeping up to date of documentation, and the provision of information on the general-purpose AI model for its usage by the downstream providers. Technical documentation should be prepared and kept up to date by the general-purpose AI model provider for the purpose of making it available, upon request, to the AI Office and the national competent authorities. The minimum set of elements to be included in such documentation should be set out in specific annexes to this Regulation. The Commission should be empowered to amend those annexes by means of delegated acts in light of evolving technological developments.
(101)
Tá ról agus freagracht ar leith ag soláthraithe samhlacha intleachta saorga ilchuspóireacha feadh an tslabhra luacha intleachta saorga, cionn is go bhféadfadh na samhlacha a sholáthraíonn siad a bheith mar bhonn do raon córas iartheachtach ar minic is soláthraithe iartheachtacha a sholáthraíonn iad agus dar gá tuiscint mhaith ar na samhlacha agus ar a gcumais, chun comhtháthú na samhlacha sin ina dtáirgí a chumasú, agus chun a n-oibleagáidí faoin Rialachán seo nó faoi Rialacháin eile a chomhlíonadh. Dá bhrí sin, ba cheart bearta trédhearcachta comhréireacha a leagan síos, lena n-áirítear doiciméadacht a tharraingt suas agus a choinneáil cothrom le dáta, agus faisnéis a sholáthar maidir leis an tsamhail intleachta saorga ilchuspóireach maidir lena húsáid ag na soláthraithe iartheachtacha. Ba cheart do sholáthraí na samhla intleachta saorga ilchuspóirí an doiciméadacht theicniúil a ullmhú agus a choinneáil cothrom le dáta chun í a chur ar fáil, arna iarraidh sin, ag an Oifig intleachta saorga agus ag na húdaráis inniúla náisiúnta. Ba cheart an tacar íosta eilimintí atá le háireamh sa doiciméadacht sin a leagan amach in iarscríbhinní sonracha a ghabhann leis an Rialachán seo. Ba cheart an chumhacht a thabhairt don Choimisiún na hiarscríbhinní sin a leasú trí bhíthin gníomhartha tarmligthe i bhfianaise forbairtí teicneolaíocha a thiocfaidh chun cinn.
(102)
Software and data, including models, released under a free and open-source licence that allows them to be openly shared and where users can freely access, use, modify and redistribute them or modified versions thereof, can contribute to research and innovation in the market and can provide significant growth opportunities for the Union economy. General-purpose AI models released under free and open-source licences should be considered to ensure high levels of transparency and openness if their parameters, including the weights, the information on the model architecture, and the information on model usage are made publicly available. The licence should also be considered to be free and open-source when it allows users to run, copy, distribute, study, change and improve software and data, including models, under the condition that the original provider of the model is credited and that identical or comparable terms of distribution are respected.
(102)
Maidir le bogearraí agus sonraí, lena n-áirítear samhlacha, a scaoiltear faoi cheadúnas saor agus foinse oscailte lenar féidir iad a chomhroinnt go hoscailte agus i gcás inar féidir le húsáideoirí iad a rochtain, a úsáid, a mhodhnú agus a athdháileadh gan bhac, is é sin na samhlacha sin nó leaganacha modhnaithe díobh, is féidir leis na bogearraí agus na sonraí sin rannchuidiú le taighde agus nuálaíocht sa mhargadh agus deiseanna suntasacha fáis a chur ar fáil do gheilleagar an Aontais. Ba cheart breathnú ar shamhlacha intleachta saorga ilchuspóireacha a scaoiltear faoi cheadúnais shaora agus foinse oscailte chun ardleibhéil trédhearcachta agus oscailteachta a áirithiú má chuirtear ar fáil go poiblí a bparaiméadar, lena n-áirítear na hualaí, an fhaisnéis maidir le hailtireacht na samhla, agus an fhaisnéis maidir le húsáid na samhla. Ba cheart a mheas gur ceadúnas saor agus foinse oscailte é freisin nuair a cheadaíonn sé d’úsáideoirí bogearraí agus sonraí a rith, a chóipeáil, a dháileadh, staidéar a dhéanamh orthu, iad a athrú agus iad a fheabhsú, lena n-áirítear samhlacha ar an gcoinníoll go dtugtar creidiúint do sholáthraí bunaidh na samhla agus go n-urramaítear na téarmaí dáilte comhionanna nó inchomparáide.
(103)
Free and open-source AI components cover the software and data, including models and general-purpose AI models, tools, services or processes of an AI system. Free and open-source AI components can be provided through different channels, including their development on open repositories. For the purposes of this Regulation, AI components that are provided against a price or otherwise monetised, including through the provision of technical support or other services, including through a software platform, related to the AI component, or the use of personal data for reasons other than exclusively for improving the security, compatibility or interoperability of the software, with the exception of transactions between microenterprises, should not benefit from the exceptions provided for free and open-source AI components. The fact of making AI components available through open repositories should not, in itself, constitute monetisation.
(103)
Tagraíonn comhpháirteanna córais intleachta saorga saor agus foinse oscailte do na bogearraí agus do na sonraí, lena n-áirítear samhlacha agus samhlacha, uirlisí, seirbhísí nó próisis intleachta saorga ilchuspóireacha de chuid córais intleachta saorga. Is féidir comhpháirteanna córais intleachta saorga saor agus foinse oscailte a sholáthar trí bhealaí éagsúla, lena n-áirítear iad a fhorbairt ar stórais oscailte. Chun críocha an Rialacháin seo, maidir le comhpháirteanna intleachta saorga a sholáthraítear ar phraghas nó a gcuirtear luach airgid orthu ar bhealach eile, lena n-áirítear trí thacaíocht theicniúil nó seirbhísí eile a sholáthar, lena n-áirítear trí ardán bogearraí, a bhaineann leis an gcomhpháirt de chóras intleachta saorga, nó úsáid sonraí pearsanta ar chúiseanna seachas go heisiach chun slándáil, comhoiriúnacht nó idir-inoibritheacht na mbogearraí a fheabhsú, cé is moite d’idirbhearta idir micrifhiontair, níor cheart dóibh tairbhe a bhaint as na heisceachtaí a sholáthraítear le comhpháirteanna córais intleachta saorga saor agus foinse oscailte. Níor cheart a thuiscint as comhpháirteanna córais intleachta saorga a chur ar fáil trí stórais oscailte go bhfuil luach airgid á chur orthu.
(104)
The providers of general-purpose AI models that are released under a free and open-source licence, and whose parameters, including the weights, the information on the model architecture, and the information on model usage, are made publicly available, should be subject to exceptions as regards the transparency-related requirements imposed on general-purpose AI models, unless they can be considered to present a systemic risk, in which case the circumstance that the model is transparent and accompanied by an open-source licence should not be considered to be a sufficient reason to exclude compliance with the obligations under this Regulation. In any case, given that the release of general-purpose AI models under a free and open-source licence does not necessarily reveal substantial information on the data set used for the training or fine-tuning of the model and on how compliance with copyright law was thereby ensured, the exception provided for general-purpose AI models from compliance with the transparency-related requirements should not concern the obligation to produce a summary about the content used for model training and the obligation to put in place a policy to comply with Union copyright law, in particular to identify and comply with the reservation of rights pursuant to Article 4(3) of Directive (EU) 2019/790 of the European Parliament and of the Council (40).
(104)
Soláthraithe samhlacha intleachta saorga ilchuspóireacha a scaoiltear faoi cheadúnas saor agus foinse oscailte, agus a ndéantar a bparaiméadair, lena n-áirítear na hualaí, an fhaisnéis maidir le hailtireacht na samhla, agus an fhaisnéis maidir le húsáid na samhla, a chur ar fáil go poiblí, ba cheart iad a bheith faoi réir eisceachtaí a mhéid a bhaineann leis na ceanglais maidir le trédhearcacht arna bhforchur ar shamhlacha intleachta saorga ilchuspóireacha, ach amháin más féidir a mheas go bhfuil riosca sistéamach ag baint leo, ar sa chás sin nár cheart a mheas gur cúis leordhóthanach í an tsamhail a bheith trédhearcach agus ceadúnas foinse oscailte a bheith ag gabháil léi chun comhlíonadh na n-oibleagáidí faoin Rialachán seo a eisiamh. In aon chás, ós rud é nach gá go nochtann scaoileadh samhlacha intleachta saorga ilchuspóireacha faoi cheadúnas saor agus foinse oscailte faisnéis shubstaintiúil maidir leis an tacar sonraí a úsáidtear chun an tsamhail a oiliúint nó a mhionchoigeartú agus maidir leis an gcaoi a ndearnadh comhlíonadh dhlí an chóipchirt a áirithiú dá bhrí sin, níor cheart, maidir leis an eisceacht dá bhforáiltear le haghaidh samhlacha intleachta saorga ilchuspóireacha ó chomhlíonadh na gceanglas a bhaineann le trédhearcacht, baint a bheith aici leis an oibleagáid achoimre a sholáthar ar an ábhar a úsáidtear le haghaidh oiliúint samhla ná leis an oibleagáid beartas a chur i bhfeidhm chun dlí cóipchirt an Aontais a chomhlíonadh, go háirithe chun forchoimeádas na gceart de bhun Airteagal 4(3) de Threoir (AE) 2019/790 ó Pharlaimint na hEorpa agus ón gComhairle a shainaithint agus a chomhlíonadh (40).
(105)
General-purpose AI models, in particular large generative AI models, capable of generating text, images, and other content, present unique innovation opportunities but also challenges to artists, authors and other creators, and to the way their creative content is created, distributed, used and consumed. The development and training of such models require access to vast amounts of text, images, videos and other data. Text and data mining techniques may be used extensively in this context for the retrieval and analysis of such content, which may be protected by copyright and related rights. Any use of copyright protected content requires the authorisation of the rightsholder concerned unless relevant copyright exceptions and limitations apply. Directive (EU) 2019/790 introduced exceptions and limitations allowing reproductions and extractions of works or other subject matter, for the purpose of text and data mining, under certain conditions. Under these rules, rightsholders may choose to reserve their rights over their works or other subject matter to prevent text and data mining, unless this is done for the purposes of scientific research. Where the right to opt out has been expressly reserved in an appropriate manner, providers of general-purpose AI models need to obtain an authorisation from rightsholders if they want to carry out text and data mining over such works.
(105)
Le samhlacha ilchuspóireacha intleachta saorga, go háirithe mórshamhlacha giniúnacha intleachta saorga, atá in ann téacs, íomhánna agus ábhar eile a ghiniúint, cuirtear deiseanna nuálaíochta uathúla ach dúshláin lena chois sin roimh ealaíontóirí, údair agus cruthaitheoirí eile agus faoin gcaoi a ndéantar a n-ábhar cruthaitheach a chruthú, a dháileadh, a úsáid agus a thomhailt. Chun na samhlacha sin a fhorbairt agus a oiliúint, ní mór rochtain a fháil ar mhéideanna ollmhóra de théacsanna, d’íomhánna, d’fhíseáin agus de shonraí eile. Féadfar teicnící mianadóireachta téacsanna agus sonraí a úsáid go forleathan sa chomhthéacs sin chun ábhar den sórt sin a aisghabháil agus a anailísiú, ábhar a d’fhéadfadh a bheith á chosaint le cóipcheart agus le cearta gaolmhara. Ní mór údarú a bheith faighte ó shealbhóir an chirt lena mbaineann chun aon úsáid a bhaint as ábhar atá faoi chosaint cóipchirt ach amháin má tá feidhm ag eisceachtaí agus teorainneacha cóipchirt ábhartha. Le Treoir (AE) 2019/790 tugtar isteach eisceachtaí agus teorainneacha lena gceadaítear saothair nó ábhar eile a atáirgeadh agus a asbhaint, chun críche mianadóireachta téacsanna agus sonraí, faoi choinníollacha áirithe. Faoi na rialacha sin, féadfaidh sealbhóirí cirt a roghnú a gcearta ar a saothair nó ar ábhar eile a fhorchoimeád chun mianadóireacht téacsanna agus sonraí a chosc, ach amháin más chun críoch taighde eolaíoch an mhianadóireacht sin. I gcás inar forchoimeádadh na cearta chun rogha an diúltaithe a úsáid go sainráite ar bhealach iomchuí, ní mór do sholáthraithe samhlacha intleachta saorga ilchuspóireacha údarú a fháil ó na sealbhóirí cirt más mian leo mianadóireacht téacsanna agus sonraí a dhéanamh sna saothair sin.
(106)
Providers that place general-purpose AI models on the Union market should ensure compliance with the relevant obligations in this Regulation. To that end, providers of general-purpose AI models should put in place a policy to comply with Union law on copyright and related rights, in particular to identify and comply with the reservation of rights expressed by rightsholders pursuant to Article 4(3) of Directive (EU) 2019/790. Any provider placing a general-purpose AI model on the Union market should comply with this obligation, regardless of the jurisdiction in which the copyright-relevant acts underpinning the training of those general-purpose AI models take place. This is necessary to ensure a level playing field among providers of general-purpose AI models where no provider should be able to gain a competitive advantage in the Union market by applying lower copyright standards than those provided in the Union.
(106)
Soláthraithe a chuireann samhlacha intleachta saorga ilchuspóireacha ar mhargadh an Aontais, ba cheart dóibh comhlíonadh na n-oibleagáidí ábhartha sa Rialachán seo a áirithiú. Chuige sin, ba cheart do sholáthraithe samhlacha intleachta saorga ilchuspóireacha beartas a chur i bhfeidhm chun dlí an Aontais maidir le cóipcheart agus cearta gaolmhara a chomhlíonadh, go háirithe chun forchoimeádas na gceart arna gcur in iúl ag sealbhóirí cirt de bhun Airteagal 4(3) de Threoir (AE) 2019/790 a shainaithint agus a chomhlíonadh. Aon soláthraí a chuireann samhail intleachta saorga ilchuspóireach ar mhargadh an Aontais, ba cheart dó an oibleagáid sin a chomhlíonadh, gan beann ar an dlínse ina ndéantar na gníomhartha atá ábhartha ó thaobh cóipchirt de agus atá mar bhonn agus mar thaca le hoiliúint na samhlacha intleachta saorga ilchuspóireacha sin. Tá sin riachtanach, chun cothrom iomaíochta a áirithiú idir soláthraithe samhlacha intleachta saorga ilchuspóireacha, nár cheart d’aon soláthraí a bheith in ann buntáiste iomaíoch a fháil i margadh an Aontais trí chaighdeáin chóipchirt atá níos ísle ná na caighdeáin dá bhforáiltear san Aontas a chur i bhfeidhm.
(107)
In order to increase transparency on the data that is used in the pre-training and training of general-purpose AI models, including text and data protected by copyright law, it is appropriate that providers of such models draw up and make publicly available a sufficiently detailed summary of the content used for training the general-purpose AI model. While taking into due account the need to protect trade secrets and confidential business information, this summary should be generally comprehensive in its scope instead of technically detailed, to make it easier for parties with legitimate interests, including copyright holders, to exercise and enforce their rights under Union law, for example by listing the main data collections or sets that went into training the model, such as large private or public databases or data archives, and by providing a narrative explanation about other data sources used. It is appropriate for the AI Office to provide a template for the summary, which should be simple, effective, and allow the provider to provide the required summary in narrative form.
(107)
Chun trédhearcacht a mhéadú maidir leis na sonraí a úsáidtear i réamhoiliúint agus oiliúint samhlacha intleachta saorga ilchuspóireacha, lena n-áirítear téacs agus sonraí atá á gcosaint ag dlí an chóipchirt, is leor go ndéanfaidh na soláthraithe samhlacha sin achoimre atá mionsonraithe go leordhóthanach a tharraingt suas agus a chur ar fáil go poiblí ar an ábhar a úsáidtear chun oiliúint a chur ar an tsamhail intleachta saorga ilchuspóireach. Agus aird chuí á tabhairt ar an ngá atá le rúin trádála agus faisnéis rúnda ghnó a chosaint, ba cheart an achoimre sin a bheith cuimsitheach go ginearálta ina raon feidhme seachas a bheith mionsonraithe go teicniúil chun éascú do pháirtithe a bhfuil leasanna dlisteanacha acu, lena n-áirítear na sealbhóirí cóipchirt, a gcearta a fheidhmiú agus a fhorfheidhmiú faoi dhlí an Aontais, mar shampla trí na príomhbhailiúcháin sonraí nó na príomhthacair sonraí a cuireadh faoi oiliúint a liostú, amhail bunachair sonraí mhóra phríobháideacha nó phoiblí nó cartlanna sonraí, agus trí mhíniú insinte a sholáthar faoi fhoinsí sonraí eile a úsáideadh. Is iomchuí don Oifig um Intleacht Shaorga teimpléad a sholáthar le haghaidh na hachoimre sin, ar cheart dó a bheith simplí, éifeachtach, agus a chur ar chumas an tsoláthraí an achoimre is gá a sholáthar i bhfoirm insinte.
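As a purely hypothetical illustration of the kind of "generally comprehensive rather than technically detailed" narrative summary described in recital (107), the sketch below assembles such a summary from a few illustrative fields. The field names and helper function are assumptions; the actual template is to be provided by the AI Office.

```python
# Hypothetical sketch: the real template will come from the AI Office; the
# structure and field names below are illustrative assumptions only.

training_content_summary = {
    "main_data_collections": [
        "Large publicly available web crawl (illustrative)",
        "Licensed news and book archives (illustrative)",
    ],
    "other_sources_narrative": (
        "Smaller curated collections of code and reference material were also "
        "used; individual works are not enumerated, in line with the "
        "protection of trade secrets and confidential business information."
    ),
    "copyright_policy_reference": (
        "Internal policy implementing Article 4(3) of Directive (EU) 2019/790"
    ),
}

def render_summary(summary: dict) -> str:
    """Render the illustrative summary in narrative form."""
    lines = ["Main data collections or sets used for training:"]
    lines += [f"  - {item}" for item in summary["main_data_collections"]]
    lines += ["Other data sources:", f"  {summary['other_sources_narrative']}"]
    lines += ["Copyright compliance:", f"  {summary['copyright_policy_reference']}"]
    return "\n".join(lines)

print(render_summary(training_content_summary))
```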
(108)
With regard to the obligations imposed on providers of general-purpose AI models to put in place a policy to comply with Union copyright law and make publicly available a summary of the content used for the training, the AI Office should monitor whether the provider has fulfilled those obligations without verifying or proceeding to a work-by-work assessment of the training data in terms of copyright compliance. This Regulation does not affect the enforcement of copyright rules as provided for under Union law.
(108)
Maidir leis na hoibleagáidí a fhorchuirtear ar sholáthraithe samhlacha intleachta saorga ilchuspóireacha chun beartas a chur i bhfeidhm chun dlí cóipchirt an Aontais a chomhlíonadh agus achoimre ar an ábhar a úsáidtear le haghaidh na hoiliúna a chur ar fáil go poiblí, ba cheart don Oifig um Intleacht Shaorga faireachán a dhéanamh ar cibé acu ar chomhlíon nó nár chomhlíon an soláthraí na hoibleagáidí sin gan measúnú ‘saothar ar shaothar’ ar na sonraí oiliúna ó thaobh comhlíonadh cóipchirt de a fhíorú nó leanúint ar aghaidh leis. Ní dhéanann an Rialachán seo difear d’fhorfheidhmiú rialacha cóipchirt dá bhforáiltear faoi dhlí an Aontais.
(109)
Compliance with the obligations applicable to the providers of general-purpose AI models should be commensurate and proportionate to the type of model provider, excluding the need for compliance for persons who develop or use models for non-professional or scientific research purposes, who should nevertheless be encouraged to voluntarily comply with these requirements. Without prejudice to Union copyright law, compliance with those obligations should take due account of the size of the provider and allow simplified ways of compliance for SMEs, including start-ups, that should not represent an excessive cost and not discourage the use of such models. In the case of a modification or fine-tuning of a model, the obligations for providers of general-purpose AI models should be limited to that modification or fine-tuning, for example by complementing the already existing technical documentation with information on the modifications, including new training data sources, as a means to comply with the value chain obligations provided in this Regulation.
(109)
Ba cheart comhlíonadh na n-oibleagáidí is infheidhme maidir le soláthraithe samhlacha intleachta saorga ilchuspóireacha a bheith i gcomhréir agus comhréireach leis an gcineál soláthraí samhla, gan an gá atá le comhlíonadh a chur san áireamh i gcás daoine a fhorbraíonn nó a úsáideann samhlacha chun críocha taighde neamhghairmiúil nó eolaíoch, ba cheart, mar sin féin, a spreagadh chun na ceanglais sin a chomhlíonadh go deonach. Gan dochar do dhlí cóipchirt an Aontais, ba cheart, agus na hoibleagáidí sin á gcomhlíonadh, aird chuí a thabhairt ar mhéid an tsoláthraí agus bealaí simplithe comhlíonta a cheadú do FBManna, lena n-áirítear gnólachtaí nuathionscanta, ar bealaí iad nár cheart a bheith ina gcostas iomarcach agus nár cheart dóibh úsáid na samhlacha sin a dhíspreagadh. I gcás modhnú nó mionchoigeartú ar shamhail, ba cheart na hoibleagáidí ar sholáthraithe samhlacha intleachta saorga ilchuspóireacha a theorannú don mhodhnú nó don mhionchoigeartú sin, mar shampla tríd an doiciméadacht theicniúil atá ann cheana a chomhlánú le faisnéis maidir leis na modhnuithe, lena n-áirítear foinsí sonraí oiliúna nua, mar mhodh chun na hoibleagáidí maidir leis an slabhra luacha dá bhforáiltear sa Rialachán seo a chomhlíonadh.
(110)
General-purpose AI models could pose systemic risks which include, but are not limited to, any actual or reasonably foreseeable negative effects in relation to major accidents, disruptions of critical sectors and serious consequences to public health and safety; any actual or reasonably foreseeable negative effects on democratic processes, public and economic security; the dissemination of illegal, false, or discriminatory content. Systemic risks should be understood to increase with model capabilities and model reach; they can arise along the entire lifecycle of the model and are influenced by conditions of misuse, model reliability, model fairness and model security, the level of autonomy of the model, its access to tools, novel or combined modalities, release and distribution strategies, the potential to remove guardrails and other factors. In particular, international approaches have so far identified the need to pay attention to risks from potential intentional misuse or unintended issues of control relating to alignment with human intent; chemical, biological, radiological, and nuclear risks, such as the ways in which barriers to entry can be lowered, including for weapons development, design acquisition, or use; offensive cyber capabilities, such as the ways in which vulnerability discovery, exploitation, or operational use can be enabled; the effects of interaction and tool use, including for example the capacity to control physical systems and interfere with critical infrastructure; risks from models making copies of themselves (‘self-replicating’) or training other models; the ways in which models can give rise to harmful bias and discrimination with risks to individuals, communities or societies; the facilitation of disinformation or harm to privacy, with threats to democratic values and human rights; and the risk that a particular event could lead to a chain reaction with considerable negative effects that could affect up to an entire city, an entire domain of activity or an entire community.
(110)
D’fhéadfadh rioscaí sistéamacha a bheith ag baint le samhlacha intleachta saorga ilchuspóireacha lena n-áirítear aon éifeachtaí diúltacha iarbhír nó measartha intuartha i ndáil le mórthionóiscí, cur isteach ar earnálacha criticiúla agus iarmhairtí tromchúiseacha ar an tsláinte phoiblí agus ar an tsábháilteacht phoiblí; aon éifeachtaí diúltacha iarbhír nó measartha intuartha ar phróisis dhaonlathacha, ar an tslándáil phoiblí agus eacnamaíoch; ábhar neamhdhleathach, bréagach nó idirdhealaitheach a scaipeadh, ach gan a bheith teoranta dóibh. Maidir le rioscaí sistéamacha, ba cheart a thuiscint go méadófaí iad le cumais na samhla agus le raon na samhla, gur féidir leo teacht chun cinn feadh shaolré iomlán na samhla, agus go mbeidh tionchar ag coinníollacha mí-úsáide, iontaofacht na samhla, cothroime na samhla agus slándáil na samhla, leibhéal neamhspleáchais na samhla, a rochtain ar uirlisí, módúlachtaí nua nó comhcheangailte, straitéisí scaoilte agus dáilte, an fhéidearthacht ráillí cosanta a bhaint agus fachtóirí eile orthu. Go háirithe, sainaithníodh i gcuir chuige idirnáisiúnta gur gá aird a thabhairt ar rioscaí a eascraíonn as mí-úsáid d’aon ghnó a d’fhéadfadh a bheith ann nó saincheisteanna rialaithe neamhbheartaithe a bhaineann le hailíniú le hintinn an duine; rioscaí ceimiceacha, bitheolaíocha, raideolaíocha agus núicléacha, amhail na bealaí inar féidir bacainní ar iontráil a ísliú, lena n-áirítear chun airm a fhorbairt, a dhearadh, a fháil nó a úsáid; cibearchumais ionsaitheacha, amhail na bealaí inar féidir aimsiú, saothrú nó úsáid oibríochtúil leochaileachta a chumasú; éifeachtaí na hidirghníomhaíochta agus úsáid uirlisí, lena n-áirítear, mar shampla, an cumas córais fhisiceacha a rialú agus cur isteach ar bhonneagar criticiúil; rioscaí ó shamhlacha maidir le cóipeanna a dhéanamh díobh féin nó ‘féinmhacasamhlú’ a dhéanamh nó oiliúint a chur ar shamhlacha eile; na bealaí ina bhféadfadh claontacht dhíobhálach agus idirdhealú díobhálach a bheith mar thoradh ar shamhlacha agus rioscaí a bheith ag baint leo sin do dhaoine aonair, do phobail nó do shochaithe; bréagaisnéis a éascú nó dochar a dhéanamh don phríobháideachas agus bagairtí mar gheall orthu sin ar luachanna daonlathacha agus ar chearta an duine; riosca go bhféadfadh imoibriú slabhrúil a bheith mar thoradh ar theagmhas ar leith a mbeadh éifeachtaí diúltacha suntasacha aige a d’fhéadfadh difear a dhéanamh do chathair iomlán, do ghníomhaíocht fearainn iomlán nó do phobal iomlán.
(111)
It is appropriate to establish a methodology for the classification of general-purpose AI models as general-purpose AI models with systemic risks. Since systemic risks result from particularly high capabilities, a general-purpose AI model should be considered to present systemic risks if it has high-impact capabilities, evaluated on the basis of appropriate technical tools and methodologies, or a significant impact on the internal market due to its reach. High-impact capabilities in general-purpose AI models mean capabilities that match or exceed the capabilities recorded in the most advanced general-purpose AI models. The full range of capabilities in a model could be better understood after its placing on the market or when deployers interact with the model. According to the state of the art at the time of entry into force of this Regulation, the cumulative amount of computation used for the training of the general-purpose AI model measured in floating point operations is one of the relevant approximations for model capabilities. The cumulative amount of computation used for training includes the computation used across the activities and methods that are intended to enhance the capabilities of the model prior to deployment, such as pre-training, synthetic data generation and fine-tuning. Therefore, an initial threshold of floating point operations should be set, which, if met by a general-purpose AI model, leads to a presumption that the model is a general-purpose AI model with systemic risks. This threshold should be adjusted over time to reflect technological and industrial changes, such as algorithmic improvements or increased hardware efficiency, and should be supplemented with benchmarks and indicators for model capability. To inform this, the AI Office should engage with the scientific community, industry, civil society and other experts. Thresholds, as well as tools and benchmarks for the assessment of high-impact capabilities, should be strong predictors of the generality and capabilities of general-purpose AI models and of their associated systemic risk, and could take into account the way the model will be placed on the market or the number of users it may affect. To complement this system, there should be a possibility for the Commission to take individual decisions designating a general-purpose AI model as a general-purpose AI model with systemic risk if it is found that such a model has capabilities or an impact equivalent to those captured by the set threshold. That decision should be taken on the basis of an overall assessment of the criteria for the designation of a general-purpose AI model with systemic risk set out in an annex to this Regulation, such as the quality or size of the training data set, the number of business and end users, its input and output modalities, its level of autonomy and scalability, or the tools it has access to. Upon a reasoned request of a provider whose model has been designated as a general-purpose AI model with systemic risk, the Commission should take the request into account and may decide to reassess whether the general-purpose AI model can still be considered to present systemic risks.
(111)
Is iomchuí modheolaíocht a bhunú chun samhlacha intleachta saorga ilchuspóireacha a aicmiú mar shamhail intleachta saorga ilchuspóireach a bhfuil rioscaí sistéamacha aici. Ós rud é go n-eascraíonn rioscaí sistéamacha as cumais an-ard, ba cheart a mheas go bhfuil rioscaí sistéamacha ag baint le samhail intleachta saorga ilchuspóireach má tá cumais ardtionchair aici, a ndéantar meastóireacht orthu ar bhonn uirlisí agus modheolaíochtaí teicniúla iomchuí, nó tionchar suntasach ar an margadh inmheánach mar gheall ar a rochtain. Ciallaíonn cumais ardtionchair i samhlacha intleachta saorga ilchuspóireacha cumais a mheaitseálann nó a sháraíonn na cumais a thaifeadtar sna samhlacha intleachta saorga ilchuspóireacha is forbartha. D’fhéadfaí raon iomlán na gcumas i samhail a thuiscint ar bhealach níos fearr tar éis í a chur ar an margadh nó nuair a idirghníomhaíonn úsáideoirí gairmiúla leis an tsamhail. De réir staid na teicneolaíochta tráth theacht i bhfeidhm an Rialacháin seo, tá méid carnach an ríomha a úsáidtear chun oiliúint a chur ar an tsamhail intleachta saorga ilchuspóireach a thomhaistear in oibríochtaí snámhphointe ar cheann de na neastacháin ábhartha le haghaidh chumais na samhla. Áirítear le méid an ríomha a úsáidtear le haghaidh oiliúna an ríomh a úsáidtear ar fud na ngníomhaíochtaí agus na modhanna atá beartaithe chun feabhas a chur ar chumais na samhla sula n-úsáidtear í, amhail réamhoiliúint, giniúint sonraí sintéiseacha agus mionchoigeartú. Dá bhrí sin, ba cheart tairseach tosaigh oibríochtaí snámhphointe a leagan síos, rud a fhágann, má chomhlíontar í le samhail intleachta saorga ilchuspóireach, go nglactar leis gur samhail intleachta saorga ilchuspóireach í an tsamhail a bhfuil rioscaí sistéamacha aici. Ba cheart an tairseach sin a choigeartú le himeacht ama chun athruithe teicneolaíocha agus tionsclaíocha a léiriú, ar nós feabhsuithe algartamacha nó éifeachtúlacht crua-earraí mhéadaithe, agus ba cheart í a fhorlíonadh le tagarmharcanna agus táscairí i gcomhair chumas na samhla. Chun bonn eolais a chur leis an méid sin, ba cheart don Oifig um Intleacht Shaorga dul i dteagmháil leis an bpobal eolaíochta, leis an tionscal, leis an tsochaí shibhialta agus le saineolaithe eile. Ba cheart tairseacha, chomh maith le huirlisí agus tagarmharcanna chun measúnú a dhéanamh ar chumais ardtionchair, a bheith ina réamhtháscairí láidre ar ghinearáltacht, ar chumais samhlacha intleachta saorga ilchuspóireacha agus ar an riosca sistéamach gaolmhar a bhaineann leo, agus d’fhéadfaidís an chaoi a gcuirfear an tsamhail ar an margadh nó líon na n-úsáideoirí a bhféadfadh sí difear a dhéanamh dóibh a chur san áireamh. Chun an córas sin a chomhlánú, ba cheart an deis a bheith ag an gCoimisiún cinntí aonair a dhéanamh lena n-ainmneofar samhail intleachta saorga ilchuspóireach mar shamhail intleachta saorga ilchuspóireach lena mbaineann riosca sistéamach má fhaightear go bhfuil cumais nó tionchar ag an tsamhail sin atá coibhéiseach leo siúd a ghabhtar leis an tairseach a leagtar síos. Ba cheart an cinneadh sin a dhéanamh ar bhonn measúnú foriomlán ar na critéir chun samhail intleachta saorga ilchuspóireach a ainmniú lena mbaineann riosca sistéamach a leagtar amach in iarscríbhinn a ghabhann leis an Rialachán seo, amhail cáilíocht nó méid an tacair sonraí oiliúna, líon na n-úsáideoirí gnó agus deiridh, a mhódúlachtaí ionchuir agus aschuir, a leibhéal neamhspleáchais agus inscálaitheachta, nó na huirlisí a bhfuil rochtain aige orthu. 
Ar iarraidh réasúnaithe a fháil ó sholáthraí ar ainmníodh a shamhail mar shamhail intleachta saorga ilchuspóireach a bhfuil riosca sistéamach aici, ba cheart don Choimisiún an iarraidh a chur san áireamh agus féadfaidh sé a chinneadh athmheasúnú a dhéanamh ar cibé acu is féidir nó nach féidir a mheas fós go bhfuil rioscaí sistéamacha ag baint leis an tsamhail intleachta saorga ilchuspóireach.
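To make the compute-based presumption of recital (111) more concrete, the following is a minimal sketch assuming the widely used rule of thumb that dense-transformer pre-training compute is roughly 6 × parameters × training tokens; the Regulation itself prescribes no estimation formula, and the 10^25 floating point operations figure is the initial threshold laid down in the operative part of this Regulation (Article 51).

```python
# Minimal sketch, assuming the common 6 * N * D approximation for dense
# transformer pre-training compute; the Regulation does not prescribe any
# estimation formula. 1e25 FLOP is the initial threshold set in Article 51.

THRESHOLD_FLOP = 1e25

def estimated_pretraining_flop(parameters: float, training_tokens: float) -> float:
    """Rough pre-training compute estimate (floating point operations)."""
    return 6.0 * parameters * training_tokens

# Example: a 70-billion-parameter model planned to train on 15 trillion tokens.
flop = estimated_pretraining_flop(70e9, 15e12)  # roughly 6.3e24 FLOP
# Fine-tuning, synthetic data generation and other pre-deployment activities
# would be added to this figure when assessing the cumulative amount.
print(f"Estimated pre-training compute: {flop:.2e} FLOP")
print("Presumption of systemic risk triggered" if flop > THRESHOLD_FLOP
      else "Below the initial threshold")
```

Because such an estimate can be derived from the upfront allocation of compute resources, it also illustrates why recital (112) expects providers to be able to notify the AI Office before training is completed.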
(112)
It is also necessary to clarify a procedure for the classification of a general-purpose AI model with systemic risks. A general-purpose AI model that meets the applicable threshold for high-impact capabilities should be presumed to be a general-purpose AI model with systemic risk. The provider should notify the AI Office at the latest two weeks after the requirements are met or it becomes known that a general-purpose AI model will meet the requirements that lead to the presumption. This is especially relevant in relation to the threshold of floating point operations because training of general-purpose AI models takes considerable planning which includes the upfront allocation of compute resources and, therefore, providers of general-purpose AI models are able to know if their model would meet the threshold before the training is completed. In the context of that notification, the provider should be able to demonstrate that, because of its specific characteristics, a general-purpose AI model exceptionally does not present systemic risks, and that it thus should not be classified as a general-purpose AI model with systemic risks. That information is valuable for the AI Office to anticipate the placing on the market of general-purpose AI models with systemic risks and the providers can start to engage with the AI Office early on. That information is especially important with regard to general-purpose AI models that are planned to be released as open-source, given that, after the open-source model release, necessary measures to ensure compliance with the obligations under this Regulation may be more difficult to implement.
(112)
Is iomchuí nós imeachta a shoiléiriú freisin chun samhlacha intleachta saorga ilchuspóireacha a aicmiú mar shamhail intleachta saorga ilchuspóireach a bhfuil rioscaí sistéamacha aici. Maidir le samhail intleachta saorga ilchuspóireach lena gcomhlíontar an tairseach is infheidhme le haghaidh cumais ardtionchair, ba cheart a thoimhdiú gur samhail intleachta saorga ilchuspóireach í lena mbaineann riosca sistéamach. Ba cheart don soláthraí fógra a thabhairt don Oifig um Intleacht Shaorga coicís ar a dhéanaí tar éis na ceanglais a chomhlíonadh nó tar éis dó a bheith ar an eolas go gcomhlíonfaidh samhail intleachta saorga ilchuspóireach na ceanglais as a n-eascraíonn an toimhde. Tá sé sin ábhartha go háirithe maidir le tairseach na n-oibríochtaí snámhphointe toisc go bhfuil pleanáil shuntasach i gceist le hoiliúint samhlacha intleachta saorga ilchuspóireacha lena n-áirítear leithdháileadh tosaigh acmhainní ríomhaireachta agus, dá bhrí sin, tá soláthraithe samhlacha intleachta saorga ilchuspóireacha in ann a fháil amach an gcomhlíonfadh a samhail an tairseach sula gcuirfí an oiliúint i gcrích. I gcomhthéacs an fhógra sin, ba cheart don soláthraí a bheith in ann a léiriú, mar gheall ar a saintréithe sonracha, nach mbaineann rioscaí sistéamacha go heisceachtúil le samhail intleachta saorga ilchuspóireach, agus nár cheart, dá bhrí sin, í a aicmiú mar shamhail intleachta saorga ilchuspóireach a bhfuil rioscaí sistéamacha aici. Tá an fhaisnéis sin luachmhar don Oifig um Intleacht Shaorga chun réamh-mheas a dhéanamh maidir le samhlacha intleachta saorga ilchuspóireacha a bhfuil rioscaí sistéamacha acu a chur ar an margadh agus is féidir leis na soláthraithe tosú ag plé leis an Oifig um Intleacht Shaorga go luath. Tá tábhacht ar leith ag baint leis an bhfaisnéis sin maidir le samhlacha intleachta saorga ilchuspóireacha a bhfuil sé beartaithe iad a scaoileadh mar fhoinse oscailte, ós rud é, tar éis scaoileadh na samhla foinse oscailte, go bhféadfadh sé a bheith níos deacra bearta riachtanacha a chur chun feidhme chun comhlíonadh na n-oibleagáidí faoin Rialachán seo a áirithiú.
(113)
If the Commission becomes aware of the fact that a general-purpose AI model meets the requirements to classify as a general-purpose AI model with systemic risk, which previously had either not been known or of which the relevant provider has failed to notify the Commission, the Commission should be empowered to designate it so. A system of qualified alerts should ensure that the AI Office is made aware by the scientific panel of general-purpose AI models that should possibly be classified as general-purpose AI models with systemic risk, in addition to the monitoring activities of the AI Office.
(113)
Má fhaigheann an Coimisiún amach go gcomhlíonann samhail intleachta saorga ilchuspóireach na ceanglais chun aicmiú mar shamhail intleachta saorga ilchuspóireach lena mbaineann riosca sistéamach, nach raibh ar eolas roimhe sin nó nár thug an soláthraí ábhartha fógra ina leith don Choimisiún, ba cheart é a bheith de chumhacht ag an gCoimisiún í a ainmniú amhlaidh. Le córas foláireamh cáilithe, ba cheart a áirithiú go gcuirfidh an painéal eolaíoch an Oifig um Intleacht Shaorga ar an eolas faoi shamhlacha intleachta saorga ilchuspóireacha ar cheart iad a aicmiú, b’fhéidir, mar shamhlacha intleachta saorga ilchuspóireacha lena mbaineann riosca sistéamach, sa bhreis ar ghníomhaíochtaí faireacháin na hOifige um Intleacht Shaorga.
(114)
The providers of general-purpose AI models presenting systemic risks should be subject, in addition to the obligations provided for providers of general-purpose AI models, to obligations aimed at identifying and mitigating those risks and ensuring an adequate level of cybersecurity protection, regardless of whether it is provided as a standalone model or embedded in an AI system or a product. To achieve those objectives, this Regulation should require providers to perform the necessary model evaluations, in particular prior to its first placing on the market, including conducting and documenting adversarial testing of models, also, as appropriate, through internal or independent external testing. In addition, providers of general-purpose AI models with systemic risks should continuously assess and mitigate systemic risks, including for example by putting in place risk-management policies, such as accountability and governance processes, implementing post-market monitoring, taking appropriate measures along the entire model’s lifecycle and cooperating with relevant actors along the AI value chain.
(114)
Ba cheart do sholáthraithe samhlacha intleachta saorga ilchuspóireacha a bhfuil rioscaí sistéamacha acu, sa bhreis ar na hoibleagáidí dá bhforáiltear maidir le soláthraithe samhlacha intleachta saorga ilchuspóireacha, a bheith faoi réir oibleagáidí arb é is aidhm dóibh na rioscaí sin a shainaithint agus a mhaolú agus leibhéal leordhóthanach cosanta cibearshlándála a áirithiú, gan beann ar cibé acu a sholáthraítear mar shamhail neamhspleách í nó atá sí leabaithe i gcóras intleachta saorga nó i dtáirge. Chun na cuspóirí sin a bhaint amach, ba cheart leis an Rialachán seo ceangal a chur ar sholáthraithe na meastóireachtaí samhla is gá a dhéanamh, go háirithe sula gcuirtear ar an margadh den chéad uair iad, lena n-áirítear tástáil sáraíochta samhlacha a dhéanamh agus a dhoiciméadú, de réir mar is iomchuí, trí thástáil inmheánach nó trí thástáil sheachtrach neamhspleách. Ina theannta sin, ba cheart do sholáthraithe samhlacha intleachta saorga ilchuspóireacha a bhfuil rioscaí sistéamacha acu rioscaí sistéamacha a mheasúnú agus a mhaolú ar bhonn leanúnach, lena n-áirítear, amhail trí bheartais bainistithe riosca a chur i bhfeidhm, ar nós próisis chuntasachta agus rialachais, faireachán iarmhargaidh a chur chun feidhme, bearta iomchuí a dhéanamh feadh shaolré iomlán na samhla agus comhoibriú le gníomhaithe ábhartha feadh shlabhra luacha na hintleachta saorga.
(115)
Providers of general-purpose AI models with systemic risks should assess and mitigate possible systemic risks. If, despite efforts to identify and prevent risks related to a general-purpose AI model that may present systemic risks, the development or use of the model causes a serious incident, the general-purpose AI model provider should without undue delay keep track of the incident and report any relevant information and possible corrective measures to the Commission and national competent authorities. Furthermore, providers should ensure an adequate level of cybersecurity protection for the model and its physical infrastructure, if appropriate, along the entire model lifecycle. Cybersecurity protection related to systemic risks associated with malicious use or attacks should duly consider accidental model leakage, unauthorised releases, circumvention of safety measures, and defence against cyberattacks, unauthorised access or model theft. That protection could be facilitated by securing model weights, algorithms, servers, and data sets, such as through operational security measures for information security, specific cybersecurity policies, adequate technical and established solutions, and cyber and physical access controls, appropriate to the relevant circumstances and the risks involved.
(115)
Ba cheart do sholáthraithe samhlacha intleachta saorga ilchuspóireacha a bhfuil rioscaí sistéamacha acu measúnú a dhéanamh ar rioscaí sistéamacha a d’fhéadfadh a bheith ann agus iad a mhaolú. Más rud é, in ainneoin iarrachtaí chun rioscaí a bhaineann le samhail intleachta saorga ilchuspóireach a bhféadfadh rioscaí sistéamacha a bheith aici a shainaithint agus a chosc, go bhfuil forbairt nó úsáid na samhla ina cúis le teagmhas tromchúiseach, ba cheart don soláthraí samhlacha intleachta saorga ilchuspóireacha súil a choinneáil ar an teagmhas gan moill mhíchuí agus aon fhaisnéis ábhartha agus aon bhearta ceartaitheacha a d’fhéadfadh a bheith ann a thuairisciú don Choimisiún agus do na húdaráis inniúla náisiúnta. Thairis sin, ba cheart do na soláthraithe leibhéal leordhóthanach cosanta cibearshlándála a áirithiú don tsamhail agus dá bonneagar fisiciúil, más iomchuí, feadh shaolré iomlán na samhla. Le cosaint chibearshlándála a bhaineann le rioscaí sistéamacha a ghabhann le húsáid mhailíseach nó ionsaithe, ba cheart aird chuí a thabhairt ar sceitheadh samhla de thaisme, scaoileadh neamhúdaraithe, teacht timpeall ar bhearta sábháilteachta, agus cosaint ar chibirionsaithe, rochtain neamhúdaraithe nó goid samhla. D’fhéadfaí an chosaint sin a éascú trí ualaí samhlacha, algartaim, freastalaithe agus tacair sonraí a dhaingniú, amhail trí bhearta slándála oibríochtúla le haghaidh slándáil faisnéise, beartais shonracha chibearshlándála, réitigh theicniúla leordhóthanacha agus réitigh bhunaithe leordhóthanacha, agus cibir-rialuithe agus rialuithe rochtana fisiciúla, atá iomchuí do na himthosca ábhartha agus do na rioscaí lena mbaineann.
(116)
The AI Office should encourage and facilitate the drawing up, review and adaptation of codes of practice, taking into account international approaches. All providers of general-purpose AI models could be invited to participate. To ensure that the codes of practice reflect the state of the art and duly take into account a diverse set of perspectives, the AI Office should collaborate with relevant national competent authorities, and could, where appropriate, consult with civil society organisations and other relevant stakeholders and experts, including the Scientific Panel, for the drawing up of such codes. Codes of practice should cover obligations for providers of general-purpose AI models and of general-purpose AI models presenting systemic risks. In addition, as regards systemic risks, codes of practice should help to establish a risk taxonomy of the type and nature of the systemic risks at Union level, including their sources. Codes of practice should also be focused on specific risk assessment and mitigation measures.
(116)
Ba cheart don Oifig um Intleacht Shaorga tarraingt suas, athbhreithniú agus oiriúnú cód cleachtais a spreagadh agus a éascú, agus cuir chuige idirnáisiúnta á gcur san áireamh. D’fhéadfaí cuireadh a thabhairt do gach soláthraí samhlacha intleachta saorga ilchuspóireacha a bheith rannpháirteach. Chun a áirithiú go léireoidh na cóid chleachtais an úrscothacht agus go gcuirfear san áireamh go cuí iontu tacar éagsúil peirspictíochtaí, ba cheart don Oifig um Intleacht Shaorga comhoibriú leis na húdaráis inniúla náisiúnta ábhartha, agus d’fhéadfadh sí, i gcás inarb iomchuí, dul i gcomhairle le heagraíochtaí na sochaí sibhialta agus le geallsealbhóirí agus saineolaithe ábhartha eile, lena n-áirítear an painéal eolaíoch, chun na cóid sin a tharraingt suas. Leis na cóid chleachtais, ba cheart na hoibleagáidí ar sholáthraithe samhlacha intleachta saorga ilchuspóireacha agus samhlacha intleachta saorga ilchuspóireacha a bhfuil rioscaí sistéamacha acu a chumhdach. Ina theannta sin, maidir le rioscaí sistéamacha, ba cheart do chóid chleachtais a bheith ina gcuidiú chun tacsanomaíocht riosca a bhunú de chineál agus de nádúr na rioscaí sistéamacha ar leibhéal an Aontais, lena n-áirítear a bhfoinsí. Ba cheart do chóid chleachtais a bheith dírithe freisin ar mheasúnú riosca sonrach agus ar bhearta maolaithe sonracha.
(117)
The codes of practice should represent a central tool for the proper compliance with the obligations provided for under this Regulation for providers of general-purpose AI models. Providers should be able to rely on codes of practice to demonstrate compliance with the obligations. By means of implementing acts, the Commission may decide to approve a code of practice and give it a general validity within the Union, or, alternatively, to provide common rules for the implementation of the relevant obligations, if, by the time this Regulation becomes applicable, a code of practice cannot be finalised or is not deemed adequate by the AI Office. Once a harmonised standard is published and assessed by the AI Office as suitable to cover the relevant obligations, compliance with a European harmonised standard should grant providers the presumption of conformity. Providers of general-purpose AI models should furthermore be able to demonstrate compliance using alternative adequate means, if codes of practice or harmonised standards are not available, or if they choose not to rely on those.
(117)
Ba cheart do na cóid chleachtais a bheith ina n-uirlis lárnach chun na hoibleagáidí ar sholáthraithe samhlacha intleachta saorga ilchuspóireacha dá bhforáiltear faoin Rialachán seo a chomhlíonadh go cuí. Ba cheart do sholáthraithe a bheith in ann brath ar na cóid chleachtais chun comhlíonadh na n-oibleagáidí a léiriú. Trí bhíthin gníomhartha cur chun feidhme, féadfaidh an Coimisiún cinneadh a dhéanamh cód cleachtais a fhormheas agus bailíocht ghinearálta a thabhairt dó laistigh den Aontas, nó, de rogha air sin, rialacha comhchoiteanna a sholáthar maidir le cur chun feidhme na n-oibleagáidí ábhartha, más rud é, faoin tráth a thiocfaidh an Rialachán seo chun bheith infheidhme, nach féidir cód cleachtais a thabhairt chun críche nó nach measann an Oifig um Intleacht Shaorga é a bheith leordhóthanach. A luaithe a fhoilseofar caighdeán comhchuibhithe agus a mheasfar go bhfuil sé oiriúnach chun go gcumhdóidh an Oifig um Intleacht Shaorga na hoibleagáidí ábhartha, ba cheart, le comhlíonadh caighdeáin chomhchuibhithe Eorpaigh, toimhde comhréireachta a dheonú do sholáthraithe. Thairis sin, ba cheart do sholáthraithe samhlacha intleachta saorga ilchuspóireacha a bheith in ann comhlíonadh a léiriú trí mhodhanna leordhóthanacha eile a úsáid, mura bhfuil na cóid chleachtais nó na caighdeáin chomhchuibhithe ar fáil, nó má roghnaíonn siad gan a bheith ag brath orthu sin.
(118)
This Regulation regulates AI systems and AI models by imposing certain requirements and obligations for relevant market actors that are placing them on the market, putting into service or use in the Union, thereby complementing obligations for providers of intermediary services that embed such systems or models into their services regulated by Regulation (EU) 2022/2065. To the extent that such systems or models are embedded into designated very large online platforms or very large online search engines, they are subject to the risk-management framework provided for in Regulation (EU) 2022/2065. Consequently, the corresponding obligations of this Regulation should be presumed to be fulfilled, unless significant systemic risks not covered by Regulation (EU) 2022/2065 emerge and are identified in such models. Within this framework, providers of very large online platforms and very large online search engines are obliged to assess potential systemic risks stemming from the design, functioning and use of their services, including how the design of algorithmic systems used in the service may contribute to such risks, as well as systemic risks stemming from potential misuses. Those providers are also obliged to take appropriate mitigating measures in observance of fundamental rights.
(118)
Leis an Rialachán seo, rialáiltear córais intleachta saorga agus samhlacha intleachta saorga trí cheanglais agus oibleagáidí áirithe a fhorchur ar ghníomhaithe margaidh ábhartha atá á gcur ar an margadh, á gcur i mbun seirbhíse nó á n-úsáid san Aontas, agus ar an gcaoi sin oibleagáidí do sholáthraithe seirbhísí idirghabhálacha a leabaíonn na córais nó na samhlacha sin ina seirbhísí a rialáiltear le Rialachán (AE) 2022/2065 a chomhlánú. A mhéid atá na córais nó na samhlacha sin leabaithe in ardáin an-mhór ar líne ainmnithe nó in innill chuardaigh an-mhór ar líne ainmnithe, tá siad faoi réir an chreata bainistithe riosca dá bhforáiltear i Rialachán (AE) 2022/2065. Dá bhrí sin, ba cheart a thoimhdiú go gcomhlíontar oibleagáidí comhfhreagracha an Rialacháin seo, mura rud é go dtagann rioscaí sistéamacha suntasacha chun cinn nach gcumhdaítear le Rialachán (AE) 2022/2065 agus go sainaithnítear iad sna samhlacha sin. Faoi chuimsiú an mhéid sin, tá sé d’oibleagáid ar sholáthraithe ardán an-mhór ar líne agus inneall cuardaigh an-mhór ar líne measúnú a dhéanamh ar rioscaí sistéamacha féideartha a eascraíonn as ceapadh, feidhmiú agus úsáid a gcuid seirbhísí, lena n-áirítear an chaoi a bhféadfadh ceapadh na gcóras algartamach a úsáidtear sa tseirbhís rannchuidiú leis na rioscaí sin, chomh maith le rioscaí sistéamacha a eascraíonn as mí-úsáidí féideartha. Tá sé d’oibleagáid ar na soláthraithe sin freisin bearta maolaitheacha iomchuí a dhéanamh agus cearta bunúsacha á n-urramú.
(119)
Considering the quick pace of innovation and the technological evolution of digital services in scope of different instruments of Union law in particular having in mind the usage and the perception of their recipients, the AI systems subject to this Regulation may be provided as intermediary services or parts thereof within the meaning of Regulation (EU) 2022/2065, which should be interpreted in a technology-neutral manner. For example, AI systems may be used to provide online search engines, in particular, to the extent that an AI system such as an online chatbot performs searches of, in principle, all websites, then incorporates the results into its existing knowledge and uses the updated knowledge to generate a single output that combines different sources of information.
(119)
I bhfianaise luas tapa na nuálaíochta agus éabhlóid theicneolaíoch na seirbhísí digiteacha i raon feidhme ionstraimí éagsúla dhlí an Aontais, go háirithe agus aird á tabhairt ar úsáid agus dearcadh a bhfaighteoirí, féadfar na córais intleachta saorga atá faoi réir an Rialacháin seo a sholáthar mar sheirbhísí idirghabhálacha nó mar chodanna díobh de réir bhrí Rialachán (AE) 2022/2065, ar cheart iad a léirmhíniú ar bhealach atá neodrach ó thaobh na teicneolaíochta de. Mar shampla, féadfar córais intleachta saorga a úsáid chun innill chuardaigh ar líne a sholáthar, go háirithe, a mhéid a dhéanann córas intleachta saorga, ar nós bota comhrá ar líne, cuardaigh, i bprionsabal, ar na suíomhanna gréasáin uile, agus a chorpraíonn sé ansin na torthaí san eolas atá aige cheana agus a úsáideann sé an t-eolas nuashonraithe chun aschur aonair a ghiniúint ina gcomhcheanglaítear foinsí éagsúla faisnéise.
(120)
Furthermore, obligations placed on providers and deployers of certain AI systems in this Regulation to enable the detection and disclosure that the outputs of those systems are artificially generated or manipulated are particularly relevant to facilitate the effective implementation of Regulation (EU) 2022/2065. This applies in particular as regards the obligations of providers of very large online platforms or very large online search engines to identify and mitigate systemic risks that may arise from the dissemination of content that has been artificially generated or manipulated, in particular the risk of the actual or foreseeable negative effects on democratic processes, civic discourse and electoral processes, including through disinformation.
(120)
Ina theannta sin, na hoibleagáidí a chuirtear ar sholáthraithe agus úsáideoirí gairmiúla córas intleachta saorga áirithe sa Rialachán seo chun gur féidir aschur na gcóras sin a bhrath agus a nochtadh go ndéantar aschur na gcóras sin a ghiniúint nó a chúbláil go saorga, tá siad ábhartha go háirithe chun cur chun feidhme éifeachtach Rialachán (AE) 2022/2065 a éascú. Tá feidhm aige sin go háirithe a mhéid a bhaineann leis na hoibleagáidí atá ar sholáthraithe ardán an-mhór ar líne nó inneall cuardaigh an-mhór ar líne rioscaí sistéamacha a shainaithint agus a mhaolú a d’fhéadfadh eascairt as scaipeadh ábhair a gineadh nó a ionramháladh go saorga, go háirithe an riosca go mbeidh éifeachtaí diúltacha iarbhír nó intuartha ar phróisis dhaonlathacha, ar an dioscúrsa sibhialta agus ar phróisis toghcháin, lena n-áirítear trí bhréagaisnéis.
(121)
Standardisation should play a key role to provide technical solutions to providers to ensure compliance with this Regulation, in line with the state of the art, to promote innovation as well as competitiveness and growth in the single market. Compliance with harmonised standards as defined in Article 2, point (1)(c), of Regulation (EU) No 1025/2012 of the European Parliament and of the Council (41), which are normally expected to reflect the state of the art, should be a means for providers to demonstrate conformity with the requirements of this Regulation. A balanced representation of interests involving all relevant stakeholders in the development of standards, in particular SMEs, consumer organisations and environmental and social stakeholders in accordance with Articles 5 and 6 of Regulation (EU) No 1025/2012 should therefore be encouraged. In order to facilitate compliance, the standardisation requests should be issued by the Commission without undue delay. When preparing the standardisation request, the Commission should consult the advisory forum and the Board in order to collect relevant expertise. However, in the absence of relevant references to harmonised standards, the Commission should be able to establish, via implementing acts, and after consultation of the advisory forum, common specifications for certain requirements under this Regulation. The common specification should be an exceptional fallback solution to facilitate the provider’s obligation to comply with the requirements of this Regulation, when the standardisation request has not been accepted by any of the European standardisation organisations, or when the relevant harmonised standards insufficiently address fundamental rights concerns, or when the harmonised standards do not comply with the request, or when there are delays in the adoption of an appropriate harmonised standard. Where such a delay in the adoption of a harmonised standard is due to the technical complexity of that standard, this should be considered by the Commission before contemplating the establishment of common specifications. When developing common specifications, the Commission is encouraged to cooperate with international partners and international standardisation bodies.
(121)
Ba cheart ról lárnach a bheith ag an gcaighdeánú chun réitigh theicniúla a sholáthar do sholáthraithe chun comhlíonadh an Rialacháin seo a áirithiú, i gcomhréir leis an úrscothacht, chun an nuálaíocht a chur chun cinn mar aon leis an iomaíocht agus an fás sa mhargadh aonair. Comhlíonadh na gcaighdeán comhchuibhithe mar a shainmhínítear i bpointe (1)(c) d’Airteagal 2 de Rialachán (AE) Uimh. 1025/2012 ó Pharlaimint na hEorpa agus ón gComhairle (41), a mbeifí ag súil leis go léireofaí an úrscothacht leo, ba cheart an comhlíonadh sin a bheith ina mhodh ag soláthraithe chun comhréireacht le ceanglais an Rialacháin seo a léiriú. Dá bhrí sin, ba cheart ionadaíocht chothrom ar leasanna a bhfuil baint ag na geallsealbhóirí ábhartha uile i bhforbairt caighdeán leo, go háirithe FBManna, eagraíochtaí tomhaltóirí agus geallsealbhóirí comhshaoil agus sóisialta i gcomhréir le hAirteagail 5 agus 6 de Rialachán (AE) Uimh. 1025/2012 a spreagadh. Chun comhlíonadh a éascú, ba cheart don Choimisiún na hiarrataí ar chaighdeánú a eisiúint gan moill mhíchuí. Agus an iarraidh ar chaighdeánú á hullmhú aige, ba cheart don Choimisiún dul i gcomhairle leis an bhfóram comhairleach agus an Bord chun saineolas ábhartha a bhailiú. In éagmais tagairtí ábhartha do chaighdeáin chomhchuibhithe, áfach, ba cheart don Choimisiún a bheith in ann sonraíochtaí coiteanna a bhunú, trí ghníomhartha cur chun feidhme, agus tar éis dul i gcomhairle leis an bhfóram comhairleach, maidir le ceanglais áirithe faoin Rialachán seo. Ba cheart an tsonraíocht choiteann a bheith ina réiteach cúltaca eisceachtúil chun oibleagáid an tsoláthraí a éascú ceanglais an Rialacháin seo a chomhlíonadh, nuair nár ghlac aon cheann de na heagraíochtaí Eorpacha um chaighdeánú leis an iarraidh ar chaighdeánú, nó nuair a thugann na caighdeáin chomhchuibhithe ábhartha aghaidh neamhdhóthanach ar ábhair imní maidir le cearta bunúsacha, nó nuair nach gcomhlíonann na caighdeáin chomhchuibhithe an iarraidh, nó nuair a bhíonn moill ar chaighdeán comhchuibhithe iomchuí a ghlacadh. I gcás inarb é castacht theicniúil an chaighdeáin sin is cúis leis an moill sin maidir le caighdeán comhchuibhithe a ghlacadh, ba cheart don Choimisiún é sin a mheas sula ndéanfaidh sé machnamh ar shonraíochtaí coiteanna a bhunú. Agus sonraíochtaí coiteanna á bhforbairt, moltar don Choimisiún comhoibriú le comhpháirtithe idirnáisiúnta agus le comhlachtaí idirnáisiúnta um chaighdeánú.
(122)
It is appropriate that, without prejudice to the use of harmonised standards and common specifications, providers of a high-risk AI system that has been trained and tested on data reflecting the specific geographical, behavioural, contextual or functional setting within which the AI system is intended to be used, should be presumed to comply with the relevant measure provided for under the requirement on data governance set out in this Regulation. Without prejudice to the requirements related to robustness and accuracy set out in this Regulation, in accordance with Article 54(3) of Regulation (EU) 2019/881, high-risk AI systems that have been certified or for which a statement of conformity has been issued under a cybersecurity scheme pursuant to that Regulation and the references of which have been published in the Official Journal of the European Union should be presumed to comply with the cybersecurity requirement of this Regulation in so far as the cybersecurity certificate or statement of conformity or parts thereof cover the cybersecurity requirement of this Regulation. This remains without prejudice to the voluntary nature of that cybersecurity scheme.
(122)
Gan dochar d’úsáid caighdeán comhchuibhithe agus sonraíochtaí coiteanna, is iomchuí a thoimhdiú i ndáil le soláthraithe córais intleachta saorga ardriosca a ndearnadh oiliúint agus tástáil air maidir le sonraí lena léirítear an suíomh geografach, iompraíochta, comhthéacsúil nó feidhmiúil ina bhfuil sé beartaithe an córas intleachta saorga a úsáid, go gcomhlíonann siad an beart ábhartha dá bhforáiltear faoin gceanglas maidir le rialachas sonraí a leagtar amach sa Rialachán seo. Gan dochar do na ceanglais a bhaineann le stóinseacht agus cruinneas a leagtar amach sa Rialachán seo, i gcomhréir le hAirteagal 54(3) de Rialachán (AE) 2019/881, ba cheart a thoimhdiú maidir le córais intleachta saorga ardriosca atá deimhnithe nó ar eisíodh ráiteas comhréireachta ina leith faoi scéim cibearshlándála de bhun an Rialacháin sin agus ar foilsíodh a dtagairtí in Iris Oifigiúil an Aontais Eorpaigh go gcomhlíonann siad ceanglas cibearshlándála an Rialacháin seo a mhéid a chumhdaítear leis an deimhniú cibearshlándála nó leis an ráiteas comhréireachta nó codanna de ceanglas cibearshlándála an Rialacháin seo. Tá sé seo gan dochar fós do chineál deonach na scéime cibearshlándála sin.
(123)
In order to ensure a high level of trustworthiness of high-risk AI systems, those systems should be subject to a conformity assessment prior to their placing on the market or putting into service.
(123)
Chun ardleibhéal iontaofachta a áirithiú i gcórais intleachta saorga ardriosca, ba cheart na córais sin a bheith faoi réir measúnú comhréireachta sula gcuirfear ar an margadh nó i mbun seirbhíse iad.
(124)
It is appropriate that, in order to minimise the burden on operators and avoid any possible duplication, for high-risk AI systems related to products which are covered by existing Union harmonisation legislation based on the New Legislative Framework, the compliance of those AI systems with the requirements of this Regulation should be assessed as part of the conformity assessment already provided for in that law. The applicability of the requirements of this Regulation should thus not affect the specific logic, methodology or general structure of conformity assessment under the relevant Union harmonisation legislation.
(124)
Chun an t-ualach ar oibreoirí a íoslaghdú agus chun aon dúbláil a d’fhéadfadh a bheith ann a sheachaint, maidir le córais intleachta saorga ardriosca a bhaineann le táirgí a chumhdaítear faoi reachtaíocht chomhchuibhithe an Aontais atá ann cheana bunaithe ar an gCreat Reachtach Nua, is iomchuí gur cheart a mheasúnú a mhéid a chomhlíonann na córais intleachta saorga sin ceanglais an Rialacháin seo mar chuid den mheasúnú comhréireachta dá bhforáiltear cheana sa dlí sin. Níor cheart d’infheidhmeacht na gceanglas sa Rialachán seo difear a dhéanamh do loighic, modheolaíocht nó struchtúr ginearálta an mheasúnaithe comhréireachta faoi reachtaíocht chomhchuibhithe ábhartha an Aontais.
(125)
Given the complexity of high-risk AI systems and the risks that are associated with them, it is important to develop an adequate conformity assessment procedure for high-risk AI systems involving notified bodies, so-called third-party conformity assessment. However, given the current experience of professional pre-market certifiers in the field of product safety and the different nature of risks involved, it is appropriate to limit, at least in an initial phase of application of this Regulation, the scope of application of third-party conformity assessment for high-risk AI systems other than those related to products. Therefore, the conformity assessment of such systems should be carried out as a general rule by the provider under its own responsibility, with the only exception of AI systems intended to be used for biometrics.
(125)
I bhfianaise chastacht na gcóras intleachta saorga ardriosca agus na rioscaí a bhaineann leo, tá sé tábhachtach nós imeachta leordhóthanach um measúnú comhréireachta a fhorbairt le haghaidh córais intleachta saorga ardriosca lena mbaineann comhlachtaí faoina dtugtar fógra, ar a dtugtar measúnú comhréireachta tríú páirtí. I bhfianaise thaithí reatha na ndeimhneoirí réamh-mhargaidh gairmiúla i réimse na sábháilteachta táirgí agus chineál éagsúil na rioscaí lena mbaineann, áfach, is iomchuí, ar a laghad le linn na chéad chéime de chur i bhfeidhm an Rialacháin seo, an raon feidhme a theorannú maidir le cur i bhfeidhm measúnaithe comhréireachta tríú páirtí le haghaidh córais intleachta saorga ardriosca cé is moite díobh siúd a bhaineann leis na táirgí. Dá bhrí sin, mar riail ghinearálta, ba cheart don soláthraí an measúnú comhréireachta a dhéanamh ar na córais sin faoina fhreagracht féin, cé is moite de chórais intleachta saorga atá beartaithe lena n-úsáid le haghaidh bithmhéadrachta.
(126)
In order to carry out third-party conformity assessments when so required, notified bodies should be notified under this Regulation by the national competent authorities, provided that they comply with a set of requirements, in particular on independence, competence, absence of conflicts of interests and suitable cybersecurity requirements. Notification of those bodies should be sent by national competent authorities to the Commission and the other Member States by means of the electronic notification tool developed and managed by the Commission pursuant to Article R23 of Annex I to Decision No 768/2008/EC.
(126)
Chun measúnuithe comhréireachta tríú páirtí a dhéanamh nuair a cheanglaítear sin, ba cheart do na húdaráis inniúla náisiúnta fógra a thabhairt do na comhlachtaí faoina dtugtar fógra faoin Rialachán seo, ar choinníoll go gcomhlíonann siad tacar ceanglas, go háirithe maidir leis an neamhspleáchas, an inniúlacht, an easpa coinbhleachtaí leasa agus ceanglais chibearshlándála oiriúnacha. Ba cheart d’údaráis inniúla náisiúnta fógra faoi na comhlachtaí sin a sheoladh chuig an gCoimisiún agus chuig na Ballstáit eile trí bhíthin na huirlise leictreonaí um fhógra a thabhairt arna forbairt agus arna bainistiú ag an gCoimisiún de bhun Airteagal R23 d’Iarscríbhinn I a ghabhann le Cinneadh Uimh. 768/2008/CE.
(127)
In line with Union commitments under the World Trade Organization Agreement on Technical Barriers to Trade, it is adequate to facilitate the mutual recognition of conformity assessment results produced by competent conformity assessment bodies, independent of the territory in which they are established, provided that those conformity assessment bodies established under the law of a third country meet the applicable requirements of this Regulation and the Union has concluded an agreement to that extent. In this context, the Commission should actively explore possible international instruments for that purpose and in particular pursue the conclusion of mutual recognition agreements with third countries.
(127)
I gcomhréir le gealltanais an Aontais faoi Chomhaontú na hEagraíochta Domhanda Trádála maidir le Bacainní Teicniúla ar Thrádáil, is leor aitheantas frithpháirteach a éascú do thorthaí an mheasúnaithe comhréireachta a tháirgeann comhlachtaí inniúla um measúnú comhréireachta, neamhspleách ar an gcríoch ina bhfuil siad bunaithe, ar choinníoll go gcomhlíonann na comhlachtaí um measúnú comhréireachta sin arna mbunú faoi dhlí tríú tír ceanglais infheidhme an Rialacháin seo agus go bhfuil comhaontú tugtha i gcrích ag an Aontas a mhéid sin. Sa chomhthéacs sin, ba cheart don Choimisiún féachaint go gníomhach ar ionstraimí idirnáisiúnta a d’fhéadfadh a bheith ann chun na críche sin agus go háirithe tabhairt i gcrích comhaontuithe um aitheantas frithpháirteach le tríú tíortha a shaothrú.
(128)
In line with the commonly established notion of substantial modification for products regulated by Union harmonisation legislation, it is appropriate that whenever a change occurs which may affect the compliance of a high-risk AI system with this Regulation (e.g. change of operating system or software architecture), or when the intended purpose of the system changes, that AI system should be considered to be a new AI system which should undergo a new conformity assessment. However, changes occurring to the algorithm and the performance of AI systems which continue to ‘learn’ after being placed on the market or put into service, namely automatically adapting how functions are carried out, should not constitute a substantial modification, provided that those changes have been pre-determined by the provider and assessed at the moment of the conformity assessment.
(128)
Aon uair a tharlaíonn athrú a d’fhéadfadh difear a dhéanamh do chomhlíontacht córais intleachta saorga ardriosca leis an Rialachán seo (e.g. athrú ar an gcóras oibriúcháin nó ar ailtireacht na mbogearraí), nó nuair a thagann athrú ar an gcríoch atá beartaithe don chóras, is iomchuí gur cheart a mheas gur córas intleachta saorga nua é an córas intleachta saorga sin ar cheart measúnú comhréireachta nua a dhéanamh air i gcomhréir le coincheap an mhodhnaithe shubstaintiúil a bhunaítear go coitianta le haghaidh táirgí a rialaítear le reachtaíocht chomhchuibhithe an Aontais. Níor cheart, áfach, modhnú substaintiúil a bheith in athruithe a thagann ar an algartam agus ar fheidhmíocht na gcóras intleachta saorga a leanann de bheith ‘ag foghlaim’ tar éis iad a chur ar an margadh nó i mbun seirbhíse, eadhon oiriúnú go huathoibríoch don chaoi a ndéantar feidhmeanna, ar choinníoll go ndearna an soláthraí na hathruithe sin a réamhchinneadh agus go ndearnadh measúnú orthu tráth an mheasúnaithe comhréireachta.
(129)
High-risk AI systems should bear the CE marking to indicate their conformity with this Regulation so that they can move freely within the internal market. For high-risk AI systems embedded in a product, a physical CE marking should be affixed, and may be complemented by a digital CE marking. For high-risk AI systems only provided digitally, a digital CE marking should be used. Member States should not create unjustified obstacles to the placing on the market or the putting into service of high-risk AI systems that comply with the requirements laid down in this Regulation and bear the CE marking.
(129)
Ba cheart an mharcáil CE a bheith ar chórais intleachta saorga ardriosca chun a shonrú go bhfuil siad i gcomhréir leis an Rialachán seo sa dóigh is gur féidir leo gluaiseacht gan bhac laistigh den mhargadh inmheánach. Maidir le córais intleachta saorga ardriosca atá leabaithe sa táirge, ba cheart marcáil CE fisiciúil a ghreamú orthu, agus féadfar é a chomhlánú le marcáil dhigiteach CE. I gcás córais intleachta saorga ardriosca nach soláthraítear ach go digiteach, ba cheart marcáil dhigiteach CE a úsáid. Níor cheart do na Ballstáit constaicí éagóracha a chruthú maidir le córais intleachta saorga ardriosca a chomhlíonann na ceanglais a leagtar síos sa Rialachán seo agus a bhfuil an mharcáil CE orthu a chur ar an margadh nó a chur i mbun seirbhíse.
(130)
Under certain conditions, rapid availability of innovative technologies may be crucial for health and safety of persons, the protection of the environment and climate change and for society as a whole. It is thus appropriate that under exceptional reasons of public security or protection of life and health of natural persons, environmental protection and the protection of key industrial and infrastructural assets, market surveillance authorities could authorise the placing on the market or the putting into service of AI systems which have not undergone a conformity assessment. In duly justified situations, as provided for in this Regulation, law enforcement authorities or civil protection authorities may put a specific high-risk AI system into service without the authorisation of the market surveillance authority, provided that such authorisation is requested during or after the use without undue delay.
(130)
Faoi choinníollacha áirithe, d’fhéadfadh rochtain thapa ar theicneolaíochtaí nuálacha a bheith ríthábhachtach do shláinte agus sábháilteacht daoine, do chosaint an chomhshaoil agus don athrú aeráide agus don tsochaí ina hiomláine. Dá bhrí sin, is iomchuí i gcás cúiseanna eisceachtúla a bhaineann leis an tslándáil phoiblí nó le beatha nó sláinte daoine nádúrtha a chosaint, caomhnú an chomhshaoil agus cosaint sócmhainní tionsclaíocha agus bonneagair ríthábhachtacha, go bhféadfadh údaráis faireachais margaidh údarú a thabhairt maidir le córais intleachta saorga nach ndearnadh measúnú comhréireachta orthu a chur ar an margadh nó a chur i mbun seirbhíse. I gcásanna a bhfuil údar cuí leo, dá bhforáiltear faoin Rialachán seo, féadfaidh údaráis forfheidhmithe dlí nó údaráis cosanta sibhialta córas intleachta saorga ardriosca sonrach a chur i mbun seirbhíse gan údarú ón údarás faireachais margaidh, ar choinníoll go n-iarrtar an t-údarú sin le linn na húsáide nó ina diaidh gan moill mhíchuí.
(131)
In order to facilitate the work of the Commission and the Member States in the AI field as well as to increase the transparency towards the public, providers of high-risk AI systems other than those related to products falling within the scope of relevant existing Union harmonisation legislation, as well as providers who consider that an AI system listed in the high-risk use cases in an annex to this Regulation is not high-risk on the basis of a derogation, should be required to register themselves and information about their AI system in an EU database, to be established and managed by the Commission. Before using an AI system listed in the high-risk use cases in an annex to this Regulation, deployers of high-risk AI systems that are public authorities, agencies or bodies, should register themselves in such database and select the system that they envisage to use. Other deployers should be entitled to do so voluntarily. This section of the EU database should be publicly accessible, free of charge, and the information should be easily navigable, understandable and machine-readable. The EU database should also be user-friendly, for example by providing search functionalities, including through keywords, allowing the general public to find relevant information to be submitted upon the registration of high-risk AI systems and on the use case of high-risk AI systems, set out in an annex to this Regulation, to which the high-risk AI systems correspond. Any substantial modification of high-risk AI systems should also be registered in the EU database. For high-risk AI systems in the area of law enforcement, migration, asylum and border control management, the registration obligations should be fulfilled in a secure non-public section of the EU database. Access to the secure non-public section should be strictly limited to the Commission as well as to market surveillance authorities with regard to their national section of that database. High-risk AI systems in the area of critical infrastructure should only be registered at national level. The Commission should be the controller of the EU database, in accordance with Regulation (EU) 2018/1725. In order to ensure the full functionality of the EU database, when deployed, the procedure for setting up the database should include the development of functional specifications by the Commission and an independent audit report. The Commission should take into account cybersecurity risks when carrying out its tasks as data controller on the EU database. In order to maximise the availability and use of the EU database by the public, the EU database, including the information made available through it, should comply with requirements under Directive (EU) 2019/882.
(131)
Chun obair an Choimisiúin agus na mBallstát i réimse na hintleachta saorga a éascú chomh maith leis an trédhearcacht i leith an phobail a mhéadú, ba cheart é a bheith de cheangal ar sholáthraithe córas intleachta saorga ardriosca, cé is moite díobh siúd a bhaineann le táirgí a thagann faoi raon feidhme reachtaíocht chomhchuibhithe áirithe an Aontais, chomh maith le soláthraithe atá den tuairim, ar bhonn maolaithe, nach mbaineann ardriosca le córas intleachta saorga a liostaítear sna cásanna úsáide ardriosca in iarscríbhinn a ghabhann leis an Rialachán seo, iad féin agus faisnéis faoina gcóras intleachta saorga ardriosca a chlárú i mbunachar sonraí an Aontais, bunachar sonraí atá le bunú agus le bainistiú ag an gCoimisiún. Sula n-úsáidfidh siad córas intleachta saorga a liostaítear sna cásanna úsáide ardriosca in iarscríbhinn a ghabhann leis an Rialachán seo, ba cheart d’úsáideoirí gairmiúla córas intleachta saorga ardriosca ar údaráis, gníomhaireachtaí nó comhlachtaí poiblí iad, iad féin a chlárú sa bhunachar sonraí sin agus an córas a bheartaíonn siad a úsáid a roghnú. Ba cheart d’úsáideoirí gairmiúla eile a bheith i dteideal é sin a dhéanamh go deonach. Ba cheart an chuid seo de bhunachar sonraí an Aontais a bheith inrochtana don phobal, saor, ba cheart an fhaisnéis a bheith in-nascleanta go héasca, intuigthe agus inléite ag meaisín. Ba cheart bunachar sonraí an Aontais a bheith soláimhsithe freisin, mar shampla trí fheidhmiúlachtaí cuardaigh a sholáthar, lena n-áirítear trí eochairfhocail, lena gceadaítear don phobal i gcoitinne faisnéis ábhartha a aimsiú atá le cur isteach nuair a chláraítear córais intleachta saorga ardriosca agus maidir le cás úsáide na gcóras intleachta saorga ardriosca, a leagtar amach in iarscríbhinn a ghabhann leis an Rialachán seo, dá gcomhfhreagraíonn na córais intleachta saorga ardriosca. Ba cheart aon mhodhnú substaintiúil ar chórais intleachta saorga ardriosca a chlárú freisin i mbunachar sonraí an Aontais. Maidir le córais intleachta saorga ardriosca i réimse fhorfheidhmiú an dlí, na himirce, an tearmainn agus an bhainistithe rialaithe teorann, ba cheart na hoibleagáidí clárúcháin a chomhlíonadh i roinn shlán neamhphoiblí de bhunachar sonraí an Aontais. Ba cheart rochtain ar an roinn shlán neamhphoiblí a theorannú go docht don Choimisiún agus d’údaráis faireachais margaidh maidir lena roinn náisiúnta den bhunachar sonraí sin. Níor cheart córais intleachta saorga ardriosca i réimse an bhonneagair chriticiúil a chlárú ach amháin ar an leibhéal náisiúnta. Ba cheart don Choimisiún a bheith ina rialaitheoir ar bhunachar sonraí an Aontais, i gcomhréir le Rialachán (AE) 2018/1725. Chun feidhmiúlacht iomlán an bhunachair sonraí a áirithiú, nuair atá sé in úsáid, ba cheart forbairt na sonraíochtaí feidhmiúla ag an gCoimisiún agus ag tuarascáil neamhspleách iniúchóireachta a áireamh mar chuid den nós imeachta chun bunachar sonraí an Aontais a shocrú. Ba cheart don Choimisiún rioscaí cibearshlándála a chur san áireamh agus a chúraimí mar rialaitheoir sonraí á ndéanamh aige ar bhunachar sonraí an Aontais. Chun infhaighteacht agus úsáid bhunachar sonraí an Aontais ag an bpobal a uasmhéadú, ba cheart bunachar sonraí an Aontais, lena n-áirítear an fhaisnéis a chuirtear ar fáil tríd, na ceanglais faoi Threoir (AE) 2019/882 a chomhlíonadh.
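As a purely illustrative sketch of recital (131)'s requirement that registration information in the EU database be machine-readable, easily navigable and searchable, the Python fragment below serialises a hypothetical registration entry; every field name is invented for this example, since the recital does not define the database schema.

import json

# Hypothetical registration entry for a high-risk AI system; the field names are
# invented for illustration and do not reflect the actual EU database schema.
registration_record = {
    "provider": "Example Provider Ltd",
    "system_name": "example-credit-scoring-system",
    "annex_iii_use_case": "to be selected by the registrant",
    "intended_purpose": "short description of the intended purpose",
    "status": "placed_on_the_market",
    "substantial_modifications": [],  # appended to whenever a substantial modification is registered
}

# Serialising to JSON keeps the entry machine-readable, as the recital requires.
print(json.dumps(registration_record, indent=2))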
(132)
Certain AI systems intended to interact with natural persons or to generate content may pose specific risks of impersonation or deception irrespective of whether they qualify as high-risk or not. In certain circumstances, the use of these systems should therefore be subject to specific transparency obligations without prejudice to the requirements and obligations for high-risk AI systems and subject to targeted exceptions to take into account the special need of law enforcement. In particular, natural persons should be notified that they are interacting with an AI system, unless this is obvious from the point of view of a natural person who is reasonably well-informed, observant and circumspect taking into account the circumstances and the context of use. When implementing that obligation, the characteristics of natural persons belonging to vulnerable groups due to their age or disability should be taken into account to the extent the AI system is intended to interact with those groups as well. Moreover, natural persons should be notified when they are exposed to AI systems that, by processing their biometric data, can identify or infer the emotions or intentions of those persons or assign them to specific categories. Such specific categories can relate to aspects such as sex, age, hair colour, eye colour, tattoos, personal traits, ethnic origin, personal preferences and interests. Such information and notifications should be provided in accessible formats for persons with disabilities.
(132)
Córais intleachta saorga áirithe atá beartaithe le hidirghníomhú le daoine nádúrtha nó le hábhar a ghiniúint, d’fhéadfadh siad rioscaí sonracha a chothú maidir le pearsanú nó meabhlaireacht gan beann ar cé acu atá siad incháilithe mar ardriosca nó nach bhfuil. In imthosca áirithe, ba cheart, dá bhrí sin, úsáid na gcóras sin a bheith faoi réir oibleagáidí sonracha trédhearcachta gan dochar do na ceanglais agus na hoibleagáidí le haghaidh córas intleachta saorga ardriosca agus faoi réir eisceachtaí spriocdhírithe chun riachtanas speisialta fhorfheidhmiú an dlí a chur san áireamh. Go háirithe, ba cheart fógra a thabhairt do dhaoine nádúrtha go bhfuil siad ag idirghníomhú le córas intleachta saorga, ach amháin más léir sin ó thaobh duine nádúrtha atá réasúnta eolach, géarchúiseach agus cáiréiseach, agus na himthosca agus comhthéacs na húsáide á gcur san áireamh. Agus an oibleagáid sin á cur chun feidhme, ba cheart saintréithe daoine nádúrtha, ar de ghrúpaí leochaileacha iad mar gheall ar a n-aois nó a míchumas, a chur san áireamh a mhéid atá sé beartaithe leis an gcóras intleachta saorga idirghníomhú leis na grúpaí sin chomh maith. Thairis sin, ba cheart fógra a thabhairt do dhaoine nádúrtha nuair a nochtar córais intleachta saorga dóibh ar féidir leo, trína sonraí bithmhéadracha a phróiseáil, mothúcháin nó intinn na ndaoine sin a shainaithint nó a thabhairt le tuiscint nó iad a shannadh do chatagóirí sonracha. D’fhéadfadh baint a bheith ag na catagóirí sonracha sin le gnéithe, amhail gnéas, aois, dath gruaige, dath súile, tatúnna, tréithe pearsanta, bunús eitneach, roghanna pearsanta agus leasanna pearsanta. Ba cheart an fhaisnéis agus na fógraí sin a sholáthar i bhformáidí inrochtana le haghaidh daoine faoi mhíchumas.
(133)
A variety of AI systems can generate large quantities of synthetic content that becomes increasingly hard for humans to distinguish from human-generated and authentic content. The wide availability and increasing capabilities of those systems have a significant impact on the integrity and trust in the information ecosystem, raising new risks of misinformation and manipulation at scale, fraud, impersonation and consumer deception. In light of those impacts, the fast technological pace and the need for new methods and techniques to trace the origin of information, it is appropriate to require providers of those systems to embed technical solutions that enable marking in a machine-readable format and detection that the output has been generated or manipulated by an AI system and not a human. Such techniques and methods should be sufficiently reliable, interoperable, effective and robust as far as this is technically feasible, taking into account available techniques or a combination of such techniques, such as watermarks, metadata identifications, cryptographic methods for proving provenance and authenticity of content, logging methods, fingerprints or other techniques, as may be appropriate. When implementing this obligation, providers should also take into account the specificities and the limitations of the different types of content and the relevant technological and market developments in the field, as reflected in the generally acknowledged state of the art. Such techniques and methods can be implemented at the level of the AI system or at the level of the AI model, including general-purpose AI models generating content, thereby facilitating fulfilment of this obligation by the downstream provider of the AI system. To remain proportionate, it is appropriate to envisage that this marking obligation should not cover AI systems performing primarily an assistive function for standard editing or AI systems not substantially altering the input data provided by the deployer or the semantics thereof.
(133)
Is féidir le córais intleachta saorga éagsúla méideanna móra d’ábhar sintéiseach a ghiniúint a éiríonn níos deacra do dhaoine idirdhealú a dhéanamh ó ábhar a ghintear ag an duine agus ó ábhar barántúil. Bíonn tionchar suntasach ag infhaighteacht leathan agus cumais mhéadaitheacha na gcóras sin ar shláine éiceachóras na faisnéise agus ar an muinín a chuirtear ann, rud a chruthaíonn rioscaí nua mífhaisnéise agus cúblála ar scála, calaois, pearsanú agus meabhlaireacht tomhaltóirí. I bhfianaise na dtionchar sin, an luais thapa teicneolaíochta agus an ghá atá le modhanna agus teicnící nua chun tionscnamh faisnéise a rianú, is iomchuí a cheangal ar sholáthraithe na gcóras sin réitigh theicniúla a leabú lena bhféadfar marcáil a dhéanamh i bhformáid mheaisín-inléite agus a bhrath gur gineadh nó gur ionramháladh an t-aschur le córas intleachta saorga seachas ag duine. Ba cheart na teicnící agus na modhanna sin a bheith iontaofa, idir-inoibritheach, éifeachtach agus stóinseach a mhéid is indéanta sin go teicniúil, agus na teicnící atá ar fáil nó meascán de na teicnící sin á gcur san áireamh, amhail comharthaí uisce, sainaithint meiteashonraí, modhanna cripteagrafacha chun bunáitíocht agus barántúlacht an ábhair a chruthú, modhanna logála, méarloirg nó teicnící eile, de réir mar is iomchuí. Agus an oibleagáid sin á cur chun feidhme, ba cheart do sholáthraithe sainiúlachtaí agus srianta na gcineálacha éagsúla ábhair agus na forbairtí ábhartha teicneolaíochta agus margaidh sa réimse, mar a léirítear iad sna forbairtí úrscothacha lena nglactar go ginearálta, a chur san áireamh freisin. Is féidir na teicnící agus na modhanna sin a chur chun feidhme ar leibhéal an chórais intleachta saorga nó ar leibhéal na samhla intleachta saorga, lena n-áirítear samhlacha intleachta saorga ilchuspóireacha lena ngintear ábhar, agus, ar an gcaoi sin, comhlíonadh na hoibleagáide sin a éascú ag soláthraí iartheachtach an chórais intleachta saorga. Chun leanúint de bheith comhréireach, is iomchuí foráil a dhéanamh nár cheart a chumhdach leis an oibleagáid mharcála sin córais intleachta saorga a fheidhmíonn feidhm chúnta go príomha le haghaidh eagarthóireacht chaighdeánach nó córais intleachta saorga nach n-athróidh go substaintiúil na sonraí ionchuir arna soláthar ag an úsáideoir gairmiúil nó an tséimeantaic a ghabhann leo.
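As a purely illustrative sketch of the 'metadata identification' technique mentioned in recital (133), the Python fragment below attaches a machine-readable marker to a generated image using the Pillow library; the marker keys are invented for this example, and a plain PNG text chunk is far weaker than the robust, interoperable solutions the recital envisages, since it can be stripped trivially.

from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Stand-in for an AI-generated image (in practice, the model output).
image = Image.new("RGB", (64, 64), color="white")

# Attach a machine-readable marker as PNG text metadata; the keys are hypothetical.
marker = PngInfo()
marker.add_text("ai-generated", "true")
marker.add_text("generator", "example-general-purpose-model")

image.save("output.png", pnginfo=marker)

# A downstream detector could then read the marker back.
with Image.open("output.png") as reloaded:
    print(reloaded.info.get("ai-generated"))  # -> "true"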
(134)
Further to the technical solutions employed by the providers of the AI system, deployers who use an AI system to generate or manipulate image, audio or video content that appreciably resembles existing persons, objects, places, entities or events and would falsely appear to a person to be authentic or truthful (deep fakes), should also clearly and distinguishably disclose that the content has been artificially created or manipulated by labelling the AI output accordingly and disclosing its artificial origin. Compliance with this transparency obligation should not be interpreted as indicating that the use of the AI system or its output impedes the right to freedom of expression and the right to freedom of the arts and sciences guaranteed in the Charter, in particular where the content is part of an evidently creative, satirical, artistic, fictional or analogous work or programme, subject to appropriate safeguards for the rights and freedoms of third parties. In those cases, the transparency obligation for deep fakes set out in this Regulation is limited to disclosure of the existence of such generated or manipulated content in an appropriate manner that does not hamper the display or enjoyment of the work, including its normal exploitation and use, while maintaining the utility and quality of the work. In addition, it is also appropriate to envisage a similar disclosure obligation in relation to AI-generated or manipulated text to the extent it is published with the purpose of informing the public on matters of public interest unless the AI-generated content has undergone a process of human review or editorial control and a natural or legal person holds editorial responsibility for the publication of the content.
(134)
Sa bhreis ar na réitigh theicniúla a úsáideann soláthraithe an chórais intleachta saorga, ba cheart d’úsáideoirí gairmiúla, a úsáideann córas intleachta saorga chun ábhar íomhá, fuaime nó físe atá cosúil go suntasach le daoine, réada, áiteanna, eintitis nó imeachtaí atá ann cheana a ghiniúint nó a chúbláil agus a dhealródh go bréagach do dhuine go bhfuil sé barántúil nó fírinneach (domhainbhrionnuithe), a nochtadh go soiléir agus go sainiúil gur cruthaíodh nó gur ionramháladh an t-ábhar go saorga trí aschur na hintleachta saorga a lipéadú dá réir sin agus a thionscnamh saorga a nochtadh. Níor cheart comhlíonadh na hoibleagáide trédhearcachta sin a léirmhíniú mar léiriú go gcuireann úsáid an chórais intleachta saorga nó a aschur bac ar an gceart chun tuairimí a nochtadh agus ar an gceart chun saoirse na n-ealaíon agus na n-eolaíochtaí a ráthaítear sa Chairt, go háirithe i gcás inar cuid de shaothar nó de chlár, nó ina cuid de shaothar nó de chlár atá ar aon dul le saothar nó clár, cruthaitheach, aorach, ealaíonta, ficseanúil é an t-ábhar, faoi réir coimircí iomchuí maidir le cearta agus saoirsí tríú páirtithe. Sna cásanna sin, tá an oibleagáid trédhearcachta maidir le domhainbhrionnuithe a leagtar amach sa Rialachán seo teoranta do nochtadh gur ann d’ábhar ginte nó cúbláilte den sórt sin ar bhealach iomchuí nach gcuireann isteach ar thaispeáint ná ar theachtadh na hoibre, lena n-áirítear a ghnáthshaothrú agus a ghnáthúsáid, agus áirgiúlacht agus cáilíocht na hoibre á gcoinneáil ag an am céanna. Ina theannta sin, is iomchuí freisin foráil a dhéanamh maidir le hoibleagáid nochta chomhchosúil i ndáil le téacs arna ghiniúint nó arna chúbláil ag intleacht shaorga a mhéid a fhoilsítear é chun an pobal a chur ar an eolas faoi ábhair a bhaineann le leas an phobail mura rud é go ndeachaigh an t-ábhar arna ghiniúint ag intleacht shaorga faoi phróiseas athbhreithnithe dhaonna nó rialaithe eagarthóireachta agus go bhfuil freagracht eagarthóireachta ar dhuine nádúrtha nó dlítheanach as an ábhar a fhoilsiú.
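As a purely illustrative sketch of the disclosure described in recital (134), the helper below bundles AI-generated text with a human-readable notice and a machine-readable flag; the wording, the field names and the editorial-control switch are invented for this example and are not prescribed by the Regulation.

from dataclasses import dataclass, field

@dataclass
class LabelledOutput:
    # AI-generated content together with a disclosure notice and machine-readable metadata.
    content: str
    notice: str
    metadata: dict = field(default_factory=dict)

def label_ai_generated_text(content: str, editorially_reviewed: bool = False) -> LabelledOutput:
    # Attach a disclosure to AI-generated text; wording and keys are illustrative only.
    notice = "" if editorially_reviewed else "This text was generated by an AI system."
    return LabelledOutput(
        content=content,
        notice=notice,
        metadata={"ai_generated": True, "human_editorial_control": editorially_reviewed},
    )

labelled = label_ai_generated_text("Draft summary of today's council meeting.")
print(labelled.notice)
print(labelled.metadata)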
(135)
Without prejudice to the mandatory nature and full applicability of the transparency obligations, the Commission may also encourage and facilitate the drawing up of codes of practice at Union level to facilitate the effective implementation of the obligations regarding the detection and labelling of artificially generated or manipulated content, including to support practical arrangements for making, as appropriate, the detection mechanisms accessible and facilitating cooperation with other actors along the value chain, disseminating content or checking its authenticity and provenance to enable the public to effectively distinguish AI-generated content.
(135)
Gan dochar do chineál éigeantach agus infheidhmeacht iomlán na n-oibleagáidí trédhearcachta, féadfaidh an Coimisiún tarraingt suas cód cleachtais a spreagadh agus a éascú ar leibhéal an Aontais chun cur chun feidhme éifeachtach na n-oibleagáidí maidir le brath agus lipéadú ábhair arna ghiniúint nó arna chúbláil go saorga a éascú, lena n-áirítear tacú le socruithe praiticiúla chun na sásraí braite a dhéanamh inrochtana, de réir mar is iomchuí, agus comhar a éascú le gníomhaithe eile feadh an tslabhra luacha, ábhar a scaipeadh nó a bharántúlacht agus a fhoinse a sheiceáil chun a chur ar chumas an phobail idirdhealú éifeachtach a dhéanamh idir ábhar arna ghiniúint ag intleacht shaorga.
(136)
The obligations placed on providers and deployers of certain AI systems in this Regulation to enable the detection and disclosure that the outputs of those systems are artificially generated or manipulated are particularly relevant to facilitate the effective implementation of Regulation (EU) 2022/2065. This applies in particular as regards the obligations of providers of very large online platforms or very large online search engines to identify and mitigate systemic risks that may arise from the dissemination of content that has been artificially generated or manipulated, in particular the risk of the actual or foreseeable negative effects on democratic processes, civic discourse and electoral processes, including through disinformation. The requirement to label content generated by AI systems under this Regulation is without prejudice to the obligation in Article 16(6) of Regulation (EU) 2022/2065 for providers of hosting services to process notices on illegal content received pursuant to Article 16(1) of that Regulation and should not influence the assessment and the decision on the illegality of the specific content. That assessment should be performed solely with reference to the rules governing the legality of the content.
(136)
Na hoibleagáidí a chuirtear ar sholáthraithe agus úsáideoirí gairmiúla córas intleachta saorga áirithe sa Rialachán seo chun gur féidir aschur na gcóras sin a bhrath agus a nochtadh go ndéantar aschur na gcóras sin a ghiniúint nó a chúbláil go saorga, tá siad ábhartha go háirithe chun cur chun feidhme éifeachtach Rialachán (AE) 2022/2065 a éascú. Tá feidhm aige sin go háirithe a mhéid a bhaineann leis na hoibleagáidí atá ar sholáthraithe ardán an-mhór ar líne nó inneall cuardaigh an-mhór ar líne rioscaí sistéamacha a shainaithint agus a mhaolú a d’fhéadfadh eascairt as scaipeadh ábhair a gineadh nó a ionramháladh go saorga, go háirithe an riosca go mbeadh éifeachtaí diúltacha iarbhír nó intuartha ar phróisis dhaonlathacha, ar an dioscúrsa sibhialta agus ar phróisis toghcháin, lena n-áirítear trí bhréagaisnéis. Tá an ceanglas maidir le hábhar a ghintear le córais intleachta saorga faoin Rialachán seo a lipéadú gan dochar don oibleagáid in Airteagal 16(6) de Rialachán (AE) 2022/2065 atá ar sholáthraithe seirbhísí óstála fógraí a phróiseáil maidir le hábhar neamhdhleathach a fhaightear de bhun Airteagal 16(1) den Rialachán sin agus níor cheart dó tionchar a imirt ar an measúnú ná ar an gcinneadh maidir le neamhdhleathacht an ábhair shonraigh. Níor cheart an measúnú sin a dhéanamh ach amháin ag tagairt do na rialacha lena rialaítear dlíthiúlacht an ábhair.
(137)
Compliance with the transparency obligations for the AI systems covered by this Regulation should not be interpreted as indicating that the use of the AI system or its output is lawful under this Regulation or other Union and Member State law and should be without prejudice to other transparency obligations for deployers of AI systems laid down in Union or national law.
(137)
Níor cheart comhlíonadh na n-oibleagáidí trédhearcachta a chumhdaítear leis an Rialachán seo a léirmhíniú mar léiriú go bhfuil úsáid nó aschur an chórais intleachta saorga dleathach faoin Rialachán seo nó faoi dhlí eile de chuid an Aontais agus na mBallstát agus ba cheart é a bheith gan dochar d’oibleagáidí trédhearcachta eile ar úsáideoirí gairmiúla na gcóras intleachta saorga a leagtar síos i ndlí an Aontais nó sa dlí náisiúnta.
(138)
AI is a rapidly developing family of technologies that requires regulatory oversight and a safe and controlled space for experimentation, while ensuring responsible innovation and integration of appropriate safeguards and risk mitigation measures. To ensure a legal framework that promotes innovation, is future-proof and resilient to disruption, Member States should ensure that their national competent authorities establish at least one AI regulatory sandbox at national level to facilitate the development and testing of innovative AI systems under strict regulatory oversight before these systems are placed on the market or otherwise put into service. Member States could also fulfil this obligation through participating in already existing regulatory sandboxes or establishing jointly a sandbox with one or more Member States’ competent authorities, insofar as this participation provides an equivalent level of national coverage for the participating Member States. AI regulatory sandboxes could be established in physical, digital or hybrid form and may accommodate physical as well as digital products. Establishing authorities should also ensure that the AI regulatory sandboxes have the adequate resources for their functioning, including financial and human resources.
(138)
Grúpa teicneolaíochtaí atá san intleacht shaorga atá ag forbairt go tapa agus a bhfuil formhaoirseacht rialála agus spás rialaithe sábháilte le haghaidh turgnamhaíochta ag teastáil ina leith, agus nuálaíocht fhreagrach agus comhtháthú coimircí iomchuí agus bearta maolaithe riosca á n-áirithiú. Chun creat dlíthiúil a chuireann an nuálaíocht chun cinn, atá stóinseach don todhchaí, agus athléimneach in aghaidh suaite a áirithiú, ba cheart do na Ballstáit a áirithiú go mbunódh a n-údaráis inniúla náisiúnta bosca gainimh rialála intleachta saorga amháin ar a laghad ar an leibhéal náisiúnta chun forbairt agus tástáil na gcóras intleachta saorga nuálach faoi fhormhaoirseacht rialála dhian a éascú sula gcuirfear na córais sin ar an margadh nó i mbun seirbhíse ar shlí eile. D’fhéadfadh na Ballstáit an oibleagáid sin a chomhlíonadh freisin trí pháirt a ghlacadh i mboscaí gainimh rialála atá ann cheana nó trí bhosca gainimh a bhunú go comhpháirteach le húdarás inniúil Ballstáit amháin nó níos mó, a mhéid a sholáthraíonn an rannpháirtíocht sin leibhéal coibhéiseach cumhdaigh náisiúnta do na Ballstáit rannpháirteacha. D’fhéadfaí boscaí gainimh rialála intleachta saorga a bhunú i bhfoirm fhisiciúil, dhigiteach nó hibrideach agus d’fhéadfadh siad freastal ar tháirgí fisiciúla agus digiteacha. Ba cheart d’údaráis bhunaitheacha a áirithiú freisin go bhfuil na hacmhainní leordhóthanacha ag na boscaí gainimh rialála intleachta saorga chun go bhfeidhmeoidh siad, lena n-áirítear acmhainní airgeadais agus daonna.
(139)
The objectives of the AI regulatory sandboxes should be to foster AI innovation by establishing a controlled experimentation and testing environment in the development and pre-marketing phase with a view to ensuring compliance of the innovative AI systems with this Regulation and other relevant Union and national law. Moreover, the AI regulatory sandboxes should aim to enhance legal certainty for innovators and the competent authorities’ oversight and understanding of the opportunities, emerging risks and the impacts of AI use, to facilitate regulatory learning for authorities and undertakings, including with a view to future adaptations of the legal framework, to support cooperation and the sharing of best practices with the authorities involved in the AI regulatory sandbox, and to accelerate access to markets, including by removing barriers for SMEs, including start-ups. AI regulatory sandboxes should be widely available throughout the Union, and particular attention should be given to their accessibility for SMEs, including start-ups. The participation in the AI regulatory sandbox should focus on issues that raise legal uncertainty for providers and prospective providers to innovate, experiment with AI in the Union and contribute to evidence-based regulatory learning. The supervision of the AI systems in the AI regulatory sandbox should therefore cover their development, training, testing and validation before the systems are placed on the market or put into service, as well as the notion and occurrence of substantial modification that may require a new conformity assessment procedure. Any significant risks identified during the development and testing of such AI systems should result in adequate mitigation and, failing that, in the suspension of the development and testing process. Where appropriate, national competent authorities establishing AI regulatory sandboxes should cooperate with other relevant authorities, including those supervising the protection of fundamental rights, and could allow for the involvement of other actors within the AI ecosystem such as national or European standardisation organisations, notified bodies, testing and experimentation facilities, research and experimentation labs, European Digital Innovation Hubs and relevant stakeholder and civil society organisations. To ensure uniform implementation across the Union and economies of scale, it is appropriate to establish common rules for the AI regulatory sandboxes’ implementation and a framework for cooperation between the relevant authorities involved in the supervision of the sandboxes. AI regulatory sandboxes established under this Regulation should be without prejudice to other law allowing for the establishment of other sandboxes aiming to ensure compliance with law other than this Regulation. Where appropriate, relevant competent authorities in charge of those other regulatory sandboxes should consider the benefits of using those sandboxes also for the purpose of ensuring compliance of AI systems with this Regulation. Upon agreement between the national competent authorities and the participants in the AI regulatory sandbox, testing in real world conditions may also be operated and supervised in the framework of the AI regulatory sandbox.
(139)
Ba cheart a bheith i gceist le cuspóirí na mboscaí gainimh rialála intleachta saorga nuálaíocht intleachta saorga a chothú trí thimpeallacht rialaithe turgnamhaíochta agus tástála a bhunú sa chéim forbartha agus réamh-mhargaithe d’fhonn a áirithiú go gcomhlíonfaidh na córais intleachta saorga nuálacha an Rialachán seo agus dlí ábhartha eile de chuid an Aontais agus an dlí náisiúnta eile. Thairis sin, le boscaí gainimh rialála intleachta saorga, ba cheart é a bheith mar aidhm feabhas a chur ar dheimhneacht dhlíthiúil do nuálaithe agus ar fhormhaoirseacht agus tuiscint na n-údarás inniúil ar na deiseanna, ar na rioscaí atá ag teacht chun cinn agus ar thionchair na húsáide a bhaintear as intleacht shaorga, foghlaim rialála a éascú d’údaráis agus gnóthais, lena n-áirítear d’fhonn an creat dlíthiúil a oiriúnú amach anseo, tacú le comhar agus le comhroinnt dea-chleachtas leis na húdaráis a bhfuil baint acu le bosca gainimh rialála intleachta saorga, agus rochtain ar mhargaí a luathú, lena n-áirítear trí bhacainní do FBManna a bhaint, lena n-áirítear gnólachtaí nuathionscanta. Ba cheart boscaí gainimh rialála intleachta saorga a bheith ar fáil go forleathan ar fud an Aontais, agus ba cheart aird ar leith a thabhairt ar a n-inrochtaineacht do FBManna, lena n-áirítear gnólachtaí nuathionscanta. Ba cheart an rannpháirtíocht sa bhosca gainimh rialála intleachta saorga díriú ar shaincheisteanna lena n-ardaítear éiginnteacht dhlíthiúil do sholáthraithe agus soláthraithe ionchasacha chun nuálaíocht a dhéanamh, triail a bhaint as an intleacht shaorga san Aontas agus cur le foghlaim rialála atá bunaithe ar fhianaise. Dá bhrí sin, ba cheart a chumhdach leis an maoirseacht ar na córais intleachta saorga sa bhosca gainimh rialála intleachta saorga a bhforbairt, a n-oiliúint, a dtástáil agus a mbailíochtú sula gcuirfear na córais ar an margadh nó i mbun seirbhíse, chomh maith leis an gcoincheap maidir le modhnú substaintiúil agus tarlú an mhodhnaithe shubstaintiúil a bhféadfadh nós imeachta nua um measúnú comhréireachta a bheith ag teastáil ina leith. Má shainaithnítear aon rioscaí suntasacha le linn fhorbairt agus thástáil na gcóras intleachta saorga sin, ba cheart maolú leordhóthanach a dhéanamh agus, ina éagmais sin, ba cheart an próiseas forbartha agus tástála a chur ar fionraí. I gcás inarb iomchuí, ba cheart d’údaráis inniúla náisiúnta a bhunaíonn boscaí gainimh rialála intleachta saorga oibriú i gcomhar le húdaráis ábhartha eile, lena n-áirítear iad siúd a dhéanann maoirseacht ar chosaint na gceart bunúsach, agus d’fhéadfaidís rannpháirtíocht gníomhaithe eile a cheadú laistigh den éiceachóras intleachta saorga, amhail eagraíochtaí náisiúnta nó Eorpacha um chaighdeánú, comhlachtaí faoina dtugtar fógra, saoráidí tástála agus turgnamhaíochta, saotharlanna taighde agus turgnamhaíochta, Moil Eorpacha maidir leis an Nuálaíocht Dhigiteach agus eagraíochtaí ábhartha geallsealbhóirí agus na sochaí sibhialta. Chun cur chun feidhme aonfhoirmeach ar fud an Aontais agus barainneacht scála a áirithiú, is iomchuí rialacha comhchoiteanna a bhunú maidir leis na boscaí gainimh rialála intleachta saorga a chur chun feidhme agus creat a bhunú maidir le comhar idir na húdaráis ábhartha atá rannpháirteach i maoirseacht na mboscaí gainimh. Ba cheart boscaí gainimh rialála intleachta saorga a bhunaítear faoin Rialachán seo a bheith gan dochar do dhlí eile lena gceadaítear boscaí gainimh eile a bhunú arb é is aidhm dóibh comhlíonadh dlí, cé is moite den Rialachán seo, a áirithiú. I gcás inarb iomchuí, ba cheart do na húdaráis inniúla ábhartha atá i gceannas ar na boscaí gainimh rialála eile féachaint ar na tairbhí a bhaineann leis na boscaí gainimh sin a úsáid freisin chun comhlíonadh na gcóras intleachta saorga leis an Rialachán seo a áirithiú. Féadfar tástáil faoi fhíordhálaí a oibriú agus a mhaoirsiú freisin faoi chuimsiú an bhosca gainimh rialála intleachta saorga, faoi réir comhaontú idir na húdaráis inniúla náisiúnta agus na rannpháirtithe sa bhosca gainimh rialála intleachta saorga.
(140)
This Regulation should provide the legal basis for the providers and prospective providers in the AI regulatory sandbox to use personal data collected for other purposes for developing certain AI systems in the public interest within the AI regulatory sandbox, only under specified conditions, in accordance with Article 6(4) and Article 9(2), point (g), of Regulation (EU) 2016/679, and Articles 5, 6 and 10 of Regulation (EU) 2018/1725, and without prejudice to Article 4(2) and Article 10 of Directive (EU) 2016/680. All other obligations of data controllers and rights of data subjects under Regulations (EU) 2016/679 and (EU) 2018/1725 and Directive (EU) 2016/680 remain applicable. In particular, this Regulation should not provide a legal basis in the meaning of Article 22(2), point (b) of Regulation (EU) 2016/679 and Article 24(2), point (b) of Regulation (EU) 2018/1725. Providers and prospective providers in the AI regulatory sandbox should ensure appropriate safeguards and cooperate with the competent authorities, including by following their guidance and acting expeditiously and in good faith to adequately mitigate any identified significant risks to safety, health, and fundamental rights that may arise during the development, testing and experimentation in that sandbox.
(140)
Ba cheart an Rialachán seo a bheith ina bhunús dlí do na soláthraithe agus na soláthraithe ionchasacha sa bhosca gainimh rialála intleachta saorga chun sonraí pearsanta a bhailítear chun críoch eile a úsáid chun córais intleachta saorga áirithe a fhorbairt ar mhaithe le leas an phobail faoi chuimsiú an bhosca gainimh rialála intleachta saorga, faoi choinníollacha sonraithe amháin, i gcomhréir le hAirteagail 6(4) agus 9(2), pointe (g), de Rialachán (AE) 2016/679, agus Airteagail 5, 6 agus 10 de Rialachán (AE) 2018/1725, agus gan dochar d’Airteagail 4(2) agus 10 de Threoir (AE) 2016/680. Beidh feidhm fós ag na hoibleagáidí eile ar fad atá ar rialaitheoirí sonraí agus ag cearta na n-ábhar sonraí faoi Rialacháin (AE) 2016/679 agus (AE) 2018/1725 agus Treoir (AE) 2016/680. Go háirithe, níor cheart bunús dlí a sholáthar leis an Rialachán seo de réir bhrí Airteagal 22(2), pointe (b), de Rialachán (AE) 2016/679 agus Airteagal 24(2), pointe (b), de Rialachán (AE) 2018/1725. Ba cheart do sholáthraithe agus soláthraithe ionchasacha an bhosca gainimh rialála intleachta saorga coimircí iomchuí a áirithiú agus oibriú i gcomhar leis na húdaráis inniúla, lena n-áirítear trína dtreoir a leanúint agus trí ghníomhú go tapa agus de mheon macánta chun aon ardriosca sainaitheanta suntasach a mhaolú maidir le sábháilteacht, sláinte, agus cearta bunúsacha a d’fhéadfadh teacht chun cinn le linn na forbartha, na tástála agus na turgnamhaíochta sa bhosca gainimh sin.
(141)
In order to accelerate the process of development and the placing on the market of the high-risk AI systems listed in an annex to this Regulation, it is important that providers or prospective providers of such systems may also benefit from a specific regime for testing those systems in real world conditions, without participating in an AI regulatory sandbox. However, in such cases, taking into account the possible consequences of such testing on individuals, it should be ensured that appropriate and sufficient guarantees and conditions are introduced by this Regulation for providers or prospective providers. Such guarantees should include, inter alia, requesting informed consent of natural persons to participate in testing in real world conditions, with the exception of law enforcement where the seeking of informed consent would prevent the AI system from being tested. Consent of subjects to participate in such testing under this Regulation is distinct from, and without prejudice to, consent of data subjects for the processing of their personal data under the relevant data protection law. It is also important to minimise the risks and enable oversight by competent authorities and therefore require prospective providers to have a real-world testing plan submitted to the competent market surveillance authority, register the testing in dedicated sections in the EU database subject to some limited exceptions, set limitations on the period for which the testing can be done and require additional safeguards for persons belonging to certain vulnerable groups, as well as a written agreement defining the roles and responsibilities of prospective providers and deployers and effective oversight by competent personnel involved in the real-world testing. Furthermore, it is appropriate to envisage additional safeguards to ensure that the predictions, recommendations or decisions of the AI system can be effectively reversed and disregarded and that personal data is protected and is deleted when the subjects have withdrawn their consent to participate in the testing without prejudice to their rights as data subjects under the Union data protection law. As regards transfer of data, it is also appropriate to envisage that data collected and processed for the purpose of testing in real-world conditions should be transferred to third countries only where appropriate and applicable safeguards under Union law are implemented, in particular in accordance with bases for transfer of personal data under Union law on data protection, while for non-personal data appropriate safeguards are put in place in accordance with Union law, such as Regulations (EU) 2022/868 (42) and (EU) 2023/2854 (43) of the European Parliament and of the Council.
(141)
Chun dlús a chur leis an bpróiseas chun na córais intleachta saorga ardriosca a liostaítear in iarscríbhinn a ghabhann leis an Rialachán seo a fhorbairt agus a chur ar an margadh, tá sé tábhachtach go bhféadfaidh soláthraithe nó soláthraithe ionchasacha na gcóras sin tairbhiú de chóras sonrach chun na córais sin a thástáil faoi fhíordhálaí, gan a bheith rannpháirteach sa bhosca gainimh rialála intleachta saorga. I gcásanna den sórt sin, áfach, agus na hiarmhairtí, a d’fhéadfadh a bheith ag an tástáil sin, ar dhaoine aonair á gcur san áireamh, ba cheart a áirithiú go dtabharfar isteach leis an Rialachán seo ráthaíochtaí agus coinníollacha iomchuí agus leordhóthanacha do sholáthraithe nó soláthraithe ionchasacha. Ba cheart a áireamh ar na ráthaíochtaí sin, inter alia, toiliú feasach a iarraidh ó dhaoine nádúrtha chun páirt a ghlacadh sa tástáil faoi fhíordhálaí, cé is moite d’fhorfheidhmiú an dlí i gcás ina gcuirfí cosc ar thástáil an chórais intleachta saorga mar gheall ar thoiliú feasach a bheith á lorg. Tá toiliú na ndaoine is ábhar don tástáil chun páirt a ghlacadh sa tástáil sin faoin Rialachán seo éagsúil ó thoiliú na n-ábhar sonraí chun a sonraí pearsanta a phróiseáil faoin dlí ábhartha maidir le cosaint sonraí agus gan dochar don toiliú sin. Tá sé tábhachtach freisin na rioscaí a íoslaghdú agus formhaoirseacht ag na húdaráis inniúla a chumasú agus, dá bhrí sin, ceangal a chur ar sholáthraithe ionchasacha plean tástála i bhfíordhálaí a chur faoi bhráid an údaráis inniúil faireachais margaidh, an tástáil a chlárú i gcodanna tiomnaithe de bhunachar sonraí an Aontais faoi réir roinnt eisceachtaí teoranta, teorainneacha a leagan síos maidir leis an tréimhse inar féidir an tástáil a dhéanamh agus coimircí breise a éileamh do dhaoine, lena n-áirítear daoine ar de ghrúpaí leochaileacha áirithe iad chomh maith le comhaontú i scríbhinn lena sainítear róil agus freagrachtaí na soláthraithe ionchasacha agus na n-úsáideoirí gairmiúla agus formhaoirseacht éifeachtach ag pearsanra inniúil a bhfuil baint acu leis an tástáil i bhfíordhálaí. Thairis sin, is iomchuí coimircí breise a bheartú chun a áirithiú gur féidir tuartha, moltaí nó cinntí an chórais intleachta saorga a aisiompú go héifeachtach agus neamhaird a thabhairt orthu agus go ndéanfar sonraí pearsanta a chosaint agus a scriosadh nuair a tharraingíonn na daoine is ábhar siar a dtoiliú chun páirt a ghlacadh sa tástáil gan dochar dá gcearta mar ábhair sonraí faoi dhlí an Aontais maidir le cosaint sonraí. A mhéid a bhaineann le sonraí a aistriú, is iomchuí a bheartú nár cheart sonraí a bhailítear agus ar a ndéantar próiseáil chun críche tástála i bhfíordhálaí a aistriú go tríú tíortha ach amháin i gcás ina gcuirtear coimircí iomchuí agus is infheidhme faoi dhlí an Aontais chun feidhme, go háirithe i gcomhréir le boinn chun sonraí pearsanta a aistriú faoi dhlí an Aontais maidir le cosaint sonraí, agus, i gcás sonraí neamhphearsanta, cuirtear coimircí iomchuí i bhfeidhm i gcomhréir le dlí an Aontais, amhail Rialacháin (AE) 2022/868 (42) agus (AE) 2023/2854 (43) ó Pharlaimint na hEorpa agus ón gComhairle.
(142)
To ensure that AI leads to socially and environmentally beneficial outcomes, Member States are encouraged to support and promote research and development of AI solutions in support of those outcomes, such as AI-based solutions to increase accessibility for persons with disabilities, tackle socio-economic inequalities, or meet environmental targets, by allocating sufficient resources, including public and Union funding, and, where appropriate and provided that the eligibility and selection criteria are fulfilled, considering in particular projects which pursue such objectives. Such projects should be based on the principle of interdisciplinary cooperation between AI developers, experts on inequality and non-discrimination, accessibility, consumer, environmental, and digital rights, as well as academics.
(142)
Chun a áirithiú go mbeidh torthaí a théann chun tairbhe don tsochaí agus don chomhshaol mar thoradh ar an intleacht shaorga, moltar do na Ballstáit tacú le taighde agus forbairt réiteach intleachta saorga agus iad a chur chun cinn chun tacú le torthaí atá chun tairbhe na sochaí agus an chomhshaoil, amhail réitigh bunaithe ar an intleacht shaorga chun inrochtaineacht do dhaoine faoi mhíchumas a mhéadú, dul i ngleic le neamhionannais shocheacnamaíocha, nó spriocanna comhshaoil a bhaint amach, trí acmhainní leordhóthanacha a leithdháileadh, lena n-áirítear cistiú poiblí agus cistiú ón Aontas, agus, i gcás inarb iomchuí agus ar choinníoll go gcomhlíontar na critéir incháilitheachta agus roghnúcháin, agus go háirithe tionscadail lena saothraítear na cuspóirí sin á gcur san áireamh. Ba cheart na tionscadail sin a bheith bunaithe ar phrionsabal an chomhair idirdhisciplínigh idir forbróirí intleachta saorga, saineolaithe ar neamhionannas agus neamh-idirdhealú, inrochtaineacht, cearta tomhaltóirí, comhshaoil agus digiteacha, chomh maith le hacadóirí.
(143)
In order to promote and protect innovation, it is important that the interests of SMEs, including start-ups, that are providers or deployers of AI systems are taken into particular account. To that end, Member States should develop initiatives which are targeted at those operators, including on awareness raising and information communication. Member States should provide SMEs, including start-ups, that have a registered office or a branch in the Union, with priority access to the AI regulatory sandboxes provided that they fulfil the eligibility conditions and selection criteria and without precluding other providers and prospective providers from accessing the sandboxes, provided that the same conditions and criteria are fulfilled. Member States should utilise existing channels and, where appropriate, establish new dedicated channels for communication with SMEs, including start-ups, deployers, other innovators and, as appropriate, local public authorities, to support SMEs throughout their development path by providing guidance and responding to queries about the implementation of this Regulation. Where appropriate, these channels should work together to create synergies and ensure homogeneity in their guidance to SMEs, including start-ups, and deployers. Additionally, Member States should facilitate the participation of SMEs and other relevant stakeholders in the standardisation development processes. Moreover, the specific interests and needs of providers that are SMEs, including start-ups, should be taken into account when notified bodies set conformity assessment fees. The Commission should regularly assess the certification and compliance costs for SMEs, including start-ups, through transparent consultations and should work with Member States to lower such costs. For example, translation costs related to mandatory documentation and communication with authorities may constitute a significant cost for providers and other operators, in particular those of a smaller scale. Member States should, where possible, ensure that one of the languages determined and accepted by them for relevant providers’ documentation and for communication with operators is one which is broadly understood by the largest possible number of cross-border deployers. In order to address the specific needs of SMEs, including start-ups, the Commission should provide standardised templates for the areas covered by this Regulation, upon request of the Board. Additionally, the Commission should complement Member States’ efforts by providing a single information platform with easy-to-use information with regard to this Regulation for all providers and deployers, by organising appropriate communication campaigns to raise awareness about the obligations arising from this Regulation, and by evaluating and promoting the convergence of best practices in public procurement procedures in relation to AI systems. Medium-sized enterprises which until recently qualified as small enterprises within the meaning of the Annex to Commission Recommendation 2003/361/EC (44) should have access to those support measures, as those new medium-sized enterprises may sometimes lack the legal resources and training necessary to ensure proper understanding of, and compliance with, this Regulation.
(143)
Chun an nuálaíocht a chur chun cinn agus a chosaint, tá sé tábhachtach leasanna FBManna, lena n-áirítear gnólachtaí nuathionscanta ar soláthraithe nó úsáideoirí gairmiúla córas intleachta saorga iad, a chur san áireamh go háirithe. Chuige sin, ba cheart do na Ballstáit tionscnaimh a fhorbairt atá dírithe ar na hoibreoirí sin, lena n-áirítear maidir le hardú feasachta agus cumarsáid faisnéise. Ba cheart do na Ballstáit rochtain tosaíochta ar bhoscaí gainimh rialála intleachta saorga a sholáthar do FBManna, lena n-áirítear gnólachtaí nuathionscanta, a bhfuil oifig chláraithe nó brainse acu san Aontas, ar choinníoll go gcomhlíonann siad na coinníollacha incháilitheachta agus na critéir roghnúcháin agus gan bac a chur ar sholáthraithe agus soláthraithe ionchasacha eile rochtain a fháil ar na boscaí gainimh ar choinníoll go gcomhlíontar na coinníollacha agus na critéir chéanna. Ba cheart do na Ballstáit na bealaí atá ann cheana a úsáid agus, i gcás inarb iomchuí, ba cheart dóibh bealaí tiomnaithe nua le haghaidh cumarsáid le FBManna, lena n-áirítear gnólachtaí nuathionscanta, úsáideoirí gairmiúla, nuálaithe eile agus, de réir mar is iomchuí, údaráis phoiblí áitiúla, a bhunú chun tacú le FBManna feadh a gconaire forbartha trí threoir a thabhairt agus freagairt ar fhiosrúcháin maidir le cur chun feidhme an Rialacháin seo. I gcás inarb iomchuí, ba cheart do na bealaí sin oibriú le chéile chun sineirgí a chruthú agus aonchineálacht a áirithiú ina dtreoir do FBManna, lena n-áirítear gnólachtaí nuathionscanta agus úsáideoirí gairmiúla. Ina theannta sin, ba cheart do na Ballstáit rannpháirtíocht FBManna agus geallsealbhóirí ábhartha eile sna próisis forbartha um chaighdeánú a éascú. Thairis sin, ba cheart leasanna sonracha agus riachtanais shonracha ar FBManna iad, lena n-áirítear gnólachtaí nuathionscanta a chur san áireamh agus táillí measúnaithe comhréireachta á socrú ag na comhlachtaí faoina dtugtar fógra. Ba cheart don Choimisiún measúnú tráthrialta ar na costais deimhniúcháin agus chomhlíontachta do FBManna, lena n-áirítear gnólachtaí nuathionscanta, trí chomhairliúcháin thrédhearcacha agus oibriú leis na Ballstáit chun na costais sin a laghdú. Mar shampla, d’fhéadfadh costais shuntasacha a bheith i gceist le costais aistriúcháin a bhaineann le doiciméadacht agus cumarsáid shainordaitheach leis na húdaráis, le haghaidh soláthraithe agus oibreoirí eile, go háirithe iad siúd ar mionscála. Ba cheart do na Ballstáit a áirithiú, más féidir, maidir leis na teangacha a chinnfidh siad agus a nglacfaidh siad leo le haghaidh dhoiciméadacht ábhartha na soláthraithe agus cumarsáid leis na hoibreoirí inti, gur teanga í ceann amháin de na teangacha sin a bhfuil tuiscint éigin ag an líon is mó úsáideoirí gairmiúla trasteorann is féidir uirthi. Chun aghaidh a thabhairt ar riachtanais shonracha FBManna, lena n-áirítear gnólachtaí nuathionscanta, ba cheart don Choimisiún teimpléid chaighdeánaithe a sholáthar le haghaidh na réimsí a chumhdaítear leis an Rialachán seo arna iarraidh sin don Bhord. Ina theannta sin, ba cheart don Choimisiún iarrachtaí na mBallstát a chomhlánú trí fhaisnéis atá éasca le húsáid maidir leis an Rialachán seo a sholáthar d’ardán faisnéise aonair do gach soláthraí agus úsáideoir gairmiúil, trí fheachtais chumarsáide iomchuí a eagrú chun feasacht a ardú maidir leis na hoibleagáidí a eascraíonn as an Rialachán seo, agus trí mheasúnú agus cur chun cinn a dhéanamh ar chóineasú na gcleachtas is fearr i nósanna imeachta soláthair phoiblí i ndáil le córais intleachta saorga. 
Ba cheart rochtain a bheith ag fiontair mheánmhéide a cháiligh mar fhiontair bheaga go dtí le déanaí de réir bhrí na hIarscríbhinne a ghabhann le Moladh 2003/361/CE ón gCoimisiún (44) ar na bearta tacaíochta sin, toisc go bhféadfadh sé nach mbeadh na hacmhainní dlíthiúla ná an oiliúint is gá ag na fiontair mheánmhéide nua sin uaireanta chun tuiscint cheart ar an Rialachán seo agus comhlíonadh an Rialacháin seo a áirithiú.
(144)
In order to promote and protect innovation, the AI-on-demand platform and all relevant Union funding programmes and projects, such as the Digital Europe Programme and Horizon Europe, implemented by the Commission and the Member States at Union or national level, should, as appropriate, contribute to the achievement of the objectives of this Regulation.
(144)
Chun nuálaíocht a chur chun cinn agus a chosaint, ba cheart don Ardán um Intleacht Shaorga ar éileamh, gach clár agus tionscadal cistiúcháin ábhartha de chuid an Aontais, amhail an Clár don Eoraip Dhigiteach, Fís Eorpach, arna gcur chun feidhme ag an gCoimisiún agus ag na Ballstáit ar leibhéal an Aontais nó ar an leibhéal náisiúnta, a bheith ina rannchuidiú, de réir mar is iomchuí, le cuspóirí an Rialacháin seo a bhaint amach.
(145)
In order to minimise the risks to implementation resulting from a lack of knowledge and expertise in the market, as well as to facilitate compliance of providers, in particular SMEs, including start-ups, and notified bodies with their obligations under this Regulation, the AI-on-demand platform, the European Digital Innovation Hubs and the testing and experimentation facilities established by the Commission and the Member States at Union or national level should contribute to the implementation of this Regulation. Within their respective mission and fields of competence, the AI-on-demand platform, the European Digital Innovation Hubs and the testing and experimentation facilities are able to provide, in particular, technical and scientific support to providers and notified bodies.
(145)
Chun na rioscaí maidir le cur chun feidhme mar thoradh ar easpa eolais agus saineolais sa mhargadh a íoslaghdú agus chun comhlíontacht na soláthraithe, go háirithe FBManna, lena n-áirítear gnólachtaí nuathionscanta, agus na gcomhlachtaí faoina dtugtar fógra lena n-oibleagáidí faoin Rialachán seo a éascú, ba cheart don Ardán um Intleacht Shaorga ar éileamh, do na Moil Eorpacha maidir leis an Nuálaíocht Dhigiteach agus do na saoráidí tástála agus turgnamhaíochta arna mbunú ag an gCoimisiún agus ag na Ballstáit ar leibhéal an Aontais nó ar an leibhéal náisiúnta rannchuidiú le cur chun feidhme an Rialacháin seo. Laistigh dá misean agus dá réimsí inniúlachta faoi seach, tá an tArdán um Intleacht Shaorga ar éileamh, na Moil Eorpacha maidir leis an Nuálaíocht Dhigiteach agus na saoráidí tástála agus turgnamhaíochta in ann tacaíocht theicniúil agus eolaíoch a sholáthar go háirithe do sholáthraithe agus comhlachtaí faoina dtugtar fógra.
(146)
Moreover, in light of the very small size of some operators and in order to ensure proportionality regarding costs of innovation, it is appropriate to allow microenterprises to fulfil one of the most costly obligations, namely to establish a quality management system, in a simplified manner which would reduce the administrative burden and the costs for those enterprises without affecting the level of protection and the need for compliance with the requirements for high-risk AI systems. The Commission should develop guidelines to specify the elements of the quality management system to be fulfilled in this simplified manner by microenterprises.
(146)
Thairis sin, i bhfianaise a laghad is atá cuid de na hoibreoirí agus chun comhréireacht maidir le costais nuálaíochta a áirithiú, is iomchuí cead a thabhairt do mhicrifhiontair ceann de na hoibleagáidí is costasaí a chomhlíonadh ar bhealach simplithe, eadhon an oibleagáid córas bainistíochta cáilíochta a bhunú lena laghdófaí an t-ualach riaracháin agus na costais do na fiontair sin gan difear a dhéanamh don leibhéal cosanta agus don ghá atá leis na ceanglais maidir le córais intleachta saorga ardriosca a chomhlíonadh. Ba cheart don Choimisiún treoirlínte a fhorbairt chun na gnéithe den chóras bainistíochta cáilíochta atá le comhlíonadh ar an mbealach simplithe sin ag micrifhiontair a shonrú.
(147)
It is appropriate that the Commission facilitates, to the extent possible, access to testing and experimentation facilities for bodies, groups or laboratories established or accredited pursuant to any relevant Union harmonisation legislation and which fulfil tasks in the context of the conformity assessment of products or devices covered by that Union harmonisation legislation. This is, in particular, the case as regards expert panels, expert laboratories and reference laboratories in the field of medical devices pursuant to Regulations (EU) 2017/745 and (EU) 2017/746.
(147)
Is iomchuí go n-éascóidh an Coimisiún, a mhéid agus is féidir, rochtain ar shaoráidí tástála agus turgnamhaíochta le haghaidh comhlachtaí, grúpaí nó saotharlanna arna mbunú nó arna gcreidiúnú de bhun aon reachtaíochta comhchuibhithe ábhartha de chuid an Aontais agus a chomhlíonann cúraimí i gcomhthéacs measúnaithe comhréireachta táirgí nó feistí a chumhdaítear le reachtaíocht chomhchuibhithe sin an Aontais. Is amhlaidh atá go háirithe i gcás painéil saineolaithe, sain-saotharlanna agus saotharlanna tagartha i réimse na bhfeistí leighis de bhun Rialacháin (AE) 2017/745 agus (AE) 2017/746.
(148)
This Regulation should establish a governance framework that allows the application of this Regulation to be coordinated and supported at national level, builds capabilities at Union level and integrates stakeholders in the field of AI. The effective implementation and enforcement of this Regulation require a governance framework that allows central expertise to be coordinated and built up at Union level. The AI Office was established by Commission Decision (45) and has as its mission to develop Union expertise and capabilities in the field of AI and to contribute to the implementation of Union law on AI. Member States should facilitate the tasks of the AI Office with a view to supporting the development of Union expertise and capabilities at Union level and to strengthening the functioning of the digital single market. Furthermore, a Board composed of representatives of the Member States, a scientific panel to integrate the scientific community and an advisory forum to contribute stakeholder input to the implementation of this Regulation, at Union and national level, should be established. The development of Union expertise and capabilities should also include making use of existing resources and expertise, in particular through synergies with structures built up in the context of the Union-level enforcement of other law and synergies with related initiatives at Union level, such as the EuroHPC Joint Undertaking and the AI testing and experimentation facilities under the Digital Europe Programme.
(148)
Leis an Rialachán seo, ba cheart creat rialachais a bhunú lenar féidir cur i bhfeidhm an Rialacháin seo a chomhordú agus tacú leis ar an leibhéal náisiúnta, chomh maith le cumais a fhorbairt ar leibhéal an Aontais agus geallsealbhóirí a chomhtháthú i réimse na hintleachta saorga. Chun an Rialachán seo a chur chun feidhme agus a fhorfheidhmiú go héifeachtach, tá gá le creat rialachais lenar féidir saineolas lárnach a chomhordú agus a fhorbairt ar leibhéal an Aontais. Bunaíodh an Oifig um Intleacht Shaorga le Cinneadh ón gCoimisiún (45) agus tá sé de mhisean aici saineolas agus cumais an Aontais a fhorbairt i réimse na hintleachta saorga agus rannchuidiú le cur chun feidhme dhlí an Aontais maidir leis an intleacht shaorga. Ba cheart do na Ballstáit cúraimí na hOifige um Intleacht Shaorga a éascú d’fhonn tacú le forbairt shaineolas agus chumais an Aontais ar leibhéal an Aontais agus d’fhonn feidhmiú an mhargaidh aonair dhigitigh a neartú. Ina theannta sin, ba cheart Bord a bhunú ar a mbeidh ionadaithe ó na Ballstáit, painéal eolaíoch chun an pobal eolaíoch a chomhtháthú agus fóram comhairleach chun ionchur ó gheallsealbhóirí a chur ar fáil maidir le cur chun feidhme an Rialacháin seo, ar leibhéal an Aontais agus ar an leibhéal náisiúnta. Ba cheart a áireamh freisin agus saineolas agus cumais an Aontais á bhforbairt úsáid a bhaint as na hacmhainní agus an saineolas atá ann cheana, go háirithe trí shineirgí le struchtúir arna bhforbairt i gcomhthéacs forfheidhmiú dlí eile ar leibhéal an Aontais agus sineirgí le tionscnaimh ghaolmhara ar leibhéal an Aontais, amhail an Comhghnóthas EuroHPC agus saoráidí tástála agus turgnamhaíochta intleachta saorga faoin gClár don Eoraip Dhigiteach.
(149)
In order to facilitate a smooth, effective and harmonised implementation of this Regulation, a Board should be established. The Board should reflect the various interests of the AI eco-system and be composed of representatives of the Member States. The Board should be responsible for a number of advisory tasks, including issuing opinions, recommendations or advice, or contributing to guidance on matters related to the implementation of this Regulation, including on enforcement matters, technical specifications or existing standards regarding the requirements established in this Regulation, and providing advice to the Commission and the Member States and their national competent authorities on specific questions related to AI. In order to give some flexibility to Member States in the designation of their representatives in the Board, such representatives may be any persons belonging to public entities who should have the relevant competences and powers to facilitate coordination at national level and contribute to the achievement of the Board’s tasks. The Board should establish two standing sub-groups to provide a platform for cooperation and exchange among market surveillance authorities and notifying authorities on issues related, respectively, to market surveillance and notified bodies. The standing sub-group for market surveillance should act as the administrative cooperation group (ADCO) for this Regulation within the meaning of Article 30 of Regulation (EU) 2019/1020. In accordance with Article 33 of that Regulation, the Commission should support the activities of the standing sub-group for market surveillance by undertaking market evaluations or studies, in particular with a view to identifying aspects of this Regulation requiring specific and urgent coordination among market surveillance authorities. The Board may establish other standing or temporary sub-groups as appropriate for the purpose of examining specific issues. The Board should also cooperate, as appropriate, with relevant Union bodies, expert groups and networks active in the context of relevant Union law, including in particular those active under relevant Union law on data, digital products and services.
(149)
Chun cur chun feidhme rianúil éifeachtach agus comhchuibhithe an Rialacháin seo a éascú, ba cheart Bord a bhunú. Ba cheart don Bhord leasanna éagsúla an éiceachórais intleachta saorga a léiriú agus ba cheart ionadaithe ó na Ballstáit a bheith air. Ba cheart don Bhord a bheith freagrach as roinnt cúraimí comhairleacha, lena n-áirítear tuairimí, moltaí, comhairle a thabhairt nó cur le treoir maidir le hábhair a bhaineann leis an Rialachán seo a chur chun feidhme, lena n-áirítear maidir le cúrsaí forfheidhmiúcháin, sonraíochtaí teicniúla nó caighdeáin atá ann cheana i dtaca leis na ceanglais a bhunaítear sa Rialachán seo agus comhairle a thabhairt don Choimisiún agus do na Ballstáit agus dá n-údaráis inniúla náisiúnta maidir le ceisteanna sonracha a bhaineann leis an intleacht shaorga. Chun solúbthacht áirithe a thabhairt do na Ballstáit maidir lena n-ionadaithe ar an mBord a ainmniú, féadfaidh aon duine a bhaineann le heintitis phoiblí ar cheart na hinniúlachtaí agus na cumhachtaí ábhartha a bheith aige chun comhordú a éascú ar an leibhéal náisiúnta agus chun rannchuidiú le cúraimí an Bhoird a bhaint amach a bheith ina ionadaí sin. Ba cheart don Bhord dhá bhuan-fhoghrúpa a bhunú chun ardán comhair agus malartaithe a chur ar fáil i measc na n-údarás faireachais margaidh agus na n-údarás a thugann fógra maidir le saincheisteanna a bhaineann le faireachas margaidh agus le comhlachtaí faoina dtugtar fógra faoi seach. Ba cheart don bhuan-fhoghrúpa um fhaireachas margaidh gníomhú mar an ngrúpa comhair riaracháin (ADCO) don Rialachán seo de réir bhrí Airteagal 30 de Rialachán (AE) 2019/1020. I gcomhréir le hAirteagal 33 den Rialachán sin, ba cheart don Choimisiún tacú le gníomhaíochtaí an bhuan-fhoghrúpa um fhaireachas margaidh trí mheastóireachtaí margaidh nó staidéir mhargaidh a dhéanamh, go háirithe d’fhonn gnéithe den Rialachán seo a shainaithint a dteastaíonn comhordú sonrach agus práinneach ina leith i measc na n-údarás faireachais margaidh. Féadfaidh an Bord buan-fhoghrúpaí nó foghrúpaí sealadacha eile a bhunú de réir mar is iomchuí chun scrúdú a dhéanamh ar cheisteanna sonracha. Ba cheart don Bhord dul i gcomhar freisin, de réir mar is iomchuí, le comhlachtaí, grúpaí saineolaithe agus líonraí ábhartha an Aontais atá gníomhach i gcomhthéacs dhlí ábhartha an Aontais, lena n-áirítear go háirithe iad siúd atá gníomhach faoi dhlí ábhartha an Aontais maidir le sonraí, táirgí digiteacha agus seirbhísí digiteacha.
(150)
With a view to ensuring the involvement of stakeholders in the implementation and application of this Regulation, an advisory forum should be established to advise and provide technical expertise to the Board and the Commission. To ensure a varied and balanced stakeholder representation between commercial and non-commercial interests and, within the category of commercial interests, with regard to SMEs and other undertakings, the advisory forum should comprise, inter alia, industry, start-ups, SMEs, academia, civil society, including the social partners, as well as the Fundamental Rights Agency, ENISA, the European Committee for Standardization (CEN), the European Committee for Electrotechnical Standardization (CENELEC) and the European Telecommunications Standards Institute (ETSI).
(150)
D’fhonn a áirithiú go mbeidh na geallsealbhóirí rannpháirteach i gcur chun feidhme agus cur i bhfeidhm an Rialacháin seo, ba cheart fóram comhairleach a bhunú chun comhairle a thabhairt don Bhord agus don Choimisiún agus chun saineolas teicniúil a sholáthar dóibh. Chun ionadaíocht éagsúil agus chothrom na ngeallsealbhóirí a áirithiú idir leas tráchtála agus neamhthráchtála agus, laistigh den chatagóir leasanna tráchtála, maidir le FBManna agus gnóthais eile, ba cheart a bheith san fhóram comhairleach inter alia an lucht tionscail, gnólachtaí nuathionscanta, FBManna, an saol acadúil, an tsochaí shibhialta, lena n-áirítear na comhpháirtithe sóisialta, chomh maith leis an nGníomhaireacht um Chearta Bunúsacha, ENISA, an Coiste Eorpach um Chaighdeánú (CEN), Coiste Eorpach na gCaighdeán Leictriteicniúil (CENELEC) agus Institiúid Eorpach na gCaighdeán Teileachumarsáide (ETSI).
(151)
To support the implementation and enforcement of this Regulation, in particular the monitoring activities of the AI Office as regards general-purpose AI models, a scientific panel of independent experts should be established. The independent experts constituting the scientific panel should be selected on the basis of up-to-date scientific or technical expertise in the field of AI, should perform their tasks with impartiality and objectivity, and should ensure the confidentiality of information and data obtained in carrying out their tasks and activities. To allow the reinforcement of national capacities necessary for the effective enforcement of this Regulation, Member States should be able to request support from the pool of experts constituting the scientific panel for their enforcement activities.
(151)
Chun tacú le cur chun feidhme agus forfheidhmiú an Rialacháin seo, go háirithe gníomhaíochtaí faireacháin na hOifige um Intleacht Shaorga a mhéid a bhaineann le samhlacha intleachta saorga ilchuspóireacha, ba cheart painéal eolaíoch saineolaithe neamhspleácha a bhunú. Ba cheart na saineolaithe neamhspleácha atá mar chuid den phainéal eolaíoch a roghnú ar bhonn saineolas eolaíoch nó teicniúil atá cothrom le dáta i réimse na hintleachta saorga agus ba cheart dóibh a gcúraimí a chomhlíonadh le neamhchlaontacht, oibiachtúlacht agus rúndacht na faisnéise agus na sonraí a fhaightear agus a gcúraimí agus a ngníomhaíochtaí á ndéanamh acu a áirithiú. Chun gur féidir na hacmhainneachtaí náisiúnta is gá a threisiú chun an Rialachán seo a fhorfheidhmiú go héifeachtach, ba cheart do na Ballstáit a bheith in ann tacaíocht a iarraidh ar an díorma saineolaithe arb iad an painéal eolaíoch iad dá ngníomhaíochtaí forfheidhmiúcháin.
(152)
In order to support adequate enforcement as regards AI systems and reinforce the capacities of the Member States, Union AI testing support structures should be established and made available to the Member States.
(152)
Chun tacú le forfheidhmiú leordhóthanach a mhéid a bhaineann le córais intleachta saorga agus chun acmhainneachtaí na mBallstát a threisiú, ba cheart struchtúir tacaíochta tástála intleachta saorga an Aontais a bhunú agus a chur ar fáil do na Ballstáit.
(153)
Member States hold a key role in the application and enforcement of this Regulation. In that respect, each Member State should designate at least one notifying authority and at least one market surveillance authority as national competent authorities for the purpose of supervising the application and implementation of this Regulation. Member States may decide to appoint any kind of public entity to perform the tasks of the national competent authorities within the meaning of this Regulation, in accordance with their specific national organisational characteristics and needs. In order to increase organisational efficiency on the side of Member States and to set a single point of contact vis-à-vis the public and other counterparts at Member State and Union levels, each Member State should designate a market surveillance authority to act as a single point of contact.
(153)
Tá ról lárnach ag na Ballstáit ó thaobh an Rialacháin seo a chur i bhfeidhm agus a fhorfheidhmiú. Ina thaobh sin, ba cheart do gach Ballstát údarás amháin ar a laghad a thugann fógra agus údarás amháin ar a laghad faireachais margaidh a ainmniú mar údaráis inniúla náisiúnta chun maoirseacht a dhéanamh ar chur i bhfeidhm agus cur chun feidhme an Rialacháin seo. Féadfaidh na Ballstáit cinneadh a dhéanamh eintiteas poiblí d’aon chineál a cheapadh chun cúraimí na n-údarás inniúil náisiúnta a dhéanamh de réir bhrí an Rialacháin seo, i gcomhréir lena saintréithe eagraíochtúla náisiúnta sonracha agus lena riachtanais eagraíochtúla náisiúnta shonracha. Chun éifeachtúlacht eagraíochtúil na mBallstát a fheabhsú agus chun pointe teagmhála aonair a shocrú i leith an phobail agus comhpháirtithe eile ar leibhéal na mBallstát agus ar leibhéal an Aontais, ba cheart do gach Ballstát údarás faireachais margaidh a ainmniú chun gníomhú mar phointe teagmhála aonair.
(154)
The national competent authorities should exercise their powers independently, impartially and without bias, so as to safeguard the principles of objectivity of their activities and tasks and to ensure the application and implementation of this Regulation. The members of these authorities should refrain from any action incompatible with their duties and should be subject to confidentiality rules under this Regulation.
(154)
Ba cheart do na húdaráis inniúla náisiúnta a gcumhachtaí a fheidhmiú go neamhspleách, go neamhchlaonta agus gan laofacht, chun prionsabail oibiachtúlachta a ngníomhaíochtaí agus a gcúraimí a choimirciú agus chun cur i bhfeidhm agus cur chun feidhme an Rialacháin seo a áirithiú. Ba cheart do chomhaltaí na n-údarás sin staonadh ó aon ghníomhaíocht nach luíonn lena ndualgais agus ba cheart dóibh a bheith faoi réir rialacha rúndachta faoin Rialachán seo.
(155)
In order to ensure that providers of high-risk AI systems can take into account the experience on the use of high-risk AI systems for improving their systems and the design and development process or can take any possible corrective action in a timely manner, all providers should have a post-market monitoring system in place. Where relevant, post-market monitoring should include an analysis of the interaction with other AI systems including other devices and software. Post-market monitoring should not cover sensitive operational data of deployers which are law enforcement authorities. This system is also key to ensure that the possible risks emerging from AI systems which continue to ‘learn’ after being placed on the market or put into service can be addressed more efficiently and in a more timely manner. In this context, providers should also be required to have a system in place to report to the relevant authorities any serious incidents resulting from the use of their AI systems, meaning an incident or malfunctioning leading to death or serious damage to health, serious and irreversible disruption of the management and operation of critical infrastructure, infringements of obligations under Union law intended to protect fundamental rights, or serious damage to property or the environment.
(155)
Chun a áirithiú gur féidir le soláthraithe na gcóras intleachta saorga ardriosca taithí a chur san áireamh maidir le húsáid na gcóras intleachta saorga ardriosca chun feabhas a chur ar a gcórais agus ar an bpróiseas ceaptha agus forbartha nó gur féidir leo aon bheart ceartaitheach féideartha a dhéanamh go tráthúil, ba cheart do gach soláthraí córas faireacháin iarmhargaidh a bheith i bhfeidhm aige. I gcás inarb ábhartha, áireofar le faireachán iarmhargaidh anailís ar an idirghníomhaíocht le córais intleachta saorga eile, lena n-áirítear feistí agus bogearraí eile. Le faireachán iarmhargaidh, níor cheart sonraí oibríochtúla íogaire úsáideoirí gairmiúla ar údaráis forfheidhmithe dlí iad a chumhdach. Tá an córas sin ríthábhachtach freisin chun a áirithiú gur féidir dul i ngleic ar bhealach níos éifeachtúla agus níos tráthúla le rioscaí a d’fhéadfadh eascairt as córais intleachta saorga a leanann de bheith ‘ag foghlaim’ tar éis iad a chur ar an margadh nó i mbun seirbhíse. Sa chomhthéacs sin, ba cheart a cheangal ar sholáthraithe freisin córas a bheith i bhfeidhm acu chun aon teagmhas tromchúiseach a eascraíonn as úsáid a gcóras intleachta saorga a thuairisciú do na húdaráis ábhartha, is é sin teagmhas nó mífheidhmiú as a n-eascraíonn bás nó damáiste tromchúiseach don tsláinte, cur isteach tromchúiseach do-aisiompaithe ar bhainistiú agus oibriú bonneagair chriticiúil, sáruithe ar oibleagáidí faoi dhlí an Aontais atá beartaithe chun cearta bunúsacha nó damáiste tromchúiseach do mhaoin nó don chomhshaol a chosaint.
(156)
In order to ensure an appropriate and effective enforcement of the requirements and obligations set out by this Regulation, which is Union harmonisation legislation, the system of market surveillance and compliance of products established by Regulation (EU) 2019/1020 should apply in its entirety. Market surveillance authorities designated pursuant to this Regulation should have all enforcement powers laid down in this Regulation and in Regulation (EU) 2019/1020 and should exercise their powers and carry out their duties independently, impartially and without bias. Although the majority of AI systems are not subject to specific requirements and obligations under this Regulation, market surveillance authorities may take measures in relation to all AI systems when they present a risk in accordance with this Regulation. Due to the specific nature of Union institutions, agencies and bodies falling within the scope of this Regulation, it is appropriate to designate the European Data Protection Supervisor as a competent market surveillance authority for them. This should be without prejudice to the designation of national competent authorities by the Member States. Market surveillance activities should not affect the ability of the supervised entities to carry out their tasks independently, when such independence is required by Union law.
(156)
Chun a áirithiú go ndéantar na ceanglais agus na hoibleagáidí a leagtar amach sa Rialachán seo a fhorfheidhmiú go hiomchuí agus go héifeachtach, ar reachtaíocht chomhchuibhithe de chuid an Aontais í, ba cheart feidhm a bheith ag an gcóras maidir le faireachas margaidh ar tháirgí agus comhlíontacht táirgí, ina iomláine, a bhunaítear le Rialachán (AE) 2019/1020. Ba cheart na cumhachtaí forfheidhmiúcháin go léir a leagtar síos sa Rialachán seo agus i Rialachán (AE) 2019/1020 a bheith ag na húdaráis faireachais margaidh arna n-ainmniú de bhun an Rialacháin seo agus ba cheart dóibh a gcumhachtaí a fheidhmiú agus a ndualgais a chomhlíonadh go neamhspleách, go neamhchlaonta agus gan laofacht. Cé nach bhfuil formhór na gcóras intleachta saorga faoi réir ceanglais shonracha agus oibleagáidí sonracha faoin Rialachán seo, féadfaidh na húdaráis faireachais margaidh bearta a dhéanamh i ndáil leis na córais intleachta saorga uile nuair a bhaineann riosca leo i gcomhréir leis an Rialachán seo. De bharr chineál sonrach institiúidí, ghníomhaireachtaí agus chomhlachtaí an Aontais a thagann faoi raon feidhme an Rialacháin seo, is iomchuí an Maoirseoir Eorpach ar Chosaint Sonraí a ainmniú mar údarás inniúil faireachais margaidh dóibh. Ba cheart an méid sin a bheith gan dochar d’ainmniú údarás inniúil náisiúnta ag na Ballstáit. Níor cheart le gníomhaíochtaí faireachais margaidh difear a dhéanamh do chumas na n-eintiteas faoi mhaoirseacht a gcúraimí a dhéanamh go neamhspleách, nuair a cheanglaítear an neamhspleáchas sin le dlí an Aontais.
(157)
This Regulation is without prejudice to the competences, tasks, powers and independence of relevant national public authorities or bodies which supervise the application of Union law protecting fundamental rights, including equality bodies and data protection authorities. Where necessary for their mandate, those national public authorities or bodies should also have access to any documentation created under this Regulation. A specific safeguard procedure should be set for ensuring adequate and timely enforcement against AI systems presenting a risk to health, safety and fundamental rights. The procedure for such AI systems presenting a risk should be applied to high-risk AI systems presenting a risk, prohibited systems which have been placed on the market, put into service or used in violation of the prohibited practices laid down in this Regulation and AI systems which have been made available in violation of the transparency requirements laid down in this Regulation and present a risk.
(157)
Tá an Rialachán seo gan dochar d’inniúlachtaí, cúraimí, cumhachtaí agus neamhspleáchas na n-údarás nó na gcomhlachtaí poiblí náisiúnta ábhartha a dhéanann maoirseacht ar chur i bhfeidhm dhlí an Aontais lena gcosnaítear cearta bunúsacha, lena n-áirítear comhlachtaí comhionannais agus údaráis cosanta sonraí. I gcás inar gá sin dá sainordú, ba cheart rochtain a bheith ag na húdaráis nó comhlachtaí poiblí náisiúnta sin freisin ar aon doiciméadacht a chruthaítear faoin Rialachán seo. Ba cheart nós imeachta cosanta sonrach a leagan síos chun forfheidhmiú leordhóthanach tráthúil a áirithiú i gcoinne córais intleachta saorga a bhfuil riosca don tsláinte, don tsábháilteacht agus do chearta bunúsacha ag baint leo. Ba cheart an nós imeachta le haghaidh na gcóras intleachta saorga sin a bhfuil riosca ag baint leo a chur i bhfeidhm ar chórais intleachta saorga ardriosca a bhfuil riosca ag baint leo, ar chórais thoirmiscthe a cuireadh ar an margadh, a cuireadh i mbun seirbhíse nó a úsáideadh de shárú ar na cleachtais thoirmiscthe a leagtar síos sa Rialachán seo agus ar chórais intleachta saorga a cuireadh ar fáil de shárú ar na ceanglais trédhearcachta a leagtar síos sa Rialachán seo agus a bhfuil riosca ag baint leo.
(158)
Union financial services law includes internal governance and risk-management rules and requirements which are applicable to regulated financial institutions in the course of provision of those services, including when they make use of AI systems. In order to ensure coherent application and enforcement of the obligations under this Regulation and relevant rules and requirements of the Union financial services legal acts, the competent authorities for the supervision and enforcement of those legal acts, in particular competent authorities as defined in Regulation (EU) No 575/2013 of the European Parliament and of the Council (46) and Directives 2008/48/EC (47), 2009/138/EC (48), 2013/36/EU (49), 2014/17/EU (50) and (EU) 2016/97 (51) of the European Parliament and of the Council, should be designated, within their respective competences, as competent authorities for the purpose of supervising the implementation of this Regulation, including for market surveillance activities, as regards AI systems provided or used by regulated and supervised financial institutions unless Member States decide to designate another authority to fulfil these market surveillance tasks. Those competent authorities should have all powers under this Regulation and Regulation (EU) 2019/1020 to enforce the requirements and obligations of this Regulation, including powers to carry out ex post market surveillance activities that can be integrated, as appropriate, into their existing supervisory mechanisms and procedures under the relevant Union financial services law. It is appropriate to envisage that, when acting as market surveillance authorities under this Regulation, the national authorities responsible for the supervision of credit institutions regulated under Directive 2013/36/EU, which are participating in the Single Supervisory Mechanism established by Council Regulation (EU) No 1024/2013 (52), should report, without delay, to the European Central Bank any information identified in the course of their market surveillance activities that may be of potential interest for the European Central Bank’s prudential supervisory tasks as specified in that Regulation. To further enhance the consistency between this Regulation and the rules applicable to credit institutions regulated under Directive 2013/36/EU, it is also appropriate to integrate some of the providers’ procedural obligations in relation to risk management, post-market monitoring and documentation into the existing obligations and procedures under Directive 2013/36/EU. In order to avoid overlaps, limited derogations should also be envisaged in relation to the quality management system of providers and the monitoring obligation placed on deployers of high-risk AI systems to the extent that these apply to credit institutions regulated by Directive 2013/36/EU. The same regime should apply to insurance and re-insurance undertakings and insurance holding companies under Directive 2009/138/EC and the insurance intermediaries under Directive (EU) 2016/97 and other types of financial institutions subject to requirements regarding internal governance, arrangements or processes established pursuant to the relevant Union financial services law to ensure consistency and equal treatment in the financial sector.
(158)
Le dlí an Aontais maidir le seirbhísí airgeadais, cumhdaítear rialachas inmheánach agus rialacha agus ceanglais i dtaca le bainistíocht riosca is infheidhme maidir le hinstitiúidí airgeadais rialáilte le linn sholáthar na seirbhísí sin, lena n-áirítear nuair a bhaineann siad úsáid as córais intleachta saorga. Chun cur i bhfeidhm agus forfheidhmiú comhleanúnach na n-oibleagáidí faoin Rialachán seo agus rialacha agus cheanglais ábhartha ghníomhartha dlí an Aontais maidir le seirbhísí airgid a áirithiú, ba cheart na húdaráis atá inniúil ar mhaoirseacht agus forfheidhmiú na ngníomhartha dlí sin, go háirithe na húdaráis inniúla mar a shainmhínítear i Rialachán (AE) Uimh. 575/2013 ó Pharlaimint na hEorpa agus ón gComhairle (46) agus Treoracha 2008/48/CE (47), 2009/138/CE (48), 2013/36/AE (49), 2014/17/AE (50) agus (AE) 2016/97 (51) ó Pharlaimint na hEorpa agus ón gComhairle, a ainmniú, faoina n-inniúlachtaí faoi seach, mar údaráis inniúla chun cur chun feidhme an Rialacháin seo a mhaoirsiú, lena n-áirítear le haghaidh gníomhaíochtaí faireachais margaidh, a mhéid a bhaineann le córais intleachta saorga arna soláthar nó arna n-úsáid ag institiúidí airgeadais rialáilte faoi mhaoirseacht, mura gcinnfidh na Ballstáit údarás eile a ainmniú chun na cúraimí faireachais margaidh sin a chomhlíonadh. Ba cheart gach cumhacht a bheith ag na húdaráis inniúla sin faoin Rialachán seo agus faoi Rialachán (AE) 2019/1020 chun ceanglais agus oibleagáidí an Rialacháin seo a fhorfheidhmiú, lena n-áirítear cumhachtaí chun ár ngníomhaíochtaí faireachais margaidh ex post a dhéanamh, cumhachtaí is féidir a chomhtháthú, de réir mar is iomchuí, ina sásraí maoirseachta agus ina nósanna imeachta maoirseachta atá ann cheana faoi reachtaíocht ábhartha an Aontais maidir le seirbhísí airgeadais. Agus iad ag gníomhú mar údaráis faireachais margaidh faoin Rialachán seo, na húdaráis náisiúnta atá freagrach as maoirseacht a dhéanamh ar institiúidí creidmheasa arna rialáil faoi Threoir 2013/36/AE, atá rannpháirteach sa Sásra Maoirseachta Aonair arna bhunú le Rialachán (AE) Uimh. 1024/2013 ón gComhairle (52), is iomchuí a bheartú gur cheart do na húdaráis sin tuairisciú gan mhoill don Bhanc Ceannais Eorpach aon fhaisnéis a shainaithnítear le linn a ngníomhaíochtaí faireachais margaidh a d’fhéadfadh a bheith ina hábhar spéise do chúraimí maoirseachta stuamachta an Bhainc Cheannais Eorpaigh mar a shonraítear sa Rialachán sin. Chun an chomhsheasmhacht a fheabhsú idir an Rialachán seo agus na rialacha is infheidhme maidir le hinstitiúidí creidmheasa arna rialáil faoi Threoir 2013/36/AE ó Pharlaimint na hEorpa agus ón gComhairle, is iomchuí freisin cuid d’oibleagáidí nós imeachta na soláthraithe maidir le bainistiú riosca, faireachán iarmhargaidh agus doiciméadacht a chomhtháthú sna hoibleagáidí agus nósanna imeachta atá ann cheana faoi Threoir 2013/36/AE. Chun forluí a sheachaint, ba cheart maoluithe teoranta a bheartú i ndáil le córas bainistíochta cáilíochta na soláthraithe agus an oibleagáid maidir le faireachán a chuirtear ar úsáideoirí gairmiúla córas intleachta saorga ardriosca sa mhéid go bhfuil feidhm acu ar institiúidí creidmheasa arna rialáil faoi Threoir 2013/36/AE. 
Ba cheart feidhm a bheith ag an gcóras céanna maidir le gnóthais árachais agus athárachais agus cuideachtaí sealbhaíochta árachais faoi Threoir 2009/138/CE agus maidir leis na hidirghabhálaithe árachais faoi Threoir (AE) 2016/97 agus cineálacha eile institiúidí airgeadais faoi réir na gceanglas maidir le rialachas inmheánach, socruithe nó próisis arna mbunú de bhun dhlí ábhartha an Aontais maidir le seirbhísí airgeadais chun comhsheasmhacht agus cóir chomhionann a áirithiú san earnáil airgeadais.
(159)
Each market surveillance authority for high-risk AI systems in the area of biometrics, as listed in an annex to this Regulation insofar as those systems are used for the purposes of law enforcement, migration, asylum and border control management, or the administration of justice and democratic processes, should have effective investigative and corrective powers, including at least the power to obtain access to all personal data that are being processed and to all information necessary for the performance of its tasks. The market surveillance authorities should be able to exercise their powers by acting with complete independence. Any limitations of their access to sensitive operational data under this Regulation should be without prejudice to the powers conferred to them by Directive (EU) 2016/680. No exclusion on disclosing data to national data protection authorities under this Regulation should affect the current or future powers of those authorities beyond the scope of this Regulation.
(159)
Ba cheart cumhachtaí éifeachtacha imscrúdaitheacha agus ceartaitheacha a bheith ag gach údarás faireachais margaidh i réimse na bithmhéadrachta, mar a liostaítear in iarscríbhinn a ghabhann leis an Rialachán seo a mhéid a úsáidtear na córais sin ar chun críocha a bhaineann le forfheidhmiú an dlí, imirce, tearmann agus bainistiú rialaithe teorann, nó leis an gceartas agus na próisis dhaonlathacha a riar, lena n-áirítear ar a laghad an chumhacht chun rochtain a fháil ar na sonraí pearsanta uile atá á bpróiseáil agus ar an bhfaisnéis uile is gá chun a chúraimí a chomhlíonadh. Ba cheart do na húdaráis faireachais margaidh a bheith in ann a gcumhachtaí a fheidhmiú trí ghníomhú ar shlí a bheidh go hiomlán neamhspleách. Ba cheart aon teorainneacha a bhaineann leis an rochtain atá acu ar shonraí oibríochtúla íogaire faoin Rialachán seo a bheith gan dochar do na cumhachtaí a thugtar dóibh le Treoir (AE) 2016/680. Níor cheart d’eisiamh ar bith maidir le sonraí a nochtadh d’údaráis náisiúnta um chosaint sonraí faoin Rialachán seo difear a dhéanamh do na cumhachtaí atá ag na húdaráis sin faoi láthair nó a bheidh acu amach anseo lasmuigh de raon feidhme an Rialacháin seo.
(160)
The market surveillance authorities and the Commission should be able to propose joint activities, including joint investigations, to be conducted by market surveillance authorities or market surveillance authorities jointly with the Commission, that have the aim of promoting compliance, identifying non-compliance, raising awareness and providing guidance in relation to this Regulation with respect to specific categories of high-risk AI systems that are found to present a serious risk across two or more Member States. Joint activities to promote compliance should be carried out in accordance with Article 9 of Regulation (EU) 2019/1020. The AI Office should provide coordination support for joint investigations.
(160)
Ba cheart do na húdaráis faireachais margaidh agus don Choimisiún a bheith in ann gníomhaíochtaí comhpháirteacha a mholadh, lena n-áirítear imscrúduithe comhpháirteacha, a dhéanfaidh na húdaráis faireachais margaidh nó na húdaráis faireachais margaidh i gcomhpháirt leis an gCoimisiún, a mbeidh mar aidhm acu comhlíonadh a chur chun cinn, neamhchomhlíonadh a shainaithint, feasacht a mhúscailt agus treoir a thabhairt i dtaca leis an Rialachán seo a mhéid a bhaineann le catagóirí sonracha córas intleachta saorga ardriosca agus a bhfaightear amach ina leith go bhfuil riosca tromchúiseach ag baint leo in dhá Bhallstát nó níos mó ná sin. Ba cheart gníomhaíochtaí comhpháirteacha chun comhlíonadh a chur chun cinn a dhéanamh i gcomhréir le hAirteagal 9 de Rialachán (AE) 2019/1020. Ba cheart don Oifig um Intleacht Shaorga tacaíocht chomhordúcháin a chur ar fáil d’imscrúduithe comhpháirteacha.
(161)
It is necessary to clarify the responsibilities and competences at Union and national level as regards AI systems that are built on general-purpose AI models. To avoid overlapping competences, where an AI system is based on a general-purpose AI model and the model and system are provided by the same provider, the supervision should take place at Union level through the AI Office, which should have the powers of a market surveillance authority within the meaning of Regulation (EU) 2019/1020 for this purpose. In all other cases, national market surveillance authorities remain responsible for the supervision of AI systems. However, for general-purpose AI systems that can be used directly by deployers for at least one purpose that is classified as high-risk, market surveillance authorities should cooperate with the AI Office to carry out evaluations of compliance and inform the Board and other market surveillance authorities accordingly. Furthermore, market surveillance authorities should be able to request assistance from the AI Office where the market surveillance authority is unable to conclude an investigation on a high-risk AI system because of its inability to access certain information related to the general-purpose AI model on which the high-risk AI system is built. In such cases, the procedure regarding mutual assistance in cross-border cases in Chapter VI of Regulation (EU) 2019/1020 should apply mutatis mutandis.
(161)
Is gá na freagrachtaí agus na hinniúlachtaí a shoiléiriú ar leibhéal an Aontais agus ar an leibhéal náisiúnta a mhéid a bhaineann le córais intleachta saorga atá bunaithe ar shamhlacha intleachta saorga ilchuspóireacha. Chun inniúlachtaí forluiteacha a sheachaint, i gcás ina bhfuil córas intleachta saorga bunaithe ar shamhail intleachta saorga ilchuspóireach agus inarb é an soláthraí céanna a sholáthraíonn an tsamhail agus an córas, ba cheart an mhaoirseacht a dhéanamh ar leibhéal an Aontais tríd an Oifig um Intleacht Shaorga, ar cheart na cumhachtaí a bhíonn ag údarás faireachais margaidh a bheith aici de réir bhrí Rialachán (AE) 2019/1020 chun na críche sin. I ngach cás eile, beidh na húdaráis náisiúnta faireachais margaidh freagrach fós as maoirseacht a dhéanamh ar chórais intleachta saorga. Mar sin féin, i gcás córais intleachta saorga ilchuspóireacha is féidir le húsáideoirí gairmiúla a úsáid agus cuspóir amháin ar a laghad acu a aicmítear mar chuspóir a mbaineann ardriosca leis, ba cheart do na húdaráis faireachais margaidh comhoibriú leis an Oifig um Intleacht Shaorga chun meastóireachtaí a dhéanamh ar chomhlíonadh agus an Bord agus údaráis eile faireachais margaidh eile a chur ar an eolas dá réir. Thairis sin, ba cheart do na húdaráis faireachais margaidh a bheith in ann cúnamh a iarraidh ar an Oifig um Intleacht Shaorga i gcás nach bhfuil an t-údarás faireachais margaidh in ann imscrúdú a thabhairt i gcrích maidir le córas intleachta saorga ardriosca mar nach bhfuil rochtain aige ar fhaisnéis áirithe a bhaineann leis an tsamhail intleachta saorga ilchuspóireach ar ar bunaíodh an córas intleachta saorga ardriosca. I gcásanna mar sin, ba cheart feidhm mutatis mutandis a bheith ag an nós imeachta maidir le cúnamh frithpháirteach i gcásanna trasteorann i gCaibidil VI de Rialachán (AE) 2019/1020.
(162)
To make best use of the centralised Union expertise and synergies at Union level, the powers of supervision and enforcement of the obligations on providers of general-purpose AI models should be a competence of the Commission. The AI Office should be able to carry out all necessary actions to monitor the effective implementation of this Regulation as regards general-purpose AI models. It should be able to investigate possible infringements of the rules on providers of general-purpose AI models either on its own initiative, following the results of its monitoring activities, or upon request from market surveillance authorities in line with the conditions set out in this Regulation. To support effective monitoring by the AI Office, this Regulation should provide for the possibility that downstream providers lodge complaints about possible infringements of the rules on providers of general-purpose AI models and systems.
(162)
Chun an úsáid is fearr is féidir a bhaint as saineolas agus sineirgí láraithe an Aontais ar leibhéal an Aontais, ba cheart inniúlacht a bheith ag an gCoimisiún i dtaca le cumhachtaí maoirseachta agus forfheidhmithe maidir leis na hoibleagáidí atá ar sholáthraithe samhlacha intleachta saorga ilchuspóireacha. Ba cheart don Oifig um Intleacht Shaorga a bheith in ann na gníomhaíochtaí uile is gá a dhéanamh chun faireachán a dhéanamh ar chur chun feidhme éifeachtach an Rialacháin seo a mhéid a bhaineann le samhlacha intleachta saorga ilchuspóireacha. Ba cheart di a bheith in ann imscrúdú a dhéanamh ar sháruithe féideartha ar na rialacha maidir le soláthraithe samhlacha intleachta saorga ilchuspóireacha ar a tionscnamh féin, de réir thorthaí a gníomhaíochtaí faireacháin, nó arna iarraidh sin do na húdaráis faireachais margaidh i gcomhréir leis na coinníollacha a leagtar amach sa Rialachán seo. Chun tacú le faireachán éifeachtach arna dhéanamh ag an Oifig um Intleacht Shaorga, ba cheart a fhoráil don fhéidearthacht go ndéanfaidh soláthraithe iartheachtacha gearáin faoi sháruithe féideartha ar na rialacha maidir le soláthraithe na samhlacha agus na gcóras intleachta saorga ilchuspóireach.
(163)
With a view to complementing the governance systems for general-purpose AI models, the scientific panel should support the monitoring activities of the AI Office and may, in certain cases, provide qualified alerts to the AI Office which trigger follow-ups, such as investigations. This should be the case where the scientific panel has reason to suspect that a general-purpose AI model poses a concrete and identifiable risk at Union level. Furthermore, this should be the case where the scientific panel has reason to suspect that a general-purpose AI model meets the criteria that would lead to a classification as general-purpose AI model with systemic risk. To equip the scientific panel with the information necessary for the performance of those tasks, there should be a mechanism whereby the scientific panel can request the Commission to require documentation or information from a provider.
(163)
D’fhonn na córais rialachais le haghaidh samhlacha intleachta saorga ilchuspóireacha a chomhlánú, ba cheart don phainéal eolaíoch tacú le gníomhaíochtaí faireacháin na hOifige um Intleacht Shaorga agus féadfaidh sé, i gcásanna áirithe, foláirimh cháilithe a chur ar fáil don Oifig um Intleacht Shaorga trína spreagfar bearta leantacha, amhail imscrúduithe. Ba cheart é sin a bheith amhlaidh i gcás ina bhfuil údar ag an bpainéal eolaíoch amhras a bheith air go mbaineann riosca nithiúil in-sainaitheanta ar leibhéal an Aontais le samhail intleachta saorga ilchuspóireach. Thairis sin, ba cheart é sin a bheith amhlaidh i gcás ina bhfuil cúis ag an bpainéal eolaíoch amhras a bheith air go gcomhlíonann samhail intleachta saorga ilchuspóireach na critéir lenar bhféadfaí an tsamhail a aicmiú mar shamhail intleachta saorga ilchuspóireach lena ngabhann riosca sistéamach. Chun go mbeidh an fhaisnéis is gá chun na cúraimí sin a dhéanamh ag an bpainéal eolaíochta, ba cheart sásra a bheith ann trínar féidir leis an bpainéal eolaíoch a iarraidh ar an gCoimisiún doiciméadacht nó faisnéis a éileamh ar sholáthraí.
(164)
The AI Office should be able to take the necessary actions to monitor the effective implementation of and compliance with the obligations for providers of general-purpose AI models laid down in this Regulation. The AI Office should be able to investigate possible infringements in accordance with the powers provided for in this Regulation, including by requesting documentation and information, by conducting evaluations, as well as by requesting measures from providers of general-purpose AI models. When conducting evaluations, in order to make use of independent expertise, the AI Office should be able to involve independent experts to carry out the evaluations on its behalf. Compliance with the obligations should be enforceable, inter alia, through requests to take appropriate measures, including risk mitigation measures in the case of identified systemic risks as well as restricting the making available on the market, withdrawing or recalling the model. As a safeguard, where needed beyond the procedural rights provided for in this Regulation, providers of general-purpose AI models should have the procedural rights provided for in Article 18 of Regulation (EU) 2019/1020, which should apply mutatis mutandis, without prejudice to more specific procedural rights provided for by this Regulation.
(164)
Ba cheart don Oifig um Intleacht Shaorga a bheith in ann na gníomhaíochtaí is gá a dhéanamh chun faireachán a dhéanamh ar chur chun feidhme éifeachtach na n-oibleagáidí do sholáthraithe samhlacha intleachta saorga ilchuspóireacha a leagtar síos sa Rialachán seo agus ar chomhlíonadh na n-oibleagáidí sin. Ba cheart don Oifig um Intleacht Shaorga a bheith in ann sáruithe féideartha a imscrúdú i gcomhréir leis na cumhachtaí dá bhforáiltear sa Rialachán seo, lena n-áirítear trí dhoiciméadacht agus faisnéis a iarraidh, trí mheastóireachtaí a dhéanamh, agus trí bhearta a iarraidh ar sholáthraithe samhlacha intleachta saorga ilchuspóireacha. Agus meastóireachtaí á ndéanamh, chun úsáid a bhaint as saineolas neamhspleách, ba cheart don Oifig um Intleacht Shaorga a bheith in ann páirt a thabhairt do shaineolaithe neamhspleácha chun na meastóireachtaí a dhéanamh thar a ceann. Ba cheart comhlíonadh na n-oibleagáidí a bheith in-fhorfheidhmithe, inter alia, trí iarrataí ar bhearta iomchuí a dhéanamh, lena n-áirítear bearta maolaithe riosca i gcás rioscaí sistéamacha sainaitheanta chomh maith le srian a chur leis an tsamhail a bheith ar fáil ar an margadh agus an tsamhail a tharraingt siar nó a aisghairm. Mar chosaint i gcás inar gá sa bhreis ar na cearta nós imeachta dá bhforáiltear sa Rialachán seo, ba cheart na cearta nós imeachta dá bhforáiltear in Airteagal 18 de Rialachán (AE) 2019/1020 a bheith ag soláthraithe samhlacha intleachta saorga ilchuspóireacha, ar cheart feidhm mutatis mutandis a bheith acu, gan dochar do chearta nós imeachta níos sonraí dá bhforáiltear leis an Rialachán seo.
(165)
The development of AI systems other than high-risk AI systems in accordance with the requirements of this Regulation may lead to a larger uptake of ethical and trustworthy AI in the Union. Providers of AI systems that are not high-risk should be encouraged to create codes of conduct, including related governance mechanisms, intended to foster the voluntary application of some or all of the mandatory requirements applicable to high-risk AI systems, adapted in light of the intended purpose of the systems and the lower risk involved and taking into account the available technical solutions and industry best practices such as model and data cards. Providers and, as appropriate, deployers of all AI systems, high-risk or not, and AI models should also be encouraged to apply on a voluntary basis additional requirements related, for example, to the elements of the Union’s Ethics Guidelines for Trustworthy AI, environmental sustainability, AI literacy measures, inclusive and diverse design and development of AI systems, including attention to vulnerable persons and accessibility to persons with disability, stakeholders’ participation with the involvement, as appropriate, of relevant stakeholders such as business and civil society organisations, academia, research organisations, trade unions and consumer protection organisations in the design and development of AI systems, and diversity of the development teams, including gender balance. To ensure that the voluntary codes of conduct are effective, they should be based on clear objectives and key performance indicators to measure the achievement of those objectives. They should also be developed in an inclusive way, as appropriate, with the involvement of relevant stakeholders such as business and civil society organisations, academia, research organisations, trade unions and consumer protection organisations. The Commission may develop initiatives, including of a sectoral nature, to facilitate the lowering of technical barriers hindering cross-border exchange of data for AI development, including on data access infrastructure, semantic and technical interoperability of different types of data.
(165)
De thoradh forbairt córas intleachta saorga cé is moite de chórais intleachta saorga ardriosca i gcomhréir le ceanglais an Rialacháin seo, d’fhéadfadh sé go mbeadh glacadh níos fairsinge le hintleacht shaorga eiticiúil iontaofa ann san Aontas. Ba cheart soláthraithe córas intleachta saorga nach mbaineann ardriosca leo a spreagadh chun cóid iompair a chruthú, lena n-áirítear sásraí rialachais gaolmhara, arna gceapadh chun go ndéanfaí cuid de na ceanglais éigeantacha nó iad uile is infheidhme maidir le córais intleachta saorga ardriosca a chur i bhfeidhm go deonach, ar córais iad a ndéantar oiriúnú orthu i bhfianaise na críche atá beartaithe dóibh agus an riosca níos ísle lena mbaineann agus na réitigh theicniúla atá ar fáil agus dea-chleachtais an tionscail, amhail samhail agus cártaí sonraí, á gcur san áireamh. Ba cheart a mholadh do sholáthraithe agus, mar is iomchuí, d’úsáideoirí gairmiúla na gcóras intleachta saorga uile, bíodh ardriosca ag baint leo nó ná bíodh, agus d’úsáideoirí gairmiúla samhlacha saorga intleachta, ceanglais bhreise a chur i bhfeidhm ar bhonn deonach a bhaineann, mar shampla, le gnéithe de Threoirlínte Eitice an Aontais le haghaidh Intleacht Shaorga Iontaofa, inbhuanaitheacht chomhshaoil, bearta litearthachta intleachta saorga, ceapadh agus forbairt chuimsitheach éagsúil córas intleachta saorga, lena n-áirítear aird a thabhairt ar dhaoine leochaileacha agus inrochtaineacht do dhaoine faoi mhíchumas, rannpháirtíocht na ngeallsealbhóirí ábhartha mar is iomchuí, amhail eagraíochtaí gnó agus eagraíochtaí na sochaí sibhialta, an lucht léinn agus eagraíochtaí taighde, ceardchumainn agus eagraíochtaí cosanta tomhaltóirí i gceapadh agus forbairt na gcóras intleachta saorga, agus éagsúlacht na bhfoirne forbartha, lena n-áirítear cothromaíocht inscne. Chun a áirithiú go mbeidh na cóid iompair dheonacha éifeachtach, ba cheart iad a bheith bunaithe ar chuspóirí soiléire agus ar phríomhtháscairí feidhmíochta chun gur féidir a mhéid a bhaintear na cuspóirí sin amach a thomhas. Ba cheart iad a fhorbairt freisin ar bhealach cuimsitheach, mar is iomchuí, le rannpháirtíocht na bpáirtithe leasmhara ábhartha, amhail eagraíochtaí gnó agus eagraíochtaí na sochaí sibhialta, an lucht léinn, eagraíochtaí taighde, ceardchumainn agus eagraíochtaí cosanta tomhaltóirí. D’fhéadfadh an Coimisiún tionscnaimh a fhorbairt, lena n-áirítear tionscnaimh de chineál earnála, chun laghdú bac teicniúil a éascú, ar baic iad lena gcuirtear isteach ar mhalartú sonraí trasteorann maidir le forbairt intleachta saorga, lena n-áirítear sonraí maidir le bonneagar rochtana sonraí, idir-inoibritheacht shéimeantach agus theicniúil cineálacha éagsúla sonraí.
(166)
It is important that AI systems related to products that are not high-risk in accordance with this Regulation and thus are not required to comply with the requirements set out for high-risk AI systems are nevertheless safe when placed on the market or put into service. To contribute to this objective, Regulation (EU) 2023/988 of the European Parliament and of the Council (53) would apply as a safety net.
(166)
Maidir le córais intleachta saorga a bhaineann le táirgí nach meastar ardriosca a bheith ag baint leo i gcomhréir leis an Rialachán seo agus sa chaoi sin nach gceanglaítear orthu na ceanglais a leagtar amach maidir le córais intleachta saorga ardriosca a chomhlíonadh, tá sé tábhachtach go mbeidh siad sábháilte, mar sin féin, nuair a chuirtear iad ar an margadh nó i mbun seirbhíse. Chun rannchuidiú leis an gcuspóir sin, bheadh feidhm ag Rialachán (AE) 2023/988 ó Pharlaimint na hEorpa agus ón gComhairle (53) mar líontán sábhála.
(167)
In order to ensure trustful and constructive cooperation of competent authorities on Union and national level, all parties involved in the application of this Regulation should respect the confidentiality of information and data obtained in carrying out their tasks, in accordance with Union or national law. They should carry out their tasks and activities in such a manner as to protect, in particular, intellectual property rights, confidential business information and trade secrets, the effective implementation of this Regulation, public and national security interests, the integrity of criminal and administrative proceedings, and the integrity of classified information.
(167)
Chun comhar iontaofa cuiditheach a áirithiú idir na húdaráis inniúla ar leibhéal an Aontais agus ar an leibhéal náisiúnta, ba cheart do na páirtithe ar fad atá páirteach i gcur i bhfeidhm an Rialacháin seo rúndacht faisnéise agus sonraí a fhaightear agus a gcúraimí á ndéanamh acu a urramú, i gcomhréir le dlí an Aontais nó leis an dlí náisiúnta. Ba cheart dóibh a gcúraimí agus a ngníomhaíochtaí a dhéanamh ar bhealach ina dtabharfar cosaint go háirithe, do chearta maoine intleachtúla, faisnéis rúnda ghnó agus rúin trádála, cur chun feidhme éifeachtach an Rialacháin seo, leasanna slándála poiblí agus náisiúnta, sláine imeachtaí coiriúla agus riaracháin, agus sláine faisnéise rúnaicmithe.
(168)
Compliance with this Regulation should be enforceable by means of the imposition of penalties and other enforcement measures. Member States should take all necessary measures to ensure that the provisions of this Regulation are implemented, including by laying down effective, proportionate and dissuasive penalties for their infringement, and to respect the ne bis in idem principle. In order to strengthen and harmonise administrative penalties for infringement of this Regulation, the upper limits for setting the administrative fines for certain specific infringements should be laid down. When assessing the amount of the fines, Member States should, in each individual case, take into account all relevant circumstances of the specific situation, with due regard in particular to the nature, gravity and duration of the infringement and of its consequences and to the size of the provider, in particular if the provider is an SME, including a start-up. The European Data Protection Supervisor should have the power to impose fines on Union institutions, agencies and bodies falling within the scope of this Regulation.
(168)
Ba cheart a bheith in ann comhlíonadh an Rialacháin seo a fhorfheidhmiú trí phionóis agus bearta forfheidhmiúcháin eile a fhorchur. Ba cheart do na Ballstáit gach beart is gá a dhéanamh chun a áirithiú go gcuirfear forálacha an Rialacháin seo chun feidhme, lena n-áirítear trí phionóis éifeachtúla, chomhréireacha agus athchomhairleacha a leagan síos má sháraítear na forálacha sin, agus chun prionsabal ne bis in idem a urramú. Chun na pionóis riaracháin a ghearrfar má sháraítear an Rialachán seo a neartú agus a chomhchuibhiú, ba cheart na huasteorainneacha le haghaidh fíneálacha riaracháin i gcás sáruithe sonracha áirithe a leagan síos. Agus méid na bhfíneálacha á mheasúnú, ba cheart do na Ballstáit, i ngach cás aonair, imthosca ábhartha uile an cháis shonraigh a chur san áireamh, agus aird chuí á tabhairt go háirithe ar chineál, tromchúis agus fad an tsáraithe agus ar a iarmhairtí agus ar mhéid an tsoláthraí, go háirithe más FBM nó gnólacht nuathionscanta é an soláthraí. Ba cheart é a bheith de chumhacht ag an Maoirseoir Eorpach ar Chosaint Sonraí fíneálacha a ghearradh ar institiúidí, gníomhaireachtaí agus comhlachtaí an Aontais a thagann faoi raon feidhme an Rialacháin seo.
(169)
Compliance with the obligations on providers of general-purpose AI models imposed under this Regulation should be enforceable, inter alia, by means of fines. To that end, appropriate levels of fines should also be laid down for infringement of those obligations, including the failure to comply with measures requested by the Commission in accordance with this Regulation, subject to appropriate limitation periods in accordance with the principle of proportionality. All decisions taken by the Commission under this Regulation are subject to review by the Court of Justice of the European Union in accordance with the TFEU, including the unlimited jurisdiction of the Court of Justice with regard to penalties pursuant to Article 261 TFEU.
(169)
Ba cheart comhlíonadh na n-oibleagáidí i gcás soláthraithe samhlacha intleachta saorga ilchuspóireacha a fhorchuirtear faoin Rialachán seo a bheith in-fhorfheidhmithe, inter alia, trí fhíneálacha. Chuige sin, ba cheart leibhéil iomchuí fíneálacha a leagan síos freisin maidir le sárú na n-oibleagáidí sin, lena n-áirítear mura gcomhlíontar na bearta a d’iarr an Coimisiún i gcomhréir leis an Rialachán seo, faoi réir tréimhsí teorann iomchuí i gcomhréir le prionsabal na comhréireachta. Beidh gach cinneadh a dhéanfaidh an Coimisiún faoin Rialachán seo faoi réir athbhreithniú ag Cúirt Bhreithiúnais an Aontais Eorpaigh i gcomhréir le CFAE lena n-áirítear dlínse neamhtheoranta na Cúirte Breithiúnais maidir le pionóis de bhun Airteagal 261 CFAE.
(170)
Union and national law already provide effective remedies to natural and legal persons whose rights and freedoms are adversely affected by the use of AI systems. Without prejudice to those remedies, any natural or legal person that has grounds to consider that there has been an infringement of this Regulation should be entitled to lodge a complaint to the relevant market surveillance authority.
(170)
Foráiltear do leigheasanna éifeachtacha cheana féin le dlí an Aontais agus leis an dlí náisiúnta do dhaoine nádúrtha agus dlítheanacha a ndéanann an úsáid a bhaintear as córas intleachta saorga difear dá gcearta agus dá saoirsí. Gan dochar do na leigheasanna sin, aon duine nádúrtha nó dlítheanach a bhfuil forais aige a mheas gur sáraíodh an Rialachán seo, ba cheart dó a bheith i dteideal gearán a thaisceadh leis an údarás ábhartha faireachais margaidh.
(171)
Affected persons should have the right to obtain an explanation where a deployer’s decision is based mainly upon the output from certain high-risk AI systems that fall within the scope of this Regulation and where that decision produces legal effects or similarly significantly affects those persons in a way that they consider to have an adverse impact on their health, safety or fundamental rights. That explanation should be clear and meaningful and should provide a basis on which the affected persons are able to exercise their rights. The right to obtain an explanation should not apply to the use of AI systems for which exceptions or restrictions follow from Union or national law and should apply only to the extent this right is not already provided for under Union law.
(171)
Ba cheart é a bheith de cheart ag daoine dá ndéantar difear míniú a fháil i gcás ina mbunaítear cinneadh úsáideora ghairmiúil go príomha ar an aschur ó chórais áirithe ardriosca intleachta saorga a thagann faoi raon feidhme an Rialacháin seo agus a bhfuil éifeachtaí dlíthiúla ag an gcinneadh sin nó a bhfuil tionchair shuntasacha chomhchosúla aige ar na daoine sin ionas go measann siad go n-imrítear tionchar díobhálach ar a sláinte, ar a sábháilteacht nó ar a gcearta bunúsacha. Ba cheart an míniú sin a bheith soiléir agus fóinteach agus ba cheart bonn a sholáthar leis do na daoine dá ndéantar difear chun go mbeidh siad in ann a gcearta a fheidhmiú. Níor cheart feidhm a bheith ag an gceart chun míniú a fháil maidir le húsáid na gcóras intleachta saorga sin a bhfuil baint ag na heisceachtaí nó na srianta a bhaineann leo le dlí an Aontais nó an dlí náisiúnta agus níor cheart feidhm a bheith aige ach amháin sa mhéid nach bhforáiltear don cheart sin cheana faoi dhlí an Aontais.
(172)
Persons acting as whistleblowers on infringements of this Regulation should be protected under Union law. Directive (EU) 2019/1937 of the European Parliament and of the Council (54) should therefore apply to the reporting of infringements of this Regulation and the protection of persons reporting such infringements.
(172)
Ba cheart daoine a ghníomhaíonn mar sceithirí i ndáil le sáruithe ar an Rialachán seo a chosaint faoi dhlí an Aontais. Ba cheart feidhm a bheith ag Treoir (AE) 2019/1937 ó Pharlaimint na hEorpa agus ón gComhairle (54) maidir le sáruithe ar an Rialachán seo a thuairisciú agus maidir leis na daoine a thuairiscíonn na sáruithe sin a chosaint.
(173)
In order to ensure that the regulatory framework can be adapted where necessary, the power to adopt acts in accordance with Article 290 TFEU should be delegated to the Commission to amend the conditions under which an AI system is not to be considered to be high-risk, the list of high-risk AI systems, the provisions regarding technical documentation, the content of the EU declaration of conformity, the provisions regarding the conformity assessment procedures, the provisions establishing the high-risk AI systems to which the conformity assessment procedure based on assessment of the quality management system and assessment of the technical documentation should apply, the threshold, benchmarks and indicators, including by supplementing those benchmarks and indicators, in the rules for the classification of general-purpose AI models with systemic risk, the criteria for the designation of general-purpose AI models with systemic risk, the technical documentation for providers of general-purpose AI models and the transparency information for providers of general-purpose AI models. It is of particular importance that the Commission carry out appropriate consultations during its preparatory work, including at expert level, and that those consultations be conducted in accordance with the principles laid down in the Interinstitutional Agreement of 13 April 2016 on Better Law-Making (55). In particular, to ensure equal participation in the preparation of delegated acts, the European Parliament and the Council receive all documents at the same time as Member States’ experts, and their experts systematically have access to meetings of Commission expert groups dealing with the preparation of delegated acts.
(173)
Chun a áirithiú gur féidir an creat rialála a oiriúnú i gcás inar gá, ba cheart an chumhacht chun gníomhartha a ghlacadh i gcomhréir le hAirteagal 290 CFAE a tharmligean chuig an gCoimisiún chun leasú a dhéanamh ar na coinníollacha ar fúthu nach bhfuil córas intleachta saorga le meas mar chóras lena mbaineann ardriosca, ar liosta na gcóras intleachta saorga ardriosca, ar na forálacha maidir le doiciméadacht theicniúil, ar ábhar an dearbhaithe comhréireachta AE, ar na forálacha maidir leis na nósanna imeachta um measúnú comhréireachta, ar na forálacha lena mbunaítear na córais intleachta saorga ardriosca ar cheart feidhm ina leith a bheith ag an nós imeachta um measúnú comhréireachta bunaithe ar mheasúnú an chórais bainistithe cáilíochta agus ar mheasúnú na doiciméadachta teicniúla, ar an tairseach, na tagarmharcanna agus na táscairí, lena n-áirítear trí na tagarmharcanna agus na táscairí sin a fhorlíonadh, sna rialacha chun samhlacha intleachta saorga ilchuspóireacha a mbaineann riosca sistéamach leo a aicmiú, ar na critéir le haghaidh samhlacha intleachta saorga ilchuspóireacha a mbaineann riosca sistéamach leo a aicmiú, ar an doiciméadacht theicniúil do sholáthraithe na samhlacha intleachta saorga ilchuspóireacha agus ar an bhfaisnéis thrédhearcach do sholáthraithe na samhlacha intleachta saorga ilchuspóireacha. Tá sé tábhachtach, go háirithe, go rachadh an Coimisiún i mbun comhairliúcháin iomchuí le linn a chuid oibre ullmhúcháin, lena n-áirítear ar leibhéal na saineolaithe, agus go ndéanfaí na comhairliúcháin sin i gcomhréir leis na prionsabail a leagtar síos i gComhaontú Idirinstitiúideach an 13 Aibreán 2016 maidir le Reachtóireacht Níos Fearr (55). Go sonrach, chun rannpháirtíocht chomhionann in ullmhú na ngníomhartha tarmligthe a áirithiú, faigheann Parlaimint na hEorpa agus an Chomhairle na doiciméid uile ag an am céanna leis na saineolaithe sna Ballstáit, agus bíonn rochtain chórasach ag a gcuid saineolaithe ar chruinnithe ghrúpaí saineolaithe an Choimisiúin a bhíonn ag déileáil le hullmhú na ngníomhartha tarmligthe.
(174)
Given the rapid technological developments and the technical expertise required to effectively apply this Regulation, the Commission should evaluate and review this Regulation by 2 August 2029 and every four years thereafter and report to the European Parliament and the Council. In addition, taking into account the implications for the scope of this Regulation, the Commission should carry out an assessment of the need to amend the list of high-risk AI systems and the list of prohibited practices once a year. Moreover, by 2 August 2028 and every four years thereafter, the Commission should evaluate and report to the European Parliament and to the Council on the need to amend the list of high-risk area headings in the annex to this Regulation, the AI systems within the scope of the transparency obligations, the effectiveness of the supervision and governance system and the progress on the development of standardisation deliverables on the energy-efficient development of general-purpose AI models, including the need for further measures or actions. Finally, by 2 August 2028 and every three years thereafter, the Commission should evaluate the impact and effectiveness of voluntary codes of conduct to foster the application of the requirements provided for high-risk AI systems in the case of AI systems other than high-risk AI systems and possibly other additional requirements for such AI systems.
(174)
I bhfianaise na bhforbairtí gasta teicneolaíochta agus an tsaineolais theicniúil is gá chun an Rialachán seo a chur i bhfeidhm go héifeachtach, ba cheart don Choimisiún meastóireacht agus athbhreithniú a dhéanamh ar an Rialachán seo faoin 2 Lúnasa 2029 agus gach 4 bliana ina dhiaidh sin agus ba cheart dó tuarascáil a chur faoi bhráid Pharlaimint na hEorpa agus na Comhairle. Ina theannta sin, agus na himpleachtaí do raon feidhme an Rialacháin seo á gcur san áireamh, ba cheart don Choimisiún measúnú a dhéanamh ar a riachtanaí atá sé liosta na gcóras intleachta saorga ardriosca agus liosta na gcleachtas toirmiscthe a leasú uair sa bhliain. Thairis sin, faoin 2 Lúnasa 2028 agus gach 4 bliana ina dhiaidh sin, ba cheart don Choimisiún meastóireacht a dhéanamh ar an ngá atá le leasú a dhéanamh ar liosta na limistéar ardriosca san iarscríbhinn a ghabhann leis an Rialachán seo, ar na córais intleachta saorga faoi raon feidhme na n-oibleagáidí trédhearcachta, ar éifeachtacht an chórais maoirseachta agus rialachais agus ar an dul chun cinn maidir le forbairt spriocanna insoláthartha caighdeánaithe i ndáil le forbairt atá tíosach ar fhuinneamh na samhlacha intleachta saorga ilchuspóireacha, lena n-áirítear an gá le bearta nó gníomhaíochtaí breise, agus an méid sin a thuairisciú do Pharlaimint na hEorpa agus don Chomhairle. Ar deireadh, faoin 2 Lúnasa 2028 agus gach 3 bliana ina dhiaidh sin, ba cheart don Choimisiún meastóireacht a dhéanamh ar thionchar agus éifeachtacht na gcód iompair deonach chun cur i bhfeidhm na gceanglas a fhoráiltear le haghaidh córais intleachta saorga ardriosca i gcás córais intleachta saorga seachas córais intleachta saorga ardriosca, agus b’fhéidir, ceanglais bhreise eile le haghaidh córais intleachta saorga mar sin.
(175)
In order to ensure uniform conditions for the implementation of this Regulation, implementing powers should be conferred on the Commission. Those powers should be exercised in accordance with Regulation (EU) No 182/2011 of the European Parliament and of the Council (56).
(175)
Chun coinníollacha aonfhoirmeacha a áirithiú maidir leis an Rialachán seo a chur chun feidhme, ba cheart cumhachtaí cur chun feidhme a thabhairt don Choimisiún. Ba cheart na cumhachtaí sin a fheidhmiú i gcomhréir le Rialachán (AE) Uimh. 182/2011 ó Pharlaimint na hEorpa agus ón gComhairle (56).
(176)
Since the objective of this Regulation, namely to improve the functioning of the internal market and to promote the uptake of human centric and trustworthy AI, while ensuring a high level of protection of health, safety, fundamental rights enshrined in the Charter, including democracy, the rule of law and environmental protection against harmful effects of AI systems in the Union and supporting innovation, cannot be sufficiently achieved by the Member States and can rather, by reason of the scale or effects of the action, be better achieved at Union level, the Union may adopt measures in accordance with the principle of subsidiarity as set out in Article 5 TEU. In accordance with the principle of proportionality as set out in that Article, this Regulation does not go beyond what is necessary in order to achieve that objective.
(176)
Ós rud é nach féidir leis na Ballstáit cuspóir an Rialacháin seo, eadhon feabhas a chur ar fheidhmiú an mhargaidh inmheánaigh agus glacadh na hintleachta saorga atá dírithe ar an duine agus iontaofa a chur chun cinn, agus ardleibhéal cosanta a áirithiú do shláinte, do shábháilteacht, do chearta bunúsacha arna gcumhdach sa Chairt, lena n-áirítear an daonlathas, an smacht reachta agus caomhnú an chomhshaoil ar éifeachtaí díobhálacha na gcóras intleachta saorga san Aontas agus tacú leis an nuálaíocht, a ghnóthú go leordhóthanach agus, de bharr fhairsinge nó éifeachtaí na gníomhaíochta, gur fearr is féidir é a ghnóthú ar leibhéal an Aontais, féadfaidh an tAontas bearta a ghlacadh i gcomhréir le prionsabal na coimhdeachta a leagtar amach in Airteagal 5 CAE. I gcomhréir le prionsabal na comhréireachta a leagtar amach san Airteagal sin, ní théann an Rialachán seo thar a bhfuil riachtanach chun an cuspóir sin a ghnóthú.
(177)
In order to ensure legal certainty, ensure an appropriate adaptation period for operators and avoid disruption to the market, including by ensuring continuity of the use of AI systems, it is appropriate that this Regulation applies to the high-risk AI systems that have been placed on the market or put into service before the general date of application thereof, only if, from that date, those systems are subject to significant changes in their design or intended purpose. It is appropriate to clarify that, in this respect, the concept of significant change should be understood as equivalent in substance to the notion of substantial modification, which is used with regard only to high-risk AI systems pursuant to this Regulation. On an exceptional basis and in light of public accountability, operators of AI systems which are components of the large-scale IT systems established by the legal acts listed in an annex to this Regulation and operators of high-risk AI systems that are intended to be used by public authorities should, respectively, take the necessary steps to comply with the requirements of this Regulation by the end of 2030 and by 2 August 2030.
(177)
Chun deimhneacht dhlíthiúil a áirithiú, tréimhse oiriúnaithe iomchuí a áirithiú d’oibreoirí agus suaitheadh ar an margadh a sheachaint, lena n-áirítear trí leanúnachas úsáid na gcóras intleachta saorga a áirithiú, is iomchuí feidhm a bheith ag an Rialachán seo maidir leis na córais intleachta saorga ardriosca a cuireadh ar an margadh nó a cuireadh i mbun seirbhíse roimh dháta ginearálta chur i bhfeidhm na gcóras sin, más rud é, ón dáta sin, go mbeidh na córais sin faoi réir athruithe suntasacha ar a ndearadh nó ar an gcríoch atá beartaithe dóibh. Is iomchuí a shoiléiriú, i ndáil leis sin, gur cheart coincheap an athraithe shuntasaigh a thuiscint mar choincheap atá coibhéiseach ó thaobh substainte de le coincheap an mhodhnaithe shubstaintiúil, coincheap a úsáidtear i ndáil le córais intleachta saorga ardriosca amháin de bhun an Rialacháin seo. Ar bhonn eisceachtúil agus i bhfianaise na cuntasachta poiblí, ba cheart d’oibreoirí córas intleachta saorga ar comhpháirteanna iad de na córais mhórscála TF a bhunaítear leis na gníomhartha dlí a liostaítear in iarscríbhinn a ghabhann leis an Rialachán agus d’oibreoirí córas intleachta saorga ardriosca atá ceaptha lena n-úsáid ag údaráis phoiblí, faoi seach, na bearta is gá a dhéanamh chun ceanglais an Rialacháin seo a chomhlíonadh faoi dheireadh 2030 agus faoin 2 Lúnasa 2030.
(178)
Providers of high-risk AI systems are encouraged to start to comply, on a voluntary basis, with the relevant obligations of this Regulation already during the transitional period.
(178)
Moltar do sholáthraithe córas intleachta saorga ardriosca tosú ar oibleagáidí ábhartha an Rialacháin seo a chomhlíonadh, ar bhonn deonach, cheana féin le linn na hidirthréimhse.
(179)
This Regulation should apply from 2 August 2026. However, taking into account the unacceptable risk associated with the use of AI in certain ways, the prohibitions as well as the general provisions of this Regulation should already apply from 2 February 2025. While those prohibitions take full effect only once the governance and enforcement framework of this Regulation is established, applying them early is important in order to take account of unacceptable risks and to have an effect on other procedures, such as in civil law. Moreover, the infrastructure related to the governance and the conformity assessment system should be operational before 2 August 2026; therefore, the provisions on notified bodies and the governance structure should apply from 2 August 2025. Given the rapid pace of technological advancements and adoption of general-purpose AI models, obligations for providers of general-purpose AI models should apply from 2 August 2025. Codes of practice should be ready by 2 May 2025 so as to enable providers to demonstrate compliance on time. The AI Office should ensure that classification rules and procedures are up to date in light of technological developments. In addition, Member States should lay down and notify to the Commission the rules on penalties, including administrative fines, and ensure that they are properly and effectively implemented by the date of application of this Regulation. Therefore, the provisions on penalties should apply from 2 August 2025.
(179)
Ba cheart feidhm a bheith ag an Rialachán seo ón 2 Lúnasa 2026. Mar sin féin, agus an riosca do-ghlactha a bhaineann le hintleacht shaorga a úsáid ar bhealaí áirithe á chur san áireamh, ba cheart feidhm a bheith ag na toirmisc chomh maith le forálacha ginearálta an Rialacháin seo cheana féin ón 2 Feabhra 2025. Cé go mbunaítear rialachas agus fhorfheidhmiú an Rialacháin seo de bharr éifeacht iomlán na dtoirmeasc seo, tá sé tábhachtach réamh-mheas a dhéanamh ar chur i bhfeidhm na dtoirmeasc chun rioscaí do-ghlactha a chur san áireamh agus chun go mbeidh éifeacht aige ar nósanna imeachta eile, amhail sa dlí sibhialta. Thairis sin, ba cheart an bonneagar a bhaineann leis an rialachas agus an córas um measúnú comhréireachta a bheith oibríochtúil roimh an 2 Lúnasa 2026, dá bhrí sin ba cheart feidhm a bheith ag na forálacha maidir le comhlachtaí faoina dtugtar fógra agus struchtúr rialachais ón 2 Lúnasa 2025. I bhfianaise chomh tapa agus atá an dul chun cinn teicneolaíoch á dhéanamh agus samhlacha intleachta saorga ilchuspóireacha á nglacadh, ba cheart feidhm a bheith ag na hoibleagáidí do sholáthraithe samhlacha intleachta saorga ilchuspóireacha ón 2 Lúnasa 2025. Ba cheart do na cóid chleachtais a bheith réidh faoin 2 Bealtaine 2025 chun go mbeidh soláthraithe in ann comhlíonadh a léiriú in am. Ba cheart don Oifig um Intleacht Shaorga a áirithiú go bhfuil rialacha agus nósanna imeachta maidir le haicmiú cothrom le dáta i bhfianaise na bhforbairtí teicneolaíocha. Ina theannta sin, ba cheart do na Ballstáit na rialacha maidir le pionóis a leagan síos agus fógra a thabhairt don Choimisiún ina leith, lena n-áirítear fíneálacha riaracháin, agus a áirithiú go gcuirtear na pionóis sin chun feidhme go cuí agus go héifeachtach faoi dháta chur i bhfeidhm an Rialacháin seo. Dá bhrí sin, ba cheart feidhm a bheith ag na forálacha maidir le pionóis ón 2 Lúnasa 2025.
(180)
The European Data Protection Supervisor and the European Data Protection Board were consulted in accordance with Article 42(1) and (2) of Regulation (EU) 2018/1725 and delivered their joint opinion on 18 June 2021,
(180)
Chuathas i gcomhairle leis an Maoirseoir Eorpach ar Chosaint Sonraí agus an Bord Eorpach um Chosaint Sonraí i gcomhréir le hAirteagal 42(1) agus (2) de Rialachán (AE) 2018/1725 agus thug siad an tuairim chomhpháirteach uathu an 18 Meitheamh 2021,
HAVE ADOPTED THIS REGULATION:
TAR ÉIS AN RIALACHÁN SEO A GHLACADH:
1. This Regulation applies to:
1. Tá feidhm ag an Rialachán seo maidir leis an méid seo a leanas:
(a)
providers placing on the market or putting into service AI systems or placing on the market general-purpose AI models in the Union, irrespective of whether those providers are established or located within the Union or in a third country;
(a)
soláthraithe a chuireann córais intleachta saorga ar an margadh nó i mbun seirbhíse nó a chuireann samhlacha intleachta saorga ilchuspóireacha ar an margadh san Aontas, gan beann ar cé acu san Aontas nó i dtríú tír atá na soláthraithe sin bunaithe nó lonnaithe;
(b)
deployers of AI systems that have their place of establishment or are located within the Union;
(b)
úsáideoirí gairmiúla córas intleachta saorga a bhfuil a n-áit bhunaíochta acu san Aontas nó atá lonnaithe laistigh den Aontas;
(c)
providers and deployers of AI systems that have their place of establishment or are located in a third country, where the output produced by the AI system is used in the Union;
(c)
soláthraithe agus úsáideoirí gairmiúla córas intleachta saorga a bhfuil a n-áit bhunaíochta acu i dtríú tír nó atá lonnaithe i dtríú tír, i gcás gur san Aontas a úsáidtear an t-aschur a tháirgtear leis an gcóras intleachta saorga;
(d)
importers and distributors of AI systems;
(d)
allmhaireoirí agus dáileoirí na gcóras intleachta saorga;
(e)
product manufacturers placing on the market or putting into service an AI system together with their product and under their own name or trademark;
(e)
monaróirí táirgí a chuireann córas intleachta saorga ar an margadh nó i mbun seirbhíse mar aon lena dtáirge agus faoina n-ainm nó faoina dtrádmharc féin;
(f)
authorised representatives of providers, which are not established in the Union;
(f)
ionadaithe údaraithe na soláthraithe, nach bhfuil bunaithe san Aontas;
(g)
affected persons that are located in the Union.
(g)
daoine dá ndéantar difear atá lonnaithe san Aontas.
2. For AI systems classified as high-risk AI systems in accordance with Article 6(1) related to products covered by the Union harmonisation legislation listed in Section B of Annex I, only Article 6(1), Articles 102 to 109 and Article 112 apply. Article 57 applies only in so far as the requirements for high-risk AI systems under this Regulation have been integrated in that Union harmonisation legislation.
2. Ní bheidh feidhm ach ag Airteagal 6(1), Airteagail 102 go 109 agus Airteagal 112 maidir leis na córais intleachta saorga a aicmítear mar chórais intleachta saorga ardriosca i gcomhréir le hAirteagal 6(1) a bhaineann le táirgí a chumhdaítear le reachtaíocht chomhchuibhithe an Aontais a liostaítear i Roinn B d’Iarscríbhinn I. Ní bheidh feidhm ag Airteagal 57 ach a mhéid a dhéanfar na ceanglais maidir le córais intleachta saorga ardriosca faoin Rialachán seo a chomhtháthú faoin reachtaíocht chomhchuibhithe sin de chuid an Aontais.
3. This Regulation does not apply to areas outside the scope of Union law, and shall not, in any event, affect the competences of the Member States concerning national security, regardless of the type of entity entrusted by the Member States with carrying out tasks in relation to those competences.
3. Ní bheidh feidhm ag an Rialachán seo maidir le réimsí nach dtagann faoi raon feidhme dhlí an Aontais, agus i gcás ar bith, ní dhéanfaidh sé difear d’inniúlachtaí na mBallstát a bhaineann leis an tslándáil náisiúnta, gan beann ar an gcineál eintitis ar leag na Ballstáit na cúraimí i ndáil leis na hinniúlachtaí sin air.
This Regulation does not apply to AI systems where and in so far as they are placed on the market, put into service, or used with or without modification exclusively for military, defence or national security purposes, regardless of the type of entity carrying out those activities.
Ní bheidh feidhm ag an Rialachán seo maidir le córais intleachta saorga más rud é go ndéanfar, agus a mhéid a dhéanfar, na córais sin a chur ar an margadh, a chur i mbun seirbhíse, nó a úsáid le modhnú nó gan mhodhnú go heisiach chun críoch míleata, cosanta nó chun críocha na slándála náisiúnta, gan beann ar an gcineál eintitis a dhéanann na gníomhaíochtaí sin.
This Regulation does not apply to AI systems which are not placed on the market or put into service in the Union, where the output is used in the Union exclusively for military, defence or national security purposes, regardless of the type of entity carrying out those activities.
Ní bheidh feidhm ag an Rialachán seo maidir le córais intleachta saorga nach gcuirtear ar an margadh nó nach gcuirtear i mbun seirbhíse san Aontas, i gcás ina n-úsáidtear an t-aschur go heisiach chun críoch míleata, cosanta nó chun críocha na slándála náisiúnta, gan beann ar an gcineál eintitis a dhéanann na gníomhaíochtaí sin.
4. This Regulation applies neither to public authorities in a third country nor to international organisations falling within the scope of this Regulation pursuant to paragraph 1, where those authorities or organisations use AI systems in the framework of international cooperation or agreements for law enforcement and judicial cooperation with the Union or with one or more Member States, provided that such a third country or international organisation provides adequate safeguards with respect to the protection of fundamental rights and freedoms of individuals.
4. Ní bheidh feidhm ag an Rialachán seo maidir le húdaráis phoiblí i dtríú tír ná le heagraíochtaí idirnáisiúnta a thagann faoi raon feidhme an Rialacháin seo de bhun mhír 1, i gcás ina n-úsáideann na húdaráis nó na heagraíochtaí sin córais intleachta saorga faoi chuimsiú comhar nó comhaontuithe idirnáisiúnta maidir le forfheidhmiú an dlí agus comhar breithiúnach leis an Aontas nó le Ballstát amháin nó níos mó, ar choinníoll go gcuirfidh tríú tír nó eagraíocht idirnáisiúnta mar sin coimircí leordhóthanacha ar fáil i ndáil le cearta bunúsacha agus saoirsí daoine aonair a chosaint.
5. This Regulation shall not affect the application of the provisions on the liability of providers of intermediary services as set out in Chapter II of Regulation (EU) 2022/2065.
5. Ní dhéanfaidh an Rialachán seo difear do chur i bhfeidhm na bhforálacha maidir le dliteanas soláthraithe seirbhísí idirghabhálacha mar a leagtar amach i gCaibidil II de Rialachán (AE) 2022/2065.
6. This Regulation does not apply to AI systems or AI models, including their output, specifically developed and put into service for the sole purpose of scientific research and development.
6. Ní bheidh feidhm ag an Rialachán seo maidir le córais intleachta saorga ná samhlacha intleachta saorga, lena n-áirítear a n-aschur, a fhorbraítear go sonrach agus a chuirtear i mbun seirbhíse chun críche taighde eolaíoch agus forbartha eolaíche, agus chun na críche sin amháin.
7. Union law on the protection of personal data, privacy and the confidentiality of communications applies to personal data processed in connection with the rights and obligations laid down in this Regulation. This Regulation shall not affect Regulation (EU) 2016/679 or (EU) 2018/1725, or Directive 2002/58/EC or (EU) 2016/680, without prejudice to Article 10(5) and Article 59 of this Regulation.
7. Tá feidhm ag dlí an Aontais maidir le cosaint sonraí pearsanta, príobháideachas agus rúndacht chumarsáide maidir le sonraí pearsanta a phróiseáiltear i ndáil leis na cearta agus na hoibleagáidí a leagtar síos sa Rialachán seo. Ní dhéanfaidh an Rialachán seo difear do Rialacháin (AE) 2016/679 agus (AE) 2018/1725, ná do Threoracha 2002/58/CE agus (AE) 2016/680, gan dochar d’Airteagal 10(5) agus Airteagal 59 den Rialachán seo.
8. This Regulation does not apply to any research, testing or development activity regarding AI systems or AI models prior to their being placed on the market or put into service. Such activities shall be conducted in accordance with applicable Union law. Testing in real world conditions shall not be covered by that exclusion.
8. Ní bheidh feidhm ag an Rialachán seo maidir le haon ghníomhaíocht taighde, tástála nó forbartha a bhaineann le córais intleachta saorga nó samhlacha intleachta saorga sula gcuirfear ar an margadh nó i mbun seirbhíse iad. Déanfar na gníomhaíochtaí sin i gcomhréir le dlí infheidhme an Aontais. Ní chumhdófar an tástáil a dhéantar i bhfíordhálaí leis an díolúine sin.
9. This Regulation is without prejudice to the rules laid down by other Union legal acts related to consumer protection and product safety.
9. Tá an Rialachán seo gan dochar do na rialacha a leagtar síos le gníomhartha dlí eile de chuid an Aontais a bhaineann le cosaint tomhaltóirí agus sábháilteacht táirgí.
10. This Regulation does not apply to obligations of deployers who are natural persons using AI systems in the course of a purely personal non-professional activity.
10. Ní bheidh feidhm ag an Rialachán seo maidir leis na hoibleagáidí atá ar úsáideoirí gairmiúla ar daoine nádúrtha iad a úsáideann córais intleachta saorga le linn gníomhaíocht phearsanta neamhghairmiúil amháin.
11. This Regulation does not preclude the Union or Member States from maintaining or introducing laws, regulations or administrative provisions which are more favourable to workers in terms of protecting their rights in respect of the use of AI systems by employers, or from encouraging or allowing the application of collective agreements which are more favourable to workers.
11. Leis an rialachán seo, ní chuirtear bac ar an Aontas ná na Ballstáit dlíthe, rialacháin nó forálacha riaracháin atá níos fabhraí d’oibrithe a choinneáil ar bun nó a thabhairt isteach maidir lena gcearta a chosaint i ndáil le córais intleachta saorga a bheith á n-úsáid ag fostóirí, ná ar chur i bhfeidhm comhaontuithe comhchoiteanna atá níos fabhraí d’oibrithe a mholadh nó a cheadú.
12. This Regulation does not apply to AI systems released under free and open-source licences, unless they are placed on the market or put into service as high-risk AI systems or as an AI system that falls under Article 5 or 50.
12. Níl feidhm ag an Rialachán seo maidir le córais intleachta saorga a eisítear faoi cheadúnais foinse oscailte saor mura gcuirtear ar an margadh nó i mbun seirbhíse iad mar chórais intleachta saorga ardriosca nó córas intleachta saorga a thagann faoi Airteagal 5 nó Airteagal 50.
Prohibited AI practices
Cleachtais intleachta saorga a gcuirtear toirmeasc orthu
1. The following AI practices shall be prohibited:
1. Cuirfear toirmeasc ar na cleachtais intleachta saorga seo a leanas:
(a)
the placing on the market, the putting into service or the use of an AI system that deploys subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques, with the objective, or the effect of materially distorting the behaviour of a person or a group of persons by appreciably impairing their ability to make an informed decision, thereby causing them to take a decision that they would not have otherwise taken in a manner that causes or is reasonably likely to cause that person, another person or group of persons significant harm;
(a)
córas intleachta saorga a chur ar an margadh, a chur i mbun seirbhíse nó a úsáid, ar córas é lena n-imscartar teicnící fo-thairseachúla nach bhfuil fios ag an duine orthu nó teicnící ionramhála nó meabhlacha d’aon ghnó, agus é mar chuspóir leis sin nó mar thoradh air sin iompraíocht duine nó grúpa daoine a shaobhadh go hábhartha trí chumas an duine a lagú go suntasach i dtaobh cinneadh feasach a dhéanamh, rud a chuirfeadh faoi deara dóibh cinneadh a dhéanamh nach ndéanfadh siad murach sin ar mhodh is cúis le díobháil shuntasach don duine sin, do dhuine eile nó do ghrúpa daoine, nó ar dócha le réasún gurb é is cúis léi sin;
(b)
the placing on the market, the putting into service or the use of an AI system that exploits any of the vulnerabilities of a natural person or a specific group of persons due to their age, disability or a specific social or economic situation, with the objective, or the effect, of materially distorting the behaviour of that person or a person belonging to that group in a manner that causes or is reasonably likely to cause that person or another person significant harm;
(b)
córas intleachta saorga a chur ar an margadh, a chur i mbun seirbhíse nó a úsáid, ar córas é a théann i dtír ar dhuine nádúrtha nó ar aon cheann de leochaileachtaí atá ag grúpa sonrach daoine i ngeall ar a n-aois, ar a míchumas nó staid shonrach shóisialta nó eacnamaíoch, agus é de chuspóir nó d’éifeacht iompar an duine sin nó duine a bhaineann leis an ngrúpa sin a shaobhadh go hábhartha ar mhodh is cúis le díobháil shuntasach don duine sin nó do dhuine eile, nó ar dócha gurb é is cúis leis sin;
(c)
the placing on the market, the putting into service or the use of AI systems for the evaluation or classification of natural persons or groups of persons over a certain period of time based on their social behaviour or known, inferred or predicted personal or personality characteristics, with the social score leading to either or both of the following:
(c)
córais intleachta saorga a chur ar an margadh, a chur i mbun seirbhíse nó a úsáid chun meastóireacht nó aicmiú a dhéanamh ar dhaoine nádúrtha nó ar ghrúpaí díobh i gcaitheamh tréimhse áirithe ama bunaithe ar a n-iompraíocht shóisialta nó a saintréithe pearsanta nó pearsantachta atá ar eolas, intuigthe nó tuartha, agus an scór sóisialta as a dtiocfaidh ceachtar díobh seo a leanas nó an dá cheann díobh seo a leanas:
(i)
detrimental or unfavourable treatment of certain natural persons or groups of persons in social contexts that are unrelated to the contexts in which the data was originally generated or collected;
(i)
caitheamh díobhálach nó neamhfhabhrach le daoine nádúrtha áirithe nó le grúpaí daoine i gcomhthéacsanna sóisialta nach mbaineann leis na comhthéacsanna inar gineadh nó inar bailíodh na sonraí i dtosach;
(ii)
detrimental or unfavourable treatment of certain natural persons or groups of persons that is unjustified or disproportionate to their social behaviour or its gravity;
(ii)
caitheamh díobhálach nó neamhfhabhrach le daoine nádúrtha áirithe nó le grúpaí daoine nach bhfuil údar léi nó atá díréireach lena n-iompraíocht shóisialta nó le tromchúis na hiompraíochta sin;
(d)
the placing on the market, the putting into service for this specific purpose, or the use of an AI system for making risk assessments of natural persons in order to assess or predict the risk of a natural person committing a criminal offence, based solely on the profiling of a natural person or on assessing their personality traits and characteristics; this prohibition shall not apply to AI systems used to support the human assessment of the involvement of a person in a criminal activity, which is already based on objective and verifiable facts directly linked to a criminal activity;
(d)
córas intleachta saorga a chur ar an margadh, a chur i mbun seirbhíse chun na críche sonraí sin, nó a úsáid chun measúnuithe riosca a dhéanamh ar dhaoine nádúrtha chun a mheas nó a thuar an riosca go ndéanfaidh duine nádúrtha cion coiriúil bunaithe ar phróifíliú duine nádúrtha, agus air sin amháin, nó ar mheasúnú a dhéanamh ar a thréithe agus a shaintréithe pearsantachta; ní bheidh feidhm ag an toirmeasc seo maidir le córais intleachta saorga a úsáidtear chun tacú leis an measúnú daonna ar rannpháirtíocht duine i ngníomhaíocht choiriúil atá bunaithe cheana féin ar fhíorais oibiachtúla infhíoraithe a bhfuil baint dhíreach acu le gníomhaíocht choiriúil;
(e)
the placing on the market, the putting into service for this specific purpose, or the use of AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage;
(e)
córais intleachta saorga a chur ar an margadh, a chur i mbun seirbhíse chun na críche sonraí sin, nó a úsáid, ar córais iad lena gcruthaítear nó lena leathnaítear bunachair sonraí aghaidh-aitheanta trí íomhánna den aghaidh a chnuaschóipeáil gan spriocdhíriú ón idirlíon nó ó thaifid TCI;
(f)
the placing on the market, the putting into service for this specific purpose, or the use of AI systems to infer emotions of a natural person in the areas of workplace and education institutions, except where the use of the AI system is intended to be put in place or into the market for medical or safety reasons;
(f)
córais intleachta saorga a chur ar an margadh, a chur i mbun seirbhíse chun na críche sonraí sin, nó a úsáid chun mothúcháin duine nádúrtha a thuiscint i réimsí an ionaid oibre agus na n-institiúidí oideachais, ach amháin i gcás ina bhfuil sé beartaithe úsáid an chórais intleachta saorga a chur ar bun nó a chur isteach sa mhargadh ar chúiseanna leighis nó sábháilteachta;
(g)
the placing on the market, the putting into service for this specific purpose, or the use of biometric categorisation systems that individually categorise natural persons based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation; this prohibition does not cover any labelling or filtering of lawfully acquired biometric datasets, such as images, based on biometric data or the categorising of biometric data in the area of law enforcement;
(g)
córais catagóirithe bhithmhéadraigh a chur ar an margadh, a chur i mbun seirbhíse chun na críche sonraí sin, nó a úsáid ar córais iad lena ndéantar daoine nádúrtha aonair a aicmiú bunaithe ar a sonraí bithmhéadracha chun cine, tuairimí polaitiúla, ballraíocht i gceardchumann, creideamh reiligiúnach nó fealsúnach, saol gnéis nó gnéaschlaonadh a dhéanamh amach nó a thuiscint; ní chumhdaítear faoin toirmeasc sin aon lipéadú ná scagadh ar thacair sonraí bithmhéadracha a fuarthas go dleathach, amhail íomhánna, bunaithe ar shonraí bithmhéadracha nó catagóiriú sonraí bithmhéadracha i réimse fhorfheidhmiú an dlí;
(h)
the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement, unless and in so far as such use is strictly necessary for one of the following objectives:
(h)
córais cian-sainaitheanta bithmhéadraí fíor-ama a bheith á n-úsáid i spásanna atá inrochtana don phobal chun críocha a bhaineann le forfheidhmiú an dlí, ach amháin a mhéid agus atá géarghá le húsáid den sórt sin le haghaidh ceann de na cuspóirí seo a leanas:
(i)
the targeted search for specific victims of abduction, trafficking in human beings or sexual exploitation of human beings, as well as the search for missing persons;
(i)
cuardach spriocdhírithe a dhéanamh ar íospartaigh shonracha a bhaineann le fuadach, gáinneáil ar dhaoine nó teacht i dtír gnéasach ar dhaoine, chomh maith le daoine atá ar iarraidh a chuardach;
(ii)
the prevention of a specific, substantial and imminent threat to the life or physical safety of natural persons or a genuine and present or genuine and foreseeable threat of a terrorist attack;
(ii)
garbhagairt shonrach shubstaintiúil ar bheatha nó ar shábháilteacht fhisiciúil daoine nádúrtha ó ionsaí sceimhlitheoireachta a chosc nó fíorbhagairt láithreach nó fíorbhagairt intuartha ó ionsaí sceimhlitheoireachta a chosc;
(iii)
the localisation or identification of a person suspected of having committed a criminal offence, for the purpose of conducting a criminal investigation or prosecution or executing a criminal penalty for offences referred to in Annex II and punishable in the Member State concerned by a custodial sentence or a detention order for a maximum period of at least four years.
(iii)
logánú nó sainaithint duine a bhfuil amhras faoi go ndearna sé cion coiriúil, chun críoch imscrúdú coiriúil nó ionchúiseamh coiriúil a dhéanamh nó pionós coiriúil a fhorghníomhú i leith na gcionta dá dtagraítear in Iarscríbhinn II agus atá inphionóis sa Bhallstát lena mbaineann le pianbhreith choimeádta nó le hordú coinneála go ceann uastréimhse 4 bliana, ar a laghad.
Point (h) of the first subparagraph is without prejudice to Article 9 of Regulation (EU) 2016/679 for the processing of biometric data for purposes other than law enforcement.
Tá pointe (h) den chéad fhomhír gan dochar d’Airteagal 9 de Rialachán (AE) 2016/679 i ndáil le sonraí bithmhéadracha a phróiseáil chun críocha seachas forfheidhmiú an dlí.
2. The use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement for any of the objectives referred to in paragraph 1, first subparagraph, point (h), shall be deployed for the purposes set out in that point only to confirm the identity of the specifically targeted individual, and it shall take into account the following elements:
2. Ní úsáidfear córas cian-sainaitheanta bithmhéadraí ‘fíor-ama’ i spásanna atá inrochtana don phobal chun críocha forfheidhmiú an dlí le haghaidh aon cheann de na cuspóirí dá dtagraítear i mír 1, an chéad fhomhír, pointe (h), chun na gcríoch a leagtar amach sa phointe sin ach amháin chun céannacht an duine a bhfuiltear ag díriú air go sonrach a dheimhniú, agus cuirfear na heilimintí seo a leanas san áireamh ann:
(a)
the nature of the situation giving rise to the possible use, in particular the seriousness, probability and scale of the harm that would be caused if the system were not used;
(a)
staid an cháis a d’fhágfadh go bhféadfaí é a úsáid, go háirithe i gcás nach n-úsáidtear an córas, a thromchúisí a bheadh an díobháil agus dóchúlacht agus scála na díobhála sin;
(b)
the consequences of the use of the system for the rights and freedoms of all persons concerned, in particular the seriousness, probability and scale of those consequences.
(b)
iarmhairtí úsáid an chórais do chearta agus saoirsí gach duine lena mbaineann, go háirithe a thromchúisí a bheadh na hiarmhairtí sin agus dóchúlacht agus scála na n-iarmhairtí sin.
In addition, the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement for any of the objectives referred to in paragraph 1, first subparagraph, point (h), of this Article shall comply with necessary and proportionate safeguards and conditions in relation to the use in accordance with the national law authorising the use thereof, in particular as regards the temporal, geographic and personal limitations. The use of the ‘real-time’ remote biometric identification system in publicly accessible spaces shall be authorised only if the law enforcement authority has completed a fundamental rights impact assessment as provided for in Article 27 and has registered the system in the EU database according to Article 49. However, in duly justified cases of urgency, the use of such systems may be commenced without the registration in the EU database, provided that such registration is completed without undue delay.
Ina theannta sin, maidir le húsáid córas cian-sainaitheanta bithmhéadraí ‘fíor-ama’ i spásanna atá inrochtana don phobal chun críocha a bhaineann le forfheidhmiú an dlí le haghaidh aon cheann de na cuspóirí dá dtagraítear i mír 1, an chéad fhomhír, pointe (h), den Airteagal seo, comhlíonfar na coimircí agus na coinníollacha is gá agus is comhréireach i ndáil lena n-úsáid, i gcomhréir leis an dlí náisiúnta lena n-údaraítear a n-úsáid, go háirithe a mhéid a bhaineann leis na teorainneacha ama, geografacha agus pearsanta. Ní údarófar úsáid an chórais cian-sainaitheanta bithmhéadraí ‘fíor-ama’ i spásanna atá inrochtana don phobal ach amháin má tá measúnú tionchair ar chearta bunúsacha dá bhforáiltear in Airteagal 27 curtha i gcrích ag an údarás forfheidhmithe dlí agus má chláraigh sé an córas i mbunachar sonraí an Aontais de réir Airteagal 49. I gcásanna práinne a bhfuil údar cuí leo, áfach, féadfar tús a chur le húsáid na gcóras sin agus gan iad a bheith cláraithe i mbunachar sonraí an Aontais, ar choinníoll go gcuirfear an clárú sin i gcrích gan moill mhíchuí.
3. For the purposes of paragraph 1, first subparagraph, point (h) and paragraph 2, each use for the purposes of law enforcement of a ‘real-time’ remote biometric identification system in publicly accessible spaces shall be subject to a prior authorisation granted by a judicial authority or an independent administrative authority whose decision is binding of the Member State in which the use is to take place, issued upon a reasoned request and in accordance with the detailed rules of national law referred to in paragraph 5. However, in a duly justified situation of urgency, the use of such system may be commenced without an authorisation provided that such authorisation is requested without undue delay, at the latest within 24 hours. If such authorisation is rejected, the use shall be stopped with immediate effect and all the data, as well as the results and outputs of that use shall be immediately discarded and deleted.
3. Chun críocha mhír 1, an chéad fhomhír, pointe (h) agus mhír 2, maidir le gach úsáid a bhaintear as an gcóras cian-sainaitheanta bithmhéadraí ‘fíor-ama’ chun críocha a bhaineann le forfheidhmiú an dlí i spásanna atá inrochtana don phobal, beidh an úsáid sin faoi réir údarú roimh ré arna dheonú ag údarás breithiúnach nó ag údarás riaracháin neamhspleách a bhfuil a chinneadh ceangailteach ar an mBallstát ina bhfuil an úsáid le baint as an gcóras, údarú arna eisiúint ar iarraidh réasúnaithe agus i gcomhréir le rialacha mionsonraithe an dlí náisiúnta dá dtagraítear i mír 5. Mar sin féin, i gcás práinne a bhfuil údar cuí léi, féadfar tús a chur le húsáid an chórais sin gan údarú ar choinníoll go n-iarrtar an t-údarú sin gan moill mhíchuí, laistigh de 24 uair an chloig, ar a dhéanaí. Má dhiúltaítear an t-údarú sin, cuirfear deireadh leis an úsáid le héifeacht láithreach agus, déanfar na sonraí go léir, chomh maith le torthaí agus aschur na húsáide sin, a chur i leataobh agus a scriosadh láithreach.
The competent judicial authority or an independent administrative authority whose decision is binding shall grant the authorisation only where it is satisfied, on the basis of objective evidence or clear indications presented to it, that the use of the ‘real-time’ remote biometric identification system concerned is necessary for, and proportionate to, achieving one of the objectives specified in paragraph 1, first subparagraph, point (h), as identified in the request and, in particular, remains limited to what is strictly necessary concerning the period of time as well as the geographic and personal scope. In deciding on the request, that authority shall take into account the elements referred to in paragraph 2. No decision that produces an adverse legal effect on a person may be taken based solely on the output of the ‘real-time’ remote biometric identification system.
Ní thabharfaidh an t-údarás inniúil breithiúnach ná an t-údarás riaracháin neamhspleách a bhfuil a chinneadh ceangailteach an t-údarú ach amháin i gcás inar deimhin leis, bunaithe ar fhianaise oibiachtúil nó ar thásca soiléire arna gcur faoina bhráid, gur gá an córas cian-sainaitheanta bithmhéadraí ‘fíor-ama’ atá i gceist a úsáid agus gur comhréireach é a úsáid chun ceann de na cuspóirí a shonraítear i mír 1, an chéad fhomhír, pointe (h), a bhaint amach, faoi mar a shainaithnítear san iarraidh agus, go háirithe, go bhfuil sé fós teoranta don mhéid atá fíor-riachtanach maidir leis an tréimhse ama chomh maith leis an raon feidhme geografach agus pearsanta. Agus cinneadh á dhéanamh maidir leis an iarraidh sin, cuirfidh an t-údarás sin na gnéithe dá dtagraítear i mír 2 san áireamh. Ní fhéadfar aon chinneadh a mbeidh éifeacht dhíobhálach dhlíthiúil aige ar dhuine a dhéanamh bunaithe ar aschur an chórais cian-sainaitheanta bithmhéadraí ‘fíor-ama’ amháin.
4. Without prejudice to paragraph 3, each use of a ‘real-time’ remote biometric identification system in publicly accessible spaces for law enforcement purposes shall be notified to the relevant market surveillance authority and the national data protection authority in accordance with the national rules referred to in paragraph 5. The notification shall, as a minimum, contain the information specified under paragraph 6 and shall not include sensitive operational data.
4. Gan dochar do mhír 3, déanfar gach úsáid a bhaintear as córas cian-sainaitheanta bithmhéadraí ‘fíor-ama’ i spásanna atá inrochtana don phobal chun críocha a bhaineann le forfheidhmiú an dlí a chur in iúl don údarás faireachais margaidh ábhartha agus don údarás náisiúnta cosanta sonraí i gcomhréir leis na rialacha náisiúnta dá dtagraítear i mír 5. Beidh san fhógra, ar a laghad, an fhaisnéis a shonraítear faoi mhír 6 agus ní áireofar ann sonraí oibríochtúla íogaire.
5. A Member State may decide to provide for the possibility to fully or partially authorise the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement within the limits and under the conditions listed in paragraph 1, first subparagraph, point (h), and paragraphs 2 and 3. Member States concerned shall lay down in their national law the necessary detailed rules for the request, issuance and exercise of, as well as supervision and reporting relating to, the authorisations referred to in paragraph 3. Those rules shall also specify in respect of which of the objectives listed in paragraph 1, first subparagraph, point (h), including which of the criminal offences referred to in point (h)(iii) thereof, the competent authorities may be authorised to use those systems for the purposes of law enforcement. Member States shall notify those rules to the Commission at the latest 30 days following the adoption thereof. Member States may introduce, in accordance with Union law, more restrictive laws on the use of remote biometric identification systems.
5. Féadfaidh Ballstát a chinneadh foráil a dhéanamh maidir leis an bhféidearthacht úsáid córas cian-sainaitheanta bithmhéadraí ‘fíor-ama’ a údarú go hiomlán nó i bpáirt i spásanna atá inrochtana don phobal chun críocha a bhaineann le forfheidhmiú an dlí laistigh de na teorainneacha agus faoi na coinníollacha a liostaítear i mír 1, an chéad fhomhír, pointe (h), agus i míreanna 2 agus 3. Leagfaidh na Ballstáit sin lena mbaineann síos ina ndlí náisiúnta na rialacha mionsonraithe is gá maidir leis na húdaruithe dá dtagraítear i mír 3 a iarraidh, a eisiúint agus a fheidhmiú mar aon le maoirseacht agus tuairisciú a bhaineann leis na húdaruithe sin. Maidir leis na cuspóirí a liostaítear i mír 1, an chéad fhomhír, pointe (h), lena n-áirítear na cionta coiriúla dá dtagraítear i bpointe (h)(iii) di, sonrófar sna rialacha sin freisin cé acu na cuspóirí sin agus cé acu na cionta sin a bhféadfaidh na húdaráis inniúla a údarú ina leith na córais sin a úsáid chun críocha a bhaineann le forfheidhmiú an dlí. Tabharfaidh na Ballstáit fógra don Choimisiún faoi na rialacha sin 30 lá, ar a dhéanaí, tar éis a nglactha. Féadfaidh na Ballstáit, i gcomhréir le dlí an Aontais, dlíthe níos sriantaí a thabhairt isteach maidir le húsáid córas cian-sainaitheanta bithmhéadraí ‘fíor-ama’.
6. National market surveillance authorities and the national data protection authorities of Member States that have been notified of the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for law enforcement purposes pursuant to paragraph 4 shall submit to the Commission annual reports on such use. For that purpose, the Commission shall provide Member States and national market surveillance and data protection authorities with a template, including information on the number of the decisions taken by competent judicial authorities or an independent administrative authority whose decision is binding upon requests for authorisations in accordance with paragraph 3 and their result.
6. Cuirfidh údaráis náisiúnta faireachas margaidh agus údaráis náisiúnta cosanta sonraí na mBallstát ar tugadh fógra dóibh maidir le húsáid córas cian-sainaitheanta bithmhéadraí ‘fíor-ama’ i spásanna atá inrochtana don phobal chun críocha a bhaineann le forfheidhmiú an dlí de bhun mhír 4, cuirfidh siad tuarascálacha bliantúla maidir leis an úsáid sin faoi bhráid an Choimisiúin. Chun na críche sin, cuirfidh an Coimisiún teimpléad ar fáil do na Ballstáit agus do na húdaráis náisiúnta faireachais margaidh agus do na húdaráis náisiúnta cosanta sonraí, lena n-áirítear faisnéis faoi líon na gcinntí a rinne an t-údarás inniúil breithiúnach nó an t-údarás riaracháin neamhspleách a bhfuil a chinneadh ceangailteach ar iarrataí ar údaruithe i gcomhréir le mír 3 agus ar thorthaí na gcinntí sin.
7. The Commission shall publish annual reports on the use of real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes, based on aggregated data in Member States on the basis of the annual reports referred to in paragraph 6. Those annual reports shall not include sensitive operational data of the related law enforcement activities.
7. Foilseoidh an Coimisiún tuarascálacha bliantúla maidir le húsáid córas cian-sainaitheanta bithmhéadraí fíor-ama i spásanna atá inrochtana don phobal chun críocha a bhaineann le forfheidhmiú an dlí, bunaithe ar shonraí comhiomlánaithe sna Ballstáit ar bhonn na dtuarascálacha bliantúla dá dtagraítear i mír 6. Ní áireofar sna tuarascálacha bliantúla sin sonraí oibríochtúla íogaire maidir le gníomhaíochtaí na n-údarás forfheidhmithe dlí lena mbaineann.
8. This Article shall not affect the prohibitions that apply where an AI practice infringes other Union law.
8. Ní dhéanfaidh an tAirteagal seo difear do na toirmisc a bhfuil feidhm leo i gcás ina sáraíonn cleachtas intleachta saorga dlí eile de chuid an Aontais.
Classification rules for high-risk AI systems
Rialacha aicmithe maidir le córais intleachta saorga ardriosca
1. Irrespective of whether an AI system is placed on the market or put into service independently of the products referred to in points (a) and (b), that AI system shall be considered to be high-risk where both of the following conditions are fulfilled:
1. Gan beann ar cé acu a chuirtear nó nach gcuirtear córas intleachta saorga ar an margadh nó i mbun seirbhíse go neamhspleách ar na táirgí dá dtagraítear i bpointí (a) agus (b), measfar an córas intleachta saorga sin a bheith ina chóras ardriosca i gcás ina gcomhlíonfar an dá choinníoll seo a leanas:
(a)
the AI system is intended to be used as a safety component of a product, or the AI system is itself a product, covered by the Union harmonisation legislation listed in Annex I;
(a)
tá an córas intleachta saorga ceaptha lena úsáid mar chomhpháirt sábháilteachta de tháirge, nó is táirge é an córas intleachta saorga é féin, a chumhdaítear faoi reachtaíocht chomhchuibhithe an Aontais a liostaítear in Iarscríbhinn I;
(b)
the product whose safety component pursuant to point (a) is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment, with a view to the placing on the market or the putting into service of that product pursuant to the Union harmonisation legislation listed in Annex I.
(b)
maidir leis an táirge sin ar comhpháirt sábháilteachta de, de bhun phointe (a), an córas intleachta saorga, nó an córas intleachta saorga ar táirge sábháilteachta é, ceanglaítear measúnú comhréireachta tríú páirtí a dhéanamh ar an táirge sin d’fhonn é a chur ar an margadh nó a chur i mbun seirbhíse de bhun reachtaíocht chomhchuibhithe an Aontais a liostaítear in Iarscríbhinn I.
2. In addition to the high-risk AI systems referred to in paragraph 1, AI systems referred to in Annex III shall be considered to be high-risk.
2. I dteannta na gcóras intleachta saorga ardriosca dá dtagraítear i mír 1, measfar na córais intleachta saorga dá dtagraítear in Iarscríbhinn III a bheith ina gcórais ardriosca.
3. By derogation from paragraph 2, an AI system referred to in Annex III shall not be considered to be high-risk where it does not pose a significant risk of harm to the health, safety or fundamental rights of natural persons, including by not materially influencing the outcome of decision making.
3. De mhaolú ar mhír 2, ní mheasfar córas intleachta saorga dá dtagraítear in Iarscríbhinn III a bheith ina chóras ardriosca i gcás nach bhfuil riosca suntasach díobhála ann do shláinte, do shábháilteacht nó do chearta bunúsacha daoine nádúrtha, lena n-áirítear gan tionchar ábhartha a imirt ar thoradh na cinnteoireachta.
The first subparagraph shall apply where any of the following conditions is fulfilled:
Beidh feidhm ag an gcéad fhomhír i gcás ina gcomhlíontar aon cheann de na coinníollacha seo a leanas:
(a)
the AI system is intended to perform a narrow procedural task;
(a)
go mbeartaítear leis an gcóras intleachta saorga cúram cúng nós imeachta a dhéanamh;
(b)
the AI system is intended to improve the result of a previously completed human activity;
(b)
go mbeartaítear leis an gcóras intleachta saorga feabhas a chur ar thoradh gníomhaíochta arna cur i gcrích roimhe sin ag an duine;
(c)
the AI system is intended to detect decision-making patterns or deviations from prior decision-making patterns and is not meant to replace or influence the previously completed human assessment, without proper human review; or
(c)
go mbeartaítear leis an gcóras intleachta saorga patrúin chinnteoireachta nó diallais ó phatrúin chinnteoireachta a bhí ann roimhe seo a bhrath agus nach bhfuil sé i gceist leis an gcóras sin teacht in ionad an mheasúnaithe dhaonna a cuireadh i gcrích roimhe sin ná tionchar a imirt air, gan athbhreithniú cuí daonna; nó
(d)
the AI system is intended to perform a preparatory task to an assessment relevant for the purposes of the use cases listed in Annex III.
(d)
go mbeartaítear leis an gcóras intleachta saorga cúram ullmhúcháin a dhéanamh maidir le measúnú atá ábhartha chun críocha na gcásanna úsáide a liostaítear in Iarscríbhinn III.
Notwithstanding the first subparagraph, an AI system referred to in Annex III shall always be considered to be high-risk where the AI system performs profiling of natural persons.
D’ainneoin na chéad fhomhíre, measfar i gcónaí gur córas ardriosca é an córas intleachta saorga dá dtagraítear in Iarscríbhinn III i gcás ina ndéanann an córas intleachta saorga próifíliú ar dhaoine nádúrtha.
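By way of illustration only, and not as part of the Regulation, the following minimal Python sketch shows how a provider's internal pre-assessment tooling might encode the logic described above for a system falling under Annex III (the derogation conditions (a) to (d) and the profiling override); every name in it is hypothetical and its output is not a legal determination.

from dataclasses import dataclass

@dataclass
class AnnexIIISystemProfile:
    performs_profiling: bool                                   # profiling of natural persons: always high-risk
    narrow_procedural_task: bool                               # condition (a)
    improves_prior_human_activity: bool                        # condition (b)
    detects_patterns_without_replacing_human_assessment: bool  # condition (c)
    preparatory_task_only: bool                                # condition (d)

def presumed_high_risk(profile: AnnexIIISystemProfile) -> bool:
    # Mirrors paragraphs 2 and 3 above for a system that falls under Annex III.
    if profile.performs_profiling:
        return True  # the profiling override applies in all cases
    derogation_applies = any([
        profile.narrow_procedural_task,
        profile.improves_prior_human_activity,
        profile.detects_patterns_without_replacing_human_assessment,
        profile.preparatory_task_only,
    ])
    # A negative conclusion would still have to be documented and registered (paragraph 4).
    return not derogation_applies

example = AnnexIIISystemProfile(False, True, False, False, False)
print(presumed_high_risk(example))  # False: derogation condition (a) applies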
4. A provider who considers that an AI system referred to in Annex III is not high-risk shall document its assessment before that system is placed on the market or put into service. Such provider shall be subject to the registration obligation set out in Article 49(2). Upon request of national competent authorities, the provider shall provide the documentation of the assessment.
4. Déanfaidh an soláthraí a mheasann nach córas ardriosca é an córas intleachta saorga dá dtagraítear in Iarscríbhinn III, déanfaidh sé a mheasúnú a dhoiciméadú sula gcuirfear an córas sin ar an margadh nó i mbun seirbhíse. Beidh an soláthraí sin faoi réir na hoibleagáide clárúcháin a leagtar amach in Airteagal 49(2). Arna iarraidh sin do na húdaráis inniúla náisiúnta, soláthróidh an soláthraí doiciméadacht an mheasúnaithe.
5. The Commission shall, after consulting the European Artificial Intelligence Board (the ‘Board’), and no later than 2 February 2026, provide guidelines specifying the practical implementation of this Article in line with Article 96 together with a comprehensive list of practical examples of use cases of AI systems that are high-risk and not high-risk.
5. Déanfaidh an Coimisiún, tar éis dó dul i gcomhairle leis an mBord Eorpach um an Intleacht Shaorga (an ‘Bord’), agus tráth nach déanaí ná an 2 Feabhra 2026, treoirlínte a chur ar fáil lena sonraítear cur chun feidhme praiticiúil an Airteagail seo i gcomhréir le hAirteagal 96 mar aon le liosta cuimsitheach de shamplaí praiticiúla de chásanna úsáide córas intleachta saorga ardriosca vis-à-vis cásanna úsáide córas intleachta saorga neamh-ardriosca.
6. The Commission is empowered to adopt delegated acts in accordance with Article 97 in order to amend paragraph 3, second subparagraph, of this Article by adding new conditions to those laid down therein, or by modifying them, where there is concrete and reliable evidence of the existence of AI systems that fall under the scope of Annex III, but do not pose a significant risk of harm to the health, safety or fundamental rights of natural persons.
6. Tugtar de chumhacht don Choimisiún gníomhartha tarmligthe a ghlacadh i gcomhréir le hAirteagal 97 chun leasú a dhéanamh ar mhír 3, an dara fomhír, den Airteagal seo trí choinníollacha nua a chur leo siúd a leagtar síos inti, nó trína modhnú, i gcás ina bhfuil fianaise nithiúil iontaofa ann go bhfuil córais intleachta saorga ann a thagann faoi raon feidhme Iarscríbhinn III agus nach mbaineann riosca suntasach díobhála do shláinte, do shábháilteacht ná do chearta bunúsacha daoine nádúrtha leo.
7. The Commission shall adopt delegated acts in accordance with Article 97 in order to amend paragraph 3, second subparagraph, of this Article by deleting any of the conditions laid down therein, where there is concrete and reliable evidence that this is necessary to maintain the level of protection of health, safety and fundamental rights provided for by this Regulation.
7. Glacfaidh an Coimisiún gníomhartha tarmligthe i gcomhréir le hAirteagal 97 chun leasú a dhéanamh ar mhír 3, an dara fomhír, den Airteagal seo trí aon cheann de na coinníollacha a leagtar síos inti a scriosadh, i gcás ina bhfuil fianaise nithiúil iontaofa ann go bhfuil gá leis sin chun an leibhéal cosanta do shláinte, do shábháilteacht agus do chearta bunúsacha dá bhforáiltear leis an Rialachán seo a choinneáil ar bun.
8. Any amendment to the conditions laid down in paragraph 3, second subparagraph, adopted in accordance with paragraphs 6 and 7 of this Article shall not decrease the overall level of protection of health, safety and fundamental rights provided for by this Regulation and shall ensure consistency with the delegated acts adopted pursuant to Article 7(1), and take account of market and technological developments.
8. Ní laghdófar le haon leasú ar na coinníollacha a leagtar síos i mír 3, an dara fomhír, arna nglacadh i gcomhréir le míreanna 6 agus 7 den Airteagal seo an leibhéal foriomlán cosanta do shláinte, do shábháilteacht agus do chearta bunúsacha dá bhforáiltear leis an Rialachán seo agus áiritheofar leis comhsheasmhacht leis na gníomhartha tarmligthe arna nglacadh de bhun Airteagal 7(1), agus cuirfear forbairtí margaidh agus teicneolaíocha san áireamh.
Risk management system
Córas bainistíochta riosca
1. A risk management system shall be established, implemented, documented and maintained in relation to high-risk AI systems.
1. Bunófar córas bainistíochta riosca, cuirfear chun feidhme é, déanfar é a dhoiciméadú agus a choimeád ar bun i ndáil le córais intleachta saorga ardriosca.
2. The risk management system shall be understood as a continuous iterative process planned and run throughout the entire lifecycle of a high-risk AI system, requiring regular systematic review and updating. It shall comprise the following steps:
2. Is é a thuigfear le córas bainistíochta riosca próiseas atriallach leanúnach a phleanáiltear agus a reáchtáiltear ar feadh shaolré an chórais intleachta saorga ardriosca, próiseas inar gá athbhreithniú agus nuashonrú tráthrialta córasach. Is éard a chuimseofar ann an méid seo a leanas:
(a)
the identification and analysis of the known and the reasonably foreseeable risks that the high-risk AI system can pose to health, safety or fundamental rights when the high-risk AI system is used in accordance with its intended purpose;
(a)
sainaithint agus anailís na rioscaí aitheanta agus na rioscaí atá measartha intuartha a d’fhéadfadh a bheith ag baint leis an gcóras intleachta saorga ardriosca don tsláinte, don tsábháilteacht nó do chearta bunúsacha nuair a úsáidtear an córas intleachta saorga ardriosca i gcomhréir leis an gcríoch atá beartaithe dó;
(b)
the estimation and evaluation of the risks that may emerge when the high-risk AI system is used in accordance with its intended purpose, and under conditions of reasonably foreseeable misuse;
(b)
meastachán agus meastóireacht ar na rioscaí a d’fhéadfadh teacht chun cinn nuair a úsáidfear an córas intleachta saorga ardriosca i gcomhréir leis an gcríoch atá beartaithe dó agus faoi dhálaí mí-úsáide atá measartha intuartha;
(c)
the evaluation of other risks possibly arising, based on the analysis of data gathered from the post-market monitoring system referred to in Article 72;
(c)
meastóireacht ar rioscaí eile a d’fhéadfadh a bheith ann bunaithe ar anailís ar shonraí arna mbailiú ón gcóras faireacháin iarmhargaidh dá dtagraítear in Airteagal 72;
(d)
the adoption of appropriate and targeted risk management measures designed to address the risks identified pursuant to point (a).
(d)
bearta bainistíochta riosca atá iomchuí agus spriocdhírithe a ghlacadh chun aghaidh a thabhairt ar na rioscaí arna sainaithint de bhun phointe (a).
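Purely as an illustration of the four steps listed above, and not as part of the Regulation, a provider's internal risk register might be organised along the following lines; all field names and values are hypothetical.

from dataclasses import dataclass, field

@dataclass
class RiskEntry:
    description: str            # identified risk to health, safety or fundamental rights (step (a))
    severity: int               # 1 (low) .. 5 (high), estimated under intended use or foreseeable misuse (step (b))
    likelihood: float           # estimated probability of occurrence
    source: str                 # e.g. "design analysis" or "post-market monitoring"
    measures: list[str] = field(default_factory=list)  # targeted risk management measures (step (d))

def review_cycle(register: list[RiskEntry], post_market_findings: list[str]) -> None:
    # step (c): fold in risks identified through post-market monitoring data (Article 72)
    for finding in post_market_findings:
        register.append(RiskEntry(finding, severity=3, likelihood=0.1, source="post-market monitoring"))
    # step (d): flag entries that still lack a documented measure
    for entry in register:
        if not entry.measures:
            print(f"ACTION NEEDED: no mitigation recorded for: {entry.description}")

register = [RiskEntry("misidentification of a person", 4, 0.05, "design analysis",
                      ["threshold tuning", "human review step"])]
review_cycle(register, ["unexpected performance drop reported by a deployer"])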
3. The risks referred to in this Article shall concern only those which may be reasonably mitigated or eliminated through the development or design of the high-risk AI system, or the provision of adequate technical information.
3. Ní bhainfidh na rioscaí dá dtagraítear san Airteagal seo ach leo siúd a d’fhéadfaí a mhaolú nó deireadh a chur leo go réasúnta trí fhorbairt nó dearadh an chórais intleachta saorga ardriosca, nó trí fhaisnéis theicniúil leormhaith a sholáthar.
4. The risk management measures referred to in paragraph 2, point (d), shall give due consideration to the effects and possible interaction resulting from the combined application of the requirements set out in this Section, with a view to minimising risks more effectively while achieving an appropriate balance in implementing the measures to fulfil those requirements.
4. Leis na bearta bainistíochta riosca dá dtagraítear i mír 2, pointe (d), tabharfar aird chuí ar na héifeachtaí agus ar an idirghníomhaíocht a d’fhéadfadh a bheith ann mar thoradh ar chur i bhfeidhm comhpháirteach na gceanglas a leagtar amach sa Roinn seo, d’fhonn rioscaí a íoslaghdú ar bhealach níos éifeachtaí agus cothroime iomchuí a bhaint amach i gcur chun feidhme na mbeart chun na ceanglais sin a chomhlíonadh.
5. The risk management measures referred to in paragraph 2, point (d), shall be such that the relevant residual risk associated with each hazard, as well as the overall residual risk of the high-risk AI systems is judged to be acceptable.
5. Fágfaidh na bearta bainistíochta riosca dá dtagraítear i mír 2, pointe (d), go measfar go bhfuil aon riosca iarmharach ábhartha a bhaineann le gach guais agus riosca iarmharach foriomlán na gcóras intleachta saorga ardriosca inghlactha.
In identifying the most appropriate risk management measures, the following shall be ensured:
Agus na bearta bainistíochta riosca is iomchuí á sainaithint, áiritheofar an méid seo a leanas:
(a)
elimination or reduction of risks identified and evaluated pursuant to paragraph 2 in as far as technically feasible through adequate design and development of the high-risk AI system;
(a)
díothú nó laghdú rioscaí arna sainaithint agus arna measúnú de bhun mhír 2 a mhéid is indéanta go teicniúil trí dhearadh agus forbairt leormhaith an chórais intleachta saorga ardriosca;
(b)
where appropriate, implementation of adequate mitigation and control measures addressing risks that cannot be eliminated;
(b)
i gcás inarb iomchuí, cur chun feidhme beart maolaithe agus rialaithe leormhaith lena dtabharfar aghaidh ar rioscaí nach féidir a dhíothú;
(c)
provision of information required pursuant to Article 13 and, where appropriate, training to deployers.
(c)
soláthar na faisnéise is gá de bhun Airteagal 13, agus, i gcás inarb iomchuí, oiliúint a chur ar úsáideoirí gairmiúla.
With a view to eliminating or reducing risks related to the use of the high-risk AI system, due consideration shall be given to the technical knowledge, experience, education, the training to be expected by the deployer, and the presumable context in which the system is intended to be used.
D’fhonn na rioscaí a bhaineann le húsáid an chórais intleachta saorga ardriosca á dhíothú nó á laghdú, tabharfar aird chuí ar an eolas teicniúil, ar an taithí, ar an oideachas agus ar an oiliúint ar cheart don úsáideoir gairmiúil a bheith ag súil leo agus ar chomhthéacs úsáide intuartha an chórais.
6. High-risk AI systems shall be tested for the purpose of identifying the most appropriate and targeted risk management measures. Testing shall ensure that high-risk AI systems perform consistently for their intended purpose and that they are in compliance with the requirements set out in this Section.
6. Déanfar córais intleachta saorga ardriosca a thástáil chun críocha shainaithint na mbeart bainistíochta riosca is iomchuí agus is spriocdhírithe. Áiritheofar leis an tástáil go bhfeidhmeoidh córais intleachta saorga ardriosca go comhsheasmhach don chríoch atá beartaithe dóibh agus go gcomhlíonann siad na ceanglais a leagtar amach sa Roinn seo.
7. Testing procedures may include testing in real-world conditions in accordance with Article 60.
7. Féadfar tástáil i bhfíordhálaí a áireamh sna nósanna imeachta tástála i gcomhréir le hAirteagal 60.
8. The testing of high-risk AI systems shall be performed, as appropriate, at any time throughout the development process, and, in any event, prior to their being placed on the market or put into service. Testing shall be carried out against prior defined metrics and probabilistic thresholds that are appropriate to the intended purpose of the high-risk AI system.
8. Déanfar an tástáil ar na córais intleachta saorga ardriosca, de réir mar is iomchuí, tráth ar bith le linn an phróisis forbartha, agus in aon chás, sula gcuirfear ar an margadh nó sula gcuirfear i mbun seirbhíse iad. Déanfar tástáil in aghaidh méadrachtaí a shaineofar roimh ré agus tairseach dhóchúil a bheidh iomchuí don chríoch atá beartaithe don chóras intleachta saorga ardriosca.
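As a purely illustrative sketch of testing against metrics and probabilistic thresholds defined before the tests are run, and not as part of the Regulation, a provider's test harness might include a check of the following kind; the metric names and threshold values are invented for the example.

# Thresholds fixed in advance of testing, appropriate to the intended purpose (hypothetical values).
PREDEFINED_THRESHOLDS = {
    "accuracy": 0.95,              # minimum acceptable accuracy
    "false_positive_rate": 0.02,   # maximum acceptable false positive rate
}

def passes_predefined_tests(measured: dict[str, float]) -> bool:
    return (measured["accuracy"] >= PREDEFINED_THRESHOLDS["accuracy"]
            and measured["false_positive_rate"] <= PREDEFINED_THRESHOLDS["false_positive_rate"])

print(passes_predefined_tests({"accuracy": 0.97, "false_positive_rate": 0.01}))  # True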
9. When implementing the risk management system as provided for in paragraphs 1 to 7, providers shall give consideration to whether in view of its intended purpose the high-risk AI system is likely to have an adverse impact on persons under the age of 18 and, as appropriate, other vulnerable groups.
9. Agus an córas bainistíochta riosca dá bhforáiltear i míreanna 1 go 7 á chur chun feidhme, déanfaidh soláthraithe a mheas, i bhfianaise na críche atá beartaithe dó, cibé an dócha go mbeidh drochthionchar ag an gcóras intleachta saorga ardriosca ar dhaoine faoi bhun 18 mbliana d’aois agus, de réir mar is iomchuí, ar ghrúpaí leochaileacha eile.
10. For providers of high-risk AI systems that are subject to requirements regarding internal risk management processes under other relevant provisions of Union law, the aspects provided in paragraphs 1 to 9 may be part of, or combined with, the risk management procedures established pursuant to that law.
10. I gcás soláthraithe córas intleachta saorga ardriosca atá faoi réir ceanglais maidir le próisis inmheánacha bainistíochta riosca faoi fhorálacha ábhartha eile an Aontais, féadfaidh na gnéithe dá bhforáiltear i míreanna 1 go 9 a bheith mar chuid de na nósanna imeachta bainistíochta riosca a bhunaítear de bhun an dlí sin, nó féadfar iad a chomhcheangal leo.
Data and data governance
Sonraí agus rialachas sonraí
1. High-risk AI systems which make use of techniques involving the training of AI models with data shall be developed on the basis of training, validation and testing data sets that meet the quality criteria referred to in paragraphs 2 to 5 whenever such data sets are used.
1. Maidir le córais intleachta saorga ardriosca a bhaineann úsáid as teicnící lena mbaineann oiliúint samhlacha intleachta saorga le sonraí, déanfar iad a fhorbairt ar bhonn tacar sonraí oiliúna, bailíochtaithe agus tástála a chomhlíonann na critéir cháilíochta dá dtagraítear i míreanna 2 go 5, agus tacair sonraí den sórt sin in úsáid.
2. Training, validation and testing data sets shall be subject to data governance and management practices appropriate for the intended purpose of the high-risk AI system. Those practices shall concern in particular:
2. Beidh tacair sonraí oiliúna, bailíochtaithe agus tástála faoi réir cleachtais rialachais sonraí agus bainistíochta atá iomchuí chun na críche atá beartaithe don chóras intleachta saorga ardriosca. Bainfidh na cleachtais sin go háirithe leis an méid seo a leanas:
(a)
the relevant design choices;
(a)
na roghanna ábhartha maidir le dearadh;
(b)
data collection processes and the origin of data, and in the case of personal data, the original purpose of the data collection;
(b)
próisis bailithe sonraí agus tionscnamh na sonraí, agus i gcás sonraí pearsanta, cuspóir bunaidh an bhailithe sonraí;
(c)
relevant data-preparation processing operations, such as annotation, labelling, cleaning, updating, enrichment and aggregation;
(c)
oibríochtaí próiseála ullmhúcháin i gcomhair sonraí ábhartha, amhail anótáil, lipéadú, glanadh, nuashonrú, saibhriú agus comhiomlánú;
(d)
the formulation of assumptions, in particular with respect to the information that the data are supposed to measure and represent;
(d)
toimhdí a fhoirmliú, go háirithe maidir leis an bhfaisnéis a bhfuil na sonraí ceaptha le tomhas agus le hionadaíocht a dhéanamh uirthi;
(e)
an assessment of the availability, quantity and suitability of the data sets that are needed;
(e)
measúnú ar infhaighteacht, cainníocht agus oiriúnacht na dtacar sonraí a bhfuil gá leo;
(f)
examination in view of possible biases that are likely to affect the health and safety of persons, have a negative impact on fundamental rights or lead to discrimination prohibited under Union law, especially where data outputs influence inputs for future operations;
(f)
iniúchadh i bhfianaise claontachtaí a d’fhéadfadh difear a dhéanamh do shláinte agus sábháilteacht daoine, tionchar diúltach a bheith acu ar chearta bunúsacha nó idirdhealú a thoirmisctear faoi dhlí an Aontais a bheith mar thoradh air, go háirithe i gcás ina mbíonn tionchar ag aschuir sonraí ar ionchuir le haghaidh oibríochtaí a bheidh ann amach anseo;
(g)
appropriate measures to detect, prevent and mitigate possible biases identified according to point (f);
(g)
bearta iomchuí chun claontachtaí a d’fhéadfadh a bheith ann a bhrath, a chosc agus a mhaolú, arna sainaithint i gcomhréir le pointe (f);
(h)
the identification of relevant data gaps or shortcomings that prevent compliance with this Regulation, and how those gaps and shortcomings can be addressed.
(h)
bearnaí nó easnaimh ábhartha sonraí a shainaithint lena gcoisctear comhlíonadh an Rialacháin seo, agus conas is féidir aghaidh a thabhairt ar na bearnaí agus na heasnaimh sin.
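By way of illustration of point (f) above only, and not as part of the Regulation, one elementary examination step a provider might run is a representation report over a labelled training set; the group labels and the 10 % floor used here are hypothetical.

from collections import Counter

def representation_report(group_labels: list[str], minimum_share: float = 0.10) -> dict[str, float]:
    # Compute the share of each group in the data set and flag possible under-representation.
    counts = Counter(group_labels)
    total = len(group_labels)
    shares = {group: n / total for group, n in counts.items()}
    for group, share in shares.items():
        if share < minimum_share:
            print(f"possible under-representation of group '{group}': {share:.1%}")
    return shares

representation_report(["a", "a", "a", "b", "a", "a", "a", "a", "a", "b", "a", "c"])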
3. Training, validation and testing data sets shall be relevant, sufficiently representative, and to the best extent possible, free of errors and complete in view of the intended purpose. They shall have the appropriate statistical properties, including, where applicable, as regards the persons or groups of persons in relation to whom the high-risk AI system is intended to be used. Those characteristics of the data sets may be met at the level of individual data sets or at the level of a combination thereof.
3. Maidir leis na tacair sonraí oiliúna, bailíochtaithe agus tástála, beidh siad ábhartha, sách ionadaíoch, agus a mhéid is mó is féidir, saor ó earráidí agus iomlán, i bhfianaise na críche atá beartaithe dóibh. Beidh na hairíonna staidrimh iomchuí acu, lena n-áirítear, i gcás inarb iomchuí, a mhéid a bhaineann leis na daoine nó na grúpaí díobh a bhfuil sé beartaithe an córas intleachta saorga ardriosca a úsáid ina leith. Féadfar saintréithe na dtacar sonraí sin a chomhlíonadh ar leibhéal na dtacar aonair sonraí nó ar leibhéal atá ar mheascán díobh sin.
4. Data sets shall take into account, to the extent required by the intended purpose, the characteristics or elements that are particular to the specific geographical, contextual, behavioural or functional setting within which the high-risk AI system is intended to be used.
4. Cuirfear san áireamh sna tacair sonraí, a mhéid a cheanglaítear i bhfianaise na críche atá beartaithe dóibh, na saintréithe nó eilimintí a bhaineann go sonrach leis an suíomh geografach, comhthéacsúil, iompraíochta nó feidhmiúil ina bhfuil sé beartaithe an córas intleachta saorga ardriosca a úsáid.
5. To the extent that it is strictly necessary for the purpose of ensuring bias detection and correction in relation to the high-risk AI systems in accordance with paragraph (2), points (f) and (g) of this Article, the providers of such systems may exceptionally process special categories of personal data, subject to appropriate safeguards for the fundamental rights and freedoms of natural persons. In addition to the provisions set out in Regulations (EU) 2016/679 and (EU) 2018/1725 and Directive (EU) 2016/680, all the following conditions must be met in order for such processing to occur:
5. A mhéid a bhfuil géarghá leis chun brath agus ceartú claontachta a áirithiú i ndáil leis na córais intleachta saorga ardriosca, i gcomhréir le mír (2), pointí (f) agus (g) den Airteagal seo, féadfaidh soláthraithe na gcóras sin catagóirí speisialta sonraí pearsanta a phróiseáil go heisceachtúil, faoi réir coimircí iomchuí do chearta agus do shaoirsí bunúsacha daoine nádúrtha. De bhreis ar na forálacha a leagtar amach i Rialachán (AE) 2016/679 agus (AE) 2018/1725 agus i dTreoir (AE) 2016/680, ní mór na coinníollacha uile seo a leanas a chomhlíonadh chun go dtarlóidh próiseáil den sórt sin:
(a)
the bias detection and correction cannot be effectively fulfilled by processing other data, including synthetic or anonymised data;
(a)
ní féidir brath agus ceartú claontachta a chomhlíonadh go héifeachtach trí shonraí eile a phróiseáil, lena n-áirítear sonraí sintéiseacha nó anaithnidithe;
(b)
the special categories of personal data are subject to technical limitations on the re-use of the personal data, and state-of-the-art security and privacy-preserving measures, including pseudonymisation;
(b)
tá na catagóirí speisialta sonraí pearsanta faoi réir teorainneacha teicniúla maidir le hathúsáid na sonraí pearsanta, agus maidir le hathúsáid beart úrscothach slándála agus beart caomhnaithe príobháideachais, lena n-áirítear an bréagainmniú;
(c)
the special categories of personal data are subject to measures to ensure that the personal data processed are secured, protected, subject to suitable safeguards, including strict controls and documentation of the access, to avoid misuse and ensure that only authorised persons have access to those personal data with appropriate confidentiality obligations;
(c)
tá na catagóirí speisialta sonraí pearsanta faoi réir bearta chun a áirithiú go bhfuil na sonraí pearsanta a phróiseáiltear slán, faoi chosaint, faoi réir coimircí oiriúnacha, lena n-áirítear rialuithe dochta agus doiciméadacht na rochtana, chun mí-úsáid a sheachaint agus chun a áirithiú nach mbeidh rochtain ach ag daoine údaraithe a bhfuil oibleagáidí rúndachta iomchuí acu ar na sonraí pearsanta sin;
(d)
the special categories of personal data are not to be transmitted, transferred or otherwise accessed by other parties;
(d)
ní bheidh na catagóirí speisialta sonraí pearsanta le tarchur ná le haistriú ná ní bheidh rochtain ar bhealach eile ag páirtithe eile orthu;
(e)
the special categories of personal data are deleted once the bias has been corrected or the personal data has reached the end of its retention period, whichever comes first;
(e)
scriostar na catagóirí speisialta sonraí pearsanta a luaithe a bheidh an chlaontacht ceartaithe nó nuair a bheidh deireadh le tréimhse choinneála na sonraí pearsanta, cibé acu is túisce;
(f)
the records of processing activities pursuant to Regulations (EU) 2016/679 and (EU) 2018/1725 and Directive (EU) 2016/680 include the reasons why the processing of special categories of personal data was strictly necessary to detect and correct biases, and why that objective could not be achieved by processing other data.
(f)
áirítear sna taifid ar ghníomhaíochtaí próiseála de bhun Rialacháin (AE) 2016/679 agus (AE) 2018/1725 agus Threoir (AE) 2016/680 na cúiseanna a raibh géarghá le catagóirí speisialta sonraí pearsanta a phróiseáil chun claontachtaí a bhrath agus a cheartú, agus an fáth nárbh fhéidir an cuspóir sin a bhaint amach trí shonraí eile a phróiseáil.
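Purely as an illustration of two of the safeguards listed above (pseudonymisation under point (b) and deletion under point (e)), and not as part of the Regulation or a complete implementation of conditions (a) to (f), a provider's bias-correction pipeline might handle special categories of personal data along these lines; all names and values are hypothetical.

import hashlib

def pseudonymise(records: list[dict], secret_salt: bytes) -> list[dict]:
    # Replace the direct identifier with a pseudonym before any bias analysis takes place.
    out = []
    for r in records:
        token = hashlib.sha256(secret_salt + r["name"].encode()).hexdigest()[:16]
        out.append({"pseudonym": token, "sensitive_attribute": r["sensitive_attribute"]})
    return out

records = [{"name": "Alice", "sensitive_attribute": "X"}]
working_copy = pseudonymise(records, secret_salt=b"keep-this-salt-stored-separately")
# ... bias detection and correction on working_copy would happen here ...
working_copy.clear()  # deletion once the bias has been corrected (condition (e))
records.clear()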
6. For the development of high-risk AI systems not using techniques involving the training of AI models, paragraphs 2 to 5 apply only to the testing data sets.
6. Chun córais intleachta saorga ardriosca a fhorbairt nach n-úsáideann teicnící lena mbaineann oiliúint samhlacha intleachta saorga, ní bheidh feidhm ag míreanna 2 go 5 ach amháin maidir leis na tacair sonraí tástála.
Technical documentation
Doiciméadacht theicniúil
1. The technical documentation of a high-risk AI system shall be drawn up before that system is placed on the market or put into service and shall be kept up to date.
1. Déanfar an doiciméadacht theicniúil a bhaineann le córas intleachta saorga ardriosca a tharraingt suas sula gcuirfear an córas sin ar an margadh nó i mbun seirbhíse agus coinneofar cothrom le dáta í.
The technical documentation shall be drawn up in such a way as to demonstrate that the high-risk AI system complies with the requirements set out in this Section and to provide national competent authorities and notified bodies with the necessary information in a clear and comprehensive form to assess the compliance of the AI system with those requirements. It shall contain, at a minimum, the elements set out in Annex IV. SMEs, including start-ups, may provide the elements of the technical documentation specified in Annex IV in a simplified manner. To that end, the Commission shall establish a simplified technical documentation form targeted at the needs of small and microenterprises. Where an SME, including a start-up, opts to provide the information required in Annex IV in a simplified manner, it shall use the form referred to in this paragraph. Notified bodies shall accept the form for the purposes of the conformity assessment.
Déanfar an doiciméadacht theicniúil a tharraingt suas ar bhealach ina léireofar go gcomhlíonann an córas intleachta saorga ardriosca na ceanglais a leagtar amach sa Roinn seo agus go gcuirfear an fhaisnéis go léir is gá, i bhfoirm shoiléir chuimsitheach, ar fáil d’údaráis inniúla náisiúnta agus do chomhlachtaí náisiúnta faoina dtugtar fógra chun measúnú a dhéanamh ar chomhlíontacht na gcóras intleachta saorga ardriosca leis na ceanglais sin. Beidh, ar a laghad, na heilimintí a leagtar amach in Iarscríbhinn IV inti. Féadfaidh FBManna, lena n-áirítear gnólachtaí nuathionscanta, na heilimintí a bhaineann leis an doiciméadacht theicniúil a shonraítear in Iarscríbhinn IV a sholáthar ar bhealach simplithe. Chuige sin, bunóidh an Coimisiún foirm shimplithe den doiciméadacht theicniúil a bheidh dírithe ar riachtanais na bhfiontar beag agus na micrifhiontar. I gcás ina roghnaíonn FBM, lena n-áirítear gnólacht nuathionscanta, an fhaisnéis a cheanglaítear in Iarscríbhinn IV a sholáthar ar bhealach simplithe, bainfidh sé úsáid as an bhfoirm dá dtagraítear sa mhír seo. Glacfaidh na comhlachtaí faoina dtugtar fógra leis an bhfoirm sin chun críoch measúnaithe comhréireachta.
2. Where a high-risk AI system related to a product covered by the Union harmonisation legislation listed in Section A of Annex I is placed on the market or put into service, a single set of technical documentation shall be drawn up containing all the information set out in paragraph 1, as well as the information required under those legal acts.
2. Maidir le córas intleachta saorga ardriosca a bhaineann le táirge a chumhdaítear faoi reachtaíocht chomhchuibhithe an Aontais a liostaítear i Roinn A d’Iarscríbhinn I, i gcás ina gcuirtear ar an margadh é nó ina gcuirtear i mbun seirbhíse é, déanfar doiciméadacht theicniúil aonair amháin a tharraingt suas ina mbeidh an fhaisnéis go léir a leagtar amach i mír 1 chomh maith leis an bhfaisnéis a cheanglaítear faoi na gníomhartha dlí sin.
3. The Commission is empowered to adopt delegated acts in accordance with Article 97 in order to amend Annex IV, where necessary, to ensure that, in light of technical progress, the technical documentation provides all the information necessary to assess the compliance of the system with the requirements set out in this Section.
3. Tugtar de chumhacht don Choimisiún gníomhartha tarmligthe a ghlacadh i gcomhréir le hAirteagal 97 chun Iarscríbhinn IV a leasú i gcás inar gá chun a áirithiú, i bhfianaise an dul chun cinn theicniúil, go soláthraítear sa doiciméadacht theicniúil an fhaisnéis uile is gá chun comhlíontacht an chórais leis na ceanglais a leagtar amach sa Roinn seo a mheasúnú.
Human oversight
Formhaoirseacht dhaonna
1. High-risk AI systems shall be designed and developed in such a way, including with appropriate human-machine interface tools, that they can be effectively overseen by natural persons during the period in which they are in use.
1. Déanfar córais intleachta saorga ardriosca a dhearadh agus a fhorbairt le huirlisí iomchuí comhéadain duine le meaisín ar bhealach lenar féidir le daoine nádúrtha iad a mhaoirsiú go héifeachtach le linn na tréimhse ina mbeidh an córas intleachta saorga ardriosca in úsáid.
2. Human oversight shall aim to prevent or minimise the risks to health, safety or fundamental rights that may emerge when a high-risk AI system is used in accordance with its intended purpose or under conditions of reasonably foreseeable misuse, in particular where such risks persist despite the application of other requirements set out in this Section.
2. Beidh sé d’aidhm ag an bhformhaoirseacht dhaonna na rioscaí don tsláinte, don tsábháilteacht nó do chearta bunúsacha a d’fhéadfadh teacht chun cinn a chosc nó a íoslaghdú nuair a úsáidtear córas intleachta saorga ardriosca i gcomhréir leis an gcríoch atá beartaithe dó nó faoi choinníollacha mí-úsáide atá measartha intuartha, go háirithe nuair a leanann na rioscaí sin de bheith ann d’ainneoin chur i bhfeidhm na gceanglas eile a leagtar amach sa Roinn seo.
3. The oversight measures shall be commensurate with the risks, level of autonomy and context of use of the high-risk AI system, and shall be ensured through either one or both of the following types of measures:
3. Beidh na bearta formhaoirseachta i gcomhréir le rioscaí, leibhéal neamhspleáchais agus comhthéacs úsáide an chórais intleachta saorga ardriosca, agus áiritheofar iad trí cheann amháin nó tríd an dá cheann de na cineálacha beart seo a leanas:
(a)
measures identified and built, when technically feasible, into the high-risk AI system by the provider before it is placed on the market or put into service;
(a)
bearta arna sainaithint agus arna dtógáil sa chóras intleachta saorga ardriosca ag an soláthraí nuair is indéanta go teicniúil, sula gcuirfear ar an margadh nó i mbun seirbhíse é;
(b)
measures identified by the provider before placing the high-risk AI system on the market or putting it into service and that are appropriate to be implemented by the deployer.
(b)
bearta arna sainaithint ag an soláthraí sula gcuirfear an córas intleachta saorga ardriosca ar an margadh nó sula gcuirfear i mbun seirbhíse é, ar bearta iad atá iomchuí don úsáideoir gairmiúil a chur chun feidhme.
4. For the purpose of implementing paragraphs 1, 2 and 3, the high-risk AI system shall be provided to the deployer in such a way that natural persons to whom human oversight is assigned are enabled, as appropriate and proportionate:
4. Chun míreanna 1, 2 agus 3 a chur chun feidhme, déanfar an córas intleachta saorga ardriosca a sholáthar don úsáideoir gairmiúil sa chaoi go ndéanfar daoine nádúrtha a mbeidh formhaoirseacht dhaonna sannta dóibh, de réir mar is iomchuí agus comhréireach, a chumasú an méid seo a leanas a dhéanamh:
(a)
to properly understand the relevant capacities and limitations of the high-risk AI system and be able to duly monitor its operation, including in view of detecting and addressing anomalies, dysfunctions and unexpected performance;
(a)
tuiscint cheart a fháil ar acmhainneachtaí agus teorainneacha ábhartha an chórais intleachta saorga ardriosca agus a bheith in ann faireachán cuí a dhéanamh ar oibriú an chórais sin, lena n-áirítear i bhfianaise aimhrialtachtaí, mífheidhmithe agus feidhmíocht gan choinne a bhrath agus aghaidh a thabhairt orthu;
(b)
to remain aware of the possible tendency of automatically relying or over-relying on the output produced by a high-risk AI system (automation bias), in particular for high-risk AI systems used to provide information or recommendations for decisions to be taken by natural persons;
(b)
a bheith ar an eolas i gcónaí faoin gclaonadh a d’fhéadfadh a bheith ann brath go huathoibríoch ar an aschur arna tháirgeadh ag córas intleachta saorga lena mbaineann riosca ard (claontacht uathoibrithe), go háirithe i gcás córas intleachta saorga ardriosca a úsáidtear chun faisnéis nó moltaí a chur ar fáil le haghaidh cinntí a bheidh le déanamh ag daoine nádúrtha;
(c)
to correctly interpret the high-risk AI system’s output, taking into account, for example, the interpretation tools and methods available;
(c)
aschur an chórais intleachta saorga ardriosca a léirmhíniú i gceart, agus na huirlisí agus na modhanna léirmhínithe atá ar fáil, mar shampla, á gcur san áireamh;
(d)
to decide, in any particular situation, not to use the high-risk AI system or to otherwise disregard, override or reverse the output of the high-risk AI system;
(d)
a chinneadh, in aon chás ar leith, gan úsáid a bhaint as an gcóras intleachta saorga ardriosca ná gan neamhaird a thabhairt ar aschur an chórais intleachta saorga ardriosca ná é a shárú nó a aisiompú ar shlí eile;
(e)
to intervene in the operation of the high-risk AI system or interrupt the system through a ‘stop’ button or a similar procedure that allows the system to come to a halt in a safe state.
(e)
idirghabháil a dhéanamh maidir le hoibriú an chórais intleachta saorga ardriosca nó cur isteach ar an gcóras le cnaipe ‘stad’ nó trí nós imeachta eile den chineál sin lenar féidir leis an gcóras teacht chun stad ar bhealach sábháilte.
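As a purely illustrative sketch of points (d) and (e) above, and not as part of the Regulation, a deployer-facing wrapper might expose an override and a 'stop' control as follows; model_predict and the example inputs are hypothetical stand-ins.

class OverseenSystem:
    def __init__(self, model_predict):
        self.model_predict = model_predict  # hypothetical stand-in for the AI model
        self.stopped = False

    def stop(self):
        # 'stop button': the system comes to a halt in a safe state
        self.stopped = True

    def decide(self, inputs, human_override=None):
        if self.stopped:
            return None  # halted: no automated output is produced
        if human_override is not None:
            return human_override  # the operator disregards or reverses the output
        return self.model_predict(inputs)

system = OverseenSystem(lambda x: "grant")
print(system.decide({"applicant": 1}))  # automated output: 'grant'
print(system.decide({"applicant": 1}, human_override="refer to human reviewer"))
system.stop()
print(system.decide({"applicant": 1}))  # None: system halted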
5. For high-risk AI systems referred to in point 1(a) of Annex III, the measures referred to in paragraph 3 of this Article shall be such as to ensure that, in addition, no action or decision is taken by the deployer on the basis of the identification resulting from the system unless that identification has been separately verified and confirmed by at least two natural persons with the necessary competence, training and authority.
5. Maidir leis na córais intleachta saorga ardriosca dá dtagraítear i bpointe 1(a) d’Iarscríbhinn III, beidh na bearta dá dtagraítear i mír 3 den Airteagal seo de chineál a áiritheoidh, ina theannta sin, nach ndéanfaidh an t-úsáideoir gairmiúil aon ghníomh ná cinneadh ar bhonn an tsainaitheantais sin a dhéanfar leis an gcóras mura ndearna beirt daoine nádúrtha ar a laghad ag a bhfuil an inniúlacht, oiliúint agus údarás is gá an sainaitheantas a fhíorú agus a dhearbhú ar leithligh.
The requirement for a separate verification by at least two natural persons shall not apply to high-risk AI systems used for the purposes of law enforcement, migration, border control or asylum, where Union or national law considers the application of this requirement to be disproportionate.
Ní bheidh feidhm ag an gceanglas maidir le fíorú ar leithligh a dhéanfaidh beirt daoine nádúrtha ar a laghad maidir le córais intleachta saorga ardriosca a úsáidtear chun críocha fhorfheidhmiú an dlí, imirce, rialaithe teorann nó tearmainn, i gcásanna ina bhfuil cur i bhfeidhm an cheanglais sin díréireach de réir dhlí an Aontais nó an dlí náisiúnta.
Accuracy, robustness and cybersecurity
Cruinneas, stóinseacht agus cibearshlándáil
1. High-risk AI systems shall be designed and developed in such a way that they achieve an appropriate level of accuracy, robustness, and cybersecurity, and that they perform consistently in those respects throughout their lifecycle.
1. Déanfar córais intleachta saorga ardriosca a dhearadh agus a fhorbairt sa chaoi is go mbainfidh siad amach leibhéal iomchuí cruinnis, stóinseachta agus cibearshlándála, agus go bhfeidhmeoidh siad go comhsheasmhach ar feadh a saolré.
2. To address the technical aspects of how to measure the appropriate levels of accuracy and robustness set out in paragraph 1 and any other relevant performance metrics, the Commission shall, in cooperation with relevant stakeholders and organisations such as metrology and benchmarking authorities, encourage, as appropriate, the development of benchmarks and measurement methodologies.
2. Chun aghaidh a thabhairt ar na gnéithe teicniúla i ndáil le leibhéil iomchuí chruinnis agus stóinseachta mar a leagtar amach i mír 1 iad mar aon le haon mhéadracht feidhmíochta ábhartha eile a thomhas, molfaidh an Coimisiún, agus é ag obair i gcomhar le geallsealbhóirí agus eagraíochtaí ábhartha, amhail údaráis mhéadreolaíochta agus tagarmharcála, go ndéanfar, de réir mar is iomchuí, tagarmharcanna agus modheolaíochtaí tomhais a fhorbairt.
3. The levels of accuracy and the relevant accuracy metrics of high-risk AI systems shall be declared in the accompanying instructions of use.
3. Sna treoracha úsáide a ghabhann le córais intleachta saorga ardriosca dearbhófar a leibhéil chruinnis agus a méadrachtaí cruinnis ábhartha.
4. High-risk AI systems shall be as resilient as possible regarding errors, faults or inconsistencies that may occur within the system or the environment in which the system operates, in particular due to their interaction with natural persons or other systems. Technical and organisational measures shall be taken in this regard.
4. Beidh córais intleachta saorga ardriosca chomh hathléimneach agus is féidir maidir le hearráidí, fabhtanna nó neamhréireachtaí a d’fhéadfadh tarlú sa chóras nó sa timpeallacht ina n-oibríonn an córas, go háirithe mar gheall ar a n-idirghníomhú le daoine nádúrtha nó córais eile. Déanfar bearta teicniúla agus eagraíochtúla ina thaobh sin.
The robustness of high-risk AI systems may be achieved through technical redundancy solutions, which may include backup or fail-safe plans.
Féadfar stóinseacht na gcóras intleachta saorga ardriosca a bhaint amach trí réitigh iomarcaíochta theicniúla, lena bhféadfaí pleananna cúltaca nó pleananna atá slán i gcás teipe a áireamh.
High-risk AI systems that continue to learn after being placed on the market or put into service shall be developed in such a way as to eliminate or reduce as far as possible the risk of possibly biased outputs influencing input for future operations (feedback loops), and as to ensure that any such feedback loops are duly addressed with appropriate mitigation measures.
Maidir le córais intleachta saorga ardriosca a leanann de bheith ag foghlaim tar éis iad a bheith curtha ar an margadh nó curtha i mbun seirbhíse, déanfar iad a fhorbairt ar bhealach ina ndíothaítear nó ina laghdaítear a oiread is féidir an riosca go n-imreodh aschuir a d’fhéadfadh a bheith claonta tionchar ar ionchur le haghaidh oibríochtaí sa todhchaí (‘lúba aischothaithe’), agus chun a áirithiú go dtugtar aghaidh ar aon lúba aischothaithe mar iad le bearta maolaithe iomchuí.
5. High-risk AI systems shall be resilient against attempts by unauthorised third parties to alter their use, outputs or performance by exploiting system vulnerabilities.
5. Beidh córais intleachta saorga ardriosca athléimneach i gcoinne iarrachtaí tríú páirtithe a n-úsáid, a n-aschuir nó a bhfeidhmíocht a athrú trí theacht i dtír ar leochaileachtaí an chórais.
The technical solutions aiming to ensure the cybersecurity of high-risk AI systems shall be appropriate to the relevant circumstances and the risks.
Maidir leis na réitigh theicniúla arb é is aidhm dóibh cibearshlándáil na gcóras intleachta saorga ardriosca a áirithiú, beidh siad iomchuí do na himthosca ábhartha agus do na rioscaí.
The technical solutions to address AI specific vulnerabilities shall include, where appropriate, measures to prevent, detect, respond to, resolve and control for attacks trying to manipulate the training data set (data poisoning), or pre-trained components used in training (model poisoning), inputs designed to cause the AI model to make a mistake (adversarial examples or model evasion), confidentiality attacks or model flaws.
Maidir leis na réitigh theicniúla chun aghaidh a thabhairt ar leochaileachtaí a bhaineann go sonrach leis an intleacht shaorga, áireofar iontu sin, i gcás inarb iomchuí, bearta chun ionsaithe atá ag iarraidh ionramháil a dhéanamh ar an tacar sonraí oiliúna (nimhiú sonraí) a chosc, a bhrath, freagairt orthu, a réiteach agus rialú lena n-aghaidh, nó comhpháirteanna réamhoilte a úsáidtear san oiliúint (nimhiú samhla), ionchuir atá ceaptha ionas go ndéanfadh an tsamhail intleachta saorga botún mar gheall orthu (samplaí sáraíochta nó imghabháil samhla), ionsaithe rúndachta nó lochtanna samhla.
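By way of illustration only, and not as part of the Regulation, one narrow technical measure of the kind contemplated above is an integrity check that detects tampering with a stored training data set; it does not by itself address adversarial examples, model poisoning or confidentiality attacks, and the file used here is a temporary placeholder.

import hashlib
import pathlib
import tempfile

def file_digest(path: pathlib.Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def verify_training_artifact(path: pathlib.Path, approved_digest: str) -> bool:
    # Compare the current checksum with the one recorded when the data set was approved.
    if file_digest(path) != approved_digest:
        print(f"integrity check failed for {path}: training data may have been altered")
        return False
    return True

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"training data as approved")
    path = pathlib.Path(f.name)
approved = file_digest(path)
print(verify_training_artifact(path, approved))  # True: unchanged since approval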
Quality management system
Córas bainistíochta cáilíochta
1. Providers of high-risk AI systems shall put a quality management system in place that ensures compliance with this Regulation. That system shall be documented in a systematic and orderly manner in the form of written policies, procedures and instructions, and shall include at least the following aspects:
1. Cuirfidh soláthraithe córas intleachta saorga ardriosca córas bainistíochta cáilíochta i bhfeidhm lena n-áiritheofar go gcomhlíonfar an Rialachán seo. Déanfar an córas sin a dhoiciméadú ar bhealach córasach agus in ord, i bhfoirm beartais, nósanna imeachta agus treoracha i scríbhinn, agus beidh, ar a laghad, na gnéithe seo a leanas san áireamh leis sin:
(a)
a strategy for regulatory compliance, including compliance with conformity assessment procedures and procedures for the management of modifications to the high-risk AI system;
(a)
straitéis maidir le comhlíonadh rialála, lena n-áirítear comhlíonadh na nósanna imeachta um measúnú comhréireachta agus na nósanna imeachta chun modhnuithe ar an gcóras intleachta saorga a bhainistiú;
(b)
techniques, procedures and systematic actions to be used for the design, design control and design verification of the high-risk AI system;
(b)
teicnící, nósanna imeachta agus gníomhaíochtaí córasacha atá le húsáid maidir le dearadh, rialú deartha agus fíorú deartha an chórais intleachta saorga ardriosca;
(c)
techniques, procedures and systematic actions to be used for the development, quality control and quality assurance of the high-risk AI system;
(c)
teicnící, nósanna imeachta agus gníomhaíochtaí córasacha atá le húsáid maidir le forbairt, rialú cáilíochta agus dearbhú cáilíochta an chórais intleachta saorga ardriosca;
(d)
examination, test and validation procedures to be carried out before, during and after the development of the high-risk AI system, and the frequency with which they have to be carried out;
(d)
nósanna imeachta scrúdaithe, tástála agus bailíochtaithe atá le déanamh roimh an gcóras intleachta saorga ardriosca a fhorbairt, lena linn agus ina dhiaidh, agus a mhinice is gá iad a dhéanamh;
(e)
technical specifications, including standards, to be applied and, where the relevant harmonised standards are not applied in full or do not cover all of the relevant requirements set out in Section 2, the means to be used to ensure that the high-risk AI system complies with those requirements;
(e)
sonraíochtaí teicniúla, lena n-áirítear caighdeáin, atá le cur i bhfeidhm agus, i gcás nach gcuirfear na caighdeáin chomhchuibhithe ábhartha i bhfeidhm ina n-iomláine nó nach gcumhdófar na ceanglais ábhartha uile a leagtar amach i Roinn 2 leo, na modhanna atá le húsáid chun a áirithiú go gcomhlíonann an córas intleachta saorga ardriosca na ceanglais sin;
(f)
systems and procedures for data management, including data acquisition, data collection, data analysis, data labelling, data storage, data filtration, data mining, data aggregation, data retention and any other operation regarding the data that is performed before and for the purpose of the placing on the market or the putting into service of high-risk AI systems;
(f)
córais agus nósanna imeachta um bainistíocht sonraí, lena n-áirítear fáil sonraí, bailiú sonraí, anailísíocht sonraí, lipéadú sonraí, stóráil sonraí, scagadh sonraí, mianadóireacht sonraí, comhiomlánú sonraí, coinneáil sonraí agus aon oibríocht eile maidir leis na sonraí a fheidhmítear roimh chórais ardriosca intleachta saorga a chur ar an margadh agus a chur i mbun seirbhíse agus chun na críche sin;
(g)
the risk management system referred to in Article 9;
(g)
an córas bainistíochta riosca dá dtagraítear in Airteagal 9;
(h)
the setting-up, implementation and maintenance of a post-market monitoring system, in accordance with Article 72;
(h)
córas faireacháin iarmhargaidh a chur ar bun, a chur chun feidhme agus a chothabháil i gcomhréir le hAirteagal 72;
(i)
procedures related to the reporting of a serious incident in accordance with Article 73;
(i)
nósanna imeachta a bhaineann le teagmhas tromchúiseach a thuairisciú i gcomhréir le hAirteagal 73;
(j)
the handling of communication with national competent authorities, other relevant authorities, including those providing or supporting the access to data, notified bodies, other operators, customers or other interested parties;
(j)
láimhseáil na cumarsáide le húdaráis inniúla náisiúnta, le húdaráis ábhartha eile, lena n-áirítear iad siúd a thugann rochtain ar shonraí nó a thacaíonn leis an rochtain sin, le comhlachtaí faoina dtugtar fógra, le hoibreoirí eile, le custaiméirí nó le páirtithe leasmhara eile;
(k)
systems and procedures for record-keeping of all relevant documentation and information;
(k)
córais agus nósanna imeachta maidir le taifid a choimeád ar gach doiciméad agus faisnéis ábhartha;
(l)
resource management, including security-of-supply related measures;
(l)
bainistiú acmhainní, lena n-áirítear bearta a bhaineann le slándáil an tsoláthair;
(m)
an accountability framework setting out the responsibilities of the management and other staff with regard to all the aspects listed in this paragraph.
(m)
creat cuntasachta lena leagtar amach freagrachtaí an lucht bainistíochta agus na foirne eile maidir le gach gné a liostaítear sa mhír seo.
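As a hedged, non-normative illustration of the aspects listed in points (a) to (m) above, a provider's internal tooling might keep a simple register mapping each aspect to the written policies or procedures that document it. The data model and identifiers below are hypothetical examples, not a prescribed format.
```python
# Illustrative register of quality-management-system aspects (points (a)-(m)).
from dataclasses import dataclass, field

@dataclass
class QmsAspect:
    point: str                                           # letter of the aspect, e.g. "a"
    description: str                                      # short summary of the aspect
    documents: list[str] = field(default_factory=list)    # internal policy/procedure IDs

qms_register = [
    QmsAspect("a", "Strategy for regulatory compliance", ["POL-001"]),
    QmsAspect("d", "Examination, test and validation procedures", ["SOP-014", "SOP-015"]),
    QmsAspect("h", "Post-market monitoring system (Article 72)", ["PMM-PLAN-2025"]),
]

def undocumented_aspects(register: list[QmsAspect]) -> list[str]:
    """Return the points for which no written documentation is recorded yet."""
    return [a.point for a in register if not a.documents]
```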
2. The implementation of the aspects referred to in paragraph 1 shall be proportionate to the size of the provider’s organisation. Providers shall, in any event, respect the degree of rigour and the level of protection required to ensure the compliance of their high-risk AI systems with this Regulation.
2. Beidh cur chun feidhme na ngnéithe dá dtagraítear i mír 1 comhréireach le méid eagraíocht an tsoláthraí. Urramóidh soláthraithe, in aon chás, an méid déine agus an leibhéal cosanta is gá chun a áirithiú go gcomhlíonfaidh a gcórais intleachta saorga ardriosca an Rialachán seo.
3. Providers of high-risk AI systems that are subject to obligations regarding quality management systems or an equivalent function under relevant sectoral Union law may include the aspects listed in paragraph 1 as part of the quality management systems pursuant to that law.
3. Dála soláthraithe córas intleachta saorga ardriosca atá faoi réir oibleagáidí maidir le córais bainistíochta cáilíochta nó feidhm choibhéiseach faoi dhlí earnála ábhartha an Aontais, féadfaidh siad na gnéithe a liostaítear i mír 1 a áireamh mar chuid de na córais bainistíochta cáilíochta a bhunaítear de bhun an dlí sin.
4. For providers that are financial institutions subject to requirements regarding their internal governance, arrangements or processes under Union financial services law, the obligation to put in place a quality management system, with the exception of paragraph 1, points (g), (h) and (i) of this Article, shall be deemed to be fulfilled by complying with the rules on internal governance arrangements or processes pursuant to the relevant Union financial services law. To that end, any harmonised standards referred to in Article 40 shall be taken into account.
4. I gcás soláthraithe ar institiúidí airgeadais iad atá faoi réir ceanglais maidir lena rialachas, socruithe nó próisis inmheánacha faoi dhlí seirbhísí airgeadais an Aontais, measfar go gcomhlíontar an oibleagáid córas bainistíochta cáilíochta a chur i bhfeidhm, cé is moite de mhír 1, pointí (g), (h) agus (i) den Airteagal seo, trí na rialacha maidir le socruithe nó próisis rialachais inmheánaigh de bhun dhlí seirbhísí airgeadais ábhartha an Aontais a chomhlíonadh. Chuige sin, cuirfear san áireamh aon chaighdeán comhchuibhithe dá dtagraítear in Airteagal 40.
Corrective actions and duty of information
Gníomhaíochtaí ceartaitheacha agus an dualgas maidir le faisnéis
1. Providers of high-risk AI systems which consider or have reason to consider that a high-risk AI system that they have placed on the market or put into service is not in conformity with this Regulation shall immediately take the necessary corrective actions to bring that system into conformity, to withdraw it, to disable it, or to recall it, as appropriate. They shall inform the distributors of the high-risk AI system concerned and, where applicable, the deployers, the authorised representative and importers accordingly.
1. Soláthraithe córas intleachta saorga ardriosca a mheasann, nó a bhfuil cúis acu a mheas, nach bhfuil córas intleachta saorga ardriosca a chuir siad ar an margadh nó a chuir siad i mbun seirbhíse i gcomhréir leis an Rialachán seo, déanfaidh siad na bearta ceartaitheacha is gá láithreach chun an córas sin a thabhairt chun comhréireachta, chun é a astarraingt, chun é a dhíchumasú, nó chun é a aisghairm, de réir mar is iomchuí. Cuirfidh siad na dáileoirí, agus, i gcás inarb infheidhme, na húsáideoirí gairmiúla, an t-ionadaí údaraithe agus na hallmhaireoirí ar an eolas faoin gcóras intleachta saorga ardriosca lena mbaineann dá réir sin.
2. Where the high-risk AI system presents a risk within the meaning of Article 79(1) and the provider becomes aware of that risk, it shall immediately investigate the causes, in collaboration with the reporting deployer, where applicable, and inform the market surveillance authorities competent for the high-risk AI system concerned and, where applicable, the notified body that issued a certificate for that high-risk AI system in accordance with Article 44, in particular, of the nature of the non-compliance and of any relevant corrective action taken.
2. I gcás ina bhfuil riosca ag baint leis an gcóras intleachta saorga ardriosca de réir bhrí Airteagal 79(1) agus ina dtagann soláthraí an chórais ar an eolas faoin riosca sin, fiosróidh sé na cúiseanna láithreach bonn, i ndlúthchomhar leis an úsáideoir gairmiúil a thuairiscigh an riosca, i gcás inarb infheidhme, agus cuirfidh sé údaráis faireachais margaidh atá inniúil maidir leis an gcóras intleachta saorga ardriosca lena mbaineann ar an eolas ina thaobh, agus i gcás inarb infheidhme, an comhlacht faoina dtugtar fógra a d’eisigh deimhniúchán don chóras intleachta saorga ardriosca sin i gcomhréir le hAirteagal 44, go háirithe i ndáil le cineál an neamhchomhlíonta agus aon ghníomhaíocht cheartaitheach ábhartha a rinneadh.
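The following sketch is a hypothetical illustration of the information duties in paragraphs 1 and 2 above: who a provider's internal tooling might flag for notification once a non-conformity, or a risk within the meaning of Article 79(1), is identified. The class, flags and recipient labels are assumptions made for the example only.
```python
# Illustrative routing of corrective-action notifications under paragraphs 1 and 2.
from dataclasses import dataclass

@dataclass
class NonConformity:
    system_id: str
    description: str
    presents_article_79_risk: bool
    corrective_action: str   # e.g. "bring into conformity", "withdraw", "disable", "recall"

def recipients_to_inform(issue: NonConformity,
                         has_deployers: bool,
                         has_authorised_representative: bool,
                         has_importers: bool,
                         has_notified_body_certificate: bool) -> list[str]:
    """List the parties a provider would inform for a given non-conformity."""
    parties = ["distributors"]
    if has_deployers:
        parties.append("deployers")
    if has_authorised_representative:
        parties.append("authorised representative")
    if has_importers:
        parties.append("importers")
    if issue.presents_article_79_risk:
        parties.append("competent market surveillance authorities")
        if has_notified_body_certificate:
            parties.append("notified body that issued the Article 44 certificate")
    return parties
```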
Authorised representatives of providers of high-risk AI systems
Ionadaithe údaraithe sholáthraithe na gcóras intleachta saorga ardriosca
1. Prior to making their high-risk AI systems available on the Union market, providers established in third countries shall, by written mandate, appoint an authorised representative which is established in the Union.
1. Sula gcuirfidh siad a gcórais intleachta saorga ardriosca ar fáil ar mhargadh an Aontais, ceapfaidh soláthraithe atá bunaithe i dtríú tíortha, trí shainordú i scríbhinn, ionadaí údaraithe atá bunaithe san Aontas.
2. The provider shall enable its authorised representative to perform the tasks specified in the mandate received from the provider.
2. Cuirfidh an soláthraí ar chumas a ionadaí údaraithe na cúraimí arna sonrú sa sainordú a gheofar ón soláthraí a dhéanamh.
3. The authorised representative shall perform the tasks specified in the mandate received from the provider. It shall provide a copy of the mandate to the market surveillance authorities upon request, in one of the official languages of the institutions of the Union, as indicated by the competent authority. For the purposes of this Regulation, the mandate shall empower the authorised representative to carry out the following tasks:
3. Feidhmeoidh an t-ionadaí údaraithe na cúraimí arna sonrú sa sainordú a gheofar ón soláthraí. Cuirfidh sé cóip den sainordú ar fáil do na húdaráis faireachais margaidh arna iarraidh sin, i gceann amháin de theangacha oifigiúla de chuid institiúidí an Aontais a chinnfidh an t-údarás inniúil. Chun críocha an Rialacháin seo, tabharfar de chumhacht, leis an sainordú, don ionadaí údaraithe na cúraimí seo a leanas a chur i gcrích:
(a)
verify that the EU declaration of conformity referred to in Article 47 and the technical documentation referred to in Article 11 have been drawn up and that an appropriate conformity assessment procedure has been carried out by the provider;
(a)
a fhíorú gur dréachtaíodh an dearbhú comhréireachta AE dá dtagraítear in Airteagal 47 agus an doiciméadacht theicniúil dá dtagraítear in Airteagal 11 agus go ndearna an soláthraí an nós imeachta um measúnú comhréireachta iomchuí;
(b)
keep at the disposal of the competent authorities and national authorities or bodies referred to in Article 74(10), for a period of 10 years after the high-risk AI system has been placed on the market or put into service, the contact details of the provider that appointed the authorised representative, a copy of the EU declaration of conformity referred to in Article 47, the technical documentation and, if applicable, the certificate issued by the notified body;
(b)
sonraí teagmhála an tsoláthraí a cheap an t-ionadaí údaraithe, cóip den dearbhú comhréireachta AE dá dtagraítear in Airteagal 47, an doiciméadacht theicniúil agus, más infheidhme, an deimhniú arna eisiúint ag an gcomhlacht faoina dtugtar fógra, a choimeád ar fáil do na húdaráis inniúla agus do na húdaráis náisiúnta nó do chomhlachtaí dá dtagraítear in Airteagal 74(10), ar feadh tréimhse 10 mbliana tar éis an córas intleachta saorga ardriosca a chur ar an margadh nó a chur i mbun seirbhíse;
(c)
provide a competent authority, upon a reasoned request, with all the information and documentation, including that referred to in point (b) of this subparagraph, necessary to demonstrate the conformity of a high-risk AI system with the requirements set out in Section 2, including access to the logs, as referred to in Article 12(1), automatically generated by the high-risk AI system, to the extent such logs are under the control of the provider;
(c)
ar iarraidh réasúnaithe, an fhaisnéis agus an doiciméadacht uile is gá, lena n-áirítear an fhaisnéis agus an doiciméadacht dá dtagraítear i bpointe (b) den fhomhír seo, a chur ar fáil d’údarás inniúil chun a léiriú go gcomhlíonann córas intleachta saorga ardriosca na ceanglais a leagtar amach i Roinn 2, lena n-áirítear rochtain ar na logaí, dá dtagraítear in Airteagal 12(1), a ghineann an córas intleachta saorga ardriosca go huathoibríoch sa mhéid atá na logaí sin faoi rialú an tsoláthraí;
(d)
cooperate with competent authorities, upon a reasoned request, in any action the latter take in relation to the high-risk AI system, in particular to reduce and mitigate the risks posed by the high-risk AI system;
(d)
comhoibriú le húdaráis inniúla, ar iarraidh réasúnaithe, in aon ghníomhaíocht a dhéanfaidh na húdaráis sin maidir leis an gcóras intleachta saorga ardriosca, go háirithe chun na rioscaí a bhaineann leis an gcóras intleachta saorga ardriosca a laghdú agus a mhaolú;
(e)
where applicable, comply with the registration obligations referred to in Article 49(1), or, if the registration is carried out by the provider itself, ensure that the information referred to in point 3 of Section A of Annex VIII is correct.
(e)
i gcás inarb infheidhme, na hoibleagáidí clárúcháin dá dtagraítear in Airteagal 49(1) a chomhlíonadh nó, más é an soláthraí féin a dhéanann an clárú, a áirithiú go bhfuil an fhaisnéis dá dtagraítear i bpointe 3 de Roinn A d’Iarscríbhinn VIII ceart.
The mandate shall empower the authorised representative to be addressed, in addition to or instead of the provider, by the competent authorities, on all issues related to ensuring compliance with this Regulation.
Leis an sainordú, tabharfar de chumhacht don ionadaí údaraithe déileáil leis na húdaráis inniúla, sa bhreis ar an soláthraí nó ina ionad, maidir le gach saincheist a bhaineann le comhlíonadh an Rialacháin seo a áirithiú.
4. The authorised representative shall terminate the mandate if it considers or has reason to consider the provider to be acting contrary to its obligations pursuant to this Regulation. In such a case, it shall immediately inform the relevant market surveillance authority, as well as, where applicable, the relevant notified body, about the termination of the mandate and the reasons therefor.
4. Foirceannfaidh an t-ionadaí údaraithe an sainordú má mheasann sé, nó má tá cúis aige lena mheas, go bhfuil an soláthraí ag gníomhú contrártha lena oibleagáidí faoin Rialachán seo. I gcás den sórt sin, cuirfidh sé an t-údarás ábhartha faireachais margaidh, agus, i gcás inarb infheidhme, an comhlacht ábhartha faoina dtugtar fógra, ar an eolas láithreach faoi fhoirceannadh an tsainordaithe agus faoi na cúiseanna atá leis.
Obligations of importers
Oibleagáidí allmhaireoirí
1. Before placing a high-risk AI system on the market, importers shall ensure that the system is in conformity with this Regulation by verifying that:
1. Sula gcuirfear córas intleachta saorga ardriosca ar an margadh, áiritheoidh allmhaireoirí go bhfuil an córas i gcomhréir leis an Rialachán seo trína fhíorú:
(a)
the relevant conformity assessment procedure referred to in Article 43 has been carried out by the provider of the high-risk AI system;
(a)
gur chuir soláthraí an chórais intleachta saorga ardriosca an nós imeachta ábhartha um measúnú comhréireachta dá dtagraítear in Airteagal 43 i bhfeidhm;
(b)
the provider has drawn up the technical documentation in accordance with Article 11 and Annex IV;
(b)
gur tharraing an soláthraí suas an doiciméadacht theicniúil i gcomhréir le hAirteagal 11 agus Iarscríbhinn IV;
(c)
the system bears the required CE marking and is accompanied by the EU declaration of conformity referred to in Article 47 and instructions for use;
(c)
go bhfuil an mharcáil CE is gá ar an gcóras agus go bhfuil an dearbhú comhréireachta AE dá dtagraítear in Airteagal 47 agus na treoracha úsáide is gá ag gabháil leis;
(d)
the provider has appointed an authorised representative in accordance with Article 22(1).
(d)
go bhfuil ionadaí údaraithe ceaptha ag an soláthraí i gcomhréir le hAirteagal 22(1).
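As a non-normative illustration, an importer might track the verifications in points (a) to (d) above as a simple pre-placement checklist. The data model below is a hypothetical example; the Regulation does not prescribe how the verification is recorded.
```python
# Illustrative importer checklist mirroring points (a) to (d) above.
from dataclasses import dataclass

@dataclass
class ImportDossier:
    conformity_assessment_done: bool           # point (a): Article 43 procedure carried out
    technical_documentation_drawn_up: bool     # point (b): Article 11 and Annex IV
    ce_marking_present: bool                   # point (c)
    eu_declaration_of_conformity: bool         # point (c): Article 47
    instructions_for_use_included: bool        # point (c)
    authorised_representative_appointed: bool  # point (d): Article 22(1)

def may_place_on_market(d: ImportDossier) -> bool:
    """True only if every verification in points (a) to (d) succeeds."""
    return all([
        d.conformity_assessment_done,
        d.technical_documentation_drawn_up,
        d.ce_marking_present,
        d.eu_declaration_of_conformity,
        d.instructions_for_use_included,
        d.authorised_representative_appointed,
    ])
```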
2. Where an importer has sufficient reason to consider that a high-risk AI system is not in conformity with this Regulation, or is falsified, or accompanied by falsified documentation, it shall not place the system on the market until it has been brought into conformity. Where the high-risk AI system presents a risk within the meaning of Article 79(1), the importer shall inform the provider of the system, the authorised representative and the market surveillance authorities to that effect.
2. I gcás ina bhfuil cúis leordhóthanach ag allmhaireoir a mheas nach bhfuil córas intleachta saorga ardriosca i gcomhréir leis an Rialachán seo, nó gur falsaíodh é, nó gur cuireadh doiciméadacht fhalsaithe leis, ní chuirfidh sé an córas ar an margadh go dtí go mbeidh sé tugtha chun comhréireachta. I gcás ina bhfuil riosca ag baint leis an gcóras intleachta saorga ardriosca de réir bhrí Airteagal 79(1), cuirfidh an t-allmhaireoir soláthraí an chórais, an t-ionadaí údaraithe agus na húdaráis faireachais margaidh ar an eolas faoi sin.
3. Importers shall indicate their name, registered trade name or registered trade mark, and the address at which they can be contacted on the high-risk AI system and on its packaging or its accompanying documentation, where applicable.
3. Cuirfidh allmhaireoirí a n-ainm, a dtrádainm cláraithe nó a dtrádmharc cláraithe agus an seoladh ag ar féidir teagmháil a dhéanamh leo maidir leis an gcóras intleachta saorga ardriosca agus ar a phacáistíocht nó sa doiciméadacht a ghabhann leis an gcóras, i gcás inarb infheidhme.
4. Importers shall ensure that, while a high-risk AI system is under their responsibility, storage or transport conditions, where applicable, do not jeopardise its compliance with the requirements set out in Section 2.
4. Áiritheoidh allmhaireoirí, fad is atá córas intleachta saorga ardriosca faoina bhfreagracht, nach gcuirfidh dálaí stórála ná iompair, i gcás inarb infheidhme, comhlíonadh na gceanglas a leagtar amach i Roinn 2 i mbaol.
5. Importers shall keep, for a period of 10 years after the high-risk AI system has been placed on the market or put into service, a copy of the certificate issued by the notified body, where applicable, of the instructions for use, and of the EU declaration of conformity referred to in Article 47.
5. Coinneoidh allmhaireoirí, go ceann tréimhse 10 mbliana tar éis an córas intleachta saorga ardriosca a chur ar an margadh nó i mbun seirbhíse, cóip den deimhniú arna eisiúint ag an gcomhlacht faoina dtugtar fógra, i gcás inarb infheidhme, de na treoracha úsáide agus den dearbhú comhréireachta AE dá dtagraítear in Airteagal 47.
6. Importers shall provide the relevant competent authorities, upon a reasoned request, with all the necessary information and documentation, including that referred to in paragraph 5, to demonstrate the conformity of a high-risk AI system with the requirements set out in Section 2 in a language which can be easily understood by them. For this purpose, they shall also ensure that the technical documentation can be made available to those authorities.
6. Cuirfidh allmhaireoirí ar fáil do na húdaráis inniúla ábhartha, ar iarraidh réasúnaithe, an fhaisnéis agus an doiciméadacht uile is gá, lena n-áirítear an fhaisnéis agus an doiciméadacht dá dtagraítear i mír 5, chun a léiriú go gcomhlíonann córas intleachta saorga ardriosca na ceanglais a leagtar amach i Roinn 2, i dteanga a bheidh sothuigthe dóibh. Chun na críche sin, áiritheoidh siad freisin gur féidir an doiciméadacht theicniúil a chur ar fáil do na húdaráis sin.
7. Importers shall cooperate with the relevant competent authorities in any action those authorities take in relation to a high-risk AI system placed on the market by the importers, in particular to reduce and mitigate the risks posed by it.
7. Comhoibreoidh allmhaireoirí leis na húdaráis inniúla náisiúnta ábhartha in aon ghníomhaíocht a dhéanfaidh na húdaráis sin maidir le córas intleachta saorga ardriosca arna chur ar an margadh ag na hallmhaireoirí, go háirithe chun na rioscaí a bhaineann leis a laghdú agus a mhaolú.
Obligations of distributors
Oibleagáidí na ndáileoirí
1. Before making a high-risk AI system available on the market, distributors shall verify that it bears the required CE marking, that it is accompanied by a copy of the EU declaration of conformity referred to in Article 47 and instructions for use, and that the provider and the importer of that system, as applicable, have complied with their respective obligations as laid down in Article 16, points (b) and (c) and Article 23(3).
1. Sula gcuirfidh dáileoirí córas intleachta saorga ardriosca ar fáil ar an margadh, fíoróidh siad go bhfuil an mharcáil comhréireachta CE is gá ar an gcóras, go bhfuil cóip den dearbhú comhréireachta AE dá dtagraítear in Airteagal 47 agus na treoracha úsáide is gá ag gabháil leis, agus gur chomhlíon soláthraí agus allmhaireoir an chórais sin, de réir mar is infheidhme, a gcuid oibleagáidí faoi seach a leagtar síos i bpointí (b) agus (c) d’Airteagal 16 agus in Airteagal 23(3).
2. Where a distributor considers or has reason to consider, on the basis of the information in its possession, that a high-risk AI system is not in conformity with the requirements set out in Section 2, it shall not make the high-risk AI system available on the market until the system has been brought into conformity with those requirements. Furthermore, where the high-risk AI system presents a risk within the meaning of Article 79(1), the distributor shall inform the provider or the importer of the system, as applicable, to that effect.
2. I gcás ina measann dáileoir, nó a bhfuil cúis aige a mheas, ar bhonn na faisnéise atá ina sheilbh, nach bhfuil córas intleachta saorga ardriosca i gcomhréir leis na ceanglais a leagtar amach i Roinn 2, ní chuirfidh sé an córas intleachta saorga ardriosca sin ar an margadh go dtí go mbeidh an córas tugtha chun comhréireachta leis na ceanglais sin. Thairis sin, i gcás ina bhfuil riosca ag baint leis an gcóras intleachta saorga ardriosca de réir bhrí Airteagal 79(1), cuirfidh an dáileoir soláthraí nó allmhaireoir an chórais, de réir mar is infheidhme, ar an eolas faoi sin.
3. Distributors shall ensure that, while a high-risk AI system is under their responsibility, storage or transport conditions, where applicable, do not jeopardise the compliance of the system with the requirements set out in Section 2.
3. Áiritheoidh dáileoirí, fad is atá córas intleachta saorga ardriosca faoina bhfreagracht, nach gcuirfidh dálaí stórála ná iompair, i gcás inarb infheidhme, comhlíonadh na gceanglas a leagtar amach i Roinn 2 i mbaol.
4. A distributor that considers or has reason to consider, on the basis of the information in its possession, a high-risk AI system which it has made available on the market not to be in conformity with the requirements set out in Section 2, shall take the corrective actions necessary to bring that system into conformity with those requirements, to withdraw it or recall it, or shall ensure that the provider, the importer or any relevant operator, as appropriate, takes those corrective actions. Where the high-risk AI system presents a risk within the meaning of Article 79(1), the distributor shall immediately inform the provider or importer of the system and the authorities competent for the high-risk AI system concerned, giving details, in particular, of the non-compliance and of any corrective actions taken.
4. Dáileoir a mheasann, nó a bhfuil cúis aige a mheas, ar bhonn na faisnéise atá ina sheilbh, nach bhfuil córas intleachta saorga ardriosca a chuir sé ar fáil ar an margadh i gcomhréir leis na ceanglais a leagtar amach i Roinn 2, déanfaidh sé na bearta ceartaitheacha is gá chun an córas sin a thabhairt chun comhréireachta leis na ceanglais sin, chun é a astarraingt nó chun é a aisghairm, nó áiritheoidh sé go ndéanfaidh an soláthraí, an t-allmhaireoir nó aon oibreoir ábhartha, de réir mar is iomchuí, na bearta ceartaitheacha sin. I gcás ina bhfuil riosca ag baint leis an gcóras intleachta saorga ardriosca de réir bhrí Airteagal 79(1), cuirfidh an dáileoir soláthraí nó allmhaireoir an chórais agus na húdaráis atá inniúil maidir leis an gcóras intleachta saorga ardriosca lena mbaineann ar an eolas láithreach, agus tabharfaidh sé tuairisc, go háirithe, ar an neamhchomhlíonadh agus ar aon bheart ceartaitheach a rinneadh.
5. Upon a reasoned request from a relevant competent authority, distributors of a high-risk AI system shall provide that authority with all the information and documentation regarding their actions pursuant to paragraphs 1 to 4 necessary to demonstrate the conformity of that system with the requirements set out in Section 2.
5. Ar iarraidh réasúnaithe ó údarás inniúil ábhartha, cuirfidh dáileoirí córais intleachta saorga ardriosca an fhaisnéis agus an doiciméadacht uile is gá ar fáil don údarás sin maidir lena ngníomhaíochtaí de bhun mhíreanna 1 go 4 chun comhréireacht an chórais sin leis na ceanglais a leagtar amach i Roinn 2 a léiriú.
6. Distributors shall cooperate with the relevant competent authorities in any action those authorities take in relation to a high-risk AI system made available on the market by the distributors, in particular to reduce or mitigate the risk posed by it.
6. Comhoibreoidh dáileoirí leis na húdaráis inniúla ábhartha in aon ghníomhaíocht a dhéanfaidh na húdaráis sin maidir le córas intleachta saorga ardriosca a chuir na dáileoirí ar fáil ar an margadh, go háirithe chun na rioscaí a bhaineann leis a laghdú agus a mhaolú.
Responsibilities along the AI value chain
Freagrachtaí feadh shlabhra luacha na hintleachta saorga
1. Any distributor, importer, deployer or other third-party shall be considered to be a provider of a high-risk AI system for the purposes of this Regulation and shall be subject to the obligations of the provider under Article 16, in any of the following circumstances:
1. Measfar gur soláthraí córas intleachta saorga ardriosca é aon dáileoir, allmhaireoir, úsáideoir gairmiúil nó aon tríú páirtí eile chun críocha an Rialacháin seo agus beidh sé faoi réir oibleagáidí an tsoláthraí faoi Airteagal 16, in aon cheann de na cásanna seo a leanas:
(a)
they put their name or trademark on a high-risk AI system already placed on the market or put into service, without prejudice to contractual arrangements stipulating that the obligations are otherwise allocated;
(a)
cuireann siad a n-ainm nó a dtrádmharc ar chóras intleachta saorga ardriosca atá curtha ar an margadh nó i mbun seirbhíse cheana, gan dochar do shocruithe conarthacha lena sonraítear go leithdháilfear na hoibleagáidí ar shlí eile;
(b)
they make a substantial modification to a high-risk AI system that has already been placed on the market or has already been put into service in such a way that it remains a high-risk AI system pursuant to Article 6;
(b)
déanann siad modhnú substaintiúil ar chóras intleachta saorga ardriosca a cuireadh ar an margadh nó i mbun seirbhíse cheana agus ar bhealach go leanfaidh sé de bheith ina chóras intleachta saorga ardriosca de bhun Airteagal 6;
(c)
they modify the intended purpose of an AI system, including a general-purpose AI system, which has not been classified as high-risk and has already been placed on the market or put into service in such a way that the AI system concerned becomes a high-risk AI system in accordance with Article 6.
(c)
modhnaíonn siad an chríoch atá beartaithe do chóras intleachta saorga, lena n-áirítear córas intleachta saorga ilchuspóireach, nach bhfuil aicmithe mar chóras ardriosca agus a cuireadh ar an margadh nó i mbun seirbhíse cheana ar bhealach a fhágann go mbeidh an córas intleachta saorga lena mbaineann ina chóras intleachta saorga ardriosca i gcomhréir le hAirteagal 6.
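Purely for illustration, the three triggers in points (a) to (c) above, under which a distributor, importer, deployer or other third party is treated as the provider, can be summarised as a simple decision rule. The boolean flags below are hypothetical simplifications of the underlying legal tests, not a substitute for them.
```python
# Hypothetical sketch of the "becomes a provider" triggers in points (a) to (c).
def becomes_provider(puts_own_name_or_trademark: bool,
                     makes_substantial_modification: bool,
                     still_high_risk_after_modification: bool,
                     changes_intended_purpose_to_high_risk: bool) -> bool:
    """Return True if any of the circumstances in points (a) to (c) applies."""
    return (
        puts_own_name_or_trademark                                                   # point (a)
        or (makes_substantial_modification and still_high_risk_after_modification)   # point (b)
        or changes_intended_purpose_to_high_risk                                     # point (c)
    )
```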
2. Where the circumstances referred to in paragraph 1 occur, the provider that initially placed the AI system on the market or put it into service shall no longer be considered to be a provider of that specific AI system for the purposes of this Regulation. That initial provider shall closely cooperate with new providers and shall make available the necessary information and provide the reasonably expected technical access and other assistance that are required for the fulfilment of the obligations set out in this Regulation, in particular regarding the compliance with the conformity assessment of high-risk AI systems. This paragraph shall not apply in cases where the initial provider has clearly specified that its AI system is not to be changed into a high-risk AI system and therefore does not fall under the obligation to hand over the documentation.
2. Sna himthosca dá dtagraítear i mír 1, ní mheasfar a thuilleadh gur soláthraí an chórais intleachta saorga áirithe sin chun críocha an Rialacháin seo é an soláthraí a chuir an córas intleachta saorga ar an margadh ar dtús nó chuir i mbun seirbhíse é ar dtús. Oibreoidh an soláthraí tosaigh sin i ndlúthchomhar le soláthraithe nua agus cuirfidh sé an fhaisnéis is gá ar fáil agus soláthróidh sé an rochtain theicniúil agus an cúnamh eile a mbeifí ag súil go réasúnta leis agus a bheadh riachtanach chun na hoibleagáidí a leagtar amach sa Rialachán seo a chomhlíonadh, go háirithe maidir le comhlíonadh an mheasúnaithe comhréireachta ar chórais intleachta saorga ardriosca. Ní bheidh feidhm ag an mír seo i gcásanna ina bhfuil sé sonraithe go soiléir ag an soláthraí tosaigh nach bhfuil a chóras intleachta saorga le hathrú ina chóras intleachta saorga ardriosca agus, dá bhrí sin, ní thagann sé faoin oibleagáid an doiciméadacht a thabhairt ar láimh.
3. In the case of high-risk AI systems that are safety components of products covered by the Union harmonisation legislation listed in Section A of Annex I, the product manufacturer shall be considered to be the provider of the high-risk AI system, and shall be subject to the obligations under Article 16 under either of the following circumstances:
3. I gcás córas intleachta saorga ardriosca ar comhpháirteanna sábháilteachta táirgí iad atá cumhdaithe faoi reachtaíocht chomhchuibhithe an Aontais a liostaítear i Roinn A d’Iarscríbhinn I, measfar gurb é monaróir na dtáirgí soláthraí an chórais intleachta saorga ardriosca, agus beidh sé faoi réir na n-oibleagáidí faoi Airteagal 16 i gceachtar de na cásanna seo a leanas:
(a)
the high-risk AI system is placed on the market together with the product under the name or trademark of the product manufacturer;
(a)
i gcás ina gcuirtear an córas intleachta saorga ardriosca ar an margadh in éineacht leis an táirge faoi ainm nó trádmharc mhonaróir an táirge;
(b)
the high-risk AI system is put into service under the name or trademark of the product manufacturer after the product has been placed on the market.
(b)
i gcás ina gcuirtear an córas intleachta saorga ardriosca i mbun seirbhíse faoi ainm nó trádmharc mhonaróir an táirge tar éis don táirge a bheith curtha ar an margadh.
4. The provider of a high-risk AI system and the third party that supplies an AI system, tools, services, components, or processes that are used or integrated in a high-risk AI system shall, by written agreement, specify the necessary information, capabilities, technical access and other assistance based on the generally acknowledged state of the art, in order to enable the provider of the high-risk AI system to fully comply with the obligations set out in this Regulation. This paragraph shall not apply to third parties making accessible to the public tools, services, processes, or components, other than general-purpose AI models, under a free and open-source licence.
4. Déanfaidh soláthraí córais intleachta saorga ardriosca agus an tríú páirtí a sholáthraíonn córas intleachta saorga, uirlisí, seirbhísí, comhpháirteanna, nó próisis a úsáidtear nó a chomhtháthaítear i gcóras intleachta saorga ardriosca, trí chomhaontú i scríbhinn, an fhaisnéis, na cumais, an rochtain theicniúil, agus an cúnamh eile is gá a shonrú bunaithe ar an úrscothacht ghnáthaitheanta, chun a chur ar chumas sholáthraí an chórais intleachta saorga ardriosca na hoibleagáidí a leagtar amach sa Rialachán seo a chomhlíonadh go hiomlán. Ní bheidh feidhm ag an mír seo maidir le tríú páirtithe a dhéanann uirlisí, seirbhísí, próisis, nó comhpháirteanna, seachas samhlacha intleachta saorga ilchuspóireacha, inrochtana don phobal faoi cheadúnas saor agus foinse oscailte.
The AI Office may develop and recommend voluntary model terms for contracts between providers of high-risk AI systems and third parties that supply tools, services, components or processes that are used for or integrated into high-risk AI systems. When developing those voluntary model terms, the AI Office shall take into account possible contractual requirements applicable in specific sectors or business cases. The voluntary model terms shall be published and be available free of charge in an easily usable electronic format.
Féadfaidh an Oifig um Intleacht Shaorga téarmaí eiseamláireacha deonacha do chonarthaí a fhorbairt agus a mholadh idir soláthraithe córas intleachta saorga ardriosca agus tríú páirtithe a sholáthraíonn uirlisí, seirbhísí, comhpháirteanna nó próisis a úsáidtear le haghaidh córais intleachta saorga ardriosca nó a chomhtháthaítear i gcórais intleachta saorga ardriosca. Nuair a bheidh na téarmaí eiseamláireacha deonacha á bhforbairt aici, cuirfidh an Oifig um Intleacht Shaorga san áireamh ceanglais chonarthacha a d’fhéadfadh a bheith infheidhme in earnálacha sonracha nó i gcásanna gnó sonracha. Foilseofar na téarmaí eiseamláireacha deonacha agus beidh siad ar fáil saor in aisce i bhformáid leictreonach sho-úsáidte.
5. Paragraphs 2 and 3 are without prejudice to the need to observe and protect intellectual property rights, confidential business information and trade secrets in accordance with Union and national law.
5. Tá míreanna 2 agus 3 gan dochar don ghá cearta maoine intleachtúla, faisnéis rúnda ghnó agus rúin trádála a urramú agus a chosaint i gcomhréir le dlí an Aontais agus leis an dlí náisiúnta.
Obligations of deployers of high-risk AI systems
Oibleagáidí úsáideoirí gairmiúla na gcóras intleachta saorga ardriosca
1. Deployers of high-risk AI systems shall take appropriate technical and organisational measures to ensure they use such systems in accordance with the instructions for use accompanying the systems, pursuant to paragraphs 3 and 6.
1. Déanfaidh úsáideoirí gairmiúla córas intleachta saorga ardriosca bearta teicniúla agus eagraíochtúla iomchuí chun a áirithiú go n-úsáidfidh siad na córais sin i gcomhréir leis na treoracha úsáide a ghabhann leo, de bhun mhíreanna 3 agus 6.
2. Deployers shall assign human oversight to natural persons who have the necessary competence, training and authority, as well as the necessary support.
2. Sannfaidh úsáideoirí gairmiúla maoirseacht dhaonna do dhaoine nádúrtha a bhfuil an inniúlacht, an oiliúint agus an t-údarás is gá, chomh maith leis an tacaíocht is gá, acu.
3. The obligations set out in paragraphs 1 and 2 are without prejudice to other deployer obligations under Union or national law and to the deployer’s freedom to organise its own resources and activities for the purpose of implementing the human oversight measures indicated by the provider.
3. Tá na hoibleagáidí a leagtar amach i míreanna 1 agus 2 gan dochar d’oibleagáidí eile úsáideoirí gairmiúla faoi dhlí an Aontais nó faoin dlí náisiúnta ná do shaoirse an úsáideora ghairmiúil a chuid acmhainní féin agus a ghníomhaíochtaí féin a eagrú chun na bearta formhaoirseachta daonna arna gcur in iúl ag an soláthraí a chur chun feidhme.
4. Without prejudice to paragraphs 1 and 2, to the extent the deployer exercises control over the input data, that deployer shall ensure that input data is relevant and sufficiently representative in view of the intended purpose of the high-risk AI system.
4. Gan dochar do mhíreanna 1 agus 2, sa mhéid go bhfeidhmíonn an t-úsáideoir gairmiúil smacht ar na sonraí ionchuir, áiritheoidh an t-úsáideoir gairmiúil sin go mbeidh na sonraí ionchuir ábhartha agus ionadaíoch go leordhóthanach i bhfianaise na críche atá beartaithe don chóras intleachta saorga ardriosca.
5. Deployers shall monitor the operation of the high-risk AI system on the basis of the instructions for use and, where relevant, inform providers in accordance with Article 72. Where deployers have reason to consider that the use of the high-risk AI system in accordance with the instructions may result in that AI system presenting a risk within the meaning of Article 79(1), they shall, without undue delay, inform the provider or distributor and the relevant market surveillance authority, and shall suspend the use of that system. Where deployers have identified a serious incident, they shall also immediately inform first the provider, and then the importer or distributor and the relevant market surveillance authorities of that incident. If the deployer is not able to reach the provider, Article 73 shall apply mutatis mutandis. This obligation shall not cover sensitive operational data of deployers of AI systems which are law enforcement authorities.
5. Déanfaidh úsáideoirí gairmiúla faireachán ar oibriú an chórais intleachta saorga ardriosca ar bhonn na dtreoracha úsáide agus, i gcás inarb ábhartha, cuirfidh siad na soláthraithe ar an eolas i gcomhréir le hAirteagal 72. I gcás ina mbeidh cúis ag úsáideoirí gairmiúla a mheas go bhféadfadh riosca de réir bhrí Airteagal 79(1) a bheith ag baint leis an gcóras intleachta saorga mar thoradh ar úsáid an chórais intleachta saorga ardriosca i gcomhréir leis na treoracha, cuirfidh siad an soláthraí nó an dáileoir agus an t-údarás ábhartha faireachais margaidh ar an eolas gan moill mhíchuí, agus cuirfidh siad úsáid an chórais sin ar fionraí. I gcás ina sainaithníonn úsáideoirí gairmiúla teagmhas tromchúiseach, láithreach bonn, cuirfidh siad an soláthraí ar dtús agus ansin an t-allmhaireoir nó an dáileoir agus na húdaráis ábhartha faireachais margaidh ar an eolas faoin teagmhas sin. Mura féidir leis an úsáideoir gairmiúil teagmháil a dhéanamh leis an soláthraí, beidh feidhm ag Airteagal 73 mutatis mutandis. Ní chumhdófar leis an oibleagáid sin sonraí oibríochtúla íogaire úsáideoirí gairmiúla córas intleachta saorga ar údaráis forfheidhmithe dlí iad.
For deployers that are financial institutions subject to requirements regarding their internal governance, arrangements or processes under Union financial services law, the monitoring obligation set out in the first subparagraph shall be deemed to be fulfilled by complying with the rules on internal governance arrangements, processes and mechanisms pursuant to the relevant financial service law.
I gcás úsáideoirí gairmiúla ar institiúidí airgeadais iad atá faoi réir ceanglais maidir lena rialachas, socruithe nó próisis inmheánacha faoi dhlí sheirbhísí airgeadais an Aontais, measfar go gcomhlíontar an oibleagáid faireacháin a leagtar amach sa chéad fhomhír trí na rialacha maidir le socruithe, próisis agus sásraí an rialachais inmheánaigh de bhun an dlí seirbhísí airgeadais ábhartha a chomhlíonadh.
6. Deployers of high-risk AI systems shall keep the logs automatically generated by that high-risk AI system to the extent such logs are under their control, for a period appropriate to the intended purpose of the high-risk AI system, of at least six months, unless provided otherwise in applicable Union or national law, in particular in Union law on the protection of personal data.
6. Coimeádfaidh úsáideoirí gairmiúla córas intleachta saorga ardriosca na logaí a ghintear go huathoibríoch leis an gcóras intleachta saorga ardriosca sin a mhéid atá na logaí sin faoina rialú, ar feadh tréimhse is iomchuí don chríoch atá beartaithe don chóras intleachta saorga ardriosca, tréimhse 6 mhí ar a laghad, mura bhforáiltear a mhalairt i ndlí an Aontais nó sa dlí náisiúnta is infheidhme, go háirithe i ndlí an Aontais maidir le cosaint sonraí pearsanta.
Deployers that are financial institutions subject to requirements regarding their internal governance, arrangements or processes under Union financial services law shall maintain the logs as part of the documentation kept pursuant to the relevant Union financial service law.
Maidir le húsáideoirí gairmiúla ar institiúidí airgeadais iad atá faoi réir ceanglais maidir lena rialachas, socruithe nó próisis inmheánacha faoi dhlí sheirbhísí airgeadais an Aontais, coinneoidh siad na logaí mar chuid den doiciméadacht a choimeádtar de bhun dhlí sheirbhísí airgeadais ábhartha an Aontais.
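A hedged, non-normative sketch of the retention rule in paragraph 6 above: logs under the deployer's control are kept for a period appropriate to the intended purpose and at least six months, unless other applicable Union or national law provides for a longer period. The helper names are hypothetical, and reading "six months" as 183 days is an assumption made only for the example.
```python
# Illustrative log-retention check for paragraph 6 above.
from datetime import datetime, timedelta

MINIMUM_RETENTION = timedelta(days=183)  # assumed reading of "at least six months"

def may_delete_log(created_at: datetime, now: datetime,
                   longer_period_required_by_other_law: timedelta | None = None) -> bool:
    """True if the log entry is older than the applicable retention period."""
    retention = max(MINIMUM_RETENTION, longer_period_required_by_other_law or timedelta(0))
    return now - created_at >= retention
```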
7. Before putting into service or using a high-risk AI system at the workplace, deployers who are employers shall inform workers’ representatives and the affected workers that they will be subject to the use of the high-risk AI system. This information shall be provided, where applicable, in accordance with the rules and procedures laid down in Union and national law and practice on information of workers and their representatives.
7. Sula gcuirfear córas intleachta saorga ardriosca i mbun seirbhíse nó sula n-úsáidfear é san ionad oibre, cuirfidh úsáideoirí gairmiúla ar fostóirí iad in iúl d’ionadaithe na n-oibrithe agus do na hoibrithe lena mbaineann go mbeidh siad faoi réir úsáid an chórais intleachta saorga ardriosca. Cuirfear an fhaisnéis sin ar fáil, i gcás inarb infheidhme, i gcomhréir leis na rialacha agus na nósanna imeachta a leagtar síos i ndlí agus cleachtas an Aontais agus i ndlí agus cleachtas náisiúnta maidir le faisnéis a bhaineann le hoibrithe agus a n-ionadaithe.
8. Deployers of high-risk AI systems that are public authorities, or Union institutions, bodies, offices or agencies shall comply with the registration obligations referred to in Article 49. When such deployers find that the high-risk AI system that they envisage using has not been registered in the EU database referred to in Article 71, they shall not use that system and shall inform the provider or the distributor.
8. Úsáideoirí gairmiúla córas intleachta saorga ardriosca ar údaráis phoiblí nó institiúidí, comhlachtaí, oifigí nó gníomhaireachtaí an Aontais iad, comhlíonfaidh siad na hoibleagáidí clárúcháin dá dtagraítear in Airteagal 49. I gcás ina bhfaigheann na húsáideoirí gairmiúla sin amach nár cláraíodh an córas intleachta saorga ardriosca a bheartaíonn siad a úsáid i mbunachar sonraí an Aontais dá dtagraítear in Airteagal 71, ní úsáidfidh siad an córas sin agus cuirfidh siad an soláthraí nó an dáileoir ar an eolas.
9. Where applicable, deployers of high-risk AI systems shall use the information provided under Article 13 of this Regulation to comply with their obligation to carry out a data protection impact assessment under Article 35 of Regulation (EU) 2016/679 or Article 27 of Directive (EU) 2016/680.
9. I gcás inarb infheidhme, bainfidh úsáideoirí gairmiúla córas intleachta saorga ardriosca úsáid as an bhfaisnéis arna soláthar faoi Airteagal 13 den Rialachán seo chun a n-oibleagáid measúnú tionchair ar chosaint sonraí a dhéanamh faoi Airteagal 35 de Rialachán (AE) 2016/679 nó Airteagal 27 de Threoir (AE) 2016/680 a chomhlíonadh.
10. Without prejudice to Directive (EU) 2016/680, in the framework of an investigation for the targeted search of a person suspected or convicted of having committed a criminal offence, the deployer of a high-risk AI system for post-remote biometric identification shall request an authorisation, ex ante, or without undue delay and no later than 48 hours, by a judicial authority or an administrative authority whose decision is binding and subject to judicial review, for the use of that system, except when it is used for the initial identification of a potential suspect based on objective and verifiable facts directly linked to the offence. Each use shall be limited to what is strictly necessary for the investigation of a specific criminal offence.
10. Gan dochar do Threoir (AE) 2016/680, faoi chuimsiú imscrúdú ar chuardach spriocdhírithe ar dhuine atá faoi amhras nó atá ciontaithe i gcion coiriúil a dhéanamh, iarrfaidh úsáideoir gairmiúil córais intleachta saorga ardriosca le haghaidh córas cian-sainaitheanta bithmhéadraí iar-aimseartha údarú, ex ante, nó gan moill mhíchuí agus tráth nach déanaí ná 48 uair an chloig, ó údarás breithiúnach nó údarás riaracháin a bhfuil a chinneadh ceangailteach agus faoi réir athbhreithniú breithiúnach, chun an córas sin a úsáid, ach amháin i gcás ina n-úsáidtear é chun duine a d’fhéadfadh a bheith faoi amhras a shainaithint den chéad uair bunaithe ar fhíorais oibiachtúla infhíoraithe a bhaineann go díreach leis an gcion. Beidh gach úsáid teoranta don mhéid atá fíor-riachtanach chun imscrúdú a dhéanamh ar chion coiriúil sonrach.
If the authorisation requested pursuant to the first subparagraph is rejected, the use of the post-remote biometric identification system linked to that requested authorisation shall be stopped with immediate effect and the personal data linked to the use of the high-risk AI system for which the authorisation was requested shall be deleted.
Má dhiúltaítear don údarú a iarrtar de bhun na chéad fomhíre, cuirfear deireadh, le héifeacht láithreach, le húsáid an chórais cian-sainaitheanta bithmhéadraí iar-aimseartha atá nasctha leis an údarú iarrtha sin agus scriosfar na sonraí pearsanta atá nasctha le húsáid an chórais intleachta saorga ardriosca ar iarradh an t-údarú ina leith.
In no case shall such high-risk AI system for post-remote biometric identification be used for law enforcement purposes in an untargeted way, without any link to a criminal offence, a criminal proceeding, a genuine and present or genuine and foreseeable threat of a criminal offence, or the search for a specific missing person. It shall be ensured that no decision that produces an adverse legal effect on a person may be taken by the law enforcement authorities based solely on the output of such post-remote biometric identification systems.
I gcás ar bith, ní úsáidfear córas intleachta saorga ardriosca mar sin le haghaidh córas cian-sainaitheanta bithmhéadraí iar-aimseartha chun críocha fhorfheidhmiú an dlí ar bhealach nach bhfuil spriocdhírithe, gan aon nasc le cion coiriúil, le himeacht coiriúil, le fíorbhagairt atá ann nó fíorbhagairt intuartha maidir le cion coiriúil, ná le duine sonrach atá ar iarraidh a chuardach. Áiritheofar nach bhféadfaidh na húdaráis forfheidhmithe dlí aon chinneadh a dhéanamh a mbeidh éifeacht dhíobhálach dhlíthiúil aige ar dhuine bunaithe ar aschur na gcóras cian-sainaitheanta bithmhéadraí iar-aimseartha sin amháin.
This paragraph is without prejudice to Article 9 of Regulation (EU) 2016/679 and Article 10 of Directive (EU) 2016/680 for the processing of biometric data.
Tá an mhír seo gan dochar d’Airteagal 9 de Rialachán (AE) 2016/679 agus d’Airteagal 10 de Threoir (AE) 2016/680 i ndáil le sonraí bithmhéadracha a phróiseáil.
Regardless of the purpose or deployer, each use of such high-risk AI systems shall be documented in the relevant police file and shall be made available to the relevant market surveillance authority and the national data protection authority upon request, excluding the disclosure of sensitive operational data related to law enforcement. This subparagraph shall be without prejudice to the powers conferred by Directive (EU) 2016/680 on supervisory authorities.
Gan beann ar chuspóir na húsáide ná cuspóir an úsáideora ghairmiúil, déanfar gach úsáid a bhaintear as na córais intleachta saorga ardriosca sin a dhoiciméadú sa chomhad póilíneachta ábhartha agus cuirfear ar fáil í don údarás ábhartha faireachais margaidh agus don údarás náisiúnta um chosaint sonraí arna iarraidh sin, cé is moite de nochtadh sonraí oibríochtúla íogaire a bhaineann le forfheidhmiú an dlí. Beidh an fhomhír seo gan dochar do na cumhachtaí a thugtar d’údaráis mhaoirseachta le Treoir (AE) 2016/680.
Deployers shall submit annual reports to the relevant market surveillance and national data protection authorities on their use of post-remote biometric identification systems, excluding the disclosure of sensitive operational data related to law enforcement. The reports may be aggregated to cover more than one deployment.
Cuirfidh úsáideoirí gairmiúla tuarascálacha bliantúla faoi bhráid na n-údarás ábhartha faireachais margaidh agus na n-údarás náisiúnta um chosaint sonraí maidir leis an úsáid a bhaineann siad as córas cian-sainaitheanta bithmhéadraí iar-aimseartha, cé is moite de nochtadh sonraí oibríochtúla íogaire a bhaineann le forfheidhmiú an dlí. Féadfar na tuarascálacha a chomhiomlánú chun níos mó ná imscaradh amháin a chumhdach.
Member States may introduce, in accordance with Union law, more restrictive laws on the use of post-remote biometric identification systems.
Féadfaidh na Ballstáit, i gcomhréir le dlí an Aontais, dlíthe níos sriantaí a thabhairt isteach maidir le húsáid córas cian-sainaitheanta bithmhéadraí iar-aimseartha.
11. Without prejudice to Article 50 of this Regulation, deployers of high-risk AI systems referred to in Annex III that make decisions or assist in making decisions related to natural persons shall inform the natural persons that they are subject to the use of the high-risk AI system. For high-risk AI systems used for law enforcement purposes Article 13 of Directive (EU) 2016/680 shall apply.
11. Maidir le húsáideoirí gairmiúla na gcóras intleachta saorga ardriosca dá dtagraítear in Iarscríbhinn III a dhéanann cinntí nó a chabhraíonn le cinntí a dhéanamh maidir le daoine nádúrtha, déanfaidh siad, gan dochar d’Airteagal 50 den Rialachán seo, na daoine nádúrtha a chur ar an eolas go bhfuil siad faoi réir úsáid an chórais intleachta saorga ardriosca. I gcás córas intleachta saorga ardriosca a úsáidtear chun críocha fhorfheidhmiú an dlí, beidh feidhm ag Airteagal 13 de Threoir (AE) 2016/680.
12. Deployers shall cooperate with the relevant competent authorities in any action those authorities take in relation to the high-risk AI system in order to implement this Regulation.
12. Comhoibreoidh úsáideoirí gairmiúla leis na húdaráis inniúla ábhartha maidir le haon ghníomhaíocht a dhéanfaidh na húdaráis sin i ndáil leis an gcóras intleachta saorga ardriosca chun an Rialachán seo a chur chun feidhme.
Fundamental rights impact assessment for high-risk AI systems
Measúnú tionchair ar chearta bunúsacha le haghaidh córais intleachta saorga ardriosca
1. Prior to deploying a high-risk AI system referred to in Article 6(2), with the exception of high-risk AI systems intended to be used in the area listed in point 2 of Annex III, deployers that are bodies governed by public law, or are private entities providing public services, and deployers of high-risk AI systems referred to in points 5 (b) and (c) of Annex III, shall perform an assessment of the impact on fundamental rights that the use of such system may produce. For that purpose, deployers shall perform an assessment consisting of:
1. Sula n-úsáidfear córas intleachta saorga ardriosca dá dtagraítear in Airteagal 6(2), cé is moite de chórais intleachta saorga ardriosca atá beartaithe lena n-úsáid sa réimse a liostaítear i bpointe 2 d’Iarscríbhinn III, déanfaidh úsáideoirí gairmiúla ar comhlachtaí iad a rialaítear leis an dlí poiblí, nó ar eintitis phríobháideacha iad a sholáthraíonn seirbhísí poiblí, agus úsáideoirí gairmiúla na gcóras intleachta saorga ardriosca dá dtagraítear i bpointí 5 (b) agus (c) d’Iarscríbhinn III, measúnú ar an tionchar a d’fhéadfadh a bheith ag úsáid an chórais sin ar chearta bunúsacha. Chun na críche sin, áireofar an méid seo a leanas sa mheasúnú a dhéanfaidh úsáideoirí gairmiúla:
(a)
a description of the deployer’s processes in which the high-risk AI system will be used in line with its intended purpose;
(a)
tuairisc ar na próisis a bheidh ag an úsáideoir gairmiúil agus an córas intleachta saorga ardriosca á úsáid i gcomhréir leis an gcríoch atá beartaithe dó;
(b)
a description of the period of time within which, and the frequency with which, each high-risk AI system is intended to be used;
(b)
tuairisc ar an tréimhse ama ina bhfuil sé beartaithe gach córas intleachta saorga ardriosca a úsáid, agus a mhinice a úsáidfear é;
(c)
the categories of natural persons and groups likely to be affected by its use in the specific context;
(c)
catagóirí daoine nádúrtha agus grúpaí ar dócha go ndéanfaidh a úsáid sa chomhthéacs sonrach difear dóibh;
(d)
the specific risks of harm likely to have an impact on the categories of natural persons or groups of persons identified pursuant to point (c) of this paragraph, taking into account the information given by the provider pursuant to Article 13;
(d)
na rioscaí sonracha díobhála ar dócha go mbeidh tionchar acu ar na catagóirí daoine nádúrtha nó ar na grúpaí daoine a shainaithnítear de bhun phointe (c) den mhír seo, agus an fhaisnéis a thabharfaidh an soláthraí de bhun Airteagal 13 á cur san áireamh;
(e)
a description of the implementation of human oversight measures, according to the instructions for use;
(e)
tuairisc ar chur chun feidhme na mbeart formhaoirseachta daonna, de réir na dtreoracha úsáide;
(f)
the measures to be taken in the case of the materialisation of those risks, including the arrangements for internal governance and complaint mechanisms.
(f)
na bearta atá le déanamh i gcás ina dtarlaíonn na rioscaí sin, lena n-áirítear na socruithe maidir le rialachas inmheánach agus sásraí gearáin.
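As a non-normative illustration, the elements (a) to (f) of the fundamental rights impact assessment listed above could be captured in a simple structured record, which could also sit behind the questionnaire template referred to in paragraph 5 of this Article. The field names and the completeness check are hypothetical examples, not a prescribed template.
```python
# Illustrative record of a fundamental rights impact assessment (points (a)-(f)).
from dataclasses import dataclass

@dataclass
class FundamentalRightsImpactAssessment:
    deployer_processes: str                     # (a) processes in which the system is used
    period_and_frequency_of_use: str            # (b)
    affected_categories_of_persons: list[str]   # (c)
    specific_risks_of_harm: list[str]           # (d) informed by Article 13 information
    human_oversight_measures: str               # (e) per the instructions for use
    measures_if_risks_materialise: str          # (f) incl. governance and complaint mechanisms
    completed_on: str = ""                      # e.g. an ISO date, kept for the deployer's own records

def is_complete(fria: FundamentalRightsImpactAssessment) -> bool:
    """Coarse completeness check before notifying the market surveillance authority."""
    return all([
        fria.deployer_processes,
        fria.period_and_frequency_of_use,
        fria.affected_categories_of_persons,
        fria.specific_risks_of_harm,
        fria.human_oversight_measures,
        fria.measures_if_risks_materialise,
    ])
```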
2. The obligation laid down in paragraph 1 applies to the first use of the high-risk AI system. The deployer may, in similar cases, rely on previously conducted fundamental rights impact assessments or existing impact assessments carried out by the provider. If, during the use of the high-risk AI system, the deployer considers that any of the elements listed in paragraph 1 has changed or is no longer up to date, the deployer shall take the necessary steps to update the information.
2. Tá feidhm ag an oibleagáid a leagtar síos i mír 1 maidir le céadúsáid an chórais intleachta saorga ardriosca. Féadfaidh an t-úsáideoir gairmiúil, i gcásanna comhchosúla, brath ar mheasúnuithe tionchair ar chearta bunúsacha a rinneadh roimhe sin nó ar mheasúnuithe tionchair a rinne soláthraí cheana. Más rud é, le linn úsáid an chórais intleachta saorga ardriosca, go measann an t-úsáideoir gairmiúil gur athraigh aon cheann de na heilimintí a liostaítear i mír 1 nó nach bhfuil sé cothrom le dáta a thuilleadh, déanfaidh an t-úsáideoir gairmiúil na bearta is gá chun an fhaisnéis a thabhairt cothrom le dáta.
3. Once the assessment referred to in paragraph 1 of this Article has been performed, the deployer shall notify the market surveillance authority of its results, submitting the filled-out template referred to in paragraph 5 of this Article as part of the notification. In the case referred to in Article 46(1), deployers may be exempt from that obligation to notify.
3. A luaithe a bheidh an measúnú dá dtagraítear i mír 1 den Airteagal seo déanta, tabharfaidh an t-úsáideoir gairmiúil fógra don údarás faireachais margaidh faoi na torthaí a bhí air, lena n-áirítear an teimpléad comhlánaithe dá dtagraítear i mír 5 den Airteagal a chur isteach mar chuid den fhógra. Sa chás dá dtagraítear in Airteagal 46(1), féadfar úsáideoirí gairmiúla a dhíolmhú ón oibleagáid sin fógra a thabhairt.
4. If any of the obligations laid down in this Article is already met through the data protection impact assessment conducted pursuant to Article 35 of Regulation (EU) 2016/679 or Article 27 of Directive (EU) 2016/680, the fundamental rights impact assessment referred to in paragraph 1 of this Article shall complement that data protection impact assessment.
4. Má comhlíonadh cheana féin aon cheann de na hoibleagáidí a leagtar síos san Airteagal seo tríd an measúnú tionchair ar chosaint sonraí arna dhéanamh de bhun Airteagal 35 de Rialachán (AE) 2016/679 nó Airteagal 27 de Threoir (AE) 2016/680, comhlánóidh an measúnú tionchair ar chearta bunúsacha dá dtagraítear i mír 1 den Airteagal seo an measúnú tionchair sin ar chosaint sonraí.
5. The AI Office shall develop a template for a questionnaire, including through an automated tool, to facilitate deployers in complying with their obligations under this Article in a simplified manner.
5. Forbróidh an Oifig um Intleacht Shaorga teimpléad le haghaidh ceistneora, lena n-áirítear trí uirlis uathoibrithe, chun cabhrú le húsáideoirí gairmiúla a gcuid oibleagáidí faoin Airteagal seo a chomhlíonadh ar bhealach simplithe.
Notifying authorities
Na húdaráis a thugann fógra
1. Each Member State shall designate or establish at least one notifying authority responsible for setting up and carrying out the necessary procedures for the assessment, designation and notification of conformity assessment bodies and for their monitoring. Those procedures shall be developed in cooperation between the notifying authorities of all Member States.
1. Déanfaidh gach Ballstát ar a laghad údarás amháin a thugann fógra a ainmniú nó a chur ar bun, údarás a bheidh freagrach as na nósanna imeachta is gá a chur ar bun agus a dhéanamh chun na comhlachtaí um measúnú comhréireachta a mheasúnú, a ainmniú agus fógra a thabhairt dóibh, mar aon le faireachán a dhéanamh orthu. Forbrófar na nósanna imeachta sin i gcomhar idir údaráis na mBallstát uile a thugann fógra.
2. Member States may decide that the assessment and monitoring referred to in paragraph 1 is to be carried out by a national accreditation body within the meaning of, and in accordance with, Regulation (EC) No 765/2008.
2. Féadfaidh na Ballstáit a chinneadh go bhfuil comhlacht náisiúnta creidiúnaithe leis an measúnú agus an faireachán dá dtagraítear i mír 1 a dhéanamh, de réir bhrí Rialachán (CE) Uimh. 765/2008 agus i gcomhréir leis.
3. Notifying authorities shall be established, organised and operated in such a way that no conflict of interest arises with conformity assessment bodies, and that the objectivity and impartiality of their activities are safeguarded.
3. Bunófar, eagrófar agus riarfar údaráis a thugann fógra ar bhealach chun aon choinbhleacht leasa le comhlachtaí um measúnú comhréireachta a sheachaint, agus chun go gcaomhnófar oibiachtúlacht agus neamhchlaontacht a ngníomhaíochtaí.
4. Notifying authorities shall be organised in such a way that decisions relating to the notification of conformity assessment bodies are taken by competent persons different from those who carried out the assessment of those bodies.
4. Eagrófar údaráis a thugann fógra ar bhealach a n-áiritheoidh gur daoine inniúla, nach ionann iad agus na daoine a rinne an measúnú ar na comhlachtaí sin, a dhéanfaidh cinntí a bhaineann le fógra a thabhairt do chomhlachtaí um measúnú comhréireachta.
5. Notifying authorities shall not offer or provide any activities that conformity assessment bodies perform, or any consultancy services on a commercial or competitive basis.
5. Ní thairgfidh ná ní sholáthróidh na húdaráis a thugann fógra aon ghníomhaíocht a dhéanann comhlachtaí um measúnú comhréireachta ná seirbhísí comhairliúcháin ar bhonn tráchtála nó ar bhonn iomaíoch.
6. Notifying authorities shall safeguard the confidentiality of the information that they obtain, in accordance with Article 78.
6. Déanfaidh na húdaráis a thugann fógra rúndacht na faisnéise a fhaigheann siad a choimirciú, i gcomhréir le hAirteagal 78.
7. Notifying authorities shall have an adequate number of competent personnel at their disposal for the proper performance of their tasks. Competent personnel shall have the necessary expertise, where applicable, for their function, in fields such as information technologies, AI and law, including the supervision of fundamental rights.
7. Beidh líon leordhóthanach de phearsanra inniúil ar fáil dóibh ag na húdaráis a thugann fógra chun a gcúraimí a fheidhmiú i gceart. Beidh an saineolas is gá ag an bpearsanra inniúil, i gcás inarb infheidhme, dá bhfeidhm, i réimsí ar nós na dteicneolaíochtaí faisnéise, intleacht shaorga agus an dlí, lena n-áirítear maoirseacht ar chearta bunúsacha.
Application of a conformity assessment body for notification
Iarratas ar fhógra ó chomhlacht um measúnú comhréireachta
1. Conformity assessment bodies shall submit an application for notification to the notifying authority of the Member State in which they are established.
1. Déanfaidh comhlachtaí um measúnú comhréireachta iarratas ar fhógra a chur faoi bhráid an údaráis a thugann fógra de chuid an Bhallstáit ina bhfuil siad bunaithe.
2. The application for notification shall be accompanied by a description of the conformity assessment activities, the conformity assessment module or modules and the types of AI systems for which the conformity assessment body claims to be competent, as well as by an accreditation certificate, where one exists, issued by a national accreditation body attesting that the conformity assessment body fulfils the requirements laid down in Article 31.
2. Beidh ag gabháil leis an iarratas ar fhógra tuairisc ar na gníomhaíochtaí measúnaithe comhréireachta, an modúl nó na modúil measúnaithe comhréireachta agus cineálacha na gcóras intleachta saorga a maíonn an comhlacht um measúnú comhréireachta go bhfuil sé inniúil ina leith, chomh maith le deimhniú creidiúnaithe, más ann dó, arna eisiúint ag comhlacht creidiúnaithe náisiúnta lena ndearbhaítear go gcomhlíonann an comhlacht um measúnú comhréireachta na ceanglais a leagtar síos in Airteagal 31.
Any valid document related to existing designations of the applicant notified body under any other Union harmonisation legislation shall be added.
Cuirfear isteach aon doiciméad bailí a bhaineann le hainmniúcháin an chomhlachta is iarratasóir faoina dtugtar fógra faoi aon reachtaíocht chomhchuibhithe eile de chuid an Aontais.
3. Where the conformity assessment body concerned cannot provide an accreditation certificate, it shall provide the notifying authority with all the documentary evidence necessary for the verification, recognition and regular monitoring of its compliance with the requirements laid down in Article 31.
3. I gcás nach féidir leis an gcomhlacht um measúnú comhréireachta lena mbaineann deimhniú creidiúnaithe a sholáthar, soláthróidh sé don údarás a thugann fógra an fhianaise dhoiciméadach uile is gá chun a chomhlíontacht leis na ceanglais a leagtar síos in Airteagal 31 a fhíorú agus a aithint agus chun faireachán tráthrialta a dhéanamh air sin.
4. For notified bodies which are designated under any other Union harmonisation legislation, all documents and certificates linked to those designations may be used to support their designation procedure under this Regulation, as appropriate. The notified body shall update the documentation referred to in paragraphs 2 and 3 of this Article whenever relevant changes occur, in order to enable the authority responsible for notified bodies to monitor and verify continuous compliance with all the requirements laid down in Article 31.
4. Maidir le comhlachtaí faoina dtugtar fógra a ainmnítear faoi aon reachtaíocht chomhchuibhithe eile de chuid an Aontais, féadfar gach doiciméad agus deimhniú atá nasctha leis na hainmniúcháin sin a úsáid chun tacú lena nós imeachta ainmniúcháin faoin Rialachán seo, de réir mar is iomchuí. Tar éis don chomhlacht faoina dtugtar fógra a bheith ainmnithe, déanfaidh sé an doiciméadacht dá dtagraítear i míreanna 2 agus 3 den Airteagal seo a nuashonrú aon uair a tharlaíonn athruithe ábhartha, le cur ar chumas an údaráis atá freagrach as comhlachtaí faoina dtugtar fógra faireachán a dhéanamh ar chomhlíonadh leanúnach na gceanglas go léir a leagtar síos in Airteagal 31 agus é sin a fhíorú.
Notification procedure
Nós imeachta um fhógra a thabhairt
1. Notifying authorities may notify only conformity assessment bodies which have satisfied the requirements laid down in Article 31.
1. Maidir leis na húdaráis a thugann fógra, féadfaidh siad fógra a thabhairt do na comhlachtaí um measúnú comhréireachta a bhfuil na ceanglais a leagtar síos in Airteagal 31 comhlíonta acu, agus do na comhlachtaí sin amháin.
2. Notifying authorities shall notify the Commission and the other Member States, using the electronic notification tool developed and managed by the Commission, of each conformity assessment body referred to in paragraph 1.
2. Na húdaráis a thugann fógra, tabharfaidh siad fógra don Choimisiún agus do na Ballstáit eile tríd an uirlis leictreonach um fhógra a thabhairt arna forbairt agus arna bainistiú ag an gCoimisiún, maidir le gach comhlacht um measúnú comhréireachta dá dtagraítear i mír 1.
3. The notification referred to in paragraph 2 of this Article shall include full details of the conformity assessment activities, the conformity assessment module or modules, the types of AI systems concerned, and the relevant attestation of competence. Where a notification is not based on an accreditation certificate as referred to in Article 29(2), the notifying authority shall provide the Commission and the other Member States with documentary evidence which attests to the competence of the conformity assessment body and to the arrangements in place to ensure that that body will be monitored regularly and will continue to satisfy the requirements laid down in Article 31.
3. Beidh san fhógra dá dtagraítear i mír 2 den Airteagal seo tuairisc iomlán ar na gníomhaíochtaí um measúnú comhréireachta, ar an modúl nó ar na modúil um measúnú comhréireachta agus ar na cineálacha córas intleachta saorga lena mbaineann, agus ar dhearbhú ábhartha na hinniúlachta. I gcás nach mbeidh fógra bunaithe ar dheimhniú creidiúnaithe dá dtagraítear in Airteagal 29(2), déanfaidh an t-údarás a thugann fógra fianaise dhoiciméadach a sholáthar don Choimisiún agus do na Ballstáit eile lena ndearbhaítear inniúlacht an chomhlachta um measúnú comhréireachta mar aon leis na socruithe a bheidh i bhfeidhm lena áirithiú go ndéanfar faireachán tráthrialta ar an gcomhlacht sin agus go leanfaidh sé de bheith ag comhlíonadh na gceanglas a leagtar síos in Airteagal 31.
4. The conformity assessment body concerned may perform the activities of a notified body only where no objections are raised by the Commission or the other Member States within two weeks of a notification by a notifying authority where it includes an accreditation certificate referred to in Article 29(2), or within two months of a notification by the notifying authority where it includes documentary evidence referred to in Article 29(3).
4. Ní fhéadfaidh an comhlacht um measúnú comhréireachta lena mbaineann gníomhaíochtaí comhlachta faoina dtugtar fógra a fheidhmiú ach amháin mura ndéanann an Coimisiún ná na Ballstáit eile aon agóidí ina choinne laistigh de 2 sheachtain ó thráth déanta fógra ó údarás a thugann fógra i gcás deimhniú creidiúnaithe dá dtagraítear in Airteagal 29(2) a bheith ann nó laistigh de 2 mhí ó thráth déanta fógra ó údarás a thugann fógra i gcás fianaise dhoiciméadach dá dtagraítear in Airteagal 29(3) a bheith san fhógra.
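The objection window in paragraph 4 reduces to a simple date rule. The following non-normative Python sketch illustrates one reading of it; the function and parameter names are illustrative shorthand, and the two-month period is approximated in days for the example only, whereas the Regulation counts calendar months.

```python
from datetime import date, timedelta

def earliest_activity_date(notification_date: date,
                           includes_accreditation_certificate: bool) -> date:
    """Illustrative reading of the objection window in paragraph 4:
    absent objections from the Commission or the other Member States,
    the body may act as a notified body two weeks after a notification
    that includes an accreditation certificate (Article 29(2)), or two
    months after one based on documentary evidence (Article 29(3))."""
    if includes_accreditation_certificate:
        return notification_date + timedelta(weeks=2)
    # Two calendar months are approximated as 61 days for this sketch only.
    return notification_date + timedelta(days=61)

# Example: a notification with an accreditation certificate made on 3 March 2025
# would, absent objections, allow notified-body activities from 17 March 2025.
print(earliest_activity_date(date(2025, 3, 3), True))
```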
5. Where objections are raised, the Commission shall, without delay, enter into consultations with the relevant Member States and the conformity assessment body. In view thereof, the Commission shall decide whether the authorisation is justified. The Commission shall address its decision to the Member State concerned and to the relevant conformity assessment body.
5. I gcás ina ndéantar agóidí, rachaidh an Coimisiún i gcomhairle, gan mhoill, leis na Ballstáit ábhartha agus leis an gcomhlacht um measúnú comhréireachta. Ag féachaint don mhéid sin, cinnfidh an Coimisiún an bhfuil bonn cirt leis an údarú nó nach bhfuil. Díreoidh an Coimisiún a chinneadh chuig an mBallstát lena mbaineann agus chuig an gcomhlacht ábhartha um measúnú comhréireachta.
Requirements relating to notified bodies
Na ceanglais a bhaineann leis na comhlachtaí faoina dtugtar fógra
1. A notified body shall be established under the national law of a Member State and shall have legal personality.
1. Bunófar comhlacht faoina dtugtar fógra faoi dhlí náisiúnta Ballstáit agus beidh pearsantacht dhlítheanach aige.
2. Notified bodies shall satisfy the organisational, quality management, resources and process requirements that are necessary to fulfil their tasks, as well as suitable cybersecurity requirements.
2. Comhlíonfaidh comhlachtaí faoina dtugtar fógra na ceanglais maidir le heagrúchán, bainistíocht cáilíochta, acmhainní agus próiseas is gá chun na cúraimí sin a chomhlíonadh, chomh maith le ceanglais chibearshlándála oiriúnacha.
3. The organisational structure, allocation of responsibilities, reporting lines and operation of notified bodies shall ensure confidence in their performance, and in the results of the conformity assessment activities that the notified bodies conduct.
3. Áiritheofar leis an struchtúr eagrúcháin, leithdháileadh freagrachtaí, na línte tuairiscithe agus oibriú na gcomhlachtaí faoina dtugtar fógra go mbeidh muinín i bhfeidhmíocht na gcomhlachtaí faoina dtugtar fógra agus i dtorthaí na ngníomhaíochtaí um measúnú comhréireachta a dhéanann na comhlachtaí faoina dtugtar fógra.
4. Notified bodies shall be independent of the provider of a high-risk AI system in relation to which they perform conformity assessment activities. Notified bodies shall also be independent of any other operator having an economic interest in high-risk AI systems assessed, as well as of any competitors of the provider. This shall not preclude the use of assessed high-risk AI systems that are necessary for the operations of the conformity assessment body, or the use of such high-risk AI systems for personal purposes.
4. Beidh comhlachtaí faoina dtugtar fógra neamhspleách ar sholáthraí córais intleachta saorga ardriosca a ndéanann siad gníomhaíochtaí um measúnú comhréireachta ina leith. Beidh comhlachtaí faoina dtugtar fógra neamhspleách freisin ar aon oibreoir eile a bhfuil leas eacnamaíoch aige sa chóras intleachta saorga ardriosca a ndéantar measúnú air, agus ar aon iomaitheoir de chuid an tsoláthraí chomh maith. Ní choiscfear, leis sin, córais intleachta saorga ardriosca a ndearnadh measúnú orthu a úsáid má tá gá leo i dtaca le hoibríochtaí an chomhlachta um measúnú comhréireachta, ná córais intleachta saorga ardriosca mar iad a úsáid chun críoch pearsanta.
5. Neither a conformity assessment body, its top-level management nor the personnel responsible for carrying out its conformity assessment tasks shall be directly involved in the design, development, marketing or use of high-risk AI systems, nor shall they represent the parties engaged in those activities. They shall not engage in any activity that might conflict with their independence of judgement or integrity in relation to conformity assessment activities for which they are notified. This shall, in particular, apply to consultancy services.
5. Ní bheidh baint dhíreach ag comhlacht um measúnú comhréireachta, ag a lucht bainistíochta ardleibhéil ná ag an bpearsanra a bheidh freagrach as cúraimí an mheasúnaithe comhréireachta a chomhlíonadh le dearadh, forbairt, margú ná úsáid na gcóras intleachta saorga ardriosca, ná ní dhéanfaidh siad ionadaíocht do na páirtithe atá i gceist leis na gníomhaíochtaí sin. Ní rachaidh siad i mbun aon ghníomhaíochta a d’fhéadfadh a bheith i gcoimhlint lena neamhspleáchas breithiúnais nó lena n-ionracas i ndáil le gníomhaíochtaí um measúnú comhréireachta a dtugtar fógra fúthu ina leith. Beidh feidhm ag an gceanglas seo, go háirithe, maidir le seirbhísí comhairleachta.
6. Notified bodies shall be organised and operated so as to safeguard the independence, objectivity and impartiality of their activities. Notified bodies shall document and implement a structure and procedures to safeguard impartiality and to promote and apply the principles of impartiality throughout their organisation, personnel and assessment activities.
6. Eagrófar agus oibreofar na comhlachtaí faoina dtugtar fógra sa chaoi go gcaomhnófar neamhspleáchas, oibiachtúlacht agus neamhchlaontacht a gcuid gníomhaíochtaí. Déanfaidh na comhlachtaí faoina dtugtar fógra struchtúr agus nósanna imeachta a dhoiciméadú agus a chur chun feidhme chun neamhchlaontacht a choimirciú agus chun prionsabail na neamhchlaontachta a chur chun cinn agus a chur i bhfeidhm ar fud na heagraíochta agus ina ngníomhaíochtaí measúnaithe agus i measc a gcuid foirne.
7. Notified bodies shall have documented procedures in place ensuring that their personnel, committees, subsidiaries, subcontractors and any associated body or personnel of external bodies maintain, in accordance with Article 78, the confidentiality of the information which comes into their possession during the performance of conformity assessment activities, except when its disclosure is required by law. The staff of notified bodies shall be bound to observe professional secrecy with regard to all information obtained in carrying out their tasks under this Regulation, except in relation to the notifying authorities of the Member State in which their activities are carried out.
7. Beidh nósanna imeachta doiciméadaithe i bhfeidhm ag comhlachtaí faoina dtugtar fógra lena áirithiú go gcoimeádfaidh foirne, coistí, fochuideachtaí agus fochonraitheoirí na gcomhlachtaí sin agus aon chomhlacht comhlachais nó foireann de chuid comhlachtaí seachtracha, i gcomhréir le hAirteagal 78, rúndacht na faisnéise a thagann ina seilbh le linn fheidhmiú na ngníomhaíochtaí um measúnú comhréireachta, ach amháin i gcás ina gceanglaítear nochtadh na faisnéise sin de réir an dlí. Beidh sé de cheangal ar fhoireann an chomhlachta faoina dtugtar fógra rúndacht ghairmiúil a urramú maidir leis an bhfaisnéis uile a gheofar le linn a gcuid cúraimí faoin Rialachán seo a chur i gcrích, ach amháin i ndáil le húdaráis a thugann fógra sa Bhallstát ina gcuirtear a gcuid gníomhaíochtaí i gcrích.
8. Notified bodies shall have procedures for the performance of activities which take due account of the size of a provider, the sector in which it operates, its structure, and the degree of complexity of the AI system concerned.
8. Beidh ag comhlachtaí faoina dtugtar fógra nósanna imeachta maidir le feidhmiú na ngníomhaíochtaí ina dtugtar aird chuí ar mhéid an tsoláthraí, an earnáil ina n-oibríonn sé, a struchtúr agus ar a chasta atá an córas intleachta saorga lena mbaineann.
9. Notified bodies shall take out appropriate liability insurance for their conformity assessment activities, unless liability is assumed by the Member State in which they are established in accordance with national law or that Member State is itself directly responsible for the conformity assessment.
9. Beidh árachas dliteanais iomchuí ag na comhlachtaí faoina dtugtar fógra le haghaidh a ngníomhaíochtaí um measúnú comhréireachta, ach amháin i gcás ina nglacann an Ballstát ina bhfuil siad bunaithe dliteanas i gcomhréir leis an dlí náisiúnta, nó i gcás ina bhfuil an Ballstát sin féin freagrach go díreach as an measúnú comhréireachta.
10. Notified bodies shall be capable of carrying out all their tasks under this Regulation with the highest degree of professional integrity and the requisite competence in the specific field, whether those tasks are carried out by notified bodies themselves or on their behalf and under their responsibility.
10. Beidh sé de chumas ag comhlachtaí faoina dtugtar fógra a gcúraimí uile faoin Rialachán seo a chur i gcrích leis an oiread ionracais ghairmiúil is féidir agus leis an inniúlacht is gá sa réimse sonrach, bíodh na cúraimí sin á ndéanamh ag na comhlachtaí féin faoina dtugtar fógra nó thar a gceann agus faoina bhfreagracht.
11. Notified bodies shall have sufficient internal competences to be able effectively to evaluate the tasks conducted by external parties on their behalf. The notified body shall have permanent availability of sufficient administrative, technical, legal and scientific personnel who possess experience and knowledge relating to the relevant types of AI systems, data and data computing, and relating to the requirements set out in Section 2.
11. Beidh go leor inniúlachtaí inmheánacha ag na comhlachtaí faoina dtugtar fógra chun gur féidir leo meastóireacht éifeachtach a dhéanamh ar na cúraimí a dhéanann páirtithe seachtracha thar a gceann. Beidh foireann leordhóthanach riaracháin, theicniúil, dhlíthiúil agus eolaíoch ar fáil don chomhlacht faoina dtugtar fógra i gcónaí, foireann ag a bhfuil taithí agus eolas a bhaineann leis na cineálacha ábhartha córas intleachta saorga, cineálacha sonraí agus ríomhaireacht sonraí ábhartha agus leis na ceanglais a leagtar amach i Roinn 2.
12. Notified bodies shall participate in coordination activities as referred to in Article 38. They shall also take part directly, or be represented in, European standardisation organisations, or ensure that they are aware and up to date in respect of relevant standards.
12. Beidh comhlachtaí faoina dtugtar fógra rannpháirteach i ngníomhaíochtaí comhordúcháin dá dtagraítear in Airteagal 38. Ina theannta sin, glacfaidh siad páirt go díreach in eagraíochtaí Eorpacha um chaighdeánú, nó déanfar ionadaíocht orthu sna heagraíochtaí sin, nó áiritheoidh siad go bhfuil siad ar an eolas faoi na caighdeáin ábhartha agus cothrom le dáta ina leith.
Operational obligations of notified bodies
Oibleagáidí oibríochtúla na gcomhlachtaí faoina dtugtar fógra
1. Notified bodies shall verify the conformity of high-risk AI systems in accordance with the conformity assessment procedures set out in Article 43.
1. Fíoróidh comhlachtaí faoina dtugtar fógra comhréireacht na gcóras intleachta saorga ardriosca i gcomhréir leis na nósanna imeachta um measúnú comhréireachta a leagtar amach in Airteagal 43.
2. Notified bodies shall avoid unnecessary burdens for providers when performing their activities, and take due account of the size of the provider, the sector in which it operates, its structure and the degree of complexity of the high-risk AI system concerned, in particular in view of minimising administrative burdens and compliance costs for micro- and small enterprises within the meaning of Recommendation 2003/361/EC. The notified body shall, nevertheless, respect the degree of rigour and the level of protection required for the compliance of the high-risk AI system with the requirements of this Regulation.
2. Seachnóidh comhlachtaí faoina dtugtar fógra ualaí nach bhfuil gá leo ar sholáthraithe agus a ngníomhaíochtaí á ndéanamh acu, agus tabharfaidh siad an aird chuí ar mhéid an tsoláthraí, ar an earnáil ina n-oibríonn sé, ar a struchtúr agus ar a chasta atá an córas intleachta saorga ardriosca lena mbaineann, go háirithe d’fhonn ualaí riaracháin agus costais chomhlíontachta a íoslaghdú do mhicrifhiontair agus d’fhiontair bheaga de réir bhrí Mholadh 2003/361/CE. Urramóidh an comhlacht faoina dtugtar fógra mar sin féin an méid smachta agus an leibhéal cosanta a éilítear chun go gcomhlíonfaidh an córas intleachta saorga ardriosca ceanglais an Rialacháin seo.
3. Notified bodies shall make available and submit upon request all relevant documentation, including the providers’ documentation, to the notifying authority referred to in Article 28 to allow that authority to conduct its assessment, designation, notification and monitoring activities, and to facilitate the assessment outlined in this Section.
3. Déanfaidh na comhlachtaí faoina dtugtar fógra an doiciméadacht ábhartha uile, lena n-áirítear doiciméadacht an tsoláthraí, a chur ar fáil agus a chur faoi bhráid an údaráis a thugann fógra dá dtagraítear in Airteagal 28, arna iarraidh sin orthu, ionas go mbeidh an t-údarás sin in ann dul i mbun a ghníomhaíochtaí maidir le measúnú, ainmniú, fógra a thabhairt agus faireachán agus chun an measúnú a leagtar amach sa Roinn seo a éascú.
1. The notifying authority shall notify the Commission and the other Member States of any relevant changes to the notification of a notified body via the electronic notification tool referred to in Article 30(2).
1. Tabharfaidh an t-údarás a thugann fógra fógra don Choimisiún agus do na Ballstáit eile maidir le haon athruithe ábhartha ar fhógra a thabhairt faoi chomhlacht faoina dtugtar fógra tríd an uirlis leictreonach fógra dá dtagraítear in Airteagal 30(2).
2. The procedures laid down in Articles 29 and 30 shall apply to extensions of the scope of the notification.
2. Beidh feidhm ag na nósanna imeachta a leagtar síos in Airteagail 29 agus 30 maidir le leathnú ar raon feidhme an fhógra.
For changes to the notification other than extensions of its scope, the procedures laid down in paragraphs (3) to (9) shall apply.
I gcás athruithe ar an bhfógra cé is moite de leathnú ar a raon feidhme, beidh feidhm ag na nósanna imeachta a leagtar síos i míreanna (3) go (9).
3. Where a notified body decides to cease its conformity assessment activities, it shall inform the notifying authority and the providers concerned as soon as possible and, in the case of a planned cessation, at least one year before ceasing its activities. The certificates of the notified body may remain valid for a period of nine months after cessation of the notified body’s activities, on condition that another notified body has confirmed in writing that it will assume responsibilities for the high-risk AI systems covered by those certificates. The latter notified body shall complete a full assessment of the high-risk AI systems affected by the end of that nine-month-period before issuing new certificates for those systems. Where the notified body has ceased its activity, the notifying authority shall withdraw the designation.
3. I gcás ina gcinnfidh an comhlacht faoina dtugtar fógra deireadh a chur lena ghníomhaíochtaí um measúnú comhréireachta, cuirfidh sé é sin in iúl don údarás a thugann fógra agus do na soláthraithe lena mbaineann, a luaithe is féidir agus má shocraítear deireadh a chur le gníomhaíochtaí, bliain amháin ar a laghad roimh dó deireadh a chur lena chuid gníomhaíochtaí. Féadfaidh na deimhnithe ón gcomhlacht faoina dtugtar fógra a bheith fós bailí go ceann tréimhse 9 mí tar éis deireadh a chur le gníomhaíochtaí an chomhlachta faoina dtugtar fógra ar choinníoll gur dhearbhaigh comhlacht eile faoina dtugtar fógra i scríbhinn go ngabhfadh sé na freagrachtaí air féin maidir leis na córais intleachta saorga ardriosca a chumhdaítear faoi na deimhnithe sin. Déanfaidh an comhlacht eile faoina dtugtar fógra measúnú iomlán ar na córais intleachta saorga ardriosca a ndéantar difear dóibh faoi dheireadh na tréimhse 9 mí sin roimh dó deimhnithe nua a eisiúint le haghaidh na gcóras sin. I gcás ina bhfuil an comhlacht faoina dtugtar fógra tar éis deireadh a chur lena chuid gníomhaíochtaí, tarraingeoidh an t-údarás a thugann fógra siar an t-ainmniú.
4. Where a notifying authority has sufficient reason to consider that a notified body no longer meets the requirements laid down in Article 31, or that it is failing to fulfil its obligations, the notifying authority shall without delay investigate the matter with the utmost diligence. In that context, it shall inform the notified body concerned about the objections raised and give it the possibility to make its views known. If the notifying authority comes to the conclusion that the notified body no longer meets the requirements laid down in Article 31 or that it is failing to fulfil its obligations, it shall restrict, suspend or withdraw the designation as appropriate, depending on the seriousness of the failure to meet those requirements or fulfil those obligations. It shall immediately inform the Commission and the other Member States accordingly.
4. I gcás ina bhfuil cúis imleor ag údarás a thugann fógra a mheas nach bhfuil na ceanglais a leagtar síos in Airteagal 31 á gcomhlíonadh a thuilleadh ag comhlacht faoina dtugtar fógra, nó nach bhfuil sé ag comhlíonadh a chuid oibleagáidí, déanfaidh an t-údarás sin a thugann fógra an t-ábhar a imscrúdú gan mhoill agus a dhúthracht a chaitheamh leis. Sa chomhthéacs sin, cuirfidh sé an comhlacht faoina dtugtar fógra lena mbaineann ar an eolas maidir leis na hagóidí a rinneadh agus tabharfaidh sé an deis dó a thuairimí a chur in iúl. Má thagann an t-údarás a thugann fógra ar an gconclúid nach bhfuil na ceanglais a leagtar síos in Airteagal 31 á gcomhlíonadh a thuilleadh ag an gcomhlacht faoina dtugtar fógra nó nach bhfuil sé ag comhlíonadh a chuid oibleagáidí, déanfaidh an t-údarás sin an t-ainmniú a shrianadh, cuirfidh sé ar fionraí é nó déanfaidh sé é a aistarraingt, de réir mar is iomchuí, ag brath ar a thromchúisí atá an mhainneachtain na ceanglais sin nó na hoibleagáidí sin a chomhlíonadh. Cuirfidh sé an Coimisiún agus na Ballstáit eile ar an eolas láithreach dá réir sin.
5. Where its designation has been suspended, restricted, or fully or partially withdrawn, the notified body shall inform the providers concerned within 10 days.
5. I gcás inar cuireadh ainmniú comhlachta faoina dtugtar fógra ar fionraí, inar cuireadh srian air nó ina ndearnadh é a tharraingt siar ina iomláine nó i bpáirt, déanfaidh sé na soláthraithe lena mbaineann a chur ar an eolas laistigh de 10 lá.
6. In the event of the restriction, suspension or withdrawal of a designation, the notifying authority shall take appropriate steps to ensure that the files of the notified body concerned are kept, and to make them available to notifying authorities in other Member States and to market surveillance authorities at their request.
6. I gcás ina ndéantar ainmniú a shrianadh, a chur ar fionraí nó a tharraingt siar, déanfaidh an t-údarás a thugann fógra na bearta iomchuí lena áirithiú go ndéanfar comhaid an chomhlachta faoina dtugtar fógra a choinneáil agus cuirfidh sé ar fáil iad d’údaráis a thugann fógra i mBallstáit eile agus d’údaráis faireachais margaidh arna iarraidh sin dóibh.
7. In the event of the restriction, suspension or withdrawal of a designation, the notifying authority shall:
7. I gcás ina ndéanfar ainmniú a shrianadh, a chur ar fionraí nó a tharraingt siar, déanfaidh an t-údarás a thugann fógra an méid seo a leanas:
(a)
assess the impact on the certificates issued by the notified body;
(a)
measúnú ar an tionchar atá ar na deimhnithe a eisíonn an comhlacht faoina dtugtar fógra;
(b)
submit a report on its findings to the Commission and the other Member States within three months of having notified the changes to the designation;
(b)
tuarascáil lena gcuid fionnachtana a chur faoi bhráid an Choimisiúin agus na mBallstát eile, laistigh de 3 mhí tar éis fógra faoi na hathruithe ar an ainmniú a thabhairt;
(c)
require the notified body to suspend or withdraw, within a reasonable period of time determined by the authority, any certificates which were unduly issued, in order to ensure the continuing conformity of high-risk AI systems on the market;
(c)
ceangal a chur ar an gcomhlacht faoina dtugtar fógra aon deimhniú a eisíodh go míchuí a chur ar fionraí nó a tharraingt siar, laistigh de thréimhse réasúnta ama a chinnfidh an t-údarás chun comhréireacht leanúnach na gcóras intleachta saorga ardriosca atá ar an margadh a áirithiú;
(d)
inform the Commission and the Member States about certificates the suspension or withdrawal of which it has required;
(d)
an Coimisiún agus na Ballstáit a chur ar an eolas faoi dheimhnithe a d’éiligh sé a chur ar fionraí nó a tharraingt siar;
(e)
provide the national competent authorities of the Member State in which the provider has its registered place of business with all relevant information about the certificates of which it has required the suspension or withdrawal; that authority shall take the appropriate measures, where necessary, to avoid a potential risk to health, safety or fundamental rights.
(e)
gach faisnéis ábhartha a chur ar fáil d’údaráis inniúla náisiúnta an Bhallstáit ina bhfuil a oifig chláraithe ag an soláthraí maidir leis na deimhnithe atá sé ag éileamh a chur ar fionraí nó a tharraingt siar; déanfaidh an t-údarás sin na bearta iomchuí is gá chun baol ionchasach do shláinte, do shábháilteacht nó do chearta bunúsacha a sheachaint.
8. With the exception of certificates unduly issued, and where a designation has been suspended or restricted, the certificates shall remain valid in one of the following circumstances:
8. Cé is moite de na deimhnithe a eisíodh go míchuí agus i gcás ina ndéanfar an t-ainmniú a chur ar fionraí nó a shrianadh, leanfaidh na deimhnithe de bheith bailí i gceann de na cúinsí seo a leanas:
(a)
the notifying authority has confirmed, within one month of the suspension or restriction, that there is no risk to health, safety or fundamental rights in relation to certificates affected by the suspension or restriction, and the notifying authority has outlined a timeline for actions to remedy the suspension or restriction; or
(a)
dhearbhaigh an t-údarás a thugann fógra, laistigh d’aon mhí amháin tar éis an chur ar fionraí nó an tsrianta, nach ann do bhaol don tsláinte, don tsábháilteacht nó do chearta bunúsacha maidir le deimhnithe a ndearnadh difear dóibh de bharr na fionraí nó an tsrianta agus tá amlíne gníomhaíochtaí curtha i láthair ag an údarás a thugann fógra chun an fhionraí nó an srianadh a réiteach; nó
(b)
the notifying authority has confirmed that no certificates relevant to the suspension will be issued, amended or re-issued during the course of the suspension or restriction, and states whether the notified body has the capability of continuing to monitor and remain responsible for existing certificates issued for the period of the suspension or restriction; in the event that the notifying authority determines that the notified body does not have the capability to support existing certificates issued, the provider of the system covered by the certificate shall confirm in writing to the national competent authorities of the Member State in which it has its registered place of business, within three months of the suspension or restriction, that another qualified notified body is temporarily assuming the functions of the notified body to monitor and remain responsible for the certificates during the period of suspension or restriction.
(b)
tá sé dearbhaithe ag an údarás a thugann fógra nach ndéanfar aon deimhnithe a bhaineann leis an bhfionraí a eisiúint, a leasú, nó a atheisiúint, le linn na fionraí nó an tsrianta, agus go luafaidh an t-údarás an bhfuil an cumas ag an gcomhlacht faoina dtugtar fógra leanúint d’fhaireachán a dhéanamh agus de bheith freagrach as deimhnithe atá ann cheana a eisíodh do thréimhse na fionraí nó an tsrianta; i gcás ina gcinnfidh an t-údarás a thugann fógra nach bhfuil sé de chumas ag an gcomhlacht faoina dtugtar fógra tacú le deimhnithe a eisíodh cheana féin, dearbhóidh soláthraí an chórais a chumhdaítear leis an deimhniú i scríbhinn d’údaráis inniúla náisiúnta an Bhallstáit ina bhfuil a oifig chláraithe, laistigh de 3 mhí ón bhfionraí nó ón srianadh, go bhfuil feidhmeanna an chomhlachta faoina dtugtar fógra i ndáil le faireachán a dhéanamh ar na deimhnithe agus bheith freagrach astu á nglacadh ag comhlacht cáilithe eile faoina dtugtar fógra, ar bhonn sealadach, le linn na tréimhse fionraí nó srianta.
9. With the exception of certificates unduly issued, and where a designation has been withdrawn, the certificates shall remain valid for a period of nine months under the following circumstances:
9. Cé is moite de na deimhnithe a eisíodh go míchuí, agus i gcás ina ndéanfar an t-ainmniú a tharraingt siar, leanfaidh na deimhnithe de bheith bailí go ceann tréimhse 9 mí sna cúinsí seo a leanas:
(a)
the national competent authority of the Member State in which the provider of the high-risk AI system covered by the certificate has its registered place of business has confirmed that there is no risk to health, safety or fundamental rights associated with the high-risk AI systems concerned; and
(a)
tá sé dearbhaithe ag údarás inniúil náisiúnta an Bhallstáit ina bhfuil a oifig chláraithe ag soláthraí an chórais intleachta saorga ardriosca a chumhdaítear leis an deimhniú nach ngabhann aon bhaol don tsláinte, don tsábháilteacht ná do chearta bunúsacha leis na córais intleachta saorga ardriosca atá i gceist; agus
(b)
another notified body has confirmed in writing that it will assume immediate responsibility for those AI systems and completes its assessment within 12 months of the withdrawal of the designation.
(b)
tá sé dearbhaithe ag comhlacht eile faoina dtugtar fógra i scríbhinn go nglacfaidh sé chuige láithreach an fhreagracht maidir leis na córais intleachta saorga sin agus go mbeidh measúnú orthu curtha i gcrích aige laistigh de 12 mhí ón uair a rinneadh an t-ainmniú a tharraingt siar.
In the circumstances referred to in the first subparagraph, the national competent authority of the Member State in which the provider of the system covered by the certificate has its place of business may extend the provisional validity of the certificates for additional periods of three months, which shall not exceed 12 months in total.
Sna cúinsí dá dtagraítear sa chéad fhomhír, féadfaidh údarás inniúil náisiúnta an Bhallstáit ina bhfuil a oifig chláraithe ag an soláthraí córais a chumhdaítear leis an deimhniú síneadh ama a chur le bailíocht shealadach na ndeimhnithe go ceann tréimhsí breise 3 mhí, gan a bheith níos faide ná 12 mhí san iomlán.
The national competent authority or the notified body assuming the functions of the notified body affected by the change of designation shall immediately inform the Commission, the other Member States and the other notified bodies thereof.
Cuirfidh an t-údarás inniúil náisiúnta nó an comhlacht faoina dtugtar fógra a ghlacann feidhmeanna an chomhlachta faoina dtugtar fógra a mbeidh tionchar ag an athrú ar an ainmniú air an Coimisiún, na Ballstáit eile agus na comhlachtaí eile faoina dtugtar fógra ar an eolas faoin méid sin láithreach.
Challenge to the competence of notified bodies
Agóid in aghaidh inniúlacht na gcomhlachtaí faoina dtugtar fógra
1. The Commission shall, where necessary, investigate all cases where there are reasons to doubt the competence of a notified body or the continued fulfilment by a notified body of the requirements laid down in Article 31 and of its applicable responsibilities.
1. Imscrúdóidh an Coimisiún, i gcás inar gá, gach cás a bhfuil amhras air ina leith maidir le hinniúlacht comhlachta faoina dtugtar fógra nó maidir le comhlacht faoina dtugtar fógra a bheith ag leanúint de chomhlíonadh na gceanglas a leagtar síos in Airteagal 31 agus na bhfreagrachtaí is infheidhme atá air.
2. The notifying authority shall provide the Commission, on request, with all relevant information relating to the notification or the maintenance of the competence of the notified body concerned.
2. Cuirfidh an t-údarás a thugann fógra gach faisnéis ábhartha ar fáil don Choimisiún, arna iarraidh sin, maidir leis an bhfógra a tugadh faoin gcomhlacht faoina dtugtar fógra lena mbaineann nó maidir le cothabháil a inniúlachta.
3. The Commission shall ensure that all sensitive information obtained in the course of its investigations pursuant to this Article is treated confidentially in accordance with Article 78.
3. Áiritheoidh an Coimisiún gur ar mhodh rúin a dhéileálfar leis an bhfaisnéis íogair uile a fhaightear le linn a chuid imscrúduithe de bhun an Airteagail seo, i gcomhréir le hAirteagal 78.
4. Where the Commission ascertains that a notified body does not meet or no longer meets the requirements for its notification, it shall inform the notifying Member State accordingly and request it to take the necessary corrective measures, including the suspension or withdrawal of the notification if necessary. Where the Member State fails to take the necessary corrective measures, the Commission may, by means of an implementing act, suspend, restrict or withdraw the designation. That implementing act shall be adopted in accordance with the examination procedure referred to in Article 98(2).
4. I gcás ina bhfionnfaidh an Coimisiún nach gcomhlíonann comhlacht faoina dtugtar fógra na ceanglais maidir lena fhógra, nó nach gcomhlíonann sé iad a thuilleadh, cuirfidh sé an méid sin in iúl don Bhallstát a thugann fógra dá réir sin agus iarrfaidh sé air na bearta ceartaitheacha is gá a dhéanamh, lena n-áirítear an fógra a fhionraí nó a tharraingt siar, más gá sin. I gcás ina dteipeann ar an mBallstát na bearta ceartaitheacha is gá a ghlacadh, féadfaidh an Coimisiún, trí bhíthin gníomh cur chun feidhme, an t-ainmniú a chur ar fionraí, a shrianadh nó a tharraingt siar. Glacfar an gníomh cur chun feidhme sin i gcomhréir leis an nós imeachta scrúdúcháin dá dtagraítear in Airteagal 98(2).
Harmonised standards and standardisation deliverables
Caighdeáin chomhchuibhithe agus spriocanna insoláthartha um chaighdeánú
1. High-risk AI systems or general-purpose AI models which are in conformity with harmonised standards or parts thereof the references of which have been published in the Official Journal of the European Union in accordance with Regulation (EU) No 1025/2012 shall be presumed to be in conformity with the requirements set out in Section 2 of this Chapter or, as applicable, with the obligations set out in Chapter V, Sections 2 and 3, of this Regulation, to the extent that those standards cover those requirements or obligations.
1. Córais intleachta saorga ardriosca nó samhlacha intleachta saorga ilchuspóireacha atá i gcomhréir le caighdeáin chomhchuibhithe nó le codanna díobh, agus ar foilsíodh a dtagairtí in Iris Oifigiúil an Aontais Eorpaigh i gcomhréir le Rialachán (AE) Uimh. 1025/2012, toimhdeofar iad a bheith i gcomhréir leis na ceanglais a leagtar amach i Roinn 2 den Chaibidil seo nó, de réir mar is infheidhme, leis na hoibleagáidí a leagtar amach i gCaibidil V, Ranna 2 agus 3, den Rialachán seo, a mhéid a chumhdaítear na ceanglais nó na hoibleagáidí sin leis na caighdeáin sin.
2. In accordance with Article 10 of Regulation (EU) No 1025/2012, the Commission shall issue, without undue delay, standardisation requests covering all requirements set out in Section 2 of this Chapter and, as applicable, standardisation requests covering obligations set out in Chapter V, Sections 2 and 3, of this Regulation. The standardisation request shall also ask for deliverables on reporting and documentation processes to improve AI systems’ resource performance, such as reducing the high-risk AI system’s consumption of energy and of other resources during its lifecycle, and on the energy-efficient development of general-purpose AI models. When preparing a standardisation request, the Commission shall consult the Board and relevant stakeholders, including the advisory forum.
2. I gcomhréir le hAirteagal 10 de Rialachán (AE) Uimh. 1025/2012, eiseoidh an Coimisiún, gan moill mhíchuí, iarrataí ar chaighdeánú lena gcumhdófar na ceanglais uile a leagtar amach i Roinn 2 den Chaibidil seo agus, de réir mar is infheidhme, iarrataí ar chaighdeánú lena gcumhdaítear oibleagáidí a leagtar amach i gCaibidil V, Ranna 2 agus 3, den Rialachán seo. Leis an iarraidh ar chaighdeánú, iarrfar freisin spriocanna insoláthartha maidir le próisis tuairiscithe agus doiciméadachta chun feidhmíocht acmhainní na gcóras intleachta saorga a fheabhsú, amhail ídiú fuinnimh agus acmhainní eile an chórais intleachta saorga ardriosca a laghdú le linn a shaolré, agus maidir le samhlacha intleachta saorga ilchuspóireacha a fhorbairt ar bhealach atá tíosach ar fhuinneamh. Agus iarraidh ar chaighdeánú á hullmhú aige, rachaidh an Coimisiún i gcomhairle leis an mBord agus leis na geallsealbhóirí ábhartha, lena n-áirítear an fóram comhairleach.
When issuing a standardisation request to European standardisation organisations, the Commission shall specify that standards have to be clear, consistent, including with the standards developed in the various sectors for products covered by the existing Union harmonisation legislation listed in Annex I, and aiming to ensure that high-risk AI systems or general-purpose AI models placed on the market or put into service in the Union meet the relevant requirements or obligations laid down in this Regulation.
Agus iarraidh ar chaighdeánú á heisiúint aige chuig eagraíochtaí Eorpacha um chaighdeánú, sonróidh an Coimisiún nach mór na caighdeáin a bheith soiléir, comhsheasmhach, lena n-áirítear leis na caighdeáin a forbraíodh sna hearnálacha éagsúla le haghaidh táirgí a chumhdaítear le reachtaíocht chomhchuibhithe reatha an Aontais a liostaítear in Iarscríbhinn I, agus é mar aidhm leo a áirithiú go gcomhlíonann córais intleachta saorga ardriosca nó samhlacha intleachta saorga ilchuspóireacha a chuirtear ar an margadh nó a chuirtear i mbun seirbhíse san Aontas na ceanglais ábhartha nó oibleagáidí a leagtar síos sa Rialachán seo.
The Commission shall request the European standardisation organisations to provide evidence of their best efforts to fulfil the objectives referred to in the first and the second subparagraph of this paragraph in accordance with Article 24 of Regulation (EU) No 1025/2012.
Iarrfaidh an Coimisiún ar na heagraíochtaí Eorpacha um chaighdeánú fianaise a sholáthar maidir lena ndícheall na cuspóirí dá dtagraítear sa chéad fhomhír agus sa dara fomhír den mhír seo a chomhlíonadh i gcomhréir le hAirteagal 24 de Rialachán (AE) Uimh. 1025/2012.
3. The participants in the standardisation process shall seek to promote investment and innovation in AI, including through increasing legal certainty, as well as the competitiveness and growth of the Union market, to contribute to strengthening global cooperation on standardisation and taking into account existing international standards in the field of AI that are consistent with Union values, fundamental rights and interests, and to enhance multi-stakeholder governance ensuring a balanced representation of interests and the effective participation of all relevant stakeholders in accordance with Articles 5, 6, and 7 of Regulation (EU) No 1025/2012.
3. Féachfaidh rannpháirtithe an phróisis caighdeánaithe le hinfheistíocht agus nuálaíocht san intleacht shaorga a chur chun cinn, lena n-áirítear trí chinnteacht dhlíthiúil a mhéadú, chomh maith le hiomaíochas agus fás mhargadh an Aontais, agus le rannchuidiú le comhar domhanda maidir le caighdeánú a neartú agus caighdeáin idirnáisiúnta atá ann cheana i réimse na hintleachta saorga atá comhsheasmhach le luachanna, cearta bunúsacha agus leasanna an Aontais á gcur san áireamh, agus leis an rialachas il-gheallsealbhóra lena n-áirithítear ionadaíocht chothrom ar leasanna agus rannpháirtíocht éifeachtach na ngeallsealbhóirí ábhartha uile i gcomhréir le hAirteagail 5, 6, agus 7 de Rialachán (AE) Uimh. 1025/2012 a fheabhsú.
1. The Commission may adopt implementing acts establishing common specifications for the requirements set out in Section 2 of this Chapter or, as applicable, for the obligations set out in Sections 2 and 3 of Chapter V where the following conditions have been fulfilled:
1. Féadfaidh an Coimisiún gníomhartha cur chun feidhme a ghlacadh lena mbunaítear sonraíochtaí coiteanna le haghaidh na gceanglas a leagtar amach i Roinn 2 den Chaibidil seo nó, de réir mar is infheidhme, le haghaidh na n-oibleagáidí a leagtar amach i Ranna 2 agus 3 de Chaibidil V i gcás inar comhlíonadh na coinníollacha seo a leanas:
(a)
the Commission has requested, pursuant to Article 10(1) of Regulation (EU) No 1025/2012, one or more European standardisation organisations to draft a harmonised standard for the requirements set out in Section 2 of this Chapter, or, as applicable, for the obligations set out in Sections 2 and 3 of Chapter V, and:
(a)
gur iarr an Coimisiún, de bhun Airteagal 10(1) de Rialachán (AE) Uimh. 1025/2012, ar cheann amháin nó níos mó d’eagraíochtaí Eorpacha um chaighdeánú caighdeán comhchuibhithe a dhréachtú le haghaidh na gceanglas a leagtar amach i Roinn 2 den Chaibidil seo, nó, de réir mar is infheidhme, le haghaidh na n-oibleagáidí a leagtar amach i Ranna 2 agus 3 de Chaibidil V, agus:
(i)
the request has not been accepted by any of the European standardisation organisations; or
(i)
nár ghlac aon cheann de na heagraíochtaí Eorpacha um chaighdeánú leis an iarraidh; nó
(ii)
the harmonised standards addressing that request are not delivered within the deadline set in accordance with Article 10(1) of Regulation (EU) No 1025/2012; or
(ii)
nach ndéantar na caighdeáin chomhchuibhithe lena dtugtar aghaidh ar an iarraidh sin a sholáthar laistigh den sprioc-am a leagtar síos i gcomhréir le hAirteagal 10(1) de Rialachán (AE) Uimh. 1025/2012; nó
(iii)
the relevant harmonised standards insufficiently address fundamental rights concerns; or
(iii)
nach dtugtar aghaidh go leordhóthanach leis na caighdeáin chomhchuibhithe ábhartha ar ábhair imní maidir le cearta bunúsacha; nó
(iv)
the harmonised standards do not comply with the request; and
(iv)
nach gcomhlíonann na caighdeáin chomhchuibhithe an iarraidh; agus
(b)
no reference to harmonised standards covering the requirements referred to in Section 2 of this Chapter or, as applicable, the obligations referred to in Sections 2 and 3 of Chapter V has been published in the Official Journal of the European Union in accordance with Regulation (EU) No 1025/2012, and no such reference is expected to be published within a reasonable period.
(b)
nár foilsíodh aon tagairt do chaighdeáin chomhchuibhithe lena gcumhdaítear na ceanglais dá dtagraítear i Roinn 2 den Chaibidil seo, nó, de réir mar is infheidhme, na hoibleagáidí dá dtagraítear i Ranna 2 agus 3 de Chaibidil V in Iris Oifigiúil an Aontais Eorpaigh i gcomhréir le Rialachán (AE) Uimh. 1025/2012 agus nach meastar go bhfoilseofar aon tagairt den sórt sin laistigh de thréimhse réasúnta.
When drafting the common specifications, the Commission shall consult the advisory forum referred to in Article 67.
Agus na sonraíochtaí coiteanna á ndréachtú aige, rachaidh an Coimisiún i gcomhairle leis an bhfóram comhairleach dá dtagraítear in Airteagal 67.
The implementing acts referred to in the first subparagraph of this paragraph shall be adopted in accordance with the examination procedure referred to in Article 98(2).
Déanfar na gníomhartha cur chun feidhme dá dtagraítear sa chéad fhomhír den mhír seo a ghlacadh i gcomhréir leis an nós imeachta scrúdúcháin dá dtagraítear in Airteagal 98(2).
2. Before preparing a draft implementing act, the Commission shall inform the committee referred to in Article 22 of Regulation (EU) No 1025/2012 that it considers the conditions laid down in paragraph 1 of this Article to be fulfilled.
2. Sula n-ullmhóidh an Coimisiún dréachtghníomh cur chun feidhme, cuirfidh sé in iúl don choiste dá dtagraítear in Airteagal 22 de Rialachán (AE) Uimh. 1025/2012 go measann sé na coinníollacha atá leagtha síos i mír 1 den Airteagal seo a bheith comhlíonta.
3. High-risk AI systems or general-purpose AI models which are in conformity with the common specifications referred to in paragraph 1, or parts of those specifications, shall be presumed to be in conformity with the requirements set out in Section 2 of this Chapter or, as applicable, to comply with the obligations referred to in Sections 2 and 3 of Chapter V, to the extent those common specifications cover those requirements or those obligations.
3. Maidir le córais intleachta saorga ardriosca nó samhlacha intleachta saorga ilchuspóireacha atá i gcomhréir leis na sonraíochtaí coiteanna dá dtagraítear i mír 1, nó le codanna de na sonraíochtaí sin, toimhdeofar iad a bheith i gcomhréir leis na ceanglais a leagtar amach i Roinn 2 den Chaibidil seo nó, de réir mar is infheidhme, i gcomhréir leis na hoibleagáidí dá dtagraítear i Ranna 2 agus 3 de Chaibidil V, a mhéid a chumhdaítear na ceanglais nó na hoibleagáidí sin leis na sonraíochtaí coiteanna sin.
4. Where a harmonised standard is adopted by a European standardisation organisation and proposed to the Commission for the publication of its reference in the Official Journal of the European Union, the Commission shall assess the harmonised standard in accordance with Regulation (EU) No 1025/2012. When reference to a harmonised standard is published in the Official Journal of the European Union, the Commission shall repeal the implementing acts referred to in paragraph 1, or parts thereof which cover the same requirements set out in Section 2 of this Chapter or, as applicable, the same obligations set out in Sections 2 and 3 of Chapter V.
4. I gcás ina nglacfaidh eagraíocht Eorpach um chaighdeánú caighdeán comhchuibhithe agus ina molfaidh sí don Choimisiún a thagairt a fhoilsiú in Iris Oifigiúil an Aontais Eorpaigh, déanfaidh an Coimisiún measúnú ar an gcaighdeán comhchuibhithe i gcomhréir le Rialachán (AE) Uimh. 1025/2012. Nuair a fhoilseofar tagairt do chaighdeán comhchuibhithe in Iris Oifigiúil an Aontais Eorpaigh, aisghairfidh an Coimisiún na gníomhartha cur chun feidhme dá dtagraítear i mír 1, nó codanna díobh, lena gcumhdaítear na ceanglais chéanna a leagtar amach i Roinn 2 den Chaibidil seo nó, de réir mar is infheidhme, na hoibleagáidí céanna a leagtar amach i Ranna 2 agus 3 de Chaibidil V.
5. Where providers of high-risk AI systems or general-purpose AI models do not comply with the common specifications referred to in paragraph 1, they shall duly justify that they have adopted technical solutions that meet the requirements referred to in Section 2 of this Chapter or, as applicable, comply with the obligations set out in Sections 2 and 3 of Chapter V to a level at least equivalent thereto.
5. I gcás nach gcomhlíonann soláthraithe córas intleachta saorga ardriosca nó samhlacha intleachta saorga ilchuspóireacha na sonraíochtaí coiteanna dá dtagraítear i mír 1, léireoidh siad go cuí gur ghlac siad réitigh theicniúla lena gcomhlíontar na ceanglais dá dtagraítear i Roinn 2 den Chaibidil seo nó, de réir mar is infheidhme, na hoibleagáidí a leagtar amach i Ranna 2 agus 3 de Chaibidil V go leibhéal atá coibhéiseach leo ar a laghad.
6. Where a Member State considers that a common specification does not entirely meet the requirements set out in Section 2 or, as applicable, comply with obligations set out in Sections 2 and 3 of Chapter V, it shall inform the Commission thereof with a detailed explanation. The Commission shall assess that information and, if appropriate, amend the implementing act establishing the common specification concerned.
6. I gcás ina measann Ballstát nach gcomhlíonann sonraíocht choiteann na ceanglais a leagtar amach i Roinn 2 nó, de réir mar is infheidhme, na hoibleagáidí a leagtar amach i Ranna 2 agus 3 de Chaibidil V ina n-iomláine, cuirfidh sé an Coimisiún ar an eolas faoi sin le míniú mionsonraithe. Déanfaidh an Coimisiún measúnú ar an eolas sin agus, i gcás inarb iomchuí, déanfaidh sé an gníomh cur chun feidhme lena mbunaítear an tsonraíocht choiteann lena mbaineann a leasú.
1. For high-risk AI systems listed in point 1 of Annex III, where, in demonstrating the compliance of a high-risk AI system with the requirements set out in Section 2, the provider has applied harmonised standards referred to in Article 40, or, where applicable, common specifications referred to in Article 41, the provider shall opt for one of the following conformity assessment procedures based on:
1. Maidir le córais intleachta saorga ardriosca a liostaítear i bpointe 1 d’Iarscríbhinn III, más rud é, agus é á léiriú go gcomhlíonann córas intleachta saorga ardriosca na ceanglais a leagtar amach i Roinn 2, gur chuir an soláthraí i bhfeidhm na caighdeáin chomhchuibhithe dá dtagraítear in Airteagal 40, nó, i gcás inarb infheidhme, na sonraíochtaí coiteanna dá dtagraítear in Airteagal 41, roghnóidh an soláthraí ceann de na nósanna imeachta um measúnú comhréireachta seo a leanas bunaithe ar:
(a)
the internal control referred to in Annex VI; or
(a)
an rialú inmheánach dá dtagraítear in Iarscríbhinn VI; nó
(b)
the assessment of the quality management system and the assessment of the technical documentation, with the involvement of a notified body, referred to in Annex VII.
(b)
an measúnú ar an gcóras bainistíochta cáilíochta agus ar an measúnú ar an doiciméadacht theicniúil, i gcomhar le comhlacht faoina dtugtar fógra, dá dtagraítear in Iarscríbhinn VII.
In demonstrating the compliance of a high-risk AI system with the requirements set out in Section 2, the provider shall follow the conformity assessment procedure set out in Annex VII where:
Agus é á léiriú go gcomhlíonann córas intleachta saorga ardriosca na ceanglais a leagtar amach i Roinn 2, leanfaidh an soláthraí an nós imeachta um measúnú comhréireachta a leagtar amach in Iarscríbhinn VII sa chás:
(a)
harmonised standards referred to in Article 40 do not exist, and common specifications referred to in Article 41 are not available;
(a)
nach ann do chaighdeáin chomhchuibhithe dá dtagraítear in Airteagal 40, agus nach bhfuil na sonraíochtaí coiteanna dá dtagraítear in Airteagal 41 ar fáil;
(b)
the provider has not applied, or has applied only part of, the harmonised standard;
(b)
nár chuir an soláthraí an caighdeán comhchuibhithe i bhfeidhm nó nár chuir sé ach cuid den chaighdeán comhchuibhithe i bhfeidhm;
(c)
the common specifications referred to in point (a) exist, but the provider has not applied them;
(c)
gur ann do na sonraíochtaí coiteanna dá dtagraítear i bpointe (a), ach nár chuir an soláthraí i bhfeidhm iad;
(d)
one or more of the harmonised standards referred to in point (a) has been published with a restriction, and only on the part of the standard that was restricted.
(d)
gur foilsíodh ceann amháin nó níos mó de na caighdeáin chomhchuibhithe dá dtagraítear i bpointe (a) maille le srian agus ar an gcuid sin amháin den chaighdeán ar cuireadh srian uirthi.
For the purposes of the conformity assessment procedure referred to in Annex VII, the provider may choose any of the notified bodies. However, where the high-risk AI system is intended to be put into service by law enforcement, immigration or asylum authorities or by Union institutions, bodies, offices or agencies, the market surveillance authority referred to in Article 74(8) or (9), as applicable, shall act as a notified body.
Chun críocha an nós imeachta um measúnú comhréireachta dá dtagraítear in Iarscríbhinn VII, féadfaidh an soláthraí aon cheann de na comhlachtaí faoina dtugtar fógra a roghnú. Mar sin féin, i gcás ina mbeartaítear an córas intleachta saorga ardriosca a chur i mbun seirbhíse ag údaráis forfheidhmithe dlí, údaráis inimirce nó údaráis tearmainn, nó ag institiúidí, comhlachtaí, oifigí nó gníomhaireachtaí an Aontais, gníomhóidh an t-údarás faireachais margaidh dá dtagraítear in Airteagal 74(8) nó (9), de réir mar is infheidhme, mar chomhlacht faoina dtugtar fógra.
2. For high-risk AI systems referred to in points 2 to 8 of Annex III, providers shall follow the conformity assessment procedure based on internal control as referred to in Annex VI, which does not provide for the involvement of a notified body.
2. Maidir le córais intleachta saorga ardriosca dá dtagraítear i bpointí 2 go pointe 8 d’Iarscríbhinn III, leanfaidh soláthraithe an nós imeachta um measúnú comhréireachta bunaithe ar rialú inmheánach dá dtagraítear in Iarscríbhinn VI, nach ndéantar foráil leis maidir le rannpháirtíocht comhlachta faoina dtugtar fógra.
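Read together, paragraphs 1 and 2 allocate the two Annex procedures for Annex III systems according to a small set of conditions. The sketch below is a non-normative summary of that allocation; the function, enum and parameter names are illustrative shorthand of my own, and only the text of the Regulation is authoritative.

```python
from enum import Enum

class Procedure(Enum):
    ANNEX_VI = "internal control (Annex VI)"
    ANNEX_VII = "quality management system and technical documentation, with a notified body (Annex VII)"

def applicable_procedure(annex_iii_point: int,
                         standards_or_common_specs_fully_applied: bool,
                         provider_opts_for_notified_body: bool = False) -> Procedure:
    """Illustrative summary of paragraphs 1 and 2 for Annex III high-risk AI systems."""
    if 2 <= annex_iii_point <= 8:
        # Paragraph 2: systems under points 2 to 8 of Annex III follow internal
        # control, with no involvement of a notified body.
        return Procedure.ANNEX_VI
    # Point 1 of Annex III (paragraph 1): where harmonised standards or, where
    # applicable, common specifications have been applied in full, the provider
    # may opt for either procedure.
    if standards_or_common_specs_fully_applied:
        return Procedure.ANNEX_VII if provider_opts_for_notified_body else Procedure.ANNEX_VI
    # Otherwise (standards absent, not applied, applied only in part, or
    # published with a relevant restriction), the Annex VII procedure applies.
    return Procedure.ANNEX_VII

# Example: a point 1 system whose provider applied the harmonised standards in
# full and chooses internal control.
print(applicable_procedure(1, True))
```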
3. For high-risk AI systems covered by the Union harmonisation legislation listed in Section A of Annex I, the provider shall follow the relevant conformity assessment procedure as required under those legal acts. The requirements set out in Section 2 of this Chapter shall apply to those high-risk AI systems and shall be part of that assessment. Points 4.3., 4.4., 4.5. and the fifth paragraph of point 4.6 of Annex VII shall also apply.
3. Maidir le córais intleachta saorga ardriosca atá faoi chumhdach reachtaíocht chomhchuibhithe an Aontais a liostaítear i Roinn A d’Iarscríbhinn I, leanfaidh an soláthraí an nós imeachta ábhartha um measúnú comhréireachta de réir mar a cheanglaítear faoi na gníomhartha dlí sin. Beidh feidhm ag na ceanglais a leagtar amach i Roinn 2 den Chaibidil seo maidir leis na córais intleachta saorga ardriosca sin agus beidh siad mar chuid den mheasúnú sin. Beidh feidhm freisin ag pointí 4.3., 4.4., 4.5. agus ag an gcúigiú mír de phointe 4.6 d’Iarscríbhinn VII.
For the purposes of that assessment, notified bodies which have been notified under those legal acts shall be entitled to control the conformity of the high-risk AI systems with the requirements set out in Section 2, provided that the compliance of those notified bodies with requirements laid down in Article 31(4), (5), (10) and (11) has been assessed in the context of the notification procedure under those legal acts.
Chun críocha an mheasúnaithe sin, na comhlachtaí faoina dtugtar fógra ar tugadh fógra fúthu faoi na gníomhartha dlí sin, beidh siad i dteideal rialú a dhéanamh ar chomhréireacht na gcóras intleachta saorga ardriosca leis na ceanglais a leagtar amach i Roinn 2, ar choinníoll go ndearnadh measúnú ar chomhlíontacht na gcomhlachtaí faoina dtugtar fógra leis na ceanglais a leagtar síos in Airteagal 31(4), (5), (10) agus (11) i gcomhthéacs an nós imeachta um fhógra a thabhairt faoi na gníomhartha dlí sin.
Where a legal act listed in Section A of Annex I enables the product manufacturer to opt out from a third-party conformity assessment, provided that that manufacturer has applied all harmonised standards covering all the relevant requirements, that manufacturer may use that option only if it has also applied harmonised standards or, where applicable, common specifications referred to in Article 41, covering all requirements set out in Section 2 of this Chapter.
I gcás ina gcuireann gníomh dlí a liostaítear i Roinn A d’Iarscríbhinn I ar chumas mhonaróir an táirge a roghnú gan a bheith páirteach i measúnú comhréireachta tríú páirtí, ar choinníoll gur chuir an monaróir sin na caighdeáin chomhchuibhithe uile i bhfeidhm lena gcumhdaítear na ceanglais ábhartha uile, ní fhéadfaidh an monaróir sin úsáid a bhaint as an rogha sin ach amháin má tá caighdeáin chomhchuibhithe nó, i gcás inarb infheidhme, na sonraíochtaí coiteanna dá dtagraítear in Airteagal 41 curtha i bhfeidhm aige freisin, lena gcumhdaítear na ceanglais uile a leagtar amach i Roinn 2 den Chaibidil seo.
4. High-risk AI systems that have already been subject to a conformity assessment procedure shall undergo a new conformity assessment procedure in the event of a substantial modification, regardless of whether the modified system is intended to be further distributed or continues to be used by the current deployer.
4. Cuirfear nós imeachta nua um measúnú comhréireachta i bhfeidhm maidir le córais intleachta saorga ardriosca, ar córais iad a bhí faoi réir nós imeachta um measúnú comhréireachta cheana, i gcás modhnú substaintiúil, gan beann ar an mbeartaítear an córas modhnaithe a dháileadh a thuilleadh nó an leanann an t-úsáideoir gairmiúil reatha d’úsáid a bhaint as.
For high-risk AI systems that continue to learn after being placed on the market or put into service, changes to the high-risk AI system and its performance that have been pre-determined by the provider at the moment of the initial conformity assessment and are part of the information contained in the technical documentation referred to in point 2(f) of Annex IV, shall not constitute a substantial modification.
Maidir le córais intleachta saorga ardriosca a leanann de bheith ag foghlaim tar éis iad a bheith curtha ar an margadh nó curtha i mbun seirbhíse, ní modhnú substaintiúil a bheidh in athruithe ar an gcóras intleachta saorga ardriosca agus ar a fheidhmíocht, athruithe arna réamhchinntiú ag an soláthraí tráth an mheasúnaithe comhréireachta tosaigh agus ar cuid iad den fhaisnéis atá sa doiciméadacht theicniúil dá dtagraítear i bpointe 2(f) d’Iarscríbhinn IV.
5. The Commission is empowered to adopt delegated acts in accordance with Article 97 in order to amend Annexes VI and VII by updating them in light of technical progress.
5. Tugtar de chumhacht don Choimisiún gníomhartha tarmligthe a ghlacadh i gcomhréir le hAirteagal 97 chun Iarscríbhinní VI agus VII a leasú trína dtabhairt cothrom le dáta i bhfianaise an dul chun cinn theicniúil.
6. The Commission is empowered to adopt delegated acts in accordance with Article 97 in order to amend paragraphs 1 and 2 of this Article in order to subject high-risk AI systems referred to in points 2 to 8 of Annex III to the conformity assessment procedure referred to in Annex VII or parts thereof. The Commission shall adopt such delegated acts taking into account the effectiveness of the conformity assessment procedure based on internal control referred to in Annex VI in preventing or minimising the risks to health and safety and protection of fundamental rights posed by such systems, as well as the availability of adequate capacities and resources among notified bodies.
6. Tugtar de chumhacht don Choimisiún gníomhartha tarmligthe a ghlacadh i gcomhréir le hAirteagal 97 lena leasaítear míreanna 1 agus 2 den Airteagal seo chun na córais intleachta saorga ardriosca dá dtagraítear i bpointe 2 go pointe 8 d’Iarscríbhinn III a chur faoi réir an nós imeachta um measúnú comhréireachta dá dtagraítear in Iarscríbhinn VII nó faoi réir codanna de. Glacfaidh an Coimisiún na gníomhartha tarmligthe sin, agus éifeachtacht an nós imeachta um measúnú comhréireachta á cur san áireamh aige bunaithe ar an rialú inmheánach dá dtagraítear in Iarscríbhinn VI maidir leis na rioscaí don tsláinte, don tsábháilteacht, agus do chosaint na gceart bunúsach a thugtar leis na córais sin, a chosc nó a íoslaghdú chomh maith le hinfhaighteacht inniúlachtaí leordhóthanacha na gcomhlachtaí faoina dtugtar fógra.
1. Certificates issued by notified bodies in accordance with Annex VII shall be drawn up in a language which can be easily understood by the relevant authorities in the Member State in which the notified body is established.
1. Maidir le deimhnithe arna n-eisiúint ag comhlachtaí faoina dtugtar fógra i gcomhréir le hIarscríbhinn VII, déanfar iad a tharraingt suas i dteanga is éasca do na húdaráis ábhartha sa Bhallstát ina bhfuil an comhlacht faoina dtugtar fógra bunaithe a thuiscint.
2. Certificates shall be valid for the period they indicate, which shall not exceed five years for AI systems covered by Annex I, and four years for AI systems covered by Annex III. At the request of the provider, the validity of a certificate may be extended for further periods, each not exceeding five years for AI systems covered by Annex I, and four years for AI systems covered by Annex III, based on a re-assessment in accordance with the applicable conformity assessment procedures. Any supplement to a certificate shall remain valid, provided that the certificate which it supplements is valid.
2. Beidh na deimhnithe bailí ar feadh na tréimhse a léirítear orthu, tréimhse nach mairfidh níos faide ná 5 bliana i gcás na gcóras intleachta saorga a chumhdaítear le hIarscríbhinn I, agus 4 bliana i gcás na gcóras intleachta saorga a chumhdaítear le hIarscríbhinn III. Arna iarraidh sin don soláthraí, féadfar bailíocht an deimhnithe a shíneadh ar feadh tréimhsí breise, nach mbeidh aon cheann acu níos faide ná 5 bliana i gcás na gcóras intleachta saorga a chumhdaítear faoi Iarscríbhinn I, agus 4 bliana i gcás na gcóras intleachta saorga a chumhdaítear faoi Iarscríbhinn III, bunaithe ar athmheasúnú i gcomhréir leis na nósanna imeachta um measúnú comhréireachta is infheidhme. Beidh aon fhorlíonadh ar dheimhniú bailí ar choinníoll gur bailí don deimhniú a fhorlíonann sé.
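Paragraph 2 caps the validity of a certificate, and of each extension granted after re-assessment, at five years for systems covered by Annex I and four years for systems covered by Annex III. A minimal sketch of that ceiling, with invented names and an arbitrary example date:

    from datetime import date

    # Illustrative only: the validity ceilings of Article 44(2); each extension
    # granted after re-assessment is again capped at the same maximum.
    MAX_VALIDITY_YEARS = {"Annex I": 5, "Annex III": 4}

    def latest_expiry(issued_on: date, covered_by: str) -> date:
        years = MAX_VALIDITY_YEARS[covered_by]
        return issued_on.replace(year=issued_on.year + years)

    print(latest_expiry(date(2026, 1, 15), "Annex III"))  # -> 2030-01-15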
3. Where a notified body finds that an AI system no longer meets the requirements set out in Section 2, it shall, taking account of the principle of proportionality, suspend or withdraw the certificate issued or impose restrictions on it, unless compliance with those requirements is ensured by appropriate corrective action taken by the provider of the system within an appropriate deadline set by the notified body. The notified body shall give reasons for its decision.
3. Má chinneann comhlacht faoina dtugtar fógra nach bhfuil na ceanglais a leagtar síos i Roinn 2 á gcomhlíonadh ag córas intleachta saorga a thuilleadh, déanfaidh sé an deimhniú a eisíodh a chur ar fionraí, a tharraingt siar nó cuirfidh sé srian air, agus prionsabal na comhréireachta á chur san áireamh, ach amháin má dhéantar comhlíonadh na gceanglas sin a áirithiú le gníomhaíocht cheartaitheach iomchuí a dhéanfaidh an soláthraí laistigh de sprioc-am a leagfaidh an comhlacht faoina dtugtar fógra síos. Tabharfaidh an comhlacht faoina dtugtar fógra na cúiseanna lena chinneadh.
An appeal procedure against decisions of the notified bodies, including on conformity certificates issued, shall be available.
Beidh nós imeachta achomhairc ar fáil i gcoinne chinntí na gcomhlachtaí faoina dtugtar fógra, lena n-áirítear achomhairc maidir le deimhnithe comhréireachta a eisítear.
Derogation from conformity assessment procedure
Maolú ar nós imeachta um measúnú comhréireachta
1. By way of derogation from Article 43 and upon a duly justified request, any market surveillance authority may authorise the placing on the market or the putting into service of specific high-risk AI systems within the territory of the Member State concerned, for exceptional reasons of public security or the protection of life and health of persons, environmental protection or the protection of key industrial and infrastructural assets. That authorisation shall be for a limited period while the necessary conformity assessment procedures are being carried out, taking into account the exceptional reasons justifying the derogation. The completion of those procedures shall be undertaken without undue delay.
1. De mhaolú ar Airteagal 43 agus ar iarraidh a bhfuil údar cuí léi, féadfaidh aon údarás faireachais margaidh a údarú córais intleachta saorga ardriosca shonracha a chur ar an margadh nó a chur i mbun seirbhíse laistigh de chríoch an Bhallstáit lena mbaineann, ar chúiseanna eisceachtúla a bhaineann leis an tslándáil phoiblí nó le cosaint beatha agus sláinte daoine, le cosaint an chomhshaoil nó le príomhshócmhainní tionsclaíocha agus bonneagair a chosaint. Is ar feadh tréimhse theoranta a bheidh an t-údarú sin, fad a bheidh na nósanna imeachta um measúnú comhréireachta is gá á gcur i gcrích, agus na cúiseanna eisceachtúla lena dtugtar údar maith leis an maolú á gcur san áireamh. Tabharfar faoi na nósanna imeachta sin gan moill mhíchuí.
2. In a duly justified situation of urgency for exceptional reasons of public security or in the case of a specific, substantial and imminent threat to the life or physical safety of natural persons, law-enforcement authorities or civil protection authorities may put a specific high-risk AI system into service without the authorisation referred to in paragraph 1, provided that such authorisation is requested during or after the use without undue delay. If the authorisation referred to in paragraph 1 is refused, the use of the high-risk AI system shall be stopped with immediate effect and all the results and outputs of such use shall be immediately discarded.
2. I gcás práinne a bhfuil údar cuí léi ar chúiseanna eisceachtúla a bhaineann leis an tslándáil phoiblí nó i gcás garbhagairt shonrach, shuntasach agus láithreach ar bheatha nó ar shábháilteacht fhisiceach daoine nádúrtha, féadfaidh údaráis forfheidhmithe dlí nó údaráis cosanta sibhialta córas intleachta saorga ardriosca sonrach a chur i mbun seirbhíse gan an t-údarú dá dtagraítear i mír 1 ar choinníoll go n-iarrtar an t-údarú sin le linn na húsáide nó ina diaidh gan moill mhíchuí. Má dhiúltaítear an t-údarú dá dtagraítear i mír 1, cuirfear deireadh le húsáid an chórais intleachta saorga ardriosca le héifeacht láithreach, agus déanfar torthaí agus aschuir uile na húsáide sin a dhiúscairt láithreach.
3. The authorisation referred to in paragraph 1 shall be issued only if the market surveillance authority concludes that the high-risk AI system complies with the requirements of Section 2. The market surveillance authority shall inform the Commission and the other Member States of any authorisation issued pursuant to paragraphs 1 and 2. This obligation shall not cover sensitive operational data in relation to the activities of law-enforcement authorities.
3. Ní eiseofar an t-údarú dá dtagraítear i mír 1 ach amháin má chinneann an t-údarás faireachais margaidh go gcomhlíonann an córas intleachta saorga ardriosca ceanglais Roinn 2. Tabharfaidh an t-údarás faireachais margaidh fógra don Choimisiún agus do na Ballstáit eile faoi aon údarú arna eisiúint de bhun mhíreanna 1 agus 2. Ní chumhdófar leis an oibleagáid sin sonraí oibríochtúla íogaire maidir le gníomhaíochtaí údaráis forfheidhmithe dlí.
4. Where, within 15 calendar days of receipt of the information referred to in paragraph 3, no objection has been raised by either a Member State or the Commission in respect of an authorisation issued by a market surveillance authority of a Member State in accordance with paragraph 1, that authorisation shall be deemed justified.
4. Más rud é, laistigh de 15 lá féilire tar éis dóibh an fhaisnéis dá dtagraítear i mír 3 a fháil, nach ndéanann Ballstát ná an Coimisiún agóid maidir le húdarú arna eisiúint ag údarás faireachais margaidh i mBallstát i gcomhréir le mír 1, measfar bonn cirt a bheith leis an údarú sin.
5. Where, within 15 calendar days of receipt of the notification referred to in paragraph 3, objections are raised by a Member State against an authorisation issued by a market surveillance authority of another Member State, or where the Commission considers the authorisation to be contrary to Union law, or the conclusion of the Member States regarding the compliance of the system as referred to in paragraph 3 to be unfounded, the Commission shall, without delay, enter into consultations with the relevant Member State. The operators concerned shall be consulted and have the possibility to present their views. Having regard thereto, the Commission shall decide whether the authorisation is justified. The Commission shall address its decision to the Member State concerned and to the relevant operators.
5. Más rud é, laistigh de 15 lá féilire tar éis dóibh an fógra dá dtagraítear i mír 3 a fháil, go ndéanann Ballstát agóidí i gcoinne údarú arna eisiúint ag údarás faireachais margaidh i mBallstát eile, nó más rud é go measann an Coimisiún go bhfuil an t-údarú contrártha le dlí an Aontais nó le conclúid na mBallstát maidir le comhlíontacht an chórais dá dtagraítear i mír 3 a bheith gan bhunús, rachaidh an Coimisiún i gcomhairle leis an mBallstát ábhartha gan mhoill. Rachfar i gcomhairle leis na hoibreoirí lena mbaineann agus beidh an deis acu a dtuairimí a chur in iúl. Ag féachaint don mhéid sin, cinnfidh an Coimisiún cé acu atá bonn cirt leis an údarú nó nach bhfuil. Díreoidh an Coimisiún a chinneadh chuig an mBallstát lena mbaineann agus chuig na hoibreoirí lena mbaineann.
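Paragraphs 4 and 5 hinge on a single 15-calendar-day window running from receipt of the information or notification referred to in paragraph 3. The sketch below illustrates that window only; it deliberately ignores the Union rules on how periods are computed (for instance how the day of receipt counts), and all names in it are invented.

    from datetime import date, timedelta

    # Illustrative sketch of the 15-calendar-day window in Article 46(4) and (5).
    def last_day_to_object(received_on: date) -> date:
        return received_on + timedelta(days=15)

    def deemed_justified(received_on: date, today: date, objection_raised: bool) -> bool:
        return (not objection_raised) and today > last_day_to_object(received_on)

    print(last_day_to_object(date(2026, 3, 2)))  # -> 2026-03-17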
6. Where the Commission considers the authorisation unjustified, it shall be withdrawn by the market surveillance authority of the Member State concerned.
6. I gcás ina measann an Coimisiún nach bhfuil bonn cirt leis an údarú, déanfaidh údarás faireachais margaidh an Bhallstáit lena mbaineann é a aistarraingt.
7. For high-risk AI systems related to products covered by Union harmonisation legislation listed in Section A of Annex I, only the derogations from the conformity assessment established in that Union harmonisation legislation shall apply.
7. Maidir le córais intleachta saorga ardriosca a bhaineann le táirgí a chumhdaítear le reachtaíocht chomhchuibhithe an Aontais a liostaítear i Roinn A d’Iarscríbhinn I, ní bheidh feidhm ach ag na maoluithe ar an measúnú comhréireachta a bhunaítear i reachtaíocht chomhchuibhithe sin an Aontais.
EU declaration of conformity
Dearbhú comhréireachta AE
1. The provider shall draw up a written machine-readable, physical or electronically signed EU declaration of conformity for each high-risk AI system, and keep it at the disposal of the national competent authorities for 10 years after the high-risk AI system has been placed on the market or put into service. The EU declaration of conformity shall identify the high-risk AI system for which it has been drawn up. A copy of the EU declaration of conformity shall be submitted to the relevant national competent authorities upon request.
1. Déanfaidh an soláthraí dearbhú comhréireachta AE i scríbhinn atá inléite ag meaisín agus is féidir a shíniú i scríbhinn nó go leictreonach a tharraingt suas le haghaidh gach córais intleachta saorga ardriosca agus coimeádfaidh sé ar fáil do na húdaráis inniúla náisiúnta é go ceann 10 mbliana tar éis an córas intleachta saorga ardriosca a chur ar an margadh nó i mbun seirbhíse. Leis an dearbhú comhréireachta AE, sainaithneofar an córas intleachta saorga ardriosca dá bhfuil sé tarraingthe suas. Déanfar cóip den dearbhú comhréireachta AE a chur faoi bhráid na n-údarás inniúil náisiúnta ábhartha arna iarraidh sin.
2. The EU declaration of conformity shall state that the high-risk AI system concerned meets the requirements set out in Section 2. The EU declaration of conformity shall contain the information set out in Annex V, and shall be translated into a language that can be easily understood by the national competent authorities of the Member States in which the high-risk AI system is placed on the market or made available.
2. Sonrófar sa dearbhú comhréireachta AE go gcomhlíonann an córas intleachta saorga ardriosca lena mbaineann na ceanglais a leagtar amach i Roinn 2. Beidh sa dearbhú comhréireachta AE an fhaisnéis a leagtar amach in Iarscríbhinn V agus aistreofar é go dtí teanga is éasca d’údaráis inniúla náisiúnta na mBallstát ina gcuirfear an córas intleachta saorga ardriosca ar an margadh nó ar fáil a thuiscint.
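Since paragraph 1 allows a machine-readable declaration and paragraph 2 ties its content to Annex V, a provider might serialise the declaration in a structured format. The example below is purely illustrative: the field names are invented and do not reproduce Annex V, which alone determines the information the declaration must contain.

    import json
    from datetime import date

    # Hypothetical machine-readable EU declaration of conformity (field names are
    # illustrative placeholders; Annex V lists the legally required content).
    declaration = {
        "ai_system": {"name": "ExampleRiskScorer", "version": "2.1.0"},
        "provider": "Example Provider Ltd",
        "statement": "The high-risk AI system meets the requirements set out in Section 2.",
        "language_versions": ["en", "ga"],
        "issued_on": date.today().isoformat(),
    }

    # Kept at the disposal of the national competent authorities for 10 years.
    with open("eu_declaration_of_conformity.json", "w", encoding="utf-8") as f:
        json.dump(declaration, f, ensure_ascii=False, indent=2)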
3. Where high-risk AI systems are subject to other Union harmonisation legislation which also requires an EU declaration of conformity, a single EU declaration of conformity shall be drawn up in respect of all Union law applicable to the high-risk AI system. The declaration shall contain all the information required to identify the Union harmonisation legislation to which the declaration relates.
3. I gcás ina bhfuil córais intleachta saorga ardriosca faoi réir reachtaíocht chomhchuibhithe eile de chuid an Aontais lena n-éilítear dearbhú comhréireachta AE freisin, déanfar dearbhú comhréireachta AE aonair a tharraingt suas i ndáil le gach dlí de chuid an Aontais is infheidhme maidir leis an gcóras intleachta saorga ardriosca. Áireofar sa dearbhú an fhaisnéis uile is gá chun reachtaíocht chomhchuibhithe an Aontais a shainaithint lena mbaineann an dearbhú.
4. By drawing up the EU declaration of conformity, the provider shall assume responsibility for compliance with the requirements set out in Section 2. The provider shall keep the EU declaration of conformity up-to-date as appropriate.
4. Agus an dearbhú comhréireachta AE á tharraingt suas aige, glacfaidh an soláthraí freagracht as comhlíonadh na gceanglas a leagtar amach i Roinn 2. Coinneoidh an soláthraí an dearbhú comhréireachta AE cothrom le dáta de réir mar is iomchuí.
5. The Commission is empowered to adopt delegated acts in accordance with Article 97 in order to amend Annex V by updating the content of the EU declaration of conformity set out in that Annex, in order to introduce elements that become necessary in light of technical progress.
5. Tugtar de chumhacht don Choimisiún gníomhartha tarmligthe a ghlacadh i gcomhréir le hAirteagal 97 chun Iarscríbhinn V a leasú chun an dearbhú comhréireachta AE a leagtar amach san Iarscríbhinn sin a thabhairt cothrom le dáta, chun gnéithe is gá i bhfianaise an dul chun cinn theicniúil a thabhairt isteach.
1. The CE marking shall be subject to the general principles set out in Article 30 of Regulation (EC) No 765/2008.
1. Beidh an mharcáil CE faoi réir na bprionsabal ginearálta a leagtar amach in Airteagal 30 de Rialachán (CE) Uimh. 765/2008.
2. For high-risk AI systems provided digitally, a digital CE marking shall be used, only if it can easily be accessed via the interface from which that system is accessed or via an easily accessible machine-readable code or other electronic means.
2. Maidir le córais intleachta saorga ardriosca a chuirtear ar fáil go digiteach, ní úsáidfear marcáil dhigiteach CE, ach amháin más féidir í a rochtain go héasca tríd an gcomhéadan óna bhfuil rochtain ar an gcóras sin, nó trí chód meaisín-inléite nó trí mhodh leictreonach eile atá inrochtana go héasca.
3. The CE marking shall be affixed visibly, legibly and indelibly for high-risk AI systems. Where that is not possible or not warranted on account of the nature of the high-risk AI system, it shall be affixed to the packaging or to the accompanying documentation, as appropriate.
3. Déanfar an mharcáil CE a ghreamú go feiceálach inléite doscriosta maidir le córais intleachta saorga ardriosca. Mura bhfuil sé sin indéanta nó mura bhfuil gá léi mar gheall ar chineál an chórais intleachta saorga ardriosca, greamófar í den phacáistíocht nó den doiciméadacht a ghabhann leis an gcóras, de réir mar is iomchuí.
4. Where applicable, the CE marking shall be followed by the identification number of the notified body responsible for the conformity assessment procedures set out in Article 43. The identification number of the notified body shall be affixed by the body itself or, under its instructions, by the provider or by the provider’s authorised representative. The identification number shall also be indicated in any promotional material which mentions that the high-risk AI system fulfils the requirements for CE marking.
4. I gcás inarb infheidhme, i ndiaidh na marcála CE, beidh uimhir aitheantais an chomhlachta faoina dtugtar fógra atá freagrach as na nósanna imeachta um measúnú comhréireachta a leagtar amach in Airteagal 43. Greamóidh an comhlacht féin faoina dtugtar fógra nó, arna ordú sin dóibh, an soláthraí nó ionadaí údaraithe an tsoláthraí uimhir aitheantais an chomhlachta faoina dtugtar fógra. Cuirfear an uimhir aitheantais in iúl freisin ar aon ábhar fógraíochta ar a luaitear go bhfuil na ceanglais maidir le marcáil CE á gcomhlíonadh ag an gcóras intleachta saorga.
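Read together, paragraphs 2 and 4 mean that a digitally provided high-risk AI system may carry a digital CE marking only where it is easy to reach from the system's interface or through a machine-readable code, accompanied, where applicable, by the identification number of the notified body. One hypothetical way to expose such a machine-readable record (structure and field names invented here; the Regulation prescribes no particular format):

    import json

    # Hypothetical machine-readable CE marking record for a digitally provided
    # high-risk AI system; purely illustrative.
    ce_marking = {
        "ce_marking": True,
        "notified_body_id": "1234",  # follows the marking where Article 48(4) applies
        "reachable_from_interface": "https://example.org/ai-system/conformity",
    }
    print(json.dumps(ce_marking, indent=2))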
5. Where high-risk AI systems are subject to other Union law which also provides for the affixing of the CE marking, the CE marking shall indicate that the high-risk AI system also fulfils the requirements of that other law.
5. I gcásanna ina bhfuil córais intleachta saorga ardriosca faoi réir dlí eile de chuid an Aontais lena ndéantar foráil freisin maidir le marcáil CE a ghreamú, léireofar le marcáil CE go gcomhlíonann an córas intleachta saorga ardriosca ceanglais an dlí eile sin freisin.
1. Before placing on the market or putting into service a high-risk AI system listed in Annex III, with the exception of high-risk AI systems referred to in point 2 of Annex III, the provider or, where applicable, the authorised representative shall register themselves and their system in the EU database referred to in Article 71.
1. Maidir le córas intleachta saorga ardriosca a liostaítear in Iarscríbhinn III, cé is moite de na córais intleachta saorga ardriosca dá dtagraítear i bpointe 2 d’Iarscríbhinn III, sula gcuirtear ar an margadh nó i mbun seirbhíse é, cláróidh an soláthraí nó, i gcás inarb infheidhme, an t-ionadaí údaraithe iad féin agus a gcóras sa bhunachar sonraí de chuid an Aontais dá dtagraítear in Airteagal 71.
2. Before placing on the market or putting into service an AI system for which the provider has concluded that it is not high-risk according to Article 6(3), that provider or, where applicable, the authorised representative shall register themselves and that system in the EU database referred to in Article 71.
2. Maidir le córas intleachta saorga arb é meas an tsoláthraí nach bhfuil ardriosca ag baint leis de réir Airteagal 6(3), sula gcuirtear ar an margadh nó i mbun seirbhíse é, cláróidh an soláthraí sin nó, i gcás inarb infheidhme, an t-ionadaí údaraithe, iad féin agus an córas sin i mbunachar sonraí an Aontais dá dtagraítear in Airteagal 71.
3. Before putting into service or using a high-risk AI system listed in Annex III, with the exception of high-risk AI systems listed in point 2 of Annex III, deployers that are public authorities, Union institutions, bodies, offices or agencies or persons acting on their behalf shall register themselves, select the system and register its use in the EU database referred to in Article 71.
3. Sula ndéanfar córas intleachta saorga ardriosca a liostaítear in Iarscríbhinn III a chur i mbun seirbhíse nó a úsáid, cé is moite de chórais intleachta saorga ardriosca a liostaítear i bpointe 2 d’Iarscríbhinn III, déanfaidh úsáideoirí gairmiúla ar údaráis phoiblí, institiúidí, comhlachtaí, oifigí agus gníomhaireachtaí de chuid an Aontais nó daoine atá ag gníomhú thar a gceann iad féin a chlárú, an córas a roghnú agus a úsáid a chlárú i mbunachar sonraí an Aontais dá dtagraítear in Airteagal 71.
4. For high-risk AI systems referred to in points 1, 6 and 7 of Annex III, in the areas of law enforcement, migration, asylum and border control management, the registration referred to in paragraphs 1, 2 and 3 of this Article shall be in a secure non-public section of the EU database referred to in Article 71 and shall include only the following information, as applicable, referred to in:
4. Maidir le córais intleachta saorga ardriosca dá dtagraítear i bpointí 1, 6 agus 7 d’Iarscríbhinn III, i réimsí fhorfheidhmiú an dlí, na himirce, an tearmainn agus an bhainistithe rialaithe teorann, beidh an clárú dá dtagraítear i míreanna 1, 2 agus 3 den Airteagal seo i gcuid neamhphoiblí shlán de bhunachar sonraí an Aontais dá dtagraítear in Airteagal 71, agus ní áireofar ann ach an fhaisnéis seo a leanas, de réir mar is infheidhme, dá dtagraítear:
(a)
Section A, points 1 to 10, of Annex VIII, with the exception of points 6, 8 and 9;
(a)
i Roinn A, pointí 1 go 10 d’Iarscríbhinn VIII, cé is moite de phointí 6, 8 agus 9;
(b)
Section B, points 1 to 5, and points 8 and 9 of Annex VIII;
(b)
i Roinn B, pointí 1 go 5, agus pointí 8 agus 9 d’Iarscríbhinn VIII;
(c)
Section C, points 1 to 3, of Annex VIII;
(c)
i Roinn C, pointí 1 go 3, d’Iarscríbhinn VIII;
(d)
points 1, 2, 3 and 5 of Annex IX.
(d)
i bpointí 1, 2, 3, agus 5, d’Iarscríbhinn IX.
Only the Commission and national authorities referred to in Article 74(8) shall have access to the respective restricted sections of the EU database listed in the first subparagraph of this paragraph.
Ní bheidh rochtain ach ag an gCoimisiún agus ag na húdaráis náisiúnta dá dtagraítear in Airteagal 74(8) ar na ranna srianta sin faoi seach de bhunachar sonraí an Aontais atá liostaithe sa chéad fhomhír den mhír seo.
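Paragraph 4 effectively defines a whitelist of the Annex VIII and Annex IX items that may enter the secure non-public section. In the sketch below only the point numbers are taken from the list above; the record layout and every other name are invented for illustration.

    # Illustrative filter reflecting Article 49(4); only the point numbers come
    # from the Regulation, the data structures are invented for this sketch.
    ALLOWED_POINTS = {
        "Annex VIII, Section A": [p for p in range(1, 11) if p not in (6, 8, 9)],  # 1 to 10, except 6, 8, 9
        "Annex VIII, Section B": [1, 2, 3, 4, 5, 8, 9],
        "Annex VIII, Section C": [1, 2, 3],
        "Annex IX": [1, 2, 3, 5],
    }

    def restrict_registration(record: dict[str, dict[int, str]]) -> dict[str, dict[int, str]]:
        # Keep only the items that may appear in the secure non-public section.
        return {
            section: {point: value for point, value in items.items()
                      if point in ALLOWED_POINTS.get(section, [])}
            for section, items in record.items()
        }

    sample = {"Annex VIII, Section A": {1: "item kept", 6: "item dropped"}}
    print(restrict_registration(sample))  # -> {'Annex VIII, Section A': {1: 'item kept'}}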
5. High-risk AI systems referred to in point 2 of Annex III shall be registered at national level.
5. Déanfar na córais intleachta saorga ardriosca dá dtagraítear i bpointe 2 d’Iarscríbhinn III a chlárú ar an leibhéal náisiúnta.
Transparency obligations for providers and deployers of certain AI systems
Oibleagáidí trédhearcachta do sholáthraithe agus d’úsáideoirí gairmiúla córas intleachta saorga áirithe
1. Providers shall ensure that AI systems intended to interact directly with natural persons are designed and developed in such a way that the natural persons concerned are informed that they are interacting with an AI system, unless this is obvious from the point of view of a natural person who is reasonably well-informed, observant and circumspect, taking into account the circumstances and the context of use. This obligation shall not apply to AI systems authorised by law to detect, prevent, investigate or prosecute criminal offences, subject to appropriate safeguards for the rights and freedoms of third parties, unless those systems are available for the public to report a criminal offence.
1. Maidir le córais intleachta saorga atá ceaptha idirghníomhú go díreach le daoine nádúrtha, áiritheoidh soláthraithe go ndéanfar na córais sin a dhearadh agus a fhorbairt ar bhealach a gcuirfidh na daoine nádúrtha lena mbaineann ar an eolas go bhfuil siad ag idirghníomhú le córas intleachta saorga, ach amháin más léir sin ó thaobh duine nádúrtha atá measartha eolasach, géarchúiseach agus cáiréiseach, agus na himthosca agus comhthéacs na húsáide á gcur san áireamh. Ní bheidh feidhm ag an oibleagáid sin maidir le córais intleachta saorga arna n-údarú leis an dlí chun cionta coiriúla a bhrath, a chosc, a imscrúdú nó a ionchúiseamh, faoi réir coimircí iomchuí maidir le cearta agus saoirsí tríú páirtithe, ach amháin má bhíonn na córais sin ar fáil don phobal chun cion coiriúil a thuairisciú.
2. Providers of AI systems, including general-purpose AI systems, generating synthetic audio, image, video or text content, shall ensure that the outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated. Providers shall ensure their technical solutions are effective, interoperable, robust and reliable as far as this is technically feasible, taking into account the specificities and limitations of various types of content, the costs of implementation and the generally acknowledged state of the art, as may be reflected in relevant technical standards. This obligation shall not apply to the extent the AI systems perform an assistive function for standard editing or do not substantially alter the input data provided by the deployer or the semantics thereof, or where authorised by law to detect, prevent, investigate or prosecute criminal offences.
2. Soláthraithe córas intleachta saorga, lena n-áirítear córais intleachta saorga ilchuspóireacha, lena ngintear ábhar sintéiseach fuaime, íomhá, físe nó téacs, áiritheoidh siad go ndéanfar aschuir an chórais intleachta saorga a mharcáil i bhformáid mheaisín-inléite agus go mbeidh siad inbhraite mar ábhar a gineadh nó a ionramháladh go saorga. Áiritheoidh soláthraithe go mbeidh a réitigh theicniúla éifeachtach, idir-inoibritheach, láidir agus iontaofa a mhéid is indéanta sin go teicniúil, agus sainiúlachtaí agus teorainneacha cineálacha éagsúla ábhair, costais an chur chun feidhme agus an úrscothacht a nglactar leo i gcoitinne á gcur san áireamh, mar a d’fhéadfaí a léiriú sna caighdeáin theicniúla ábhartha. Ní bheidh feidhm ag an oibleagáid sin a mhéid a dhéanann na córais intleachta saorga feidhm chúnta le haghaidh eagarthóireacht chaighdeánach nó nach ndéanann siad athrú substaintiúil ar na sonraí ionchuir arna soláthar ag an úsáideoir gairmiúil ná ar shéimeantaic na sonraí sin, nó i gcás ina n-údaraítear le dlí iad chun cionta coiriúla a bhrath, a chosc, a imscrúdú nó a ionchúiseamh.
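Paragraph 2 requires machine-readable marking and detectability of synthetic output but leaves the technique (watermarking, embedded metadata, provenance standards and the like) to what is technically feasible and state of the art. Purely as an illustration of the metadata route for a generated image, the sketch below embeds a text chunk in a PNG file using the Pillow library; the key name "ai_generated" is an arbitrary choice made here, not a label prescribed by the Regulation or by any standard.

    from PIL import Image
    from PIL.PngImagePlugin import PngInfo

    # Illustrative only: embed a machine-readable "artificially generated" flag
    # as a PNG text chunk. Real deployments would follow relevant technical
    # standards (e.g. provenance or watermarking schemes) rather than this
    # ad-hoc key.
    image = Image.new("RGB", (64, 64), color="grey")   # stand-in for a generated image

    metadata = PngInfo()
    metadata.add_text("ai_generated", "true")
    metadata.add_text("generator", "ExampleModel v1")  # hypothetical system name

    image.save("generated.png", pnginfo=metadata)

    # A downstream tool could detect the marking like this:
    print(Image.open("generated.png").text.get("ai_generated"))  # -> "true"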
3. Deployers of an emotion recognition system or a biometric categorisation system shall inform the natural persons exposed thereto of the operation of the system, and shall process the personal data in accordance with Regulations (EU) 2016/679 and (EU) 2018/1725 and Directive (EU) 2016/680, as applicable. This obligation shall not apply to AI systems used for biometric categorisation and emotion recognition, which are permitted by law to detect, prevent or investigate criminal offences, subject to appropriate safeguards for the rights and freedoms of third parties, and in accordance with Union law.
3. Cuirfidh úsáideoirí gairmiúla córas aitheanta mothúchán nó córas catagóirithe bhithmhéadraigh na daoine nádúrtha atá nochta dó ar an eolas faoi oibriú an chórais, agus próiseálfaidh siad na sonraí pearsanta i gcomhréir le Rialacháin (AE) 2016/679 agus (AE) 2018/1725 agus Treoir (AE) 2016/680, de réir mar is infheidhme. Ní bheidh feidhm ag an oibleagáid sin maidir le córais intleachta saorga a úsáidtear le haghaidh catagóiriú bithmhéadrach agus aithint mothúchán, a cheadaítear de réir dlí chun cionta coiriúla a bhrath, a chosc agus a imscrúdú, faoi réir coimircí iomchuí do chearta agus saoirsí tríú páirtithe agus i gcomhréir le dlí an Aontais.
4. Deployers of an AI system that generates or manipulates image, audio or video content constituting a deep fake, shall disclose that the content has been artificially generated or manipulated. This obligation shall not apply where the use is authorised by law to detect, prevent, investigate or prosecute criminal offences. Where the content forms part of an evidently artistic, creative, satirical, fictional or analogous work or programme, the transparency obligations set out in this paragraph are limited to disclosure of the existence of such generated or manipulated content in an appropriate manner that does not hamper the display or enjoyment of the work.
4. Maidir le húsáideoirí gairmiúla córais intleachta saorga a ghineann nó a ionramhálann ábhar íomhá, fuaime nó físe ar domhainbhrionnú é, nochtfaidh siad go ndearnadh an t-ábhar a ghiniúint nó a ionramháil go saorga. Ní bheidh feidhm ag an oibleagáid sin i gcás ina bhfuil an úsáid údaraithe le dlí chun cion coiriúil a bhrath, a chosc, a imscrúdú nó a ionchúiseamh. I gcás ina bhfuil an t-ábhar ina chuid de shaothar nó de chlár atá ar aon dul le saothar nó clár ar léir é a bheith ealaíonta, cruthaitheach, aorach nó ficseanúil, tá na hoibleagáidí trédhearcachta a leagtar amach sa mhír seo teoranta do nochtadh gur ann d’inneachar ginte nó cúbláilte den sórt sin ar bhealach iomchuí nach gcuireann isteach ar thaispeáint ná ar theachtadh an tsaothair.
Deployers of an AI system that generates or manipulates text which is published with the purpose of informing the public on matters of public interest shall disclose that the text has been artificially generated or manipulated. This obligation shall not apply where the use is authorised by law to detect, prevent, investigate or prosecute criminal offences or where the AI-generated content has undergone a process of human review or editorial control and where a natural or legal person holds editorial responsibility for the publication of the content.
I gcás úsáideoirí gairmiúla córais intleachta saorga a ghineann nó a ionramhálann téacs a fhoilsítear chun an pobal a chur ar an eolas faoi ábhair a bhaineann le leas an phobail, nochtfaidh siad go ndearnadh an téacs a ghiniúint nó a ionramháil go saorga. Ní bheidh feidhm ag an oibleagáid sin i gcás ina n-údaraítear le dlí an úsáid chun cionta coiriúla a bhrath, a chosc, a imscrúdú nó a ionchúiseamh nó i gcás ina ndearnadh próiseas athbhreithnithe dhaonna nó rialaithe eagarthóireachta ar an inneachar a ghintear le IS agus i gcás ina bhfuil freagracht eagarthóireachta ar dhuine nádúrtha nó dlítheanach as an ábhar a fhoilsiú.
5. The information referred to in paragraphs 1 to 4 shall be provided to the natural persons concerned in a clear and distinguishable manner at the latest at the time of the first interaction or exposure. The information shall co