Agefi Luxembourg - février 2025

By Vincent WELLENS, Avocat à la Cour, Ottavio COVOLO, Avocat à la Cour & Aline BLEICHER, Avocate, NautaDutilh Avocats Luxembourg S.à r.l.

On 4 February 2025, the European Commission released the first set of guidelines regarding the definition of artificial intelligence ("AI") pursuant to the AI Act (the "Definition Guidelines") and the prohibited AI practices (the "Prohibited AI Guidelines"). These Guidelines were published further to the first wave of provisions of the AI Act entering into application on 2 February 2025 regarding prohibited AI practices and AI literacy, both of which require an understanding of what constitutes a (prohibited) AI or not.

Is my system an AI or not?

This is the question that entities must grasp to determine whether they fall within the scope of the AI Act. An AI system differs from more "traditional" systems by its non-deterministic nature: a system which, by looking at the code, is predictable in every manner in terms of input/output generally would not amount to an AI. The Definition Guidelines in this regard provide further details regarding each of the seven elements comprising the AI system definition. Key clarifications include that:

- the AI may, but does not have to, possess adaptiveness or self-learning capabilities after deployment (beyond what it was initially trained on);
- it does not matter whether the AI system is used for its intended purpose or not: it will remain an AI system for the purposes of the AI Act;
- the definition of inference under the AI Act does not contradict, but is not fully aligned with, the same definition under the ISO/IEC 22989 standard on AI concepts and terminology. Both however aim to go beyond the simple input/output logic of traditional systems (or "basic data processing"), with the AI being capable of inferring how rather than what output to generate. Inferences not stemming from data (e.g., rule-based approaches, pattern recognition, or trial-and-error strategies) are excluded from the definition.
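The contrast between a fully deterministic, rule-based system and a system that infers its decision logic from data can be sketched in a few lines of Python. The transaction-flagging scenario, the data and the thresholds below are purely hypothetical illustrations, not examples drawn from the Guidelines:

```python
def rule_based_flag(amount: float) -> bool:
    """Deterministic system: the decision logic is written by a human
    directly into the code, so every input/output pair is predictable
    by simply reading the rule. Per the Definition Guidelines, such a
    system would generally not amount to an AI."""
    return amount > 10_000  # human-defined, fixed threshold


def learn_threshold(samples: list[tuple[float, bool]]) -> float:
    """'Inference' in the AI Act sense: the system derives *how* to
    decide from training data instead of applying a hand-written rule.
    Here, crudely: the midpoint between the mean flagged amount and
    the mean non-flagged amount."""
    flagged = [amount for amount, label in samples if label]
    cleared = [amount for amount, label in samples if not label]
    return (sum(flagged) / len(flagged) + sum(cleared) / len(cleared)) / 2


# Hypothetical labelled examples: (transaction amount, was it flagged?)
training = [(500.0, False), (2_000.0, False), (15_000.0, True), (40_000.0, True)]
threshold = learn_threshold(training)


def learned_flag(amount: float) -> bool:
    """The decision boundary is no longer visible in the source code;
    it depends on the data the system was trained on."""
    return amount > threshold
```

The point of the sketch is that reading `learned_flag` alone no longer tells you where the boundary lies; that is the "inferring how rather than what" distinction the Guidelines draw.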
The Definition Guidelines also notably list some examples of techniques which would enable the inference of the AI system, echoing the list proposed by the European Commission in the first draft of the AI Act but later removed following the vote on the text (including machine learning, (un)supervised learning, reinforcement learning, deep learning, logic- and knowledge-based approaches, etc.). However, the Definition Guidelines do exclude a certain number of AI systems, notably linear or logistic regression methods. Whilst such exclusion is welcome, as these methods are widely used in market analysis and financial risk assessments, they are understood as a subset of supervised learning methods used to develop AIs, which may therefore create uncertainty as to the scope of such exclusion, in particular as the inferences of such AI must only contribute to its "computational performance" rather than bettering its "decision making models in an intelligent way" (in other words, favouring efficiency rather than the quality of the results, which may be documented with the appropriate performance benchmarks (e.g., in TFLOPS)).

If your system does amount to an AI, the Definition Guidelines further point to the grandfathering clause for high-risk AI systems put into the EU market or into service before 2 August 2026 (triggering the obligations only upon significant changes in their designs) and note that the vast majority of (low-risk) systems will not be subject to any regulatory requirements.

Is my AI prohibited or not?
If you answered the first question in the affirmative, as from 2 February 2025 an AI system which amounts to a prohibited practice cannot be put onto the market or into use. The explicitly prohibited AI practices, set out under Article 5(1) of the AI Act, encompass a range of activities, including but not limited to (i) manipulative and deceptive AI systems, (ii) AI systems that exploit vulnerabilities based on age, disability, or socio-economic conditions, (iii) social scoring mechanisms, (iv) massive facial image scraping systems, (v) emotion recognition technologies, and (vi) AI-driven biometric categorisation for sensitive data. The 140-page long Prohibited AI Guidelines aim at bringing further clarity as to the extent of such prohibitions and the exceptions included in such list.

The Prohibited AI Guidelines notably highlight that such prohibitions do not extend to the research and development (R&D) phase of the development of the AI, citing as an example that "AI developers have the freedom to experiment and test new functionalities which might involve techniques that could be seen as manipulative […] recognising that early-stage R&D is essential for refining AI technologies and ensuring that they meet safety and ethical standards prior to their placing on the market". It is however to be noted that the development of the AI will nevertheless be subject to considerations such as data protection by design and by default, as well as ethical and professional requirements. On this note, a number of providers of general-purpose AI models have put in place codes of conduct or ethics policies which are binding upon their users, and which may curb certain applications used for testing purposes.

It also stems from the Prohibited AI Guidelines that the interplay between the AI Act and other legislation, notably the GDPR, will be a key consideration for its enforcement.
For instance, the European Commission cites dark patterns within the meaning of the Digital Services Act ("DSA") as an example of manipulative or deceptive techniques under the AI Act, when they are likely to cause significant harm. As another example, the guidelines also strictly follow the SCHUFA decision from the CJEU, holding that a decision will be assessed as automated pursuant to the GDPR even if the automation (i.e., the AI) was only used in the course of the decision-making process and a human gave the final validation.

Another key lesson from the Prohibited AI Guidelines seems to be the will to recognise the current practices in the financial sector as not being subject to the above prohibitions. The European Commission clarifies that the social scoring prohibition does not target the scoring of legal entities, except if such score reflects an aggregate of the individuals within the entity. Furthermore, whilst the risk of such a social score being used for assessing the creditworthiness of an individual is part of the concerns behind such prohibition, credit scoring and risk scoring in the financial and insurance sector should be deemed a "legitimate" (i.e., non-prohibited) practice. It is nevertheless to be noted that such practices will likely lead to a high-risk AI system, except where used for the purpose of detecting financial fraud.

A common reasoning stemming from the Prohibited AI Guidelines, echoing the above, is the reliance on other existing regulations as sources of indications to frame the AI Act's requirements with more precision. The guidelines for example refer to the EBA's Guidelines on loan origination and monitoring (EBA/GL/2020/06) as a reference point for compliance and thereby for determining whether a credit-rating practice is prohibited or not.
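To make the kind of technique at stake concrete, a minimal logistic credit-scoring model can be sketched as follows. The feature names, weights and figures are entirely hypothetical, and note that the Definition Guidelines' exclusion of logistic regression discussed above may take such a simple model outside the AI definition altogether:

```python
import math

# Hypothetical logistic credit-scoring model of the kind the guidelines
# treat as a "legitimate" (non-prohibited) practice in the financial
# sector. Weights and feature names are illustrative only: a negative
# weight lowers the estimated default risk, a positive weight raises it.
WEIGHTS = {
    "income_ratio": -1.8,     # income relative to debt service
    "missed_payments": 0.9,   # count of past missed payments
    "utilisation": 1.2,       # share of available credit in use
}
BIAS = -0.5


def default_probability(features: dict[str, float]) -> float:
    """Map applicant features to a probability of default via the
    logistic (sigmoid) function; lower means a better credit score."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))


# A hypothetical low-risk applicant: decent income ratio, no missed
# payments, low credit utilisation.
applicant = {"income_ratio": 1.5, "missed_payments": 0.0, "utilisation": 0.3}
p = default_probability(applicant)
```

In a real deployment the weights would be fitted to historical repayment data, and it is precisely that data-driven fitting step which creates the definitional uncertainty the article describes.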
Regarding the prediction of criminal offences, a practice which is prohibited both for law enforcement authorities and for private entities when tasked with public missions, such as in anti-money laundering (AML) checks, the guidelines clarify that strict compliance with AML requirements (in terms of data minimisation and to the extent there are objective and verifiable markers of AML risk) "will ensure that the use of individual crime prediction AI systems for anti-money laundering purposes falls outside the scope of the prohibition".

On the scope of the prohibition, the guidelines notably confirm that an AI system released under a free and open-source licence remains subject to the prohibition of the above practices. The guidelines also note that the prohibition refers specifically to a practice, rather than to the AI as a whole. In other words, an AI system which is capable of a prohibited practice (e.g., a chatbot being used for manipulative and deceptive techniques) is not forbidden from being placed upon the market to the extent the required guardrails are put in place to avoid it (including refusals by the AI regarding certain instructions and monitoring of compliance with such limitation).

It is to be noted that for an AI deployed in the workplace, as another example of interplay with other legislation, the monitoring of employees' use of such AI in their workplace may trigger the specific provisions of the Luxembourg Labour Code.
Furthermore, on the basis of Article 86 of the AI Act, such employees, as affected persons, may lodge a complaint with the relevant market surveillance authority (the CNPD along with other sectorial bodies pursuant to the bill of law 8476 supplementing the AI Act). Regarding enforcement, the European Commission also confirms that where a conduct amounts to multiple violations of the AI Act, only one penalty shall be imposed for such conduct within the maximum amounts set out in the regulation (i.e., in the case of a prohibited AI practice, EUR 35,000,000 or up to 7% of the total worldwide annual turnover for the preceding financial year, whichever is higher).

What about biometric surveillance?

Biometric data is treated as a special category of personal data under the AI Act, as well as under other applicable regulations such as the GDPR. However, whereas the GDPR requires biometric data to serve an identification function, the AI Act adopts a broader interpretation that encompasses biometric data even if it does not enable unique identification. The AI Act explicitly prohibits two types of AI practices involving biometric data. It is prohibited to place on the market, put into service for this specific purpose, or use biometric categorisation systems that "categorise individually natural persons based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation". Such use could indeed serve politically targeted messaging or discrimination against a person. The European Commission nevertheless points out that certain AI uses should not be prohibited, such as for example online marketplace apps' filters categorising bodily features to allow a consumer to preview a product on themselves, if the person is not being individually categorised, or if the system is not deducing or inferring other sensitive characteristics.

The AI Act also prohibits real-time remote biometric identification ("RBI") systems in public spaces for law enforcement purposes.
These systems enable automatic recognition of "physical, physiological, behavioural, or psychological human features for the purpose of establishing the identity of a natural person by comparing biometric data of that individual to biometric data of individuals stored in a database". Deployers of these real-time RBI systems that capture instantaneously measurable physical or behavioural characteristics into machine-readable biometric data are therefore concerned by this prohibition. "Real-time" in this context should be understood as processing "without any significant delay", such as before the person is likely to have left the place where the biometric data was captured.

The Guidelines further confirm that the practices of biometric verification (or authentication) fall outside the scope of the prohibition, such as where AI is used to compare data between a reference image (the template) and data presented through a sensor (e.g., smartphone, passport, ID card, etc.).

Regarding the issue of surveillance, the above must not be confused with the prohibition regarding the inference of emotions in the area of workplace and education institutions (save for medical or safety reasons). The Prohibited AI Guidelines in this regard clarify that a bank may deploy an AI-powered camera system "to detect suspicious customers, for example to conclude that somebody is about to commit a robbery" on the condition that "no employees are being tracked and there are sufficient safeguards". Employees in this regard should be understood broadly as any staff irrespective of their status, including prospective employees during the recruitment process.

What should I do now?
It is essential to map and determine which systems amount to AI systems pursuant to the Definition Guidelines, in order to then filter out and cease the use, if necessary, of any prohibited AI practice pursuant to the Prohibited AI Guidelines, as such prohibitions have applied since 2 February 2025 (even though no authority in Luxembourg is yet empowered to enforce these provisions). Further to the mapping exercise, entities must then turn to the assessment of the AI system from a risk perspective, with high-risk AIs being subject to the main regulatory requirements under the AI Act.

In the meantime, it should be noted that provisions on AI literacy have also entered into application from 2 February 2025, thus requiring entities to set up training to ensure such literacy within their staff. In practice, and as was seen with the GDPR, there is for the moment no market consensus as to the contents of such trainings, with entities required to strike a balance between high-level information and technical jargon.

European Commission publishes first set of guidelines on AI
AI will transform wealth management

Artificial intelligence (AI) will transform the wealth management sector, Martin Moeller, head of AI for financial services at Microsoft, told Reuters on 14 February, adding that the technology will make it easier for small players to compete with large banks.

By synthesising large volumes of financial data, AI will allow small teams to do the work of entire divisions, the specialist estimates.
"Generative AI will redefine competition. AI will, for example, considerably lower the barriers to entry for start-ups, much as the Internet and digitalisation did a few decades ago," explains Martin Moeller.

Since the beginning of 2024, the Swedish payment services provider Klarna has been using AI developed by Microsoft's partner OpenAI, with the software performing the work of 700 employees. AI could also facilitate the work of "family offices", the wealth managers of the richest families. "Banks which have so far not been very active in the wealth management segment could enter it with the help of AI, without having to invest heavily in client advisers," the executive explains. AI also benefits from changing client habits, with young entrepreneurs increasingly inclined to invest their money themselves, Martin Moeller points out, which pushes banks to offer their clients AI tools to consolidate all the financial information concerning them. AI does not yet give investment advice, but AI models with autonomous decision-making capability should begin to appear within two years.

Source: Reuters
