On 22 January 2024, the final text of the AI Act became public (it was shared by Luca Bertuzzi of Euractiv on his LinkedIn page). Although not yet formally adopted and published in the Official Journal of the European Union, we understand that this text will no longer be subject to (extensive) changes.

1. What is it about?

The AI Act is a regulation (meaning it has direct effect in the Belgian legal order) laying down harmonised rules for the placing on the market, the putting into service and the use of artificial intelligence systems (‘AI systems’), including, among other things, the prohibition of certain AI practices, specific rules regarding high-risk AI systems and harmonised transparency rules.

2. What is an AI system?

An “AI system” is defined as “a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”.

AI systems are deemed “high-risk” if (an illustrative sketch of this test follows the list below):

  • the following conditions are cumulatively met:

(i)     the AI system is intended to be used as a safety component of a product, or the AI system is itself a product, covered by the EU harmonisation legislation listed in Annex II (e.g. EU legislation on the safety of toys, recreational watercraft, lifts, equipment and protective systems intended for use in potentially explosive atmospheres, and radio equipment); and

(ii)   such AI system is required to undergo a third-party conformity assessment, with a view to the placing on the market or putting into service of that product pursuant to the EU harmonisation legislation listed in Annex II;

  • it is listed in Annex III, which includes e.g. AI systems intended to be used:

- as safety components in the management and operation of critical digital infrastructure, road traffic and the supply of water, gas, heating and electricity;

- to determine access or admission or to assign natural persons to educational institutions;

- to evaluate learning outcomes or to detect inappropriate behaviour of students during tests;

- for recruitment or selection (e.g. to evaluate candidates) and to make decisions regarding promotion or termination of work relationships;

- to evaluate creditworthiness of natural persons.

If it can be demonstrated that an AI system on the list of Annex III does not pose a significant risk of harm to the health, safety or fundamental rights of natural persons, including by not materially influencing the outcome of decision-making (e.g. if the AI system is intended to perform a narrow procedural task or to improve the result of a previously completed human activity), it will not be deemed high-risk. This exception does not apply, however, where the AI system performs profiling of natural persons: such systems are always considered high-risk.
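To show how the layers of this test fit together, here is a purely illustrative Python sketch of the classification logic described above. The boolean inputs are hypothetical simplifications; in practice, each of them requires a legal assessment of the specific AI system.

```python
# Purely illustrative sketch of the high-risk test described above.
# Every boolean input is a hypothetical simplification: in practice each
# condition requires a legal assessment of the specific AI system.

def is_high_risk(
    safety_component_of_annex_ii_product: bool,        # condition (i)
    requires_third_party_conformity_assessment: bool,  # condition (ii)
    listed_in_annex_iii: bool,
    performs_profiling: bool,
    poses_significant_risk: bool,  # to health, safety or fundamental rights
) -> bool:
    # First route: both Annex II conditions must be met cumulatively.
    if safety_component_of_annex_ii_product and requires_third_party_conformity_assessment:
        return True
    # Second route: Annex III systems are high-risk, unless it can be
    # demonstrated that they pose no significant risk of harm. Systems
    # performing profiling of natural persons are always high-risk.
    if listed_in_annex_iii:
        return performs_profiling or poses_significant_risk
    return False
```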

3. To whom will the AI Act apply?

The AI Act has a broad scope of application. It will apply to:

  • providers placing on the market or putting into service AI systems or placing on the market general-purpose AI models in the EU, irrespective of whether those providers are established or located within the EU or in a third country;

  • deployers of AI systems that have their place of establishment or who are located within the Union;

  • providers and deployers of AI systems that have their place of establishment or who are located in a third country, where the output produced by the system is used in the EU;

  • product manufacturers placing on the market or putting into service an AI system together with their product and under their own name or trademark;

  • authorised representatives of providers that are not established in the Union;

  • affected persons that are located in the Union.

The AI Act will not apply to (non-exhaustive list):

  • AI systems and models, including their output, specifically developed and put into service for the sole purpose of scientific research and development;

  • AI systems exclusively used for military, defence or national security purposes;

  • research, testing and development activities regarding AI systems or models prior to being placed on the market or put into service;

  • natural persons using AI systems in the course of a purely personal non-professional activity;

  • AI systems released under free and open source licences (but exceptions apply).

The AI Act imposes an extensive array of obligations on providers (as well as importers and distributors) of AI systems, and fewer on deployers of AI systems. Nevertheless, there are important obligations to comply with and considerations to keep in mind when deploying an AI system within a company (see point 5 below).

In addition, please be aware that a deployer will qualify as a ‘provider’ of an AI system (and thus be subject to the extensive obligations placed on providers under the AI Act) if:

  • they put their name or trademark on a high-risk AI system already placed on the market or put into service;

  • they make a substantial modification to a high-risk AI system that has already been placed on the market or has already been put into service;

  • they modify the intended purpose of an AI system, including a general-purpose AI system, which has not been classified as high-risk and has already been placed on the market or put into service, in such a manner that it becomes a high-risk AI system.

4. What AI systems are prohibited?

The following AI practices are prohibited:

  • AI systems using subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques, with the objective of distorting behaviour and impairing informed decision-making, causing significant harm;

  • AI systems exploiting vulnerabilities due to age, disability or a specific social or economic situation, causing significant harm;

  • biometric categorisation systems that individually categorise natural persons based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation;

  • AI systems evaluating or classifying individuals or groups based on social behaviour or known, inferred or predicted personal characteristics (‘social score’) leading to detrimental or unfavourable treatment;

  • ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement, except in specific circumstances listed in the regulation (e.g. a targeted search for victims of trafficking);

  • AI systems making risk assessments of natural persons in order to assess or predict the risk that a natural person will commit a criminal offence, based solely on the profiling of that person or on assessing their personality traits and characteristics, except where used to support a human assessment of a person’s involvement in a criminal activity that is already based on objective and verifiable facts directly linked to that activity;

  • AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage;

  • AI systems to infer emotions of a natural person in the areas of the workplace and educational institutions, except where the AI system is intended to be put in place or placed on the market for medical or safety reasons.

5. What to do if I want to deploy an AI system within my company?

If a company considers deploying AI systems, the following aspects should be kept in mind (non-exhaustive):

  • conduct an “AI analysis”: map which AI systems you have in place, whether they can be qualified as prohibited or high-risk and, for each of these AI systems, whether you act as deployer or provider (or could be qualified as such); a hypothetical inventory sketch is set out after this list;

  • establish thorough documentation and risk assessment processes for the use of AI systems;

  • refrain from using prohibited AI systems (save for the narrow exceptions provided in the AI Act);

  • publish an AI policy within your company, setting out guidelines and best practices for your staff using AI systems in the context of their job performance;

  • take measures to ensure, to the best of your ability, a sufficient level of AI literacy among staff and other persons dealing with the operation and use of AI systems on your behalf, taking into account their technical knowledge, experience and education;

  • with regard to the use of high-risk AI systems:

- take appropriate technical and organisational measures to ensure you use such AI systems in accordance with the instructions for use accompanying them;

- put in place human oversight of high-risk AI systems, aimed at preventing or minimising the risks to health, safety or fundamental rights that may emerge when a high-risk AI system is used (e.g. biases in the output of the high-risk AI system). Such human oversight must be assigned to natural persons who have the necessary competence, training and authority, as well as the necessary support;

- to the extent you have control over the input data, you shall ensure that such input data is relevant and sufficiently representative in view of the intended purpose of the high-risk AI system;

- monitor the operation of the high-risk AI system on the basis of the instructions for use and, when risks or serious incidents are identified, inform the provider (and, where legally required, also the distributor and the relevant market surveillance authority) and suspend the use of the system;

- to the extent the logs are under your control, keep the logs of the high-risk AI system for a period appropriate to its intended purpose, with a minimum of six months;

- when legally required, conduct a fundamental rights impact assessment prior to using the high-risk AI system (e.g. if you are a private operator offering public services);

- prior to using the high-risk AI system, deployers who are employers shall inform workers’ representatives and the affected workers that they will be subject to such a system;

- cooperate with the relevant national competent authorities on any action those authorities take in relation to the high-risk AI system in order to implement the AI Act;

- deployers of high-risk AI systems that are public authorities and deployers of a high-risk AI system for post-remote biometric identification have additional obligations;

  • inform the natural persons subject or exposed to AI systems of the use thereof. In certain circumstances this is mandatory, e.g. when using an emotion recognition system or a biometric categorisation system; when using AI systems that generate deep fakes, or that generate text published with the purpose of informing the public on matters of public interest (in which cases you have to disclose that the content or text has been artificially generated or manipulated, except where there has been human control and the editorial responsibility lies with a natural or legal person); or when using high-risk AI systems referred to in Annex III that make, or assist in making, decisions related to natural persons;

  • where required, conduct a data protection impact assessment (DPIA) under the GDPR prior to using a (high-risk) AI system, and comply with the other relevant obligations under the GDPR (e.g. the information obligations where personal data are processed by the AI system).
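As a purely hypothetical illustration of the “AI analysis” suggested in the first bullet above, such an inventory could be kept as one record per AI system, capturing its classification and your company’s role. The record structure and field names below are our own invention, not terminology from the AI Act.

```python
# Hypothetical sketch of an "AI analysis" inventory: one record per AI
# system, capturing its classification and the company's role. The field
# names are our own invention, not terminology from the AI Act.
from dataclasses import dataclass
from enum import Enum


class RiskClass(Enum):
    PROHIBITED = "prohibited"
    HIGH_RISK = "high-risk"
    OTHER = "other"


class Role(Enum):
    PROVIDER = "provider"
    DEPLOYER = "deployer"


@dataclass
class AISystemRecord:
    name: str
    risk_class: RiskClass
    role: Role
    dpia_required: bool  # GDPR data protection impact assessment
    fria_required: bool  # fundamental rights impact assessment


inventory = [
    AISystemRecord("CV screening tool", RiskClass.HIGH_RISK, Role.DEPLOYER,
                   dpia_required=True, fria_required=False),
]
```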

6. When will the AI Act apply?

Once it enters into force, the AI Act will in principle apply after 24 months (with some exceptions, e.g. the provisions regarding prohibited AI practices will apply after 6 months).
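By way of illustration, the transition periods can be computed from the entry-into-force date. The date used in the sketch below is a hypothetical placeholder, as the actual date was not yet known at the time of writing.

```python
# Illustration of the transition periods. The entry-into-force date below is
# a hypothetical placeholder: the actual date was not yet known when this
# was written.
from datetime import date


def add_months(d: date, months: int) -> date:
    # Naive month addition, sufficient for this illustration.
    years, month_index = divmod(d.month - 1 + months, 12)
    return d.replace(year=d.year + years, month=month_index + 1)


entry_into_force = date(2024, 8, 1)  # hypothetical

print("General application:", add_months(entry_into_force, 24))  # 2026-08-01
print("Prohibited practices:", add_months(entry_into_force, 6))  # 2025-02-01
```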

Please note that the Commission shall develop guidelines on the practical implementation of this regulation.

7. What are the fines under the AI Act?

The following are the relevant fines for deployers of AI systems (an illustrative calculation follows the list):

  • non-compliance with the prohibition of the AI practices listed above is subject to an administrative fine of up to 35 million euro or, if the offender is a company, up to 7% of its total worldwide annual turnover for the preceding financial year, whichever is higher;

  • non-compliance with the obligations when using high-risk AI systems and with certain transparency obligations is subject to an administrative fine of up to 15 million euro or, if the offender is a company, up to 3% of its total worldwide annual turnover for the preceding financial year, whichever is higher;

  • the supply of incorrect, incomplete or misleading information to notified bodies and national competent authorities in reply to a request is subject to an administrative fine of up to 7.5 million euro or, if the offender is a company, up to 1% of its total worldwide annual turnover for the preceding financial year, whichever is higher.

For SMEs, including start-ups, each of the above fines is capped at whichever of the two amounts (the fixed amount or the percentage of turnover) is lower.
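As a simple arithmetic illustration of the “whichever is higher/lower” mechanism, the sketch below computes the maximum fine for a breach of the prohibited-practices rules; the turnover figures are made-up examples.

```python
# Illustration of the "whichever is higher / lower" mechanism for the
# prohibited-practices fine. The turnover figures are made-up examples.

def max_fine_prohibited_practices(turnover_eur: float, is_sme: bool = False) -> float:
    fixed_cap = 35_000_000
    turnover_cap = 0.07 * turnover_eur
    # Companies: whichever is higher; SMEs and start-ups: whichever is lower.
    return min(fixed_cap, turnover_cap) if is_sme else max(fixed_cap, turnover_cap)


print(max_fine_prohibited_practices(1_000_000_000))            # 70,000,000.0
print(max_fine_prohibited_practices(10_000_000, is_sme=True))  #    700,000.0
```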

***

For any further questions related to AI systems, please contact our IP, IT and data protection team by sending an e-mail to IP/IT-team@liedekerke.com.
