The legal challenges of PropTech - (II) The use of artificial intelligence under the AI Act, the GDPR and liability rules

Published on 16 December 2024


The term PropTech (a contraction of "Property" and "Technology") refers to the use of technology and digital tools in the real estate sector at all stages of the value chain, from construction to asset management and property management, including transactions conducted on the market for selling or renting properties (or even potential financing rounds).

As in other sectors (health, law, insurance, etc.), established players in property management or asset management, as well as many startups, have recognized the benefits of leveraging constantly evolving technologies to automate existing processes or offer new services to all stakeholders, and particularly end users.

Examples include 3D printing and virtual (or augmented) reality in the design and construction phase of a real estate project; the valuation of buildings using artificial intelligence (AI) tools; online platforms for property sale or rental listings; or "smart" property management based on the Internet of Things (IoT) or on collaborative tools that maximize the use of coworking or coliving spaces or of parking lots.

LIME offers an analysis of the main legal challenges in PropTech, with this second edition concentrating specifically on artificial intelligence (AI)[1].

[1] For the previous article, dedicated to data valorization in compliance with the GDPR, see The legal challenges of PropTech - (I) How to valorize data while complying with the GDPR?


Artificial intelligence – and in particular generative AI, intended to create content in the form of text or images – offers growth opportunities across various sectors, as it enables the processing of vast amounts of data while automating certain processes, which may be complex or tedious, without human intervention.

The PropTech sector is not exempt from this trend, which is encouraging: AI can be used to value real estate, draft advertisements, prepare agreements, draw up inventories of fixtures, reduce energy consumption (or, more broadly, the environmental impact of the sector), assess the creditworthiness of a prospective buyer or tenant, or even predictively anticipate certain trends or behaviors, such as the obsolescence of various products.

However, it is essential to properly assess the risks arising from the use of this technology while ensuring strict compliance with potentially applicable legal or regulatory texts. This particularly includes the General Data Protection Regulation (GDPR) and the recent European Regulation on Artificial Intelligence ("AI Act").

1. Application of the AI Act in the PropTech Sector

The AI Act[1] entered into force on August 1, 2024. Although many of its provisions will not take effect until August 2026, its requirements must be proactively anticipated to ensure that forthcoming technological advancements, particularly those involving artificial intelligence, are compliant therewith.

Like other recent regulations (such as the DSA), the AI Act adopts a risk-based approach:

  • When AI presents high risks to health, safety, or the protection of fundamental rights, stakeholders must comply with extensive and particularly detailed requirements (with certain practices prohibited outright);
  • Otherwise, legal provisions are mainly limited to prescribing information obligations or promoting the adoption of codes of conduct.

[1] Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act), is available through the following link: https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=OJ:L_202401689.

Typology of AI Systems

Assuming the AI Act is applicable, stakeholders seeking to develop and market AI systems and/or AI models (referred to as "providers" under the AI Act), or those intending to deploy such systems under their own responsibility ("deployers" under the AI Act), must first classify the systems concerned.

The AI Act delineates four categories of AI systems[1], each subject to a different level of regulatory scrutiny depending on the associated risk[2]:

  • Article 5 of the AI Act establishes a list of strictly prohibited practices: subliminal techniques, deliberately manipulative or deceptive practices, social scoring, or the assessment, based on profiling, of the likelihood that a person will commit a criminal offense.
  • An AI system is considered high-risk (within the meaning of Articles 6 and following of the AI Act) when it constitutes a product (or a safety component of a product) covered by one of the sector-specific pieces of legislation listed in Annex I and is also subject to a third-party conformity assessment. In the real estate sector, this notably covers the directive harmonizing the rules on lifts and their safety components (Directive 2014/33/EU). Also included in this category, unless an exception applies, are AI systems used in one of the eight areas listed in Annex III, which range from education and employment to the administration of justice. The use of AI to assess the creditworthiness of a prospective buyer who is a natural person could thus fall under point 5(b) of Annex III (unless the purpose is to detect financial fraud). Providers must meticulously satisfy numerous conditions: the objective is to ensure that, when placed on the market, high-risk AI systems meet the expected level of safety (in accordance with prescribed standards) and can therefore be used without risk.
  • AI systems that present a limited risk in terms of transparency. This includes, for example, a chatbot implemented by a real estate developer or broker to provide general information to clients, make appointments or provide investment advice. Technical solutions that generate synthetic content in audio or video format (such as artificial "staging" or a virtual tour of an apartment yet to be built) are also included, as are content manipulations that amount to deepfakes. These systems are primarily subject to information obligations: individuals must know that they are interacting with a machine (so that they can decide whether or not to continue the interaction), and content or results generated or manipulated by an AI must be clearly labelled[3].
  • AI systems presenting low or no risk (such as a spam filter) can be marketed and used freely. The AI Act does not impose any specific requirements on them, confining itself to promoting the development of codes of conduct in this area.
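By way of illustration only, the sketch below (in Python, with hypothetical helper names and a deliberately simplistic, keyword-style mapping) shows how a PropTech operator might triage its use cases across these four tiers; an actual classification naturally turns on the facts and on the Annexes and exceptions of the AI Act, not on labels.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited practice (Art. 5)"
    HIGH = "high-risk (Art. 6 et seq., Annexes I and III)"
    LIMITED = "limited risk: transparency obligations (Art. 50)"
    MINIMAL = "minimal or no risk (voluntary codes of conduct)"

def classify_proptech_use_case(use_case: str) -> RiskTier:
    """Very rough, keyword-style mapping of PropTech use cases to AI Act tiers.

    Hypothetical and non-exhaustive: a real assessment requires a legal
    analysis of the facts against the AI Act and its Annexes.
    """
    prohibited = {"social scoring", "subliminal manipulation"}
    high_risk = {
        "creditworthiness scoring of prospective tenants",  # Annex III, 5(b)
        "safety component of an elevator",                   # Annex I
    }
    limited = {"customer chatbot", "virtual staging (synthetic images)"}

    if use_case in prohibited:
        return RiskTier.PROHIBITED
    if use_case in high_risk:
        return RiskTier.HIGH
    if use_case in limited:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(classify_proptech_use_case("customer chatbot").value)
# -> limited risk: transparency obligations (Art. 50)
```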

[1] Refer to Article 3(1) of the AI Act for the definition of an "AI system": “A machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”.

[2] The concept is defined in Article 3(2) of the AI Act as “the combination of the probability of an occurrence of harm and the severity of that harm”.

[3] Art. 50 of the AI Act.
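As regards the transparency obligations attached to the third category above, a minimal and purely illustrative sketch (the wording and function names are hypothetical) could look as follows: the user is told upfront that they are interacting with a machine, and AI-generated visuals are labelled as such.

```python
# Purely illustrative, in the spirit of Article 50 of the AI Act:
# disclose the use of a chatbot and label AI-generated content.

AI_DISCLOSURE = ("You are chatting with an automated assistant, "
                 "not a human agent.")

def label_generated_visual(caption: str) -> str:
    """Append a machine-generated label to, e.g., a virtual-staging caption."""
    return f"{caption} [image generated by AI]"

print(AI_DISCLOSURE)
print(label_generated_visual("Virtual staging of the future living room"))
```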

General-purpose AI models and general-purpose AI systems

Alongside this classification, which is based on the possible applications of AI systems (and their associated risks), another typology is established, based on the underlying technology.

  • The AI Act also establishes rules applicable to general-purpose AI models[1], distinguishing between those that present a systemic risk[2] and those that do not[3]. DALL-E and GPT-4 are examples of general-purpose AI models capable of generating images or language.
  • These provisions of the AI Act were added during the legislative process to address the challenges posed by generative AI.
  • In practice, AI systems can be built on general-purpose AI models (they are then referred to as general-purpose AI systems and can themselves be integrated into other AI systems). ChatGPT is an example of a general-purpose AI system. In the PropTech sector, such systems can, for example, assist users in drafting real estate advertisements or certain contractual documents based on the information provided.

The AI Act primarily imposes transparency obligations on providers of such models. Where a model presents systemic risks, the requirements are heightened and include assessing and mitigating those risks.

[1] The concept is defined in Article 3(63) of the AI Act. It refers to “an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market”.

[2] The concept of systemic risk is defined in Article 3(65) of the AI Act.

[3] Art. 51 and following of the AI Act.

2. Obligations regarding the processing of personal data

Companies active in the PropTech environment that wish to use AI must also pay attention to compliance with the GDPR.

Artificial intelligence systems rely on data. Where such data constitutes personal data, the GDPR must be observed. Consequently, the stakeholders (providers, deployers, etc.) will need to be classified according to the categories of the GDPR (notably data controller or processor), while ensuring that they comply with the principles it establishes regarding the lawfulness of processing, transparency, purpose limitation, data minimization, and so on. This is particularly important for systems that profile potential clients in order to recommend personalized real estate advertisements, or for those that analyze the behavior of actors in a given area to predict the best time to sell (or buy, including for investment purposes). Reference is made to the previous note dedicated to the application of the GDPR in the PropTech sector[1]. Among other points of attention, companies wishing to leverage AI are advised to draft their privacy policies accurately and to analyze carefully which legal basis is preferable, as consent may have disadvantages. Contracts entered into with companies providing training data must also be drawn up with care to prevent potential disputes.
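To make the data-minimization principle more tangible, here is a minimal, purely illustrative sketch (the field names and the recommendation purpose are hypothetical): only the attributes strictly needed for the stated purpose are passed to the profiling or recommendation tool.

```python
# Illustrative sketch of data minimization (Art. 5(1)(c) GDPR): only the
# fields strictly needed for the stated purpose reach the (hypothetical)
# recommendation tool; everything else is withheld.

RECOMMENDATION_FIELDS = {"search_area", "budget", "property_type"}

def minimize_for_recommendations(prospect: dict) -> dict:
    """Keep only the attributes needed to recommend listings."""
    return {k: v for k, v in prospect.items() if k in RECOMMENDATION_FIELDS}

prospect = {
    "name": "A. Dupont",          # not needed to recommend listings
    "national_id": "(redacted)",  # never needed for this purpose
    "search_area": "Brussels",
    "budget": 350_000,
    "property_type": "apartment",
}

model_input = minimize_for_recommendations(prospect)
print(model_input)
# -> {'search_area': 'Brussels', 'budget': 350000, 'property_type': 'apartment'}
```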

In the field of AI, particular attention must also be paid to Article 22 of the GDPR, which states that “the data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her”. The use of AI by a real estate intermediary to assess the creditworthiness of potential buyers without human intervention could fall under this prohibition. The prohibition is lifted, however, if the decision is necessary for entering into, or the performance of, a contract between the data subject and a data controller, or if the data subject has given explicit consent.

This requirement must be read in conjunction with the obligations set forth in the AI Act. For high-risk AI systems, the first paragraph of Article 14 of the AI Act requires that “high-risk AI systems shall be designed and developed in such a way, including with appropriate human-machine interface tools, that they can be effectively overseen by natural persons during the period in which they are in use”.
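As a purely illustrative sketch of such a human-in-the-loop safeguard (all names, scores and decisions below are hypothetical), the algorithmic output can be treated as a mere suggestion that only takes effect once a natural person has confirmed or overridden it:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CreditAssessment:
    applicant_id: str
    algorithmic_score: float   # output of a (hypothetical) scoring model
    suggested_decision: str    # e.g. "approve" or "reject"

def final_decision(assessment: CreditAssessment,
                   human_review: Optional[str]) -> str:
    """Human-in-the-loop safeguard, sketched in the spirit of Art. 22 GDPR
    and Art. 14 AI Act: the algorithmic output remains a suggestion, and a
    natural person must confirm or override it before it takes legal effect.
    """
    if human_review is None:
        raise RuntimeError(
            "No human review recorded: the decision must not be based "
            "solely on automated processing."
        )
    return human_review  # the reviewer's decision prevails

assessment = CreditAssessment("applicant-42", 0.31, "reject")
print(final_decision(assessment, human_review="approve"))  # -> approve
```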

It is also instructive to compare the right to information under Articles 13 to 15 of the GDPR with the right to an explanation of individual decisions provided for in Article 86 of the AI Act. These measures allow data subjects who may have been victims of algorithmic bias to challenge the decisions made.

[1] The legal challenges of PropTech - (I) How to valorize data while complying with the GDPR?

3. What liability in the event of damage caused by AI?

In some cases, it cannot be ruled out that the use of AI may cause harm to users or third parties. For example, a rejected applicant could complain about a discriminatory choice made by the AI system that a property owner or real estate agent uses to select a future tenant, possibly as a result of algorithmic bias. Programming errors could also lead to mistakes by AI systems used to assist architects or engineers in designing and implementing plans. A misconfiguration of the AI system used for predictive maintenance of a building's facilities (lighting, heating, etc.) or to optimize energy consumption based on the occupants' habits could likewise result in significant harm to users.

In terms of contractual or non-contractual liability, the ordinary law of obligations remains applicable (mainly Books 5 and 6, as well as the future Book 7, of the Belgian Civil Code), even though uncertainties (or difficulties) cannot be excluded where the behavior of the AI was unpredictable or impossible to explain.

Furthermore, new rules should govern this issue once the recent Directive 2024/2853 on liability for defective products has been transposed by the Member States (by December 2026), or if the proposed directive on AI liability is ultimately adopted.

In any case, one must be aware of the financial (and reputational) risks and, in compliance with applicable rules, must ensure these risks are strictly managed (through liability limitation clauses or warranty clauses, for example). Users should thus be informed of the limitations of the tool being offered to them and accept these limitations in advance.

For more information, please feel free to contact Hervé Jacquemin, Julie-Anne Delcorde, Thierry Tilquin
