Are employees allowed to let AI do their work? Legal aspects of the use of Artificial Intelligence (“AI”) in the employment relationship


Since ChatGPT, Claude, Bard and comparable large language models (“LLMs”) came into the spotlight, employees have no doubt considered handing these LLMs tasks that they are supposed to perform themselves as part of their employment relationship. But are employees allowed to do so? What legal aspects need to be considered? Employers would do well to address this issue, regardless of whether they want to prohibit, encourage or mandate the use of AI.

What does “artificial intelligence” in the context of LLMs actually mean?

AI comes in many different forms. While AI-based applications have long existed for many purposes, LLMs are characterised by the fact that

  • they are “self-learning”, i.e. they determine the most probable output on the basis of very extensive training data,
  • users can give them freely written tasks (prompts) and
  • the AI generates a suitable text in response; the output can be a short answer, an essay, a letter, an email, a contract, a translation, a calculation, programming code or any other type of text in the language chosen by the user.

Most LLMs were trained largely on content available on the internet. In addition, users typically have to agree that their input may be used to further train the language model.

The quality of the results varies. In many cases, they are surprisingly good and precisely match the user’s request. In some cases, however, the LLM misunderstands the task, overlooks important aspects or invents content even where the user asked it to rely only on documented information (in the latter case, the AI is said to be “hallucinating”).

Obligation to personally perform the work

In view of the wide usability of AI, many tasks performed by employees could be carried out by an AI. But may an editor have AI write their article, a customer service employee have it draft an email reply, or a programmer have it write their code?

If a company provides its employees with a specific AI application, they are generally obligated to use it. If there are no instructions, however, things are more complicated. In case of doubt, Sec. 613 sentence 1 German Civil Code (BGB) obliges employees to perform their work in person. This means that they may not have another person do the work in their place.

With regard to AI, the question is whether the use of an AI application is categorised as the use of an assistive device or equated with engaging an auxiliary person.

The prevailing opinion to date classifies the use of AI as the use of an assistive device, so that it would not constitute a violation of the duty to perform work in person. This can be justified by the fact that AI has not been granted its own legal personality at either German or European level. AI is not an auxiliary person in the traditional sense of a natural person.

The purpose of the provision also supports this view: the identity of the person performing the work owed by the employee is usually of particular importance to the employer. Insofar as the employee ensures human, conscious decision-making with respect to the AI-based output, so that the AI is used only as a suggestion or as support, the duty to perform the work personally is likely fulfilled.

Of course, the extent of AI usage also matters. If the employee merely formulates prompts for a work task and uses the result unchanged, this indicates a violation of the duty to perform the work in person. If, however, the employee intervenes substantially, for example by conducting a significant review of the result of their own or by fine-tuning the generated output through multiple additional prompts, the use of AI should, on the current assessment, not usually constitute a breach of this duty.

Employer can prohibit or restrict the use of AI

Employers are free to expressly prohibit the use of AI. In that case, using an AI application constitutes a breach of duty, even if it serves only as an aid.

It is also possible to set binding guidelines on the tasks for which employees may use AI and on what they must observe when doing so. In the event of violations, employers can – depending on the severity of the violation – take action by issuing a warning or taking further steps under labour law.

Restrictions on the use of AI

Even if the employer has not issued any work instructions, AI may still not be used without restrictions. Depending on the subject matter and extent of use, employees may be obliged to refrain from using AI, or at least to inform the employer about its use, in order to prevent damage to the employer’s legal interests. This follows from the employee’s secondary obligations under the employment contract (Sec. 241 para. 2 German Civil Code). To provide some examples:

  • For data protection reasons, employees may not enter protected personal data into the AI’s input window. Many LLMs are hosted on servers that are subject to lower data protection standards than those in the EU. In addition, LLMs continue to learn from user queries, meaning it is theoretically possible for other users to gain access to the data entered.
  • Products developed by AI are generally assumed not to be eligible for copyright protection, as AI cannot be a “creator” within the meaning of (most) copyright laws due to its lack of legal personality. Where the employer grants its customers rights of use in work products, AI involvement in the creation process can put the employer in the unfortunate position of not holding these intellectual property rights (at least not in full) and being liable for damages.

Hence, there are many arguments in favour of an obligation to notify the employer of the use of AI for performing work duties, even if its output serves only as a starting point for further processing.

Since employees are often not even aware of the legal risks their employer faces when AI is used, or of the resulting duty of disclosure, employers are well advised to examine the relevant risks in relation to their company and to set guidelines for the use of AI.

Requirements for the use of AI systems

Additionally, employers themselves need to be careful when using AI systems in HR work, for example in recruitment (see our corresponding insight). They should also take the upcoming EU Artificial Intelligence Act into account, which categorises AI systems into different risk groups, bans certain AI applications that pose an unacceptable risk, and strictly regulates high-risk AI applications (initial proposal for a regulation laying down harmonised rules on AI in the European Union dated 21 April 2021). The negotiations on the AI Act are currently in the final round and may even be concluded in 2023.