What does “artificial intelligence” in the context of LLMs actually mean?
AI can come in many different forms. While AI-based applications exist for many purposes, LLMs are characterised by the following:

- LLMs were mostly trained on content available on the internet. In addition, users usually have to agree that their input may be used to further train the language model.
- The quality of the results varies. In many cases, they are surprisingly good and match the user input exactly. In some cases, however, the LLM misunderstands the task, overlooks important aspects or invents content, even where the user only wanted to rely on documented information (in the latter case, the AI is said to be “hallucinating”).
Obligation to personally perform the work
In view of the broad usability of AI, many tasks performed by employees could also be carried out by an AI. But can an editor have AI write their article, can a customer service employee have AI draft an email reply, and can a programmer have AI write their code?
If a company provides its employees with a specific AI application, they are generally obliged to use it. If there are no instructions, however, matters are more complicated. In case of doubt, Sec. 613 sentence 1 German Civil Code (BGB) obliges employees to perform their work in person, meaning they may not have another person do the work in their place.
With regard to AI, the question is whether the use of an AI application is to be categorised as the use of an assistive device or equated with the involvement of an auxiliary person.
The prevailing opinion to date classifies the use of AI as the use of an assistive device, so that it would not constitute a violation of the duty to perform work in person. This is supported by the fact that AI has not been granted its own legal personality at either German or European level: AI is not an auxiliary person in the traditional sense of a living person.
The purpose of the provision also supports this view: it reflects the fact that the identity of the person performing the work is usually of particular importance to the employer. Insofar as the employee ensures human, conscious decision-making on the AI-based output, so that the AI is used only for suggestions or support, the duty to perform work in person is likely fulfilled.
Of course, the extent of AI usage matters as well. If the employee merely formulates prompts for a work task and uses the result unchanged, this indicates a violation of the duty to perform the work in person. If, however, the employee intervenes substantially, for example by conducting a significant review of the result or by fine-tuning the generated output through multiple follow-up prompts, the use of AI should, according to the current assessment, usually not constitute a breach of this duty.
Employer can prohibit or restrict the use of AI
Employers are free to expressly prohibit the use of AI. In this case, the use of an AI application constitutes a breach of duty, even if it only serves as an aid.
It is also possible to set binding guidelines on the tasks for which employees may use AI and what they must observe when doing so. In the event of violations, employers can – depending on the severity of the violation – take action by issuing a warning or taking further steps under labour law.
Restrictions on the use of AI
Even if the employer has not issued any work instructions, AI may still not be used without restrictions. Depending on the subject and extent of use, employees may be obliged to refrain from using AI, or at least to inform the employer about its use, in order to prevent damage to the employer’s legal interests. This follows from the employee’s secondary obligations under the employment contract (Sec. 241 para. 2 German Civil Code).
There are therefore many arguments in favour of an obligation to notify the employer of the use of AI when performing work duties, even if its output is only used as a starting point for further processing.
Since employees are often unaware of the legal risks that their use of AI poses for their employer, and of the resulting duty of disclosure, employers are well advised to examine the relevant risks for their company and to set guidelines for the use of AI.
Requirements for the use of AI systems
Additionally, employers themselves need to be careful when using AI systems in HR work, for example in recruitment (see our corresponding insight). They should also take the upcoming EU Artificial Intelligence Act into account, which categorises AI systems into different risk groups, bans certain AI applications that pose an unacceptable risk and strictly regulates high-risk AI applications (initial proposal for a regulation laying down harmonised rules on artificial intelligence in the European Union dated 21 April 2021). The negotiations on the AI Act are currently in the final round and might be concluded as early as 2023.