Artificial intelligence: Examination law aspects
As the use of AI and AI tools is currently handled and interpreted differently at each university and no legally binding guidance is available, no binding recommendations can (yet) be made at this point.
Larger universities in particular have issued recommendations within their own sphere of influence (see, for example, the dossiers of the Hochschulforum Digitalisierung). Individual legal opinions (e.g. from Ruhr University Bochum) and guidelines from higher-level bodies such as the Hochschulforum Digitalisierung, the Centrum für Hochschulentwicklung and the Deutsche Gesellschaft für Hochschuldidaktik are also available. However, there is not (yet) any uniformity, let alone a binding standard. The following explanations therefore represent a snapshot that is continuously reviewed and updated.
Authorship of AI content
The use of AI tools to create content of all kinds offers new opportunities for teachers and students at universities. From a copyright perspective, the use of AI tools primarily raises the question of authorship of the work if it was produced exclusively or partially using AI. The work as the subject of copyright protection is defined as a personal, intellectual creation of the author in accordance with Section 2 (2) UrhG. A creation is personal if it was created by a human being. The decisive factor for the creation of a protectable work by means of AI tools is therefore the predominance of the human contribution in the creation process. Content generated exclusively by AI, without a significant human creative contribution, consequently does not constitute a protectable work.
Labeling of AI content
There are no standardized solutions for the labeling of AI content in examinations (theses) and academic publications. Universities should develop and define an individual regulatory framework in accordance with their fundamental stance on the use of AI in order to create legal certainty for lecturers and students. For guidance on the correct citation of AI content, the handout from the University of Basel is recommended.
Furthermore, additional labeling obligations may arise from the license and terms of use of the AI software used. By using the AI software, users undertake to comply with the license and terms of use and thus to implement the labeling requirements.
Compliance with the rules of good scientific practice
If third-party content is used, it must be marked with the corresponding source and copyright information. It must always be possible for third parties to trace the source. This also applies to content that has been developed in whole or in part using AI applications.
Adaptation of students' declarations of independence
For students, the obligation to state the (permitted) aids used in examinations (final theses) generally arises from the declaration of independence. In order to maintain the principle of equal opportunities and to prevent attempts at cheating, it is recommended that these declarations be extended to include the point "use of generative AI tools as permitted aids". If AI-generated content is used, documentation of the AI tools used (list of AI tools; list of prompts) must also be submitted in addition to the declaration of independence. In addition, students' personal responsibility for the use of AI-generated content should be expressly emphasized, and the consequences under examination law of unauthorized use should be pointed out.
>>> Here you can find, as examples, the status of dealing with AI and the declaration of independence of the FSU Jena.
Clarification of university regulations and statutes
The regulations on (im)permissible aids are mostly defined in the institutional framework examination regulations or, in particular, in the examination regulations of the individual degree programs. It is recommended that the examination regulations be made more precise with regard to generative AI as an aid in examinations. This could take the form, for example, of a list of permissible and impermissible AI tools. Furthermore, a labeling requirement for AI-generated content and other mandatory requirements, e.g. the disclosure of prompts (= input commands), can be defined for cases of permitted use.
Dealing with attempts to deceive in suspected cases
If attempted cheating through the unauthorized use of generative AI tools as an aid is suspected, this must be reported to the examination board immediately. In practice, attempted cheating is difficult to prove. Even technical solutions, such as recognition software, currently provide neither reliable nor sufficient evidence of attempted cheating. In cases of suspicion, prima facie evidence applies (= facts which, based on general experience, allow no explanation other than cheating), but this can be refuted by a counterstatement from the examinee. If there is no clear evidence of cheating, the examination must be assessed despite the suspicion.
1) General legal aspects (framework conditions)
In the core fields affected, positions range between the following poles:
- Authorship of generated content (positions range between: ChatGPT vs. the respective user)
- Handling of references (positions range between: "already covered by existing regulations" vs. "blind spot in existing regulations")
- Impact on declarations of independence (positions range between: "already covered by existing regulations" vs. "blind spot in existing regulations")
2) Impact on examination scenarios/questions
Note: The (legal) framework is provided by the applicable examination regulations, declarations of independence or guidelines of the respective university.
Various positions can be found in this field as well: one approach that is fundamentally possible (and can currently be observed) is to respond to AI tools by returning to conventional, supposedly less vulnerable forms of examination (a "renaissance" of oral examinations and of the written exam in the classic sense). The opposite approach advocates increasing the reflective component of examinations (or adding one for the first time where necessary) and concentrates on adapting tasks: e.g. deliberately exploring the limits of AI and reducing pure reproduction of facts.
Concrete effects on examination performance/performance records:
- There is currently no need for action for face-to-face examinations, digital face-to-face examinations (provided the use of technically controlled environments is permitted) and oral examinations. Critical, however, are written assignments completed at home (including preparations for papers/presentations) and all situations in which there would be sufficient time to use AI.
- A general ban on the use of AI tools does not make sense, as there is currently no way of reliably recognizing AI-generated texts. AI-generated texts are not plagiarism and therefore cannot be detected by plagiarism software.
- Currently offered solutions for recognizing AI-generated texts provide both false positive and false negative results and are therefore not suitable for effectively monitoring a ban.
Recommendations if no changes to the examination format itself are possible:
- Formulate topics or tasks for written assignments as concretely/specifically as possible; take an (even) more critical look at the list of sources and compare it carefully with the content; read the submitted work carefully
- Enable co-authorship between students and AI tool (this may require adaptation of the declarations of independence)
Recommendations if changes to examination formats can be implemented:
- Conduct short subject-specific discussions during or after submission of a paper
- Require submission of interim results of the work / take the reflected development process into account in the assessment (possibly giving portfolio work a new significance)
3) Design of teaching or concrete use in teaching
Note: The (legal) framework is provided by the applicable examination regulations, declarations of independence or guidelines of the respective university.
Opportunities - targeted integration of AI in teaching:
- Establish rules for the use of AI for this purpose (the range here, too, extends from a general ban to unregulated use, with middle ways in between, e.g. defined permitted uses)
- Integrate AI/AI tools as a topic or subject of courses (objective: critical engagement with AI): e.g. targeted testing of AI, exploring borderline areas in the respective discipline, and in particular examining and discussing the output
- Address the topic at a higher level: understand how AI tools basically work, identify the limitations of text-generating AI and critically reflect on AI-generated output
4) Use as teaching support or work aid
AI tools also offer teachers a wide range of support and can serve as an aid for teaching and research as well as for numerous administrative tasks.
Practical application examples - relief/support, especially for administrative/routine tasks:
- General use for the creation of (text/image) content: e.g. text templates of all kinds, illustrations/graphics for presentations, creation of tables (also from own data), creation of program code etc., creation/rewording of learning materials in simpler language, translations of existing content etc. (the use from the teacher's point of view is therefore not fundamentally different from the possible use by students).
- Help with or suggestions for the design of learning units: Suggestions for introductory questions, specific cases or problems, creation of advanced organizers (overview of content, learning objectives), suggestions for tasks, worksheets, quizzes, etc.
- Implementation of constructive alignment/help with semester planning: formulate learning objectives (also related to learning objective levels), suggestions for semester planning (with input of the relevant framework conditions), create/revise module descriptions/event descriptions, etc.
- Suggestions for examination scenarios: Suggested wording for examination forms or tasks (e.g. single/multiple choice tasks) as well as for associated assessments (suggestions for sample solutions and suggestions for assessment criteria), suggestions for feedback texts/review texts - note: the actual assessment must be carried out by humans at present!
- Support for all forms of (written) communication: use of suggestions for invitation texts, templates for announcements, news, etc.