Guidelines for the use of generative AI tools in research, teaching and administration at Humboldt-Universität zu Berlin

Introduction

Generative artificial intelligence tools[1] such as OpenAI's ChatGPT or Google Gemini are developing rapidly and are being used for an ever-wider range of tasks, including in the professional context of employees at Humboldt-Universität zu Berlin. Although the benefits, consequences and risks of these AI tools are only partially foreseeable, we can assume that the pace of development in the field of AI will continue to increase.

The HU promotes the use of AI in research, teaching and administration. As with any new technology, we as a university aim to engage constructively with these developments and integrate them safely into our everyday work. In order to be able to act independently as a university, the Computer and Media Service (CMS) develops and hosts its own AI infrastructure based on open source products. This offers the advantage of low-threshold and data protection-compliant use. A range of large language models (LLMs) is currently available free of charge to all HU members for various purposes[2]. Unlike commercial products, neither the user's input ("prompts") nor the system's output is stored or tracked.

This guide points out legal constraints and security aspects in connection with the use of artificial intelligence in the various areas of the university. Further up-to-date information on possible uses and usage scenarios of HU's own AI applications is provided on the website[2] accompanying this guide, which is regularly updated to reflect the latest findings on the use of AI tools. Furthermore, the guidelines are to be expanded into an AI policy in the future.

For the area of study and teaching, please also refer to the "Recommendations on the use of artificial intelligence in coursework and examinations at HU"[3]. Information on citing AI-generated content can be found in the recommendations of Berlin University Publishing[5].


Utilisation of the results generated by AI

Results generated by generative AI can be very accurate and precise; however, they can also be entirely fabricated and incorrect. It is therefore important to critically check the output produced by an AI tool for correctness and not to rely on it alone. The responsibility for further use lies with you. Therefore, please only use the results of generative AI if you are confident that you can judge their quality yourself.


Data protection and confidentiality

Information that is shared with generative AI tools is usually no longer protected from this point onwards and can be used by the providers for their own purposes. Under these conditions, basic data protection requirements for processing (purpose limitation, Art. 5 of the GDPR, confidentiality) and transparency (implementation of data subject rights, Art. 15, 16, 17, 18 and 21 of the GDPR, information obligations under Art. 13 and 14 of the GDPR) can no longer be adequately guaranteed.

In addition, many AI systems use external IT systems ("cloud"), which are often located in countries outside the EU. As significantly lower protection standards sometimes apply in these so-called third countries, the processing of personal data in such systems may be unauthorised. It is therefore important to ensure that protected or sensitive information does not fall into the hands of unauthorised persons in this way.

The following guidelines for HU employees are derived from this:

  • Protect confidential data: Never enter classified confidential data or non-public research data into publicly accessible generative AI tools.
  • If the data is not classified as confidential, personal data must still be removed or anonymised before it is passed on to the tool. You can re-insert this data into the output later if required.
  • When using the HU-internal AI offers of the CMS, the above-mentioned restrictions regarding data protection do not apply, as the system is configured in such a way that all data protection requirements are met.
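The removal and later re-insertion of personal data described above can be sketched as a simple placeholder substitution. The following Python example is purely illustrative and not an HU-provided tool; it only covers e-mail addresses, whereas real personal data (names, student IDs, addresses) would need further patterns or manual review.

```python
import re

def pseudonymise(text):
    """Replace e-mail addresses with numbered placeholders before the
    text is shared with an external AI tool. Returns the redacted text
    and the mapping needed to restore the originals later."""
    mapping = {}

    def repl(match):
        placeholder = f"[PERSON_{len(mapping) + 1}]"
        mapping[placeholder] = match.group(0)
        return placeholder

    # Simple e-mail pattern for illustration only; it does not catch
    # every form of personal data.
    redacted = re.sub(r"[\w.+-]+@[\w-]+(?:\.[\w-]+)+", repl, text)
    return redacted, mapping

def reinsert(text, mapping):
    """Restore the original data in the AI tool's output."""
    for placeholder, original in mapping.items():
        text = text.replace(placeholder, original)
    return text

redacted, mapping = pseudonymise("Please reply to max.mustermann@hu-berlin.de.")
# redacted == "Please reply to [PERSON_1]."
restored = reinsert(redacted, mapping)
# restored == "Please reply to max.mustermann@hu-berlin.de."
```

Because the mapping never leaves your own machine, the AI provider only ever sees the placeholders, while you can still restore a complete text from the tool's output.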


Copyright and patent law

When AI is used, not only personal data but also intellectual property content can be made accessible to third parties. As a result, existing intellectual property rights may be circumvented or intellectual property rights that have not yet been established (e.g. content intended for patenting) may be exploited by third parties.

This gives rise to the following instructions for HU employees:

  • Please take care not to transfer any intellectual property to publicly accessible AI tools. Especially when using generative AI tools, you should ensure that you do not jeopardise any existing or not yet established intellectual property rights.
  • When using the HU-internal AI offers of the CMS, the above-mentioned restrictions regarding copyright and patent law do not apply, as the system is configured in such a way that none of your entries can flow to third parties within the HU or third parties outside the institution.


IT security

Generative AI makes it easier for malicious actors to develop sophisticated deceptions on a much larger scale. Novel and creative cybercrime campaigns can therefore be expected in the future.

This results in the following instructions for HU employees:

  • Be alert to AI-supported phishing. Phishing e-mails are becoming increasingly sophisticated in their wording. Human voices can also be imitated so realistically that they are hard to distinguish from a real speaker on the phone, and AI-based image and video manipulation opens up further avenues for creative phishing. Please check incoming e-mails and other messages carefully before replying to them. This applies in particular if confidential or potentially dangerous information is to be transmitted.
  • Continue to follow best security practices[4] and be aware of the possibilities of cybercrime through generative AI. If you are unsure or would like to report suspicious messages, please contact the CMS User Advisory Centre: cms-benutzerberatung@hu-berlin.de.

Procurements

Procurement here also covers cloud solutions, subscriptions and contracts for the purchase of IT services. The market for generative AI tools is hard to survey, and it is often unclear how many providers further use the data entered.

  • Before using generative AI tools, you should check them for risks relating to data protection and IT security. If you are unsure, we recommend that you refrain from using them or fall back on established tools.
  • Before procuring AI tools, consider carrying out a preliminary review or having one carried out. This lets you determine whether the software can be used for the intended areas of application, and under which technical and legal conditions. The preliminary review can be carried out together with the CMS, the IT security officer and the data protection officer.


If the intended AI software is to be used to process personal data, a preliminary assessment can be used to identify and address fundamental risks before the procurement decision is made. Nevertheless, in addition to the preliminary review, the admissibility of the tool to be procured for the specific application under data protection law may have to be reviewed and approved by the data protection officer.


[1] Artificial intelligence (AI) refers to algorithms and tools that can be used to imitate human abilities and behaviour. Generative AI in this sense refers to the use of AI to analyse patterns and structures in existing data in order to generate new content in various formats such as text, music, computer code, images or videos.

[2] https://ki.cms.hu-berlin.de

[3] https://www.hu-berlin.de/de/studium/pservice/empfehlungen_ki_in_pruefungen_hu_2023-09-18.pdf

[4] https://informationssicherheit.hu-berlin.de/de/Phishing

[5] https://www.berlin-universities-publishing.de/ueber-uns/policies/ki-leitlinie/ki-handreichung/index.html