The question of digital sovereignty is becoming more urgent for universities in the age of AI. AI-based applications are not only problematic from the perspective of data protection and functional transparency; they also pose a threat to independent thinking. AI literacy, an independent AI infrastructure, and a clearly defined strategic framework are fundamental for defending academic freedom.

Since the beginning of digitalization, universities worldwide have been preoccupied with the question of digital sovereignty. This involves issues such as ensuring that IT applications comply with data protection regulations and reducing technical and financial dependencies. However, artificial intelligence (AI) is challenging digital sovereignty in new ways that go beyond these classic aspects.

Digital and Intellectual Sovereignty in the Age of AI

In schools and universities, students are increasingly relying on large language models in their learning processes. More and more students (as well as experienced academics) are entering tasks into AI-based chat services to see what solutions can be generated. They then continue their work based on these results. This applies not only to fact-oriented tasks, but also to the interpretation of data and the development of arguments. Large language models therefore have a considerable and constantly growing influence on learning outcomes and on the results of scientific studies.

For scientific processes, an unbiased and analytical approach is central. Contrary to what users often assume, however, AI language models are not objective entities. Instead, they are shaped by worldviews in two ways: first, by their training data (and any biases it may contain); and second, by subsequent fine-tuning (i.e., their adjustment for standard operation), which is intended to prevent undesirable response behavior when the system is used publicly. This also applies to questions of worldview: after this fine-tuning, models often answer controversial political questions differently than they did before.

In both China and the United States, we can see the strong influence that politics has on tech companies. In the United States, in particular, it has recently become apparent that owners of large tech companies can align their social media platforms with the political tide when it suits them and use them to influence their users politically. It is important to understand that they can also do this with the large language models they control.

There are already numerous documented examples of this political adjustment of large language models. One example from the United States is the chatbot Grok, which, in the summer of 2025, suddenly and intrusively informed its users about “white genocide” in South Africa—a topic that is a political passion of Grok’s owner, Elon Musk, and is by no means an objective political consensus. Similarly, in China, the DeepSeek model regularly refuses to provide users with information about the Tiananmen Square massacre. We should assume that there are many other adaptations of the models to support specific worldviews.

The problem is that users of generative AI are usually unaware of this fact. There is a great temptation to treat large language models as objective and competent. The issue becomes even more problematic when AI models are integrated into everyday technical applications and act in seemingly neutral roles, whether as virtual assistants, booking assistants, or scientifically oriented tools.

Research into the implications for opinion formation has already begun and is providing concerning insights. For example, in one experiment, users had to write a text about the value of social media for society with the assistance of a chatbot. Unknown to the users, this chatbot had been preprogrammed with certain opinions on the topic, which they often adopted uncritically in their texts. Such experiments suggest that chatbots can considerably influence what we think and write.

Clearly, these models pose a significant risk to intellectual sovereignty. To put it bluntly, they are the perfect tools for subliminal manipulation. This should be of particular concern to universities and academia, where free thought is essential. We must understand that ideologically preset large language models can influence us in scientific argumentation without our even being aware of it.

Given that AI will likely be an integral part of universities for the foreseeable future, what can universities do to prevent the worst effects?

AI-sovereign Infrastructure

First, AI sovereignty is a matter of technical infrastructure. While, in the early days of market-ready AI applications, universities were completely dependent on commercial providers, they are now increasingly taking AI technology into their own hands. From a technical perspective, an AI system at a university usually consists of several components. These include the user interface, the specific service it offers (e.g., a chatbot), and a large language model on which the service is based. In the university context in particular, additional components are also of interest, especially those that allow for a higher degree of factual reliability, such as retrieval-augmented generation. The overall technical system also includes the servers on which the data is processed.
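To make the interplay of these components more concrete, the following is a minimal sketch of a retrieval-augmented generation pipeline in Python. It is purely illustrative: the keyword-based embedding, the placeholder generation step, and the sample documents are assumptions, not any particular university's setup; a real deployment would use locally hosted embedding and language models.

```python
# Illustrative sketch only: shows how retrieval grounds a model's answer in
# institution-controlled documents instead of relying on training data alone.
from dataclasses import dataclass


@dataclass
class Document:
    title: str
    text: str


def embed(text: str) -> list[float]:
    """Placeholder embedding: simple keyword counts.
    A real system would use a locally hosted embedding model."""
    keywords = ["exam", "enrollment", "library", "thesis"]
    return [float(text.lower().count(k)) for k in keywords]


def similarity(a: list[float], b: list[float]) -> float:
    """Dot-product similarity between two embedding vectors."""
    return sum(x * y for x, y in zip(a, b))


def retrieve(query: str, corpus: list[Document], k: int = 2) -> list[Document]:
    """Retrieval step: pick the k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(corpus, key=lambda d: similarity(q, embed(d.text)), reverse=True)
    return ranked[:k]


def generate(prompt: str) -> str:
    """Generation step: in practice a call to a self-hosted open-source model;
    here it only returns a placeholder string."""
    return f"[model answer based on a prompt of {len(prompt)} characters]"


def answer(query: str, corpus: list[Document]) -> str:
    """Retrieval-augmented generation: retrieved documents are passed to the
    model as context, increasing factual reliability."""
    context = "\n\n".join(d.text for d in retrieve(query, corpus))
    prompt = f"Answer using only the following sources:\n{context}\n\nQuestion: {query}"
    return generate(prompt)


if __name__ == "__main__":
    corpus = [
        Document("Exam rules", "Exams must be registered four weeks in advance."),
        Document("Library hours", "The library is open from 8 a.m. to 10 p.m."),
    ]
    print(answer("When do I have to register for an exam?", corpus))
```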

In principle, universities can operate each of these components autonomously. European universities can serve as an example here. In Germany, for instance, universities have collaborated across regional borders to develop interfaces that reduce the need to transmit personal data when accessing commercial large language models (see the sketch below). More importantly, they are starting to use adaptable (open-source) large language models from different providers and to host them in publicly operated data centers so that all university members can use these services. Similar developments can be seen in other European countries.
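The basic idea behind such interfaces can be sketched as a small gateway that removes obvious personal identifiers before a prompt is forwarded to an external model. The patterns and the forwarding call below are simplified assumptions for illustration, not the actual software used by German universities.

```python
# Illustrative sketch of a pseudonymizing gateway between users and an
# external large language model; patterns and the forwarding call are
# placeholders, not a production implementation.
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
STUDENT_ID = re.compile(r"\b\d{8}\b")  # assumed format of a matriculation number


def redact(prompt: str) -> str:
    """Replace obvious personal identifiers with neutral placeholders."""
    prompt = EMAIL.sub("[email removed]", prompt)
    prompt = STUDENT_ID.sub("[student ID removed]", prompt)
    return prompt


def forward_to_model(prompt: str) -> str:
    """Placeholder for the call to a commercial or self-hosted model API."""
    return f"[model response to: {prompt}]"


def gateway(prompt: str) -> str:
    """Only the redacted prompt ever leaves the university's infrastructure."""
    return forward_to_model(redact(prompt))


if __name__ == "__main__":
    print(gateway("My email is jane.doe@example.edu and my student ID is 12345678."))
```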

However, even if universities set up their own AI infrastructure, they cannot solve the problem that there are no ideologically neutral large language models. Therefore, universities must also offer several different models and be transparent about which model comes from which provider and how it behaves in practice.

Beyond Technology: AI Literacy and Strategy

Even if a university establishes an effective technological framework, it cannot stop there if it aims for AI sovereignty. If users do not understand the fundamentals of AI, they will not use it appropriately. Therefore, nurturing AI literacy among all university members is essential. This includes basic knowledge of technology, law, ethics, and pedagogy with regard to AI.

In the European Union, the AI Act now mandates that all providers and operators of AI applications promote such competencies. In the higher education sector, this has led to the development of self-learning courses and in-person workshops, which are implemented both at individual universities and within alliances. Here, too, it is important to realize that not every university has to do this alone, because the content is of interest across the board. Universities should not rely exclusively on offers from commercial providers, as conflicts of interest could prevent open discussion of some AI issues. Furthermore, an important lesson learned from practical implementation is that, in view of rapid changes and for reasons of objectivity, these offerings should not be tool-oriented but should rather provide an understanding of the fundamental, generic characteristics and challenges of AI.

At the same time, it is important not to limit skill development to working with AI itself. Rather, universities must raise awareness among staff and students of core scientific competencies, such as critical thinking and research methodology, and of how to develop and apply them. Consciously teaching and learning these competencies is also part of AI sovereignty in a broad sense.

Finally, AI sovereignty also requires clear framework conditions and strategies for using AI in an institution. This includes clarifying legal questions and defining responsibilities and processes for deciding AI-related issues.

If universities have a clear AI strategy, if they control critical AI infrastructure, and if students and staff are competent in dealing with the technology, then these institutions can be considered AI sovereign. It is urgent to strive for this now.


Peter Salden is director of the Center for Teaching and Learning at Ruhr University Bochum, Germany. He heads one of Germany's largest cross-university projects on artificial intelligence in higher education (KI:edu.nrw) and hosts the largest German-language conference on artificial intelligence in higher education (Learning AID). E-mail: [email protected].