Communist AI vs Liberal AI
Dear readers,
I came across a commentary by the well-known Prof. Dr. med. Torsten Haferlach on an essay by Evgeny Morozov in Le Monde diplomatique and felt compelled to respond, in the hope of continuing this important dialogue. I would appreciate it if you shared this commentary, because we need more in-depth conversations.
Haferlach highlights the interesting discussion about the usefulness of AI technology and refers to Morozov's plea for an AI that augments human capabilities. I would like to add some critical thoughts to his comments, particularly with regard to the dangers of centralized AI models and the importance of participatory approaches.
Communist AI or Public AI
When Evgeny Morozov describes a "communist AI" as a technology developed centrally in public institutions (Public AI) that could promote a socialist transformation, it is crucial that all those who regard the values of free and liberal democracies as irreplaceable push back against any centralized, closed-source approach. Freedom is indispensable in a liberal society, as it forms the basis for individual development, political participation and protection from state despotism. These fundamental values are what ultimately make a diverse and just society possible.
Surveillance Communism
Another threatening model is the Chinese tech model, in which platforms have been created along the lines of US corporations but are kept under strict state control. In China, “surveillance communism” has replaced surveillance capitalism: private companies have extensive data collection capabilities and are required to share all collected data with the government. This surveillance system aims to prevent the spread of “contagious ideas”, instill fear in the population and undermine social trust (social scoring).
Current discussions around disinformation, content moderation and government surveillance often point to threats such as deepfakes and the potential demise of democracy. However, these discussions frequently lead to solutions that drift toward surveillance and communist ideas and weaken the fundamental values of free democracies. Such approaches are more akin to the Chinese model and jeopardize individual freedom. One example is the EU Commission's attempt to break encryption through measures such as chat control, which has been widely criticized because it would not only significantly undermine citizens' privacy but also enable far-reaching state surveillance.
The mass adoption of large language models (LLMs) has made it remarkably easy to create and modify content to target specific individuals or demographics. Services built on LLMs, such as ChatGPT, act not only as interfaces to those models, as the tech philosopher Dr. Denisa Reshef Kera recently described, but also as fine-tuned 'agents' designed to help users while deflecting specific requests or problems. Policies and procedures are implemented to track and filter the replies the model generates when interacting with users. This subtle form of censorship is particularly dangerous because it is invisible and arbitrary, and it can strongly influence public opinion and political discourse. It shows that AI has become not only a technical phenomenon but also a political and social one.
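To make the mechanism concrete, here is a minimal sketch in Python of how such reply filtering can sit between the model and the user. Every name in it is a hypothetical assumption for illustration; it does not describe any real provider's implementation.

    # Hypothetical sketch of post-generation reply filtering (not any real provider's code).
    BLOCKED_TOPICS = ["topic_a", "topic_b"]  # opaque, provider-defined policy list

    def generate(prompt: str) -> str:
        # Stand-in for the underlying language model call.
        return f"Model reply about: {prompt}"

    def filtered_reply(prompt: str) -> str:
        reply = generate(prompt)
        # The filter runs after generation; the user only sees the final text
        # and cannot tell whether, or why, a reply was replaced.
        if any(topic in reply.lower() for topic in BLOCKED_TOPICS):
            return "I'm sorry, I can't help with that."
        return reply

    print(filtered_reply("Tell me about topic_a"))

The point of the sketch is that the filtering step is invisible from the outside: the user receives a fluent answer either way.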
I understand the appeal of centralized language models as a means of controlling thought, especially in our turbulent times. However, if a political entity could control language, it would also control thought, making it impossible not only to express a dissenting opinion, but to think it at all. Orwell's warnings in 1984 are a reminder of the dangers of giving an authority uncontrolled power over information, language and truth. Orwell's work emphasizes the importance of a free and open society in which information is not monopolized by those in power. It should serve as a warning, not as a guide to a totalitarian future.
Liberal AI
In view of these developments, I consider both the centrally developed approach in public institutions and the Chinese tech model to be incompatible with the values of free and liberal democracies. For the past five years, I have therefore been advocating a participatory approach that is supported by global communities and based on open source principles. The principle of freedom is crucial in the open source context, as it allows anyone to access, study, modify and redistribute the source code. This openness encourages broad participation and enables people from different backgrounds to actively contribute to the development and improvement of software.
Unfortunately, this liberal approach is increasingly being undermined by organizations with communist tendencies that operate by spreading fear among the population. In recent years, pseudoscientific studies have been used to portray free access to large language models as increasing the risk of bioterrorism. Although these studies have been shown to be flawed, Brandolini's law is once again evident here: the energy required to refute misinformation is far greater than that required to spread it.
Over the past two years, the global open source AI community has consistently pursued this participatory path, which has led to significant innovations. Thanks to these developments, I can now modify and run language models myself, even on my phone and laptop, without being connected to the internet or being monitored. Many companies, including start-ups in the medical field, have embraced this participatory approach and opened up their models for collaborative work. This broad cooperation and accelerated pace of innovation stands in stark contrast to a centralized approach in which a small elite makes the decisions, which can lead to a homogenization of thought and culture.
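As an illustration of how low the barrier has become, here is a minimal sketch of running an open-weight model locally with the Hugging Face transformers library. The model name distilgpt2 is only an illustrative stand-in; the weights need to be downloaded once, after which generation runs entirely on the user's own machine.

    # Minimal sketch: local text generation with an open-weight model (illustrative only).
    from transformers import pipeline

    # Any open-weight model already present in the local cache works here;
    # distilgpt2 is used purely as a small, freely available example.
    generator = pipeline("text-generation", model="distilgpt2")

    result = generator(
        "Open source AI lets anyone",
        max_new_tokens=30,  # keep the sample short
        do_sample=True,     # sample instead of greedy decoding
    )
    print(result[0]["generated_text"])

Nothing in this snippet calls out to an external service; the computation stays on the device.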
A participatory approach that involves a broad range of stakeholders in AI regulation promotes inclusiveness and resilience. It ensures that AI systems reflect diverse human values and meet the varied needs of society. By avoiding the concentration of power, this approach helps to prevent cultural and intellectual uniformity. It is crucial that such models are operated according to the principle of decentralization and under open source licenses. These licenses ensure transparency and enable broad, collaborative development of the technology, distributing control over AI systems more evenly and supporting a pluralistic society.
Happy weekend,
Bart