
OUT OF CONTROL
"One day we may have to worry about an all-powerful machine intelligence. But first we need to worry about putting machines in charge of decisions for which they lack the intelligence." [Jon Kleinberg]
Crises everywhere - wars, refugee flows, pandemics, severe weather disasters, polluted seas and soil, contaminated drinking water and, not least, empty state coffers: experts speak of a polycrisis.
The democracies of the 21st century seem to have run out of solutions and are no longer able to cope with these problems. Is the system incapable, or even itself to blame?
Democratic politicians are not addressing the concerns and fears of people who vote for the right; instead, they adopt populist issues and ideas and make them socially acceptable. In order to maintain their power, they form coalitions with right-wing parties, giving them access to money, information and decision-making power. As a result, right-wing populist movements are growing ever stronger in Europe too. At the same time, society is becoming increasingly insecure and divided.
To what extent have capitalism and new technologies shifted the balance of power? Is the power of disposal determined by the possession of material and immaterial assets, and made possible by the willingness of the ‘powerless’ to surrender their own will (creative power, participation)?
Elon Musk openly supports parties such as the AfD via his platform X; Mark Zuckerberg has abolished fact-checking at Meta (Facebook and Instagram) in a genuflection to President Trump; and Italy's Prime Minister Meloni intends to use Starlink technology - likewise controlled by Musk - for the Italian military. BlackRock and other financial giants - whose demand for electricity is almost insatiable and may continue to grow unchecked - are turning their backs on the promises of the Green Deal and the use of renewable energies, and European governments are banning ‘gendering’ in official texts.
Not so long ago, talk of artificial intelligence was driven by different ideas than those ChatGPT evokes today in the eyes of its users, or than, say, route planning in sat navs or the calculation of personality compatibility by a dating platform; by different experiences than those underlying, for example, the EU's classification of AI use in the judiciary as high-risk and a danger to democracy and the rule of law.
Even deep-fake applications are now available for domestic use. Accordingly, topics previously associated with AI (such as those accumulated in the highly differentiated debate on ‘strategies of appearance’) have shifted from their techno-philosophical glamour to questions of technology assessment with concrete practical relevance. Less so where progress has undoubtedly been made, for example in the development of medicines, climate models or weather forecasting - but digitalisation occasionally recalls the Matrix through its accompanying symptoms in the tangible reality of life, for example when algorithms perpetuate compromised training material or the ideological conceits of their developers. These processes reached a new and increasingly explosive dimension when algorithms began to be entrusted with ethical and moral decisions: when they determine the allocation of housing, loans, jobs, insurance and the like; when they are used in social media to influence elections and voters; when ethical and moral judgement is required, be it in autonomous driving or in a military context, where, as planned, the autonomy of drones includes the decision over the life or death of human beings.
In the last quarter of a century, ‘strategies of appearance’ have undergone an increasing trivialisation: they have become more commonplace, more tangible; the rapid technological development originally associated with them has, to a certain extent, been overtaken by their general availability; technical jargon has mutated into everyday slang.
The shift from (epistemic) reflection to the observation of things is taking place without the voices of the discourse-defining thinkers who were once decisive for considerations running ahead of their subject matter. Virtuality, artificial intelligence, simulation and fakes are now ‘on everyone's lips’ as real-political, socially relevant problems and areas of interest outside the social sciences and humanities; terms once characterised primarily in technological respects have meanwhile been transformed into economic, political and ideological realities and have become more tangible.
A certain hybridisation (of everyday life) can be observed: the fake takes on depth by aligning itself with the reality of life. ‘False facts’ and the methods of faking them are accepted as a given at various levels - from industrial cheese and all the other fake foods to the projections of alleged sustainability and climate neutrality commonly used in the marketing and advertising of goods and services; from realities such as artificially created food shortages, in which food is destroyed on an industrial scale to support prices, to social realities such as the blatant rise of heating-cost poverty in the wake of the war in Ukraine, ‘falsified’ by the billions in profits made by energy suppliers; from the virtualisation of money (not only through its decoupling from the value of gold in the 1970s, but also in the form of cryptocurrencies created by mining) to those realities that owe their existence to a flourishing disinformation industry - one in which a Donald Trump is elected president of the United States, or in which social upheavals follow the Brexit referendum in the UK - realities that allow disinformation to grow on an industrial scale and generate global revenues, i.e. to develop into an economic sector.
The assumption that this hybridisation derives its actual momentum from the digital sphere is obvious.
The mass of free internet-based services has long since become synonymous with the internet itself (as if one were to call vehicles roads), and the frequency and duration of their use herald an existential dimension: every MINUTE - the figures are somewhat dated - more than 200 million emails are sent worldwide, more than 200 hours of video are uploaded to YouTube, more than half a million tweets are written, more than 400 new blog posts are published, more than a quarter of a million photos are uploaded to Facebook, and Google records more than 4 million search queries (around 6 billion a day). ChatGPT recorded around 100 million monthly active users just two months after its launch, making it the fastest-growing consumer application to date; TikTok needed another nine months to reach 100 million users, Instagram two and a half years.
This sheer volume - the gigantic scale of the production, distribution and storage of data and information - removes the basis for mental coping. The dialectic of the communication society: the greater the amount of information, which grows merely accumulatively and not integratively, the lower the chance of meaningful processing. The lower the chance of meaningful information processing, the lower the chance of rational judgement as a basis for private and public decision-making. And the lower the ability and willingness to judge rationally, the greater the temptation and willingness to follow irrational patterns of interpretation, such as letting an X pass for a U. If this finding holds for society as a whole, conspiracy theories and the democratic problems they cause will seem like a mere fad. The digital hidden-object scene outlined above demonstrates the shift of lifetime and life content into the digital sphere so impressively that talk of a parallel world seems entirely appropriate (the once popular idea of an AI that brings itself to life in an act of spontaneous auto-creation within the globally networked computers is probably a thing of the past).
In addition, it speaks of a blind trust that seems all the more peculiar given that data mining is by now common knowledge: "With your permission [!] you give us more information about you, about your friends, and we can improve the quality of our searches. We know where you are. We know where you have been. We can more or less know what you're thinking about." What Google CEO Eric Schmidt presented as a vision of the future in 2010 has become reality thanks to machine learning. The transformation of users is also complete: paying for free services with personal data means that you are not a customer, not a user, but the product being sold. How is it that half of humanity, concerned and suspicious of the state when it comes to its data, nevertheless organises its everyday life in Google Calendar and makes its personal biotracking data accessible via wearables on an industrial scale?
The 2025 annual programme also highlights the positive aspects of being out of control - of being uncontrollable in the sense of being unpredictable, uncodable, unmanipulable. What does it take to create a sense of hope, to regain agency and participation?
"[Art] represents a search movement. It is an attempt to gain a foothold and direction. In doing so, it also pushes forward into the unknown, into the open, into the not-yet-being, by reaching beyond what has been, beyond what already exists. It reaches towards the unborn. It sets out towards the new, the completely different, the unprecedented." [Byung-Chul Han]