[spectre] House of Mirrors: Artificial Intelligence as Phantasm, 09 April – 31 July 2022, HMKV, Dortmund/Germany

Inke Arns inke.arns at snafu.de
Thu Jun 16 17:07:54 CEST 2022


Dears,

I realised that I have been silent on this list for a long time, and decided to change that.

Please have a look at HMKV’s exhibition on Artificial Intelligence that was curated by Marie Lechner, Francis Hunger and me. I am including the press release.

You can download the entire 200+ page publication as a free PDF via https://www.hmkv.de/shop-en/shop-detail/house-of-mirrors-artificial-intelligence-as-phantasm-magazin-en.html

Here you can find more information https://www.hmkv.de/exhibition/exhibition-detail/house-of-mirrors-artificial-intelligence-as-phantasm.html 

And here’s Kaput’s 12-minute video about the show, with the curators as talking heads: https://youtu.be/StfAcV1H1Vs

On view until 31 July 2022!

Enjoy.

All the best,
Inke

PS: Please join us for the online symposium on “AI Infrastructures for Civil Society and the Arts” on Friday, 24 June 2022, 09:00 – 12:00 and 13:00 – 15:00!


+++++++++++++++++++


House of Mirrors:
Artificial Intelligence as Phantasm

09 April – 31 July 2022

HMKV Hartware MedienKunstVerein
at the Dortmunder U, level 3
Dortmund, Germany
www.hmkv.de


ABSTRACT

The exhibition House of Mirrors: Artificial Intelligence as Phantasm takes the common clichés about AI as an opportunity to talk about issues such as hidden human labour, algorithmic bias/discrimination, the problem of categorisation and classification, and our fantasies about AI. It asks whether (and how) it is possible for us to reclaim agency in this context. Featuring more than 20 artistic works by international artists, the exhibition is divided into seven thematic chapters. The scenography of the exhibition is reminiscent of a giant house of mirrors.

— “Enter the hall of mirrors, which reflects human reality, sometimes in direct reflections, sometimes in a distorting mirror, sometimes through a glass pane that promises transparency or a semi-transparent mirror that reflects on one side and is translucent on the other.” (Inke Arns, Marie Lechner, Francis Hunger – curators) —

ARTISTS: Aram Bartholl, Pierre Cassou-Noguès, Stéphane Degoutin, Sean Dockray, Jake Elwes, Anna Engelhardt, Nicolas Gourault, Adam Harvey + Jules LaPlace, Libby Heaney, Lauren Huret, Zheng Mahler, Lauren Lee McCarthy, Simone C Niquille, Elisa Giardina Papa, Julien Prévieux, Anna Ridler, RYBN, Sebastian Schmieg, Gwenola Wagon, Conrad Weise, Mushon Zer-Aviv

CURATORS: Inke Arns, Francis Hunger, Marie Lechner

PUBLICATION: Inke Arns, Francis Hunger, Marie Lechner (eds.), House of Mirrors: Artificial Intelligence as Phantasm, HMKV exhibition magazine 2022/1, with texts by Inke Arns, Adam Harvey, Francis Hunger and Marie Lechner (design: e o t, Berlin), Dortmund: Kettler, 2022, available as a free PDF download via https://www.hmkv.de/shop-en/shop-detail/house-of-mirrors-artificial-intelligence-as-phantasm-magazin-en.html

PROGRAMME OF EVENTS: Between April and July 2022, numerous film screenings, lectures, panel discussions, workshops and a symposium will take place as part of the exhibition House of Mirrors: Artificial Intelligence as Phantasm. (See the programme of events below.)


An exhibition by the HMKV Hartware MedienKunstVerein

The exhibition is funded by:
Kulturstiftung des Bundes

Funded by:
Die Beauftragte der Bundesregierung für Kultur und Medien

The exhibition is funded by:
Ministerium für Kultur und Wissenschaft des Landes Nordrhein-Westfalen

The HMKV is funded by:
Dortmunder U - Zentrum für Kunst und Kreativität
Stadt Dortmund
Ministerium für Kultur und Wissenschaft des Landes Nordrhein-Westfalen
                       
The event programme is funded by:
Stiftung Kunstfonds
Neustart Kultur

Media partners:
Kaput Magazin für Insolvenz und Pop
jungle.world


+++++++++++++++++++
 

PRESS RELEASE

House of Mirrors:
Artificial Intelligence as Phantasm

09 April – 31 July 2022

HMKV Hartware MedienKunstVerein
at the Dortmunder U, level 3
Dortmund, Germany
www.hmkv.de


DETAILED CONCEPT

In the popular imagination, Artificial Intelligence (AI) is frequently misunderstood as a God-like entity that makes “just” and “objective” decisions. However, the term Artificial Intelligence is misleading. The systems are neither “intelligent” (the artist Hito Steyerl therefore speaks of “artificial stupidity”) nor, in many cases, “artificial”. The term “pattern recognition” is more appropriate – not only because it avoids the notion of “intelligence”, but because it describes more precisely what AI actually does: like a sniffer dog, AI recognises in large amounts of data what it has been trained to recognise, and it is far more efficient at doing so than any human.

At the same time, this is also a problem. AI exclusively mirrors or repeats the things it has been told to find. AI could therefore be considered a kind of digital “house of mirrors”. Most of us know such a house of mirrors from traditional funfairs: a labyrinth of glass walls and mirrors, some of them distorting. Once you have entered, it is damned hard to find your way out again. And every reflection shows only one's own image, one's own input.
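To make the mirror metaphor concrete: the following is a minimal sketch in Python, using invented toy data and a toy nearest-centroid classifier (nothing from the exhibition or any real system). It shows how a trained system can only ever hand back the categories it was given – every input, however alien, is reflected back as one of the patterns it already knows.

    from statistics import mean

    # Hypothetical training data: (feature, label) pairs the system has been "told to find".
    training = [(1.0, "cat"), (1.2, "cat"), (5.0, "dog"), (5.3, "dog")]

    def centroids(data):
        # Average the feature values per label: the "patterns" the system learns.
        by_label = {}
        for x, label in data:
            by_label.setdefault(label, []).append(x)
        return {label: mean(xs) for label, xs in by_label.items()}

    def classify(x, cents):
        # Every input is forced into the nearest known category, however poorly it fits.
        return min(cents, key=lambda label: abs(cents[label] - x))

    cents = centroids(training)
    print(classify(1.1, cents))    # "cat" -- matches a learned pattern
    print(classify(100.0, cents))  # "dog" -- an unseen input is still mirrored back as a known class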

AI must be trained by humans to do what it does. This is called “machine learning”. And this is where things get sketchy: AI training datasets are often incomplete or lack diversity – and annotations, because of their inherent bias, can be extremely problematic. One telling example is that of Microsoft’s AI chatbot called “Tay”.

In 2016, Microsoft launched an artificial intelligence chatbot that was supposed to converse with millennials on Twitter and gradually adopt their language and expressions: “The more you chat with Tay, the smarter she gets.” Thanks to machine learning technology, which enables a programme to “learn” from the data fed to it, Tay expanded its knowledge through interactions with human Twitter users. However, Microsoft did not reckon with the malicious trolls who fed Tay racist, sexist and homophobic comments. Within hours, Tay became a chatbot posting racist, antisemitic and misogynistic tweets such as “I'm a nice person. I hate all people”, “Hitler was right. I hate Jews”, “Bush caused 9/11 himself and Hitler would have done the job better than the monkey we have now. Our only hope now is Donald Trump” or “I hate all feminists; they should burn in hell”. After only sixteen hours, during which the chatbot had posted more than 96,000 tweets, Microsoft was forced to take it offline.


Humans train machines

The problem evident in the fate of Microsoft's Tay applies to AI in general: humans train machines – in this case a chatbot – and these machines will only be as good or as bad as the humans who trained them. If the source material (e.g. pictures of faces) is already subject to strong selection (e.g. only faces of white people), the result delivered by the AI will be equally biased: if you present the AI with pictures of people with non-white skin colour, it will either not recognise that they are humans or it will classify them as “criminals”.
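A hedged illustration of this selection effect, again with invented toy data (a single made-up “face feature” rather than real images): if only one group is present in the training set, the nearest match for anyone outside that distribution can easily be a non-human category.

    # Hypothetical, deliberately skewed training set: only group-A faces were collected.
    training = [
        (0.9, "person"), (1.1, "person"), (1.0, "person"),   # group-A samples only
        (4.8, "no_person"), (5.1, "no_person"),              # e.g. background clutter
    ]

    def nearest_label(x, data):
        # 1-nearest-neighbour: the prediction simply mirrors the closest training example.
        return min(data, key=lambda pair: abs(pair[0] - x))[1]

    print(nearest_label(1.05, training))  # "person"    -- inside the training distribution
    print(nearest_label(3.5, training))   # "no_person" -- a group-B face is not recognised as human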

The story of Tay – or more recently of the South Korean bot Lee Luda[i] – should be a warning to us all: you have to control the input to AI very carefully, otherwise stupid little Nazis will result. Or the algorithm will deny you a vital kidney transplant.[ii] Why? Simply because your skin has the wrong colour. Because algorithms and AI reinforce existing inequalities.

In this case, the system recognised in US health data the pattern of shorter life expectancy for Black patients (which results from poorer health care for this segment of the US population), and preferred to invest the donor kidney in the (white) patient with the longer life expectancy.[iii]

It should be clear that current realities (injustices) must not be mistaken for desired futures. However, AI does exactly that: it extrapolates potential futures out of past data – data shaped by statistical skews, omissions and prejudices – and thus reproduces existing inequalities. In this sense, we could say that AI is a mirror that distorts future realities.

This needs to be countered by radical transparency. According to AI critics and engineers, the data pools used to train the machines should become part of a public debate.[iv] The training data needs to be carefully checked, and programmers need to be conscious of this problem. If we want AI to reflect our values, then we had better make sure that we teach it some basic human rights.
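What such checking could look like in its simplest form is sketched below, assuming a hypothetical list of annotated records with a “group” field (an invented stand-in for whatever demographic attribute is at stake). Counting who is represented, and which labels they receive, is the most basic audit a public debate could start from.

    from collections import Counter

    # Hypothetical annotated training records; a real data pool would hold thousands.
    dataset = [
        {"group": "A", "label": "approved"},
        {"group": "A", "label": "approved"},
        {"group": "B", "label": "rejected"},
    ]

    # Surface the distributions that would otherwise stay hidden in the black box:
    print(Counter(s["group"] for s in dataset))                # representation per group
    print(Counter((s["group"], s["label"]) for s in dataset))  # label rates per group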


About the exhibition

The exhibition House of Mirrors: Artificial Intelligence as Phantasm addresses not only algorithmic bias/discrimination in AI but also related issues such as hidden human labour, the problem of categorisation and classification, and our imaginings and phantasms about AI. It also asks whether (and how) it is possible to regain agency in this context. More than 20 artworks by 21 artists from ten countries – Australia, China, France, Germany, Israel, Italy, Russia, Switzerland, the UK and the USA – are presented in an exhibition that is subdivided into seven thematic chapters and whose scenography is reminiscent of a house of mirrors.

In connection with AI, the curators speak not only of a hall of mirrors but also of a phantasm, or a whole series of phantasms (“narratives”), associated with AI. These can be optimistic or pessimistic in nature: there is, for example, the desire to be relieved of physical and mental labour (digital assistants, care robots, self-driving cars, etc.). However, these imaginings can also quickly turn into fears: the fear of machines developing a “superintelligence” and seizing power.


A tour of the exhibition

Enter the hall of mirrors, which reflects human reality, sometimes in direct reflections, sometimes in a distorting mirror, sometimes through a glass pane that promises transparency or a semi-transparent mirror that reflects on one side and is translucent on the other. The mirroring takes place as a complex human-machine configuration, as software, as a machine, in fluid transitions between human labour, automation, pattern recognition, statistics or, as many say, Artificial Intelligence. Between all the reflections, one can lose one’s orientation. Suddenly our own fears and taboos confront us disguised as phantasms. Nightmares and desire alternate: AI as an overpowering force or as a redeemer, and in any case as “the other” of the human.


LOBBY

In the lobby we are welcomed by Sebastian Schmieg's Decisive Mirror (2019). Our image is captured by the camera and we are immediately classified: we are "42% still alive", "65% imaginary", or "17% one of them". Does the AI perhaps know us better than we know ourselves? Lauren Huret’s video Ways of non-seeing (artificial intelligence is hard to see) (2016) shows eerie scenes that could be taken from the movie Night at the Museum (2006), except that this is the horror version of the film comedy. Therefore, watch out when visiting the House of Mirrors exhibition and do not lose face!


ROOM 1: A Dreamscape of Full Automation

In 1872, Samuel Butler published Erewhon; Or, Over the Range, a visionary novel that explores the hidden and alarming possibilities of the machine. Influenced by Darwin's theories, Butler wonders what would happen if machines, too, were subject to the laws of evolution. Erewhon (an anagram of “nowhere” – a true utopia) is an unknown land from which machines have been banished. The narrator learns that four hundred years earlier, technological development there had been highly advanced, until an Erewhonian scientist proved that machines were destined to replace humans and that their rate of development was infinitely faster than that of humans.

This theory – that they might be in the process of building ever more autonomous machines to replace them – so frightened the Erewhonians that they destroyed the machines and were wary of inventing and building new ones in the future. Today, 150 years after the novel's publication, this “dream of full automation”, in which everyday life is completely taken over by benevolent machines that free the inhabitants from work as well as from all other worries, is still present.

In the video installation Welcome to Erewhon (2019) by Pierre Cassou-Noguès, Stéphane Degoutin and Gwenola Wagon, it takes the form of a shrill fable, montaged from YouTube videos, that questions the prevailing notion of a society on autopilot and exposes its deep ambivalence.

This engineer's dream is expressed above all in the self-driving car, whose introduction is constantly announced and repeatedly postponed. In his video VO (2020), Nicolas Gourault uses the case of Elaine Herzberg, the first pedestrian to be run over by an “autonomous” Uber car, to show how this dream can turn into a nightmare. The tragic accident also exposed the illusion of the self-driving car by revealing the human labour that its “learning” requires.

AI is neither magical nor immaterial, but is instead based on a global computing infrastructure. Gwenola Wagon and Stéphane Degoutin have photographically documented parts of this global infrastructure in Atlas of the Cloud (2021).

Zheng Mahler’s installation The Master Algorithm (2019) features an AI-driven Chinese news presenter that is active 24/7. Its ghost-like appearance is created by rapidly rotating holographic fans and is reminiscent of the giant urban screens in the dystopian cult film Blade Runner (1982). The convergence of China’s social credit system with the idea of a master algorithm adopted by the Communist Party recalls some of the darkest techno-Orientalist nightmares.


ROOM 2: Ceci n'est pas une pipe[v]

In order to teach machines to see, we need to train them on thousands or even millions of images collected on the Internet. These data sets (training sets), which form the foundations upon which our learning systems are built, can be viewed as contemporary encyclopaedias: both aim to describe everything in the world.

To make sense of the world, we need to name, classify and order. Yet this classification is not easy. Images are charged with multiple, contradictory meanings and are open to interpretation, as Simone C Niquille shows in her video installation Sorting Song (2021).

Anna Ridler's work Laws of Ordered Forms (since 2020) draws attention to the way in which historical taxonomies continue to resonate in modern implementations of machine learning, and to the problems posed by these classification systems, which tend to perpetuate prejudice and reinforce cultural stereotypes and norms. Ridler’s use of encyclopaedias underscores the way bias, values and beliefs become encoded in knowledge production.


ROOM 3: A Curiosum with Delicately Violent Machines

The distinctions and rules inscribed in algorithms and data sets are automatically played back and enforced over and over again. They exert a gentle force when they serve to regulate human life. American students, for example, worked out how to pass an exam by feeding the appropriate keywords to an automated grading system. And an access control system based on facial recognition opens only when the people seeking entry smile. The smile, a gift from one person to another, now appears as an imposition. Sometimes more, sometimes less clearly, a gentle violence emerges here, sprung from the engineering dreams of automation.

As the video Where Is My (Deep) Mind? (2019) by the French artist Julien Prévieux, presented in this section, suggests, it is not so much the machines that are becoming intelligent as we who are becoming machines, formatting and mechanising our behaviour and impoverishing our repertoires of gestures and words.


ROOM 4: The secret chamber of Artificial Artificial Intelligence

“Artificial Artificial Intelligence” may at first seem like a linguistic error, but the term was coined intentionally[vi] when it became apparent that certain promises of Artificial Intelligence could not be kept by machines, but could be fulfilled by the intelligence of cheap human labour. This room is about “fake” Artificial Intelligence.

Many forms of work are shrouded beneath the vaporous term Artificial Intelligence, feeding the illusion of automation. Many tasks we think are performed by computers are actually carried out, in a more or less hidden way, by human beings: “click workers” who train AI systems for pennies.

Transcription, image annotation, moderation, visual or audio recognition: countless activities are delegated to humans in the form of micro-tasks that must be completed in a very short time, for little or no pay. Working as a “data cleaner” for companies specialised in the detection of emotions, the Italian artist Elisa Giardina Papa carried out many such strange tasks herself. She documents her employment as a micro-worker in the three-channel video installation Cleaning Emotional Data (2020).

These microworkers, according to the sociologist Antonio Casilli (2019), are like “millions of little hands that, day by day, operate the puppet of the weak automation”. AI could not function without them. The artist Conrad Weise has built an impressive monument to these “millions of little hands”, which he calls <--human-driven condition (2021). This extremely fragmented labour is organised via software platforms, of which Amazon's Mechanical Turk is probably the best known. The platform borrows its name from a famous chess-playing automaton, the Mechanical Turk, created by Baron von Kempelen in the 18th century, around which RYBN’s installation Human Computers (2016 – ongoing) unfolds, tracing the long history of the automation of work. This automaton, which caused a sensation in its time, was in fact a deception and concealed a human being in its gears.

These links between economy, labour and computation are often hidden behind the smooth and reflective interfaces of AI. Questioning the impact of human labour on the future of automation, the artist Lauren Lee McCarthy assumed the role of Amazon's virtual assistant for a week, remotely controlling the “smart” homes of consenting individuals. This eerie experiment is documented in her installation LAUREN (2017).


ROOM 5: Cabinet of Eerie Laughter

A ghoulish, derisive laugh, as in a cabinet of horrors, echoes through the house of mirrors, much as it echoes through the statistical data sets that discriminate against people in AI. Bias can arise both from biased data sets and from poor decisions in the creation of the information model of an AI application. It is reinforced by automation.

Mushon Zer-Aviv’s interactive installation Normalizi.ng (2020) is an experimental online research project in machine learning that aims to analyse and understand how we decide who looks more “normal”. It is informed by the work of the French forensics pioneer Alphonse Bertillon and refers to his “Portrait Parlé” (the speaking portrait), a system for standardising, indexing and classifying the human face. His statistical system was never meant to criminalise the face, but it was later widely adopted by both the eugenics movement and the Nazis to do exactly that. The online work automates Bertillon’s speaking portraits and visualises how today’s systematic discrimination is aggregated, amplified and conveniently hidden behind the seemingly objective black box of AI.

AI systems operate in the background, in ways that are fundamentally beyond people’s knowledge or control. Those who are classified and assessed by them frequently do not know where, when or how these systems are used. In her video CLASSES (2021), Libby Heaney explores the entanglements between machine learning classification and social class(ification).


ROOM 6: First I scratched the mirror, later I crashed it

Are we helplessly at the mercy of the problematic consequences of automation, categorisation, discrimination, hidden human labour and bias? What possibilities are there for a creative, potentially subversive approach to the problems of AI? Far from offering pragmatic solutions alone, a series of artistic works addresses how human agency can be regained. At first, a few scratches are inflicted on the mirrors of the mirror cabinet of AI – a mixture of inscriptions, signposts and vandalism. Later, machine breakers take up the hammer to smash the mirror, not out of destructiveness but to find out what lies behind the mirrors, just as Alice climbs Through the Looking Glass.

Artists provide critical insights into how AI works by exposing its mechanisms. In the exhibition House of Mirrors, Adam Harvey presents a huge mirror inscribed with the slogan that is also the work’s title: Today’s Selfie Is Tomorrow’s Biometric Profile (2016). Selfies uploaded to social media and other platforms are being used to train AIs, without the consent or even the awareness of their creators. A video explains the UCF Selfie Dataset GAN Anonymization (2020). In addition, access to Harvey’s online project exposing.ai (2018–2020) is provided via a terminal. Finally, the presentation is completed by a video interview with the artist, which Francis Hunger conducted in 2021 in the course of the ongoing research project Training the Archive (since 2020). It should come as no surprise that Harvey’s face cannot be recognised in it.

While face recognition and, based on it, “deepfakes” have been critically addressed by artists and activists for some time, machine listening and the coming panacousticon are still an emerging field of research. In order to function, digital assistants like Alexa must constantly scan the acoustic environment. But how, and on the basis of which data, are they being trained? Do these systems know when we are feeling good or bad? Can they recognise criminal acts on the basis of acoustic data? Who defines what a “criminal sound” is? Revisiting one moment in the history of automated listening – Google’s acquisition of YouTube – Sean Dockray details in his video Learning from YouTube (2018) how machines have become YouTube’s new target audience. Do we, as YouTube users, have rights over the way our uploaded content is used? How can we know what sorts of value systems and politics are being embedded into the neural networks of machine listening – and of pre-emptive policing?

Jake Elwes’ The Zizi Show (2020) is an online, interactive deepfake drag cabaret. As drag in general plays with the rules of normativity, here it is used to queer the norms of AI. The bodies in the show were generated by neural networks trained on a community of drag artists who were filmed at a London cabaret venue that was closed during the COVID-19 pandemic. The Zizi Show constructs and then deconstructs a virtual cabaret that pushes the limits of what can be imagined on a digital stage.

Anna Engelhardt’s historical inquiry Death under Computation (2022) positions Russia’s military use of AI as an outcome of earlier Soviet research in cybernetics and AI. Her work uncovers the historical roots of this AI, often disguised and hidden by the secrecy of the Soviet army during the Cold War, and leads to questions about its use today. Regaining historical insight and producing new knowledge about it, while making this information available beyond the Russian-speaking community, is another strategy for regaining agency over these systems.
 

ROOM 7: Exit Through the Gift Shop

This rather self-ironic space releases visitors from the hall of mirrors of AI phantasms with a series of humorous works. On the one hand, the title alludes to the documentary film of the same name by the graffiti artist Banksy; at the same time, it literally guides visitors through the HMKV bookshop. With lightness and wit, the works again reflect some of the core themes of the exhibition: automation, categorisation, human labour in relation to labour performed by machines, the possibilities of intervening in the human-machine configurations of AI, and future scenarios of animal-machine alliances from which the human species would be excluded.

Stéphane Degoutin and Gwenola Wagon’s video installation Cat Loves Pig, Dog, Horse, Cow, Rat, Bird, Monkey, Gorilla, Rabbit, Duck, Moose, Deer, Fox, Sheep, Lamb, Baby, Roomba, Nao, Aibo (2017) addresses the recent phenomenon of cats stoically sitting on vacuum cleaning robots. The artists collected videos from YouTube showing just that. These are projected into the exhibition space by a micro-projector fixed on a vacuum cleaning robot in action.

In his video How To Give Your Best Self Some Rest (2021), Sebastian Schmieg looks at vacuum cleaning robots, smart locks, delivery robots and digital assistants as “strategic underperformers”. He suggests that we should follow their example and get some rest.

Just before leaving the exhibition, Aram Bartholl presents us with a gift: we can have our portrait taken in a professional photo studio and choose our favourite emoji. The software then turns the image of our face into a mask that is unreadable for face recognition systems, redrawing our facial lines with the chosen emoji. Hypernormalisation (2021) is a generous gesture by the artist and an eerie gift that we can stick on the refrigerator door in our kitchen. It reminds us that AI is all around us, that we should not forget it, that we can question its (political) use, and that we should finally smash the phantasms concerning it.


 
___________________________________________________________________________

[i] Justin McCurry, “South Korean AI chatbot pulled from Facebook after hate speech towards minorities”, The Guardian, 14 January 2021, https://www.theguardian.com/world/2021/jan/14/time-to-properly-socialise-hate-speech-ai-chatbot-pulled-from-facebook.

[ii] “How an Algorithm Blocked Kidney Transplants to Black Patients”, Wired, 26 October 2020, https://www.wired.com/story/how-algorithm-blocked-kidney-transplants-black-patients/.

[iii] See also Dirk Helbing et al., “Triage 4.0: On Death Algorithms and Technological Selection” (preprint), ResearchGate, September 2021, https://www.researchgate.net/publication/354293560_Triage_40_On_Death_Algorithms_and_Technological_Selection_Is_Today%27s_Data-Driven_Medical_System_Still_Compatible_with_the_Constitution.

[iv] Ian Sample, “Computer says no: Why making AIs fair, accountable and transparent is crucial”, The Guardian, 5 November 2017, https://www.theguardian.com/science/2017/nov/05/computer-says-no-why-making-ais-fair-accountable-and-transparent-is-crucial.

[v] This title refers to the painting La Trahison des images (The Treason of Images, 1928–29) by the surrealist painter René Magritte. It shows a pipe with the caption “This is not a pipe”.

[vi] Amazon uses the term “artificial artificial intelligence” for its Amazon Mechanical Turk service, patented in 2001. It refers to processes in computer programs that are outsourced to humans because humans can execute them faster than machines. See “Amazon Mechanical Turk”, Wikipedia, https://en.wikipedia.org/wiki/Amazon_Mechanical_Turk, as well as “Artificial artificial intelligence”, The Economist, 10 June 2006, https://www.economist.com/technology-quarterly/2006/06/10/artificial-artificial-intelligence?story_id=7001738.

 

+++++++++++++++++++
 

PROGRAMME OF EVENTS

House of Mirrors:
Artificial Intelligence as Phantasm

09 April – 31 July 2022

HMKV Hartware MedienKunstVerein
at the Dortmunder U, level 3
Dortmund, Germany
www.hmkv.de


Friday, 08 April 2022, 17:00 – 22:00
HMKV at the Dortmunder U | Level 3 & online
Soft Opening of the exhibition House of Mirrors: Artificial Intelligence as Phantasm.
At 20:00, you have the opportunity to take part in a short tour of the exhibition via our livestream here: https://hmkv.de/events/events-details/opening-house-of-mirrors.html  

Saturday, 09 April 2022, 15:00 – 16:30
HMKV at the Dortmunder U | Level 3
Guided tour with the curators Inke Arns, Francis Hunger and Marie Lechner through the exhibition House of Mirrors

Tuesday, 26 April 2022, 16:00 – 17:30
Zoom | Online
Workshop: “Really unfair!? This is how discriminatory AI can be” with Susanne Rentsch*

Tuesday, 17 May 2022, 15:00 – 18:00
Zoom | Online
Workshop: “Discussing with machines? Testing and understanding the basics of language-based AI” with Benjamin Eugster, Hannah Schwaß & Anna-Lena Barner*

Thursday, 02 June 2022, 19:00 – 21:30
HMKV at the Dortmunder U | Cinema, ground floor
Short film screening: Frame of Reference I (2020) by Su Yu Hsin, A set of non-computable things (2017) by Charlotte Eifler and How Does Thinking Look Like (2021) by Philipp Schmitt, followed by a discussion with Charlotte Eifler and Su Yu Hsin

Friday, 10 June 2022, 19:00 – 22:00
HMKV at the Dortmunder U | Cinema, ground floor
Concert with Dagobert and Kay Shanghai

Wednesday, 15 June 2022, 19:00 – 21:00
HMKV at the Dortmunder U | Cinema, ground floor & online
Talk: “What the Valley calls thinking” with Adrian Daub, Jonas Lüscher and further guests

Friday, 24 June 2022, 09:00 – 12:00, 13:00 – 15:00
Zoom | Online
Symposium: “AI Infrastructures for Civil Society and the Arts”*

Thursday, 30 June 2022, 19:00 – 20:30
HMKV at the Dortmunder U | Cinema, ground floor
Film screening: Coded Bias (2020) by Shalini Kantayya

Saturday, 02 July 2022, 13:00 – 16:30
HMKV at the Dortmunder U | Level 3, workshop room
Workshop: “Human Perceptron“ with RYBN*

Saturday, 02 July 2022, 19:00 – 21:00
HMKV at the Dortmunder U | Cinema, ground floor & online
Lecture: "Artificial Intelligence - Promise of Salvation or Capitalist Machine?"
with Timo Daum

Thursday, 28 July 2022, 19:00 – 21:30
HMKV at the Dortmunder U | Cinema, ground floor
Film screening: All That is Solid Melts Into Data (2015) by Boaz Levin & Ryan S. Jeffery followed by a discussion with Boaz Levin

Every 1st, 3rd + 5th Sunday of the month and on holidays, 16:00 – 16:45
In detail: 15/17/18 April, 01/15/26/29 May, 05/06/16/19 June, 03/17/31 July
HMKV at the Dortmunder U | Level 3
Public guided tour through House of Mirrors

Every 2nd + 4th Sunday of the month, 16:00 – 16:30
In detail: 10/24 April, 08/22 May, 12/26 June, 10/24 July
Instagram | Online
Live online tour through the exhibition via Instagram live story. Simply participate via the HMKV profile @hmkv_de

* Registration via event at hmkv.de is required

See https://hmkv.de/events.html for detailed and updated information on the events.


 

