A position paper for the AI Culture lab

Over the past two years, the phenomenon of Artificial Intelligence has claimed pole position in the rush to be the next technological disruption shaping our near-future. With advances in hardware and data gathering fuelling the ascent of machine learning and associated techniques, the promise of Artificial Intelligence has filled our collective consciousness with a weird mix of hope and dread. It calls to mind dire scenarios in which humanity is annihilated by a robot apocalypse, or simply rendered redundant by the automation of everyday life. Equally, AI promises a frictionless society of leisure and automated labour, with, at its apex, our assimilation into a ‘singularity’ of digitised, transhuman consciousness.

Beyond these spectacular scenarios, technologies associated with artificial intelligence are rapidly being integrated into diverse realms of human activity. The roll-out of ubiquitous computing creates a universal pathway for AI into our lives, ranging from autonomous vehicles to predictive policing and from micro-targeting feeds to urban surveillance networks, raising complex questions of ethics and control.

“Although science fiction may depict AI robots as the bad guys, some tech giants now employ them for security. Companies like Microsoft and Uber use Knightscope K5 robots to patrol parking lots and large outdoor areas to predict and prevent crime. The robots can read license plates, report suspicious activity and collect data to report to their owners.” — Gartner Top 10 strategic technology trends for 2019

How to make sense of the apparent paradox that AI is predicted to be both our saviour and our undoing? For a start, let’s take a closer look at the discourse on Artificial Intelligence, focussing on a couple of tropes (both visual and textual) that express our attitudes – and indeed biases – towards the phenomenon. These tropes, oft-repeated ‘facts’ and predictions about AI that appeal to our common sense, seem at times wildly hyperbolic. Yet, by their very superlative character, these hyperboles might help make sense of collective interests in, and expectations of, AI technologies as they circulate today.

Right off the bat, the term ‘Artificial Intelligence’ itself gives rise to hyperbole, as this word-pair conflates the intelligence of the human mind and the ‘intelligence’ expressed through algorithms. To put it in more material terms, the human brain and the neural net are conflated. From a material, i.e. physical, perspective it is obvious that our brain (sitting right here in our heads) and its complex cultural manifestations is really, really different from a coded neural net spinning on a server farm somewhere (operated by someone), communicating in binary with another server farm (or your phone!) somewhere else. Yet, in our imaginary, an abstract impression of sameness persists, irresistibly portraying these two things in overlay.

Images such as this seduce us into seeing the digitally rendered pattern of the neural net as equivalent to its biological, physical source. We see neural nets and smooth brains, floating convivially in a cybernetic and self-referential cloud of data. These images, a staple in reporting on AI, evoke a sense of meta-ness and control: abstracted from their physical bodies and material connections, they are designed to suggest phenomena we can interface with directly and modulate according to our will.

Read the full paper here.

Also, watch the lecture by Chris Julien about his paper, given at the Zentrum für Kunst und Medien (ZKM) in Karlsruhe.