The Finite-State Fantasia

I’ve been chipping away at a commission for the last few months which isn’t yet announced but which I’d like to write about nonetheless. (You may have seen in my last missive that I promised my PhD supervisor I’d write about what I was doing and why. Hereby I announce a return to regular blogging, although I only get time to sit in front of a computer while travelling, so I join you from the Eurostar.) Part of this process is to bring together all the disparate aspects of what I’m doing into something I can practise, work through and someday call a PhD. Consequently, this post serves as something of a rationalisation of how it ties into a lot of the things I’ve been doing over the last twelve months or so, especially as I try to figure out what my practice is for the next five years. I’m going to introduce and try to frame some of the areas of my research interests and what I’m doing about them, before introducing the plan for the project in the next post and hoping, student-like, to document the process of its growth over the next few months here, before finally reflecting on what happened and how it worked. Or not.

Magic and Technology 

This new project initially started as a response to some of the thinking about technology and the occult that I’ve been doing, mostly under the guise of Haunted Machines. This ongoing project with Natalie Kane is currently in its largest and most ambitious manifestation: a year-long programme of events throughout 2017 centred on the Impakt festival in Utrecht, Netherlands. Of course, I have to work out what exactly it is about this line of thinking that appeals to me beyond its obvious aesthetic appeal.



At a panel discussion last night, Nicolas Nova prodded me on the importance of this type of ‘hauntological’ thinking and why it works. From where it stands now, I can see three reasons for the appeal. Firstly, it’s using the tools of the master against the master. It’s an out-and-out seizing of the narratives of ‘magic’ and ‘enchantment’ that occupy popular and corporate stories about what technology is and what it’s for. When the world that you’re trying to critique turns to those metaphors, it feels only apt to do the same, as if to stretch the analogy even further.

Secondly, magic and the occult have deep, powerful and well-proven ties to politics and power structures. Skipping over, for a moment, all the complex anthropology about what spiritualism and the supernatural actually are, there’s no doubt that they have been used as levers of power in different ways. I’m currently engaged in reading about the classification and examination of Japanese ‘wonder’ in the nineteenth century, and finding the narrative, dominated by the politics of a gradually opening Japan, a neat parallel to many political shifts today.

Thirdly, it’s a method of examining ‘cultural assimilation.’ I can’t remember where I originally heard the phrase I now use almost daily (I suspect, like most other things, Gell), but using the occult as a metaphor is a powerful way of breaking cognitive barriers to have livelier and more critical discourses. Whatever your cultural background, we all have a conception of ‘magic’ where we might not have one of ‘packet switching,’ or even of something like ‘ownership.’ A recent paper I read pointed out the ludicrousness of abstract concepts like ‘the market,’ ‘the cloud’ and ‘publics’ being acceptable bodies around which to orient decision-making, while magic, the spiritual and the supernatural, which have equally real and present artefacts and effects, are excluded.

But at the root of Haunted Machines is a critique of dominant narratives of technology: seizing a narrative to build powerful analogies that challenge the loudest voices of progress, innovation, disruption etc. etc. etc. So I don’t suppose Haunted Machines is about the occult any more than Animal Farm is about agricultural technologies. Haunted Machines is about power, design, art, technology and the outsider, as well as how, in my conception of it, machines are becoming totems of power that dazzle us with trickery.

Rendering 

I’ve been talking about rendering and CGI for a little while. I’m by no means an expert, and my skill level doesn’t compare to that of other practitioners, but I’ve noticed that I’ve been using it more and more in my practice and have, through doing so, been thinking about why. The obvious appeal is its ease of use. With Blender as a free platform, even with a steep learning curve, one can produce good-quality images and animation quite quickly and cheaply. This in itself is a major point of interest. I presented a paper at TU Delft last week, which I’ll publish on here in the next week or so, in which I discussed the potential of such an amazing tool outside of its current applications.

Briefly: rendering software has largely been the preserve of cinema, advertisers and architects. Consequently, these fields, which generally lack much in the way of quality wider social critique (sorry architects, it’s true, you know it, I know it), have a monopoly on visualising and imagining the future. Either they need to get better at pluralising or we need more renders. There are loads of artists and other practitioners starting to do this, and I’ve been keeping my eye out for more. The other interesting point about renders is the actual technical methodology. The computing power required to create believable realities is shrinking rapidly: I could create an image that would fool you into thinking it was real in minutes. This has drastic implications for fields critiquing technology and culture; I point you to the return of Moriarty in Star Trek: TNG. Not only could this present an enormous lever for deceit and trickery, but it leads to a genuine question I’ve posed before elsewhere.



Additionally, the process is technically very interesting. Rendering only works up close, at high resolution: I could create a 10m x 10m tile of water pretty realistically, but it doesn’t scale. The trick only works close up. The haptics and materiality of it are really dynamic, interesting and counterintuitive. Light doesn’t really work the way you think it does; air is really important; things aren’t made of the surfaces you thought they were. Wesley Goatley explored this a little in Watching Mephitic Air, but I keep coming back to the very real material and technical qualities of rendering software as sites of technological critique. There’s also a dialogue between how machines see and then create that works through rendering (lit. ‘to redeem/return’), where how a machine ‘builds’ a model of the world is an aspect of its perception of it.

What’s it doing? 

This is the final and biggest part of this specific project, and I think it ties particularly into Haunted Machines. The brief of this project is about trying to demystify the magic, fear and horror around seemingly intelligent, autonomous objects by getting inside the machine. What is the world when all you have is an infra-red sensor with a range of 1.5m that flashes 67 times a second, returning two discrete data points? A totally non-integrated sensorium. Our work in Haunted Machines has shown over and over again that we create narratives that animate devices, systems and services in order to make sense of the cognitive phenomena of machines we are totally and utterly incapable of comprehending. The perfect focal point for this idea is the question ‘What’s it doing?’ A question you probably ask quite regularly, sometimes even gendering it with ‘What’s he/she doing?’ At this exact point we allot agency, will and, I suspect, at the back of our minds, consciousness to the machine.
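To make that impoverished sensorium concrete, here’s a minimal sketch in Python, assuming a hypothetical IR distance sensor with the specs above (1.5m range, 67 flashes a second, two data points per reading); the function name and the noise model are my own illustrative inventions, not any real device’s API:

```python
import random

SENSOR_RANGE_M = 1.5   # maximum range of the hypothetical IR sensor
POLL_RATE_HZ = 67      # flashes per second

def read_ir_sensor(true_distance_m):
    """Return (distance, reflectance), or (None, None) if out of range."""
    if true_distance_m > SENSOR_RANGE_M:
        return (None, None)                       # the world simply ends at 1.5m
    noise = random.uniform(-0.01, 0.01)           # a little measurement jitter
    distance = max(0.0, true_distance_m + noise)
    reflectance = max(0.0, 1.0 - distance / SENSOR_RANGE_M)
    return (distance, reflectance)

# One second of the machine's entire sensorium: 67 pairs of numbers,
# with no colour, no form, no integration into objects.
world = [read_ir_sensor(0.8) for _ in range(POLL_RATE_HZ)]
print(len(world))
```

A second of this machine’s ‘experience’ is 134 numbers, and anything past 1.5m simply doesn’t exist; that flatness is the gap the ‘What’s it doing?’ narratives rush in to fill.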

For now, I’m buying into the theory proposed by Hermann von Helmholtz that the brain is a ‘prediction machine’: the model of the world we consciously hold is a ‘best guess’ based on available information. Getting chewy about these words is important (and Goatley specifically chews me out when I get them wrong). Information is the reduction of uncertainty; senses are input devices; perception is where these senses are integrated into a model. In fact, several theories attempting to solve the ‘hard problem’ of consciousness propose that consciousness exists simply as integrated perception in animals. We don’t see colour and form as separate: we perceive an object as integrated across all our senses, where a machine senses them as separate points.
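One common way of formalising this ‘best guess’ picture is Bayesian updating, where information really is the reduction of uncertainty, measured in bits of entropy. This is a toy sketch of that reading of Helmholtz, not his own formulation; the hypotheses and numbers are invented for illustration:

```python
from math import log2

def normalise(dist):
    """Scale a dict of weights so they sum to 1."""
    total = sum(dist.values())
    return {h: p / total for h, p in dist.items()}

def entropy(dist):
    """Shannon entropy in bits: how uncertain the current 'best guess' is."""
    return -sum(p * log2(p) for p in dist.values() if p > 0)

def update(prior, likelihoods):
    """Bayes' rule: posterior is proportional to prior times likelihood."""
    return normalise({h: prior[h] * likelihoods[h] for h in prior})

# Two hypotheses about what's in front of us, and one ambiguous sense
# datum ('something warm') that is more likely if the object is a cat.
prior = {"cat": 0.5, "cushion": 0.5}
posterior = update(prior, {"cat": 0.9, "cushion": 0.3})

# Information received = uncertainty removed by the observation, in bits.
print(entropy(prior) - entropy(posterior))
```

The posterior shifts to 0.75 for ‘cat’ and the entropy drops by roughly 0.19 bits: a single noisy sense datum nudges the model, and perception is just this nudging run continuously across integrated channels.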

But the words don’t fit; the stories aren’t right. To be honest, a fair bit of me is getting annoyed with ‘machine vision’ projects that show nothing more than the debugging screen. What is the ontological experience of being a machine? Even the words ‘experience’ and ‘ontological’ are problematic here. Can I affect a first-person sympathetic perspective of the machine? Is it worth it?

The point is to tackle a major thrust of my PhD work: that treating ‘smart’ and ‘networked’ objects as if they are mere gadgets and office furniture isn’t good enough. It isn’t good enough for things that act on your world and have their own ‘worldview’ (no they don’t, I know) to be glossed over ‘as magical’ while being so many different things to so many different agents. It isn’t good enough for things that you don’t ‘wholly’ own, in the Japanese sense, where you can’t comprehend the entire chain of cause and effect that might dictate their actions. Machines exist as ‘not-nothings’: things that aren’t conscious and aren’t human, but that aren’t simply tools or other objects either. What I’m going to try to do is reconcile their ‘not-nothingness’ with how we build our perceptions of them into a lived experience that, more often than not, delivers them as ‘non-integrated perceptive cognates.’