The Potential for Radical Politics in Rendering at From Paper to Pixels.

This is the full paper I gave at From Paper to Pixels: Transmedial Traffic in Architecture Drawing at the Jaap Bakema Study Centre, TU Delft, in association with The New Institute, Rotterdam. It's useful in laying out some of my thoughts about rendering as a potential practice for radical imagination. This is the full 3,700 words including the abstract; I've included some of the slides but not all of them. My apologies.

Abstract

Architectural renders form the cornerstone of public communication in contemporary architecture. New developments are reported and promoted to the public using highly stylised, glamorous and romantic renders that conform to the dominant societal values and politics: aspiration, wealth, executive and family lifestyles, urbanism. This consequently results in a homogeneous aesthetic. Rendering software and its application work as a self-reciprocating aesthetic, a style of tropes that reinforces these dominant politics and values. Images of the future reinforce the way that future is developed [Bassett et al. 2013] and so, in creating these renders, developers and architects build a narrow and linear vision of the future in line with the values they espouse. In turn, the software is further developed to reinforce these tropes [Plummer-Fernandez 2014].

Because of their dominance, projects of resistance cite these renders as targets. Campaigns attempting to halt redevelopment, gentrification or expensive public-private projects hold up these renders as leitmotif and effigy, symbols of the top-down power the renders represent.

However, the growing accessibility and usability of rendering software presents an alternative space for political action. Rather than simply becoming sites of resistance, rendering software has the potential to be a site of radical, critical imagination. As Nick Srnicek and Alex Williams point out in their book ‘Inventing the Future’, we ‘can’t resist new worlds into being’ [Srnicek & Williams 2015], but we can begin to render new worlds as alternative visions that wildly challenge the dominant hegemony of rendered futures.

Activists and artists have begun to use rendering software as a tool for creating new imaginaries and fuelling critical debate about the shaping of the future. Their projects challenge histories and the laws of physics, and create sprawling meta-fictions that exceed the abilities of other media. At the same time these projects make tongue-in-cheek attacks on the narratives and aesthetics of architectural rendering and dominant power structures.

This paper lays out a brief analysis of the relationship between future imaginaries and the dominant aesthetics of contemporary rendering software and suggests that this software presents a space for developing political imaginaries that can lead change and critical discussion. It uses examples and case studies from theorists and artists working in the space of 3D rendering to lay out a framework for where these kinds of practices might start to exist and act and suggests tactics and techniques for a new radical culture of rendering.

Introduction

Speaking at London’s Architectural Association in 1971, Adolfo Natalini, founder of the radical architecture firm Superstudio, presented an argument for Superstudio’s withdrawal from the work of architecture - producing buildings.


The self-appointed role of Superstudio as an architecture firm remains contentious since not a single one of their instantly recognisable designs was ever built. There are compelling arguments [Elfline, 2016] that the work that Superstudio pursued sat within the context of the leftist politics of the time and that they, like other critical practitioners both contemporary and historical, created obstructions to encourage disruption and engagement rather than buildings that continued the neoliberal hegemony they despised. Whether they created or obstructed, Superstudio recognised that the work of architecture is no more in construction than music is in getting to the end of a song. The work of architecture, design, art and all those fields with the privilege of creativity and critique is in building future imaginaries - renders of the way the world might be, orientating visions of future material products that ‘enchant’ our future.


In this way, architecture uses drawings, plans and, increasingly, renders to materialise the future and present it to its audience. People relate to the built world through its rendering before its construction. In 2015, when controversy surrounded the funding and construction of Thomas Heatherwick’s Garden Bridge in London, the bridge had yet to exist. The future imaginary in circulation consisted of two or three renders of it. These became the object around which public debate formed [Di Salvo, 2009] despite being, culturally at least, entirely imaginary.


Human material culture is riddled with future imaginaries. Cinema perhaps leads this field and several studies have pointed out how the penetration and resolution of these imaginaries feeds back into reality [Bassett et al. 2013]. Steven Spielberg’s 2002 film Minority Report is a particularly pertinent example. The fictional and completely rendered gestural interface used by Tom Cruise has held sway over 15 years of interface development and technology headlines. As rendering becomes a faster and more affordable alternative to expensive sets and protracted film shoots, television and advertising are following suit. Almost all car adverts in print and moving image are rendered and, in 2014, 75% of IKEA catalogue images were 3D renders [Parkin, 2014]. The benefits of this approach are obvious: rendering is cheap, quick to iterate and allows for circumventing external factors - for example, the troublesome laws of physics. In 3D rendering, dramatic sunrises can be made to last forever over perfect and empty mountain roads.


The danger is that we are limiting our imagination by creating a homogenous rendered visual culture. Buildings on development hoardings begin to look identical, cars and sofas have the same haptics. The developing visual language of 3D rendering is becoming self-reinforcing in its aspirations. We run the risk of a visual Shazam effect for rendered images [Thompson, 2014]. The Shazam effect, named for the music recognition software, is a phenomenon whereby record companies produce music based on the download habits of users, creating a reinforcing cycle of all music sounding the same. Rendering software developers increasingly build the tools and processes of their software around the demands of some of their biggest clients - architects and advertisers - creating a self-reinforcing aesthetic loop. The 3D digital artist Matthew Plummer-Fernandez once claimed in an interview to be able to recognise the software used to render buildings based on the ‘off-the-shelf’ algorithms and plugins [Plummer-Fernandez, 2014].


A very real example of the Shazam effect in play is the work of Crystal CG, a Beijing-based rendering company who offer ‘fast turnaround, high quality and inspiring presentations, including 3D renderings, animations, multimedia and virtual reality.’ Their website is packed with identikit steel-and-glass structures brooding under dramatic sunsets, with the requisite amount of shadow-people render ghosts and street greenery to make an appealing frontispiece to any development hoarding. Their partners and clients include almost every major developer, several Olympic development projects and a range of ultra-rich micro-nations. The chances are you’ve seen some of their work; if not one of Crystal CG’s renderings, then one of their competitors’.

The visual language epitomised by Crystal CG is thoroughly embedded in visual culture, at least in richer parts of the world. These developments could be anywhere; they are, after all, produced by artists who have never visited the sites in question and probably know very little of the local culture. They simply materialise a set of plans into something visually appealing and striking. However, the sheer onslaught of this aesthetic is becoming the steering narrative for what the future should be. On a recent trip to Colombo, Sri Lanka, I had a conversation with a representative of a redevelopment group building a massive office and shopping complex in the heart of the city. When I pointed out that the building looked like it could be from anywhere and asked who had created the render, he said that he didn’t know… ‘some Chinese company.’


Looking at the rendered city sets of sci-fi films like Elysium and Star Wars reveals an aesthetic continuum between the hyper-real development renderings that pervade the city and how we imagine the future city to be. The rise of the white and gleaming tower is inevitable, as is the kitsch homage to the 70s utopian visions of space life. As if to concretise this relationship between urbanists, rendering artists and games developers, a 2014 project between city developers and game developers created rendered imaginaries of how Britain might look in the 22nd century, showing cities like Manchester as hyper-real assemblages of placeless architecture and fantasy technology [Clarkson, 2014].

The future is imagined through these visual artefacts, and as they become more and more identical we run the significant risk of reducing our range of imagination. We might end up building an aesthetic cage in which we find it impossible to imagine a future beyond the gleaming towers and rendered greenery, in which the 45-degree skyline view becomes all encompassing and we dream of endless sunsets over our glass balconies.


In the late 1960s, Superstudio cannily seized on the cultural penetration that visual culture allows to seed alternative future imaginaries against what they saw as a visual hegemony. Their projects were visually rich and dizzying, playing into the language of cinema, collage and colour photography to inject a counter-narrative into popular culture. They realised, through exhibition and publication, the reach of their work and how to play with these tools to introduce new ideas and critique. With the prevalence of rendering tools in contemporary culture, we can start to draw parallels with other contemporary practices and see how artists are using the tools of luxury flat renderings to build alternatives or critique the aesthetic hegemony we have.

The English word ‘render’ finds its etymology in the Latin ‘reddere’ - to ‘give back’ - from which we also get the English word ‘redeem.’ So, in the spirit of redeeming myself, I want to suggest an exceptionally loose categorisation of radical approaches to rendering, largely from outside the architecture world: Un-Rendering, Low-Rendering and Hyper-Rendering. These categories are structured by the tactics these practices use to achieve the objective of engaging audiences in critical debate. In Un-Rendering we find practices trying to undo the render, to uncover the underlying technical reality of the rendering produced. Low-Rendering practices use intentionally low-resolution, simplified, distorted and ambiguous imagery to encourage audiences to critically imagine the wider context of the work or the specifics of its functioning. And in Hyper-Rendering, practitioners push the technical boundaries of rendering software and materials to create radical aesthetic imaginaries that critique the homogeneity of the rendered landscape. These practices create Overton Window effects, introducing extreme and radical ideas in order to try and stretch the standard deviation of styles. I’ve referred almost exclusively to practices outside what could be considered architecture. These practices are artists seizing the tools of architects to do other things, to think and act in non-architectural ways.

Un-Rendering 

Underlying any rendering we find a host of systemic conditions and requirements: the intermeshed complexities of planning, logistics, legal restrictions, engineering and infrastructure. The rendered image normally used for public consumption is the sharp end of this long and expensive sword. Practices of un-rendering seek to investigate and reveal what is behind the rendering, to trace the thread of reality that the fantasy render is tied to.


Crystal Bennes’ archiving project #developmentaesthetics aims to ‘chronicle the rise and rise of the inane language and visuals used to market new buildings and developments in London (and increasingly across the UK).’ [Bennes, 2013] By presenting photographed images on a blog with almost no commentary, she flattens them and removes them from the frame of glamour and aspiration in which they are supposed to be read. By presenting them together she encourages us to draw comparisons between the aesthetics and language of contemporary architecture and development and to critique the hegemony of the renders and the hyperbole that surrounds them. Dan Hill’s similarly archival ongoing project Noticing Planning Notices serves to draw attention to the role of planning notices in civic engagement with development. He makes the point that ‘…the primary interface between the UK’s planning system and the people and places it serves is a piece of A4 paper tied to a lamppost in the rain.’ Rather than renderings serving as the main means of public engagement with the future of their cities, he advocates a better understanding of planning processes as the ‘dark matter’ of future city development.


More radically, the artist Hito Steyerl, in her 2013 video work How Not to be Seen: A Fucking Didactic Educational .MOV File, highlights some of the technical limitations of rendering software. She tells us that ‘to become invisible, one has to become smaller or equal to one pixel.’ By playing to the aesthetic of pixel orientation markers, green-screening, rendering and satellite photography, she suggests ways that we might use chromakeying and other technical processes to subvert the rendered and computer-visible world and challenge renders on their own terms - unrendering ourselves. She also offers her own critique of rendered future imaginaries, citing the gated and exclusive communities of developers and describing how they enable invisibility and un-rendering for the ultra-rich at the other end of the spectrum.

Low-Rendering

Low-rendering intentionally challenges the spectacle of the hyper-real. Rather than presenting complete worlds of smiling residents, playing and working in a perfectly lit and maintained space, low-renders show only a framework or an idea and invite the audience to sketch out the details, often drawing their own conclusions as to the purpose and function of the plan.


In Dunne and Raby’s 2013 project, United MicroKingdoms, the UK has been split into four allegorical political and economic systems: Digitarians, a society governed by algorithms with a mobile-phone-tariff-style economy; Communo-Nuclearists, a zero-sum communitarian economy based on the near-limitless energy supplied by nuclear power; Bio-Liberals, a techno-utopian social democracy based on synthetic biological technology; and Anarcho-Evolutionists, an anarchist society based on self-augmentation and experimentation. Rather than represent these speculative societies through complex RAND-style diagrams, masterplans and papers, the designers created vehicles used by the citizens: driverless robocars for the Digitarians, a nuclear-powered train disguised as landscape for the Communo-Nuclearists, slow biological machine cars for the Bio-Liberals and a communal bicycle for the Anarcho-Evolutionists.

The aesthetics of the vehicles presented are themselves low-resolution, with simplistic designs and pastel colours. They’re not meant to reflect the technical demands or limitations of the technologies but the wider contextual system of politics in which they’re embedded. Similar in detail and sheen to how the backgrounds of architectural renderings often appear (nondescript grey or white blocks fading into the software’s clipping distance), the models and images of United MicroKingdoms serve as tools for thought and boundary marking rather than outcomes in their own right.

In Superflux’s 2015 film Uninvited Guests, a similar tactic is used. Ostensibly a design fiction about a ‘smart’ future, the film’s technological devices - a walking stick, a fork and a bed, all variously connected to the Internet and sharing data - are intentionally shown in low resolution as generic fluorescent yellow objects, with no detail to put them out of the ordinary as objects in themselves.


Matthew Plummer-Fernandez’s 2015 Peak Simulator builds a physical mountain range from the same algorithm used in early computer-generated landscapes. By physicalising this landscape and placing it in a real one, he highlights the dissonance between the computer-generated world at low resolution and the real. The leap in complexity between what could be said to be the ‘infinite resolution’ of the ‘real world’ and how the landscape can be made to appear on screen dismantles the spectacle of the rendering and shows it for the fabrication it really is, as well as demonstrating how this type of generic landscape is technically generated.
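
For the technically curious, below is a minimal sketch of one-dimensional midpoint displacement, the family of fractal techniques behind early computer-generated landscapes. The function and parameters are my own illustration, not a claim about the specific code behind Peak Simulator.

```python
# A minimal sketch of midpoint-displacement terrain generation.
# Illustrative only - not the algorithm used by any particular artwork.
import random

def midpoint_displacement(levels=8, roughness=0.5, seed=None):
    """Return a 1D ridge line: a list of heights of length 2**levels + 1."""
    rng = random.Random(seed)
    heights = [0.0, 0.0]          # start with a flat line between two endpoints
    displacement = 1.0
    for _ in range(levels):
        new_heights = []
        for left, right in zip(heights, heights[1:]):
            mid = (left + right) / 2 + rng.uniform(-displacement, displacement)
            new_heights += [left, mid]
        new_heights.append(heights[-1])
        heights = new_heights
        displacement *= roughness  # smaller bumps at each finer scale
    return heights

if __name__ == "__main__":
    ridge = midpoint_displacement(levels=4, seed=42)
    # Crude ASCII plot of the generated skyline
    for h in ridge:
        print("#" * max(1, int((h + 1) * 20)))
```

Each pass halves the spacing between points and adds a smaller random bump, which is why the result looks plausibly mountainous up close while remaining an entirely generic, placeless landscape.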

Low-rendering practices use intentionally ‘under-developed’ material and visual qualities to draw the audience away from the specifics of the materials, objects and technologies they are talking about and to think critically about the processes and contexts that have created and supported them. In the case of Superflux and Dunne and Raby this is to draw the viewer away from the specifics of the technology and into wider contextual discussions. In the case of Plummer-Fernandez, it is to draw the audience away from the spectacle of procedurally generated landscapes and into the reality of their technical construction and their un-real nature.

Hyper-Rendering

Hyper-rendering builds on and exploits the techniques and technologies of renderings to push aesthetic boundaries towards new future imaginaries and to critique power. These practices intentionally subvert the real world and push the hyper-real nature of commercial rendering to new extremes to draw out its absurdity or to develop new ways of critiquing.


Lawrence Lek’s 2015 Unreal Estate (The Royal Academy is Yours) uses the Unity game engine to imagine a future Royal Academy in London that has been bought by a Chinese oligarch. The hyper-real fantasy of Lek’s Royal Academy defies all aesthetic sensibilities: great works of modern art are gathered and flung haphazardly around the garishly painted space, the space has been militarised and a private helicopter sits on the roof. Lek suggests that this is the way the ultra-rich see the world, as a decontextualised game space to be reconfigured at will. Sascha Pohflepp and Chris Woebken’s 2016 The House in The Sky uses similar rendering techniques to critique society’s upper echelons. Based on photographs, they recreated the mid-20th-century home of top RAND strategists and re-staged the discussion rumoured to have happened there. In playing with ideas of modelling and representation through rendering, they critique the top-down ideology of RAND strategists during the founding period of neoliberal strategising.

Both of these practices use rendering as an embedded part of their critique, using the hyper-real properties of rendering software to interpret the unimaginable and impenetrable worlds of the elite and represent them in a performative way that somehow embodies their approaches - from the abstracted world-renderings of RAND strategists to the game-like fantasies of the ultra-rich.

Playing more with the simulative potential of rendering software, Berlin studio Zeitguised produce stunning hyper-real animations that defy physical laws and exist in another realm, previously unimagined. Zeitguised position themselves somewhere between art and fashion, though their work exists entirely in digital form. Most of their clients are advertisers who look to them for a rendered aesthetic that’s a little more edgy than their competitors’, but in their work is a glimpse of the possibility of rendering software - to completely refigure physical laws and create total fantasies that exceed the bounds of a perfected future-present. Another project of Pohflepp and Woebken’s, Island Physics, is a curated digital installation of artists working with the open-source rendering software Blender. The artists play with the potential of augmented reality and Blender’s physics simulations to create impossible visual spectacles that fill the exhibition space. As much as it is a technical demonstration of the ability of open-source software, it’s also a precursor to its potential to produce radically different forms of rendering to those seen on development hoardings around the world.


Another pertinent yet unrealised potential for radical rendering is in virtual reality. Here the opportunity for immersive counter-future-imaginaries is huge but largely undeveloped. A notable exception is A Short History of the Gaze by Molleindustria, a virtual reality game where users occupy the gaze in notable cultural moments, from the panopticon to a French boulevard. The project is a clever double-header, playing with the idea of looking, seeing and being seen through the new reality-blindness of virtual reality.

Hyper-Rendering practices use the rendering software itself as the basis for critique and imagination. Rather than a tool to be circumvented, as in un-rendering, or used for critical discussion, as in low-rendering, the software is itself vital to the projects and practices of hyper-rendering. As rendering software and its attendant technologies like virtual reality become cheaper, and as accessibility and literacy grow, we might expect to see a growth in practices using hyper-real rendering methods to create new imaginaries and develop political critiques. It is in hyper-rendering that we might find a contemporary equivalent to the approach of Superstudio, seizing the tools normally used to reinforce the aesthetic hegemony over future imaginaries and turning them to new purposes.

Conclusion

Rendering software is at the root of much of the visual culture that surrounds us every day and its penetration is increasing, from cinema and architecture to advertising and sales imagery. As a tool it is largely used to prop up an existing aesthetic hegemony that makes imagining alternative futures hard. We run the risk of a Shazam effect, where the aesthetic of rendered futures self-replicates and we are unable to imagine alternatives.

What these practices, and the hundreds like them, present is another way that the tools of rendering might be used to create alternative future imaginaries or to challenge existing ones, to broaden our aesthetic range and thus our range of future imaginaries.

Bassett, C., Steinmueller, E., Voss, G., (2013) Better Made Up: The Mutual Influence of Science Fiction and Innovation, Nesta working paper
Bennes, C., (2013) #DevelopmentAesthetics, Tumblr, http://developmentaesthetics.tumblr.com
Clarkson, N., (2014) How Might Space Travel Change Our Cities, Virgin, https://www.virgin.com/disruptors/how-might-space-travel-change-our-cities
Di Salvo, C., (2009) Design and the Construction of Publics, Design Issues: Volume 25, Number 1, Winter 2009, MIT Press
Elfline, R. K., (2016) Superstudio and the “Refusal to Work”, Design and Culture, 8:1, 55-77
Hill, D., (2015) Noticing Planning Notices, author’s blog, http://www.cityofsound.com/blog/2015/04/planning-notices.html
Parkin, K., (2014) Building 3D with Ikea, CGSociety, http://www.cgsociety.org/index.php/CGSFeatures/CGSFeatureSpecial/building_3d_with_ikea
Plummer-Fernandez, M., (2014) ‘You can spot what software has been used to design a building’, Dezeen, https://www.dezeen.com/2014/10/17/movie-matthew-plummer-fernandez-you-can-spot-software-design-building/
Srnicek, N., Williams, A., (2015) Inventing the Future: Postcapitalism and a World without Work, New York: Verso Books
Thompson, D., (2014) The Shazam Effect, The Atlantic, http://www.theatlantic.com/magazine/archive/2014/12/the-shazam-effect/382237/


The Finite-State Fantasia 1

I’ve been chipping away at a commission for the last few months which isn’t yet announced but I’d like to write about nonetheless. (You may have seen in my last missive that I promised my PhD supervisor I’d write about what I was doing and why. Hereby I announce a return to regular blogging. Although I only get time to sit in front of a computer while travelling, so I join you from the Eurostar.) Part of this process is to bring together all the disparate aspects of what I’m doing into something I can practice and work through and someday call a PhD. Consequently, this post serves as something of a rationalisation of how it ties into a lot of the things I’ve been doing over the last 12 months or so, especially as I try and figure out what my practice is for the next 5 years. I’m going to introduce and try to frame some of the areas of my research interests and what I’m doing about them, before introducing the plan for the project in the next post and hoping, student-like, to document the process of its growth over the next few months here before finally reflecting on what happened and how it worked. Or not.

Magic and Technology 

This new project initially started as a response to some of the thinking about technology and the occult I’ve been doing, mostly under the guise of Haunted Machines. This ongoing project with Natalie Kane is currently in its largest and most ambitious manifestation - a year-long programme of events throughout 2017 centred on the Impakt festival in Utrecht, Netherlands. Of course, I have to work out what exactly it is about this line of thinking that appeals to me beyond its obvious aesthetic appeal.



At a panel discussion last night, Nicolas Nova prodded me on what the importance of this type of ‘hauntological’ thinking is and why it works. As it stands now, I can see three reasons for the appeal. Firstly, it’s using the tools of the master against them. It’s an out-and-out seizing of the narratives of ‘magic’ and ‘enchantment’ that occupy popular and corporate accounts of what technology is and what it’s for. When the world you’re trying to critique turns to those metaphors, it feels only apt to do the same, as if to stretch the analogy even further.

Secondly, magic and the occult have deep, powerful and well-proven ties to politics and power structures. Skipping over, for a moment, all the complex anthropology about what spiritualism and the supernatural actually are, there’s no doubt that they have been used as levers of power in different ways. I’m currently engaged in reading about the classification and examination of Japanese ‘wonder’ in the 19th century and finding the narrative, dominated by the politics of a gradually opening Japan, a neat parallel to many political shifts today.

Thirdly, it’s a method of examining ‘cultural assimilation.’ I can’t remember where I originally heard the phrase I now use almost daily (I suspect, like most other things, Gell) but using the occult as a metaphor is a powerful way of breaking cognitive barriers to have livelier and more critical discourses. Whatever your cultural background, we all have a conception of ‘magic’ where we might not have one of ‘packet switching’ or even something like ‘ownership.’ A recent paper I read pointed out the ludicrousness of abstract concepts like ‘the market,’ ‘the cloud’ and ‘publics’ being acceptable forms of decision-orientating bodies while magic, the spiritual and the supernatural, which have equally real and present artefacts and effects, are excluded.

But at the root of Haunted Machines is a critique of dominant narratives of technology, seizing a narrative to build powerful analogies to challenge the loudest voices of progress, innovation, disruption etc. etc. etc. So I don’t suppose Haunted Machines is about the occult any more than Animal Farm is about agricultural technologies. Haunted Machines is about power, design, art, technology and the outsider, as well as how, in my conception of it, machines are becoming totems of power that dazzle us with trickery.

Rendering 

I’ve been talking about rendering and CGI for a little while. I’m by no means an expert and my skill level doesn’t compare to that of other practitioners, but I’ve been noticing that I’ve been using it more and more in my practice and have, through doing so, been thinking about why. The obvious appeal is in its ease of use. With Blender as a free platform, even with a steep learning curve, one can be producing good-quality images and animation quite quickly and cheaply. This in itself is a major point of interest. I presented a paper at TU Delft last week, which I’ll publish on here in the next week or so, in which I discussed the potentials for such an amazing tool outside of its current applications.

Briefly: rendering software has largely been the preserve of cinema, advertisers and architects. Consequently, these fields, which generally lack much in the way of quality wider social critique (sorry architects, it’s true, you know it, I know it), have a monopoly on visualising and imagining the future. Either they need to get better at pluralising or we need more renders. There are loads of artists and other practitioners starting to do this and I’ve been keeping my eye out for more. The other interesting point of renders is the actual technical methodology. The computing power required to create believable realities is shrinking rapidly. I could create an image that would fool you into thinking it was real in minutes. This has drastic implications for fields critiquing technology and culture. I point you to the return of Moriarty in Star Trek TNG. Not only could this present an enormous power lever for deceit and trickery, but it leads to a genuine question I’ve posed before elsewhere.
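
To give a sense of just how accessible this has become, here's a minimal sketch of a headless render using Blender's Python API (bpy), run with `blender --background --python render_sketch.py`. The scene, resolution and output path are arbitrary choices of mine, and some operator names differ between Blender versions, so treat it as a sketch rather than a recipe.

```python
# render_sketch.py - a minimal headless Blender render via the bpy API.
# Values and paths are illustrative; written against a recent Blender release.
import bpy

# Add a single object to the default startup scene
bpy.ops.mesh.primitive_uv_sphere_add(location=(0.0, 0.0, 0.0))

# A sun lamp and a camera pointed roughly at the origin
bpy.ops.object.light_add(type='SUN', location=(0.0, 0.0, 5.0))
bpy.ops.object.camera_add(location=(0.0, -6.0, 2.0), rotation=(1.3, 0.0, 0.0))
bpy.context.scene.camera = bpy.context.object

# Use the physically-based Cycles engine and write a still image
scene = bpy.context.scene
scene.render.engine = 'CYCLES'
scene.render.resolution_x = 960
scene.render.resolution_y = 540
scene.render.filepath = '/tmp/render_sketch.png'
bpy.ops.render.render(write_still=True)
```

That's the whole pipeline: a scene, a light, a camera and a render call, all free and scriptable on a laptop.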



Additionally, the process is technically very interesting. Rendering only works up close, at high resolution. I could create a 10m x 10m tile of water pretty realistically but it doesn’t scale. The trick only works close up. The haptics and materiality of it are really dynamic, interesting and counterintuitive. Light doesn’t really work the way you think it does, air is really important, things aren’t made of the same surfaces you thought they were. Wesley Goatley explored this a little bit in Watching Mephitic Air, but I keep coming back to the very real material and technical qualities of rendering software as sites of technological critique. There’s also a dialogue between how machines see and then create that works through rendering (lit. ‘to redeem/return’), where how a machine ‘builds’ a model of the world is an aspect of its perception of it.

What’s it doing? 

This is the final and biggest part for this specific project. And I think it ties particularly into Haunted Machines. The brief of this project is about trying to de-mystify the magic, fear and horror around seemingly intelligent, autonomous objects by getting inside the machine. What is the world when you have only infra-red sensors with a range of 1.5m that flash 67 times a second, returning two discrete data points? A totally non-integrated sensorium. Our work in Haunted Machines has shown over and over again that we create narratives that animate devices, systems and services in order to make sense of the cognitive phenomena of machines we are totally and utterly incapable of comprehending. The perfect focal point for this idea is the question ‘What’s it doing?’ A question you probably ask quite regularly, sometimes even gendering it with ‘What’s he/she doing?’ At this exact point we allot agency, will and, I suspect, at the back of our minds, consciousness to the machine.
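
As a purely hypothetical sketch of that sensorium, here's roughly what the machine's entire 'world' might amount to in code. The sensor names, ranges and values are invented for illustration, not taken from any particular device.

```python
# A hypothetical sketch of a non-integrated sensorium: a device whose entire
# world is two infra-red distance readings, sampled 67 times a second,
# each capped at a 1.5 m range. All names and numbers are invented.
import random
import time

MAX_RANGE_M = 1.5      # anything further away simply doesn't exist
SAMPLE_HZ = 67

def read_ir_pair():
    """Stand-in for the hardware: two noisy distance readings in metres."""
    return (min(random.uniform(0.0, 3.0), MAX_RANGE_M),
            min(random.uniform(0.0, 3.0), MAX_RANGE_M))

def run(seconds=1.0):
    # The machine's entire model of the world: the latest two numbers.
    world = {'left_m': None, 'right_m': None}
    for _ in range(int(seconds * SAMPLE_HZ)):
        world['left_m'], world['right_m'] = read_ir_pair()
        # No memory, no integration, no scene - just two discrete data points.
        print(world)
        time.sleep(1.0 / SAMPLE_HZ)

if __name__ == "__main__":
    run(seconds=0.1)
```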

For now, I’m buying into the theory proposed by Hermann von Helmholtz that the brain is a ‘prediction machine’: the model of the world we have and consciously hold is a ‘best guess’ based on available information. Getting chewy about these words is important (and Goatley specifically chews me out about getting them wrong). Information is the reduction of uncertainty. Senses are input devices and perception is where these senses are integrated into a model. In fact, several theories attempting to solve the ‘hard problem’ of consciousness propose that consciousness exists simply as integrated perception in animals. We don’t see colour and form as separate; we see an object as integrated across all our senses, where a machine senses them as separate points.
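
For the sake of getting chewy about the words, here's a tiny worked example of information as the reduction of uncertainty in Shannon's sense: the drop in entropy between a prior and a posterior belief. The distributions are made up purely for illustration.

```python
# Information gained = entropy before an observation - entropy after it.
# Distributions are invented for illustration only.
import math

def entropy(probabilities):
    """Shannon entropy in bits of a discrete distribution."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# Prior belief: the object ahead could be any of four things, equally likely.
prior = [0.25, 0.25, 0.25, 0.25]

# After a sensor reading, one hypothesis dominates.
posterior = [0.85, 0.05, 0.05, 0.05]

gained = entropy(prior) - entropy(posterior)
print(f"uncertainty before: {entropy(prior):.2f} bits")
print(f"uncertainty after:  {entropy(posterior):.2f} bits")
print(f"information gained: {gained:.2f} bits")
```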

But the words don’t fit, the stories aren’t right. To be honest, a fair bit of me is getting annoyed with ‘machine vision’ projects that show nothing more than the debugging screen. What is the ontological experience of being a machine? Even the words ‘experience’ and ‘ontological’ are problematic here. Can I affect a first-person sympathetic perspective of the machine? Is it worth it?

The point is to tackle a major thrust of my PhD work - that treating ‘smart’ and ‘networked’ objects as if they are mere gadgets and office furniture isn’t good enough. It isn’t good enough for things that act on your world and have their own ‘worldview’ (no they don’t, I know) to be glossed over as ‘magical’ while being so many different things to so many different agents. It isn’t good enough for things that you don’t ‘wholly’ own, in the Japanese sense where you can’t comprehend the entire chain of cause and effect that might dictate their actions. Machines exist as ‘not-nothings’: things that aren’t conscious humans, but things that aren’t simply tools or other objects either. It’s reconciling their ‘not-nothingness’ with how we build our perceptions of them into a lived experience that more often than not delivers them as ‘non-integrated perceptive cognates’ that I’m going to try and do.

Self Driving Lumiere



I was admiring the similar framing of these two videos when I was putting together my talk for LIFT way back at the beginning of the year. It's been on the to-do list for ages to do a side-by-side, so here it is. The first is the Lumiere brothers' famous 1895 film Arrival of The Mail Train, renowned largely for the urban legend that, on seeing it, people thought a train was coming towards them out of the screen. This is used apocryphally as a reference to how technologies take time to be culturally assimilated and for reading protocols to be established. The second follows a similar chain of reasoning: a video that came out last year showing some folk testing out a Volvo's auto-stopping feature. They demonstrate total faith in what the technology claims to be and that it will stop, despite the obvious outcome.

Anyway, there's a poetry in their framing which I enjoy.

Idle Thoughts on Seeing and Knowing

A few weeks back I was at the excellent Systems Literacy at the Whitechapel, hearing words from good friends James Bridle, Georgina Voss and Tom Armitage. A thoroughly excellent event, but I left considering our rhetorical hangups on 'seeing' and 'sensing' (which one could argue are different things). They're terms that have become cliched, with itchy fingers hovering over metaphorical bingo cards of 'Seeing Like an X' or 'Making X Visible.' Powerful curatorial bumpf to be sure, but we're now all asking 'Where does making things visible get us?' Years ago I aired this as my frustration with what was then the state of Speculative and Critical Design. Once you've created your public (constructed your god (another post for another, darker time)), highlighted your issue, then what?

I guess the title of the event (Literacy) sums up what I'm thinking through here, which is types of knowledge, speaking and sensing, and the use of these terms when talking about complex technological systems and literacy.

As Tom Armitage noted in his talk, literacy is the ability to both read and write, and since that relies on the technology of an alphabet - in the broadest sense - it means a certain amount of 'knowing how' as opposed to 'knowing that.' Here we run up against John Searle's famed Chinese Room: the inhabitants of the room have a propositional system of knowledge, shorthanded to 'knowing that.' They know that one Chinese character follows another according to a prescribed (geddit?) set of rules. The Chinese literates on the other side of the door have the 'know-how', having internalised the rules and developed a natural affinity with the technology of Chinese written language. I'm probably way out here but this suggests two ways that something could be seen to be 'literate': one through knowing how and the other through knowing that. Knowing how as a tacit and internalised literacy, and knowing that as a technical, external literacy.

James Bridle brought up the power of source code as something real and legible that contains knowledge. But again, we have two levels here. The literate who 'know how' to read the source code and correlate it to what is presented on the web and those who 'know that' it's there and forms a central part of the way we 'see.'

Now, in a general way, machines can 'read' and 'write' and in that sense they are literate. But their literacy is that second type of technical literacy that comes from knowing that, not knowing how; they understand the rules of language but usually can't discern its abstracted meaning. Or, that meaning has no meaning. Or the meaning of that meaning has no meaning. I'm no AI expert.

Round and round it goes: who is literate? What is literate? What does it mean to be literate? But this skips over the most interesting thing: what are these literate things talking about and how? I guess what I was thinking is that if the machine is literate, or can convince others of its literacy or something, then it is capable of, and in a sense is engaging in, rhetoric. They can persuade us to do something - 'plug in to a wall socket or your computer will run out of power.'

Luckily, someone much more qualified than me has figured this out. In Sensing Exigence, Elizabeth Losh suggests a rhetorical standpoint for machines. Since expression springs from '...a defect, an obstacle, something needing to be done, a thing that is other than it should be...' and machines process things and call on us (or other machines) when there's a need to address an issue, they have 'exigence' - an urge or need to speak, even if this speech is produced from knowing that, not knowing how. In the same sense that machines 'speak' when they have exigence, they listen and read. Machines can read and write, and this makes them audiences. In that case, she suggests, we can't seriously critique connected objects and complex systems unless we see them as more than things to communicate through or about, and as things to communicate with. So:

How can we talk with things instead of to, through or about them? 
i) Is that worth doing? 
ii) Is it mad? 
iii) Will people laugh at you?

As someone who regularly and inadvertently thanks cash machines and mumbles at ticket barriers, I can answer part iii with a resounding 'yes.' Part ii is more a matter of opinion and depends on what else you were doing before and after talking to the machine.

Talking with is important. Talking to, however, implies uni-directionality. Here are some badly improvised definitions:

  • Talking At: Communicating toward another unaware/uncaring/unknowing whether the other is or is capable of listening. OR Writing whether it can be Read or not. 
  • Talking To: Communicating toward, aware and knowing that the other is listening but without listening oneself. OR Writing, knowing it can be Read. 
  • Talking With: A two-way exchange of both talking and listening between parties. OR Reading and Writing each other. 
So, we're already deep in 'talking to.' There are two directional forces at work trying to get us to talk 'to' the machines. First is the whole 'learn to code' lobby. This idea that everyone should, no, absolutely must, learn to code. Holistic and fulfilling education be damned, if only everyone could code! This is trying to launch humans out of knowing that and into knowing how. Out of knowing that code exists and into knowing how to use it. The second is making computers more 'intelligent' in order that they can better understand our complex and abstract language. This is trying to brute-force computers out of knowing that there are certain rules to human language that can be followed convincingly to simulate literacy and into knowing how to extract and communicate meaning and the values of that meaning. (Or the meaning of the values of that meaning etc. etc. etc.) I think I approached this in my original Haunted Machines talk, suggesting that in any conversation between a human and a machine, one party is lying to the other about the nature of their reality. For instance, machines might be imbued with human characteristics while we might adapt our behaviours to suit the needs of a machine.


All these methods to get us to talk 'to' machines or them to talk 'to' us are pointless. The technology of an alphabet, no matter how good, is a codification of meaning, and by forcing one party to codify its meaning more, they make a concession to the other party, who will get more meaning out of the exchange. I.e. in the Chinese Room, the people in the room cannot discern meaning but give plenty of meaning to the Chinese literates outside. There's no talking 'with' between the two parties.

-

Prior to the Whitechapel event, I'd been trying to bend my brain around the ongoing ontology/epistemology debate in anthropology (I still haven't). This little circle of frustration centres on how animism is dealt with in the literature, with Eduardo Viveiros de Castro suggesting that since 'things define themselves,' animism is a surety, an ontology, a way in which things are, as opposed to an epistemology, an approach to knowing.

So, hurriedly, I scribbled 'Internet Epistemology' and thought 'yeah, I'll come back to that and know what I meant.' Well, notes never work out like that and a quick search reveals that it's a pretty well-defined way of looking at either how knowledge travels and is validated on the Internet, or how distributed, networked knowledge works. So, not that.

What I meant, it turns out, is that rather than simply 'seeing' machines and systems we should try and work out how we can talk 'with' them, allowing a two-way reading and writing of subjects that helps us to construct knowledge. Then we go from 'seeing like an x' to 'knowing like an x' or, even better, 'understanding an x.'

Machines sense. To say they 'see' is something of a misnomer. A person looking through a camera 'sees' but the camera is only sensing. We, as humans, at the same time find ourselves in the business of 'sense-making' - trying to convert what we see into sense. This all happens at the same time as machines convert what they sense into what we see in the first place.

LIFT: The Internet of Damned Things, by Open Transcripts

Normally this would be the time where I'd reluctantly sit down and bash out a talk I gave a few weeks previously, trying to recapture the essence of what I said as I stumbled through what, at one point, were well-rehearsed points on a stage. For some reason, transcribed or written-up talks travel better than videos. I suppose Medium is to blame. Anyway, Open Transcripts, who did excellent coverage of Haunted Machines last year, have done it for me.

It's a great project which you should absolutely support on Patreon.

Anyway, here's the link. There's also a transcript of Joel's and Nat's talks from the same session. The great thing is that all the sources of quotes and so on have been linked up. Go check it out.