This Did Not Take Place, IMPAKT Festival 2015



What follows is a writeup of the talk I gave at IMPAKT Festival this year in Utrecht on the subject of memory and technology. I have to confess that there was a lot more profanity as I delivered the talk, largely due to having been ill for months.
I'm trained as a designer but I don't really design anything. I find it a useful thing to say rather than do. I spend most of my time thinking about, writing about and talking about the relationship between technology, politics and design. So when the guys from IMPAKT asked me if I had any opinions about the relationship between memory and technology I was like: 'Hell yeah, I have opinions about everything.' And so here are those opinions.

The title of this talk takes its name from the very famous book by Jean Baudrillard, The Gulf War Did Not Take Place. In the book, Baudrillard argues that the 1991 Gulf War did not take place. Obviously. Not that there was literally no Gulf War; he's not a Gulf War denialist. Rather, the Gulf War as it was presented through 24-hour rolling media was not the Gulf War that was lived on the ground. The Gulf War was a highly orchestrated media 'atrocity' masquerading as a war, and Baudrillard talks about how manipulation of the media changed the narrative that was received in Europe and the US.

So the first part of this talk is about how the manipulation of media changes the remembered narrative of those who receive it. This is a kind of prosaic idea, hardly new. We're used to states and corporations controlling cultural narratives through the media, but I think we live at a point where, interestingly, individuals have a high degree of control over their own personal narrative and, through the connected nature of things, can begin to change the wider socio-political narratives we share, perhaps unknowingly.

In the background is a video from a cruise missile nosecam. The Gulf War was also the first war where we had images of the conflict directly from the weapons themselves. The nosecam of a cruise missile shows great power, control and high technology, but at the same time is a mechanical contradiction: the camera is destroyed at the point where the weapon fulfils its purpose. And so we have all the foreplay of war, the showy aspects of control and power, without any of the terror, chaos and suffering that the cruise missile brings. The cruise missile becomes a form of media.

And so the second part of this talk, the second aspect at play, is sight. We're visual animals; most of our memories are visual (even if visual memory isn't as strong as olfactory memory) and we tend to trust what we see. So the proliferation of small, mobile cameras leads to new conceptions of how memories are recorded and what they mean.




And so we begin with the cinematic tour de force that is Google's How It Feels (through Google Glass). Two minutes and fifteen seconds of film history. Glass, for those that don't know, was Google's ill-fated attempt to create a consumer augmented reality market: a camera-equipped headset worn on your face. It only did four things, none of which anyone ever wanted badly, and so it didn't get very far. The key thing is that it was a head-mounted camera from which you could record and stream stuff. And that's how Google tried to sell this thing - through the experiences it enables.

You only see the product itself for six seconds in the entire clip, but you get to do a wealth of weird stuff - flying balloons and planes, sculpting tigers from ice, playing with a dog on a deserted beach and so on. Essentially, now that your hands are free you can better experience things, and experience is inseparable from the recording of those things. Google are telling you that their technology enables experiences, not how it works, what it does, or what it's for.
Experience has become a really central part of our technological narrative, born largely from the Protestant work ethic, and there are two distinct flavours of it. The first is the kind of sexy, rarefied Instagram experience - swimming with dolphins, flying planes, eating cakes. And you have to photograph and upload them or it might as well not have happened. They're usually accompanied by some reminder of your own mortality too - 99 Things To Do Before You Die - as if not swimming with dolphins is just a waste of good oxygen.

The second meaning of experience is 'expertise' and the two things are completely interconnected. At job interviews you're asked about your experience. Experience adds to your expertise and makes you more socially valuable and you evidence this experience by photographing it and uploading it. 



Here's the advertisement for the Apple iPhone 5. There's very little information about what it actually is, its battery life or processing power, size or weight. Just lots of information about what it can do. And that is: take photos, as if this were some brand-new, never-before-conceived technology. Apple tells you how the technology will enable you to gather and share more experiences than ever before, more photos of cakes and lakes than your mates.



This sales pitch of technology as an experience-enabler is hardly new. We've had 'sex sells' for a damn sight longer than we've had GPS or the Internet. Here's the compulsory vintage futurism of any talk like this: what I believe is the oldest cell phone advert, from Radio Shack in 1989. And already we can see that the technology is sold on its experience-generating potential. You can now phone people from the beach! Work hard and have fun!

But there's a crucial difference between the cell phones of old and the cell phones of now. These vintage cell phones augment your experience. They allow you to multi-task and become more mobile (or, perhaps force you to multi-task and become more mobile.) But they don't record your experiences.
The average smartphone has 19 sensors in it: light, proximity, two cameras, three microphones (one of which is ultrasonic) and touch. Positioning comes via GPS, Wi-Fi, cellular, Near Field Communication and Bluetooth technologies. And it contains an accelerometer, magnetometer, gyroscope, pressure sensor, and temperature and humidity sensors.

In fact these devices aren't just recording your experiences, they're capturing high-resolution, detailed versions of parts of your life and remembering them.
Things remember you. This is the Simplicam, which can act as a stand-in for any one of the 99% of Internet of Things projects that are about surveillance and mass data capture. The Internet of Things is rapidly becoming the world's largest surveillance infrastructure, and one that we're weirdly excited to invite into our homes.

The aim of this infrastructure is to constantly record, monitor and store data on you and your behaviour.
This is the Amazon Echo. It kind of staggers me that when Amazon said they wanted to put an always-on microphone into people's homes, everyone just thought that was OK. A corporation manufactures, sells and distributes spyware and everyone just lapped it up. The purpose of the Amazon Echo is to act as a hub for Internet of Things products, but also to find ways to make it easier for you to buy stuff from Amazon. At its core are data-gathering (listening to things you say in the house in order to better target products at you) and acting as a personal shopper (directly responding to your impulsive needs for soap or blue pants).
Worse still is something like the Samsung Smart TV. Samsung are kind of explicit that the purpose of the microphone is to allow you to control the TV by voice, but yet again, the microphone is always on, always gathering data and always sending it off to third parties to be analysed. Samsung later had to send out a disclaimer:
You can control your SmartTV, and use many of its features, with voice commands. If you enable Voice Recognition, you can interact with your Smart TV using your voice. To provide you the Voice Recognition feature, some voice commands may be transmitted (along with information about your device, including device identifiers) to a third-party service that converts speech to text or to the extent necessary to provide the Voice Recognition features to you. In addition, Samsung may collect and your device may capture voice commands and associated texts so that we can provide you with Voice Recognition features and evaluate and improve the features. Please be aware that if your spoken words include personal or other sensitive information, that information will be among the data captured and transmitted to a third party through your use of Voice Recognition.
So yeah, this is Samsung telling you not to vocalise any sensitive or intimate information. In your living room.

These systems gather huge amounts of data about you, but only the data that's relevant to them. They're not interested in why you like blue panties, only that you do, and then they use that to sell you blue panties in the most effective way possible. They build what's called a 'data double' - a slightly inaccurate chalk outline of who you are, made only of data points of relevance to that company or corporation and inevitably flawed by the technology. This is why it gets weird when these data doubles try to reach out to us. We and they imagine that the gathering of enough data is enough to make them empathetic and human-ish.


Here's Facebook doing its whole memory-lane thing that it's trying to do at the moment. I'm really bad at Facebook, I mostly use it for self-promotion, so it's constantly prodding me to interact. Three years, it tells me, I've been friends with Paul Revell. My dad.

It's accurate, but completely out of context. And just getting this in the middle of the day out of nowhere kind of throws you for a bit of a loop.
And it could have been worse. This is Eric Meyer's Facebook page a year ago. Facebook decided to send him a prompt to share his year with others and showed him an image of his daughter. Who had died months earlier from a terminal illness. Now, Eric Meyer was the first to admit that Facebook aren't dicks. They don't wantonly go about trying to emotionally bully people. It's just that we assume these systems are so advanced and so sophisticated that they have some built-in human sensitivity; they don't. Facebook is a database, not a friend. It's got great data on pain but it can never understand the embodied experience of feeling pain. It can't empathise, it can only react to the data available. It's not good at being your friend, but the narrative we build and it builds suggests that it is.
Which brings us on to the idea of gaslighting. Gaslighting is a term for psychological and emotional manipulation, bullying and violence, particularly through the control of environmental conditions. The term comes from the film of the same name and describes a process where someone convinces someone else that they are imagining things, or that things are happening which can't be observed.

The proliferation of fallible, manipulable connected devices, and our emotional reliance on them, leaves the territory for gaslighting wide open and presents us with sticky problems as far as questioning our own reality goes.
This brings us rather neatly back to our idea of a non-existent Gulf War. Here's Dick Cheney addressing the press during Desert Shield with an angry-looking Colin Powell in the background. This was the 'media circus' of the Gulf War. The main reason the broadcast version of the Gulf War was so tightly controlled was that the press were so tightly regimented. The US dubbed this regimentation Annex Foxtrot and forbade the press from going into the field, talking with ordinary soldiers or civilians, or leaving strict enclosures. This was the mechanism by which the Gulf War story was told to Europe and the US.
We have the same thing today but perhaps more inadvertently in our coverage of warfare. If you google the word 'drone' this image crops up in the top ten hits. Five years ago it and various versions would have filled the top ten. For years this was the de facto image of a drone used in coverage of what we were learning about drone warfare. It littered blogs, newspapers and social media. Last year though it was revealed to be a rendering created by a hobbyist.

Now, it doesn't matter hugely. No one was pointing at this specific drone and saying it specifically did something - it was used as a generic placeholder image. However, it skews the visual narrative of drone warfare when the collective memory pictures a render whenever it thinks of drones.
More prosaically, I discovered last year that 26% of Ikea images are renders. Probably more now. It makes sense to produce renders rather than pay to set up and photograph these sets. However there's some strange dissonance about publishing aspirational imagery of a thing that we think is real and isn't. This living room never existed, the people and lifestyle implied by it never did. That's no different to advertising or any other kind of vision, but it exists in a visual language of reality as opposed to futurity and introduces an interesting element of impossibility.
Stalin was well known for his manipulation of imagery. He erased enemies and the disappeared from the collective memory of Soviet Russia as a way to cement his power and control the narrative of history. Now, as in the West at the time, we look back on this and see it as crude, dictatorial. We think that citizens must have been cynical and sceptical about these images. But in reality, at the time, like most contemporary media, it slipped seamlessly into the collective memory - just another part of the story. Soviet citizens may have known the images were doctored but they didn't view them critically like we do now.

It's interesting to wonder whether, in fifty years, visual anthropologists will look back at the realistic renders populating our visual culture and wonder how we so un-cynically accepted the creep of irreality into our collective memory. And who knows what effect it might have in years to come?
This is a very famous image and one of much contention. It appears to show a scene in Berlin taken by a Google Street View car where a Smart car has pulled over and a woman (presumably formerly in the car) is giving birth on the side of the road. In 2010 this was in the press a lot with the obvious question - is it real?
But that's not quite the question. There's more nuance to it. You see, we instantly recognise and see through the fact that it's a Street View image. The visual cues are all there - the map in the corner, the watermarking and the camera style. But we could also ask, not just if it's a real birth, but if it's a real Street View photo. How hard would this be to knock up in Photoshop? We're so familiar with this medium of global representation that we don't really critically question the context of the photo. We're a bit like Soviet citizens being fed doctored photographs: we see through the context and question what's inside rather than the whole setup itself. This might be a real birth and a fake Google Street View image.
In fact if we look back on the same scene today, even more questions are raised. The ad agency that the image was taken in front of denies having anything to do with it, and yet that appears to be the same smart car at the bottom of the image. The hospital on the other side of the road quite sensibly suggested that had a live birth taken place outside, they would have noticed. Even more interesting are the dates. The original image is watermarked 2010 and this one is copyrighted from 2008 with a 2012 watermark and a significant growth in the trees. Another aspect of this world-remembering machine is the ability to forensically examine the past.
Of course, we can't talk about memory without reference to time. The image here is from Fritz Lang's Woman in the Moon. Lang said that the rocket launch scene wasn't tense enough, so he introduced the countdown clock as a way to build tension. NASA loved it, and now a dependence on time is inextricably tied to the popular imagination of space flight.
In fact, the history of time itself is deeply tied to technology. This is the 1830 Liverpool and Manchester Railway. For most of human history, people operated on localised time. It wasn't until the construction of massive communications and transport infrastructures that a standardised sense of time was required. Before the railway, Manchester and Liverpool kept different times. This was the first timetabled railway line and the company had to enforce standardised time so that people could understand when trains departed and arrived. Within a few years, the standardised time used in timetabling had spread across the country and by 1880 the western world was on Greenwich Mean Time, all aligning clocks to London time. With standardised time, clocks and watches become the first networked devices.
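The arithmetic behind 'localised time' is simple enough to sketch. Local solar time shifts by four minutes for every degree of longitude, so neighbouring cities genuinely did keep different clocks. A toy calculation, using rough city-centre longitudes purely for illustration:

    # Local solar time moves 4 minutes per degree of longitude
    # (24 hours * 60 minutes / 360 degrees).
    MINUTES_PER_DEGREE = 24 * 60 / 360

    # Approximate longitudes (degrees east; negative = west).
    cities = {"Manchester": -2.24, "Liverpool": -2.98, "Greenwich": 0.0}

    for name, longitude in cities.items():
        offset = longitude * MINUTES_PER_DEGREE
        print(f"{name}: {offset:+.1f} minutes from Greenwich solar time")

    # Liverpool's local time ran roughly three minutes behind
    # Manchester's - a small gap, but fatal to a shared timetable.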
We can look at the Global Positioning System as a continuation of this project: sending satellites into space to more accurately pinpoint the position of objects in time and space, to better synchronise the working of the Earth as a planetary machine.
The Internet is a large physical infrastructure that relies on being very carefully synchronised and standardised to even work. The construction of this 'world brain' of GPS and the Internet, which can very accurately and quickly understand and compute the relative position in time and space of everything connected to it, means that we more and more think of the mind as an architectural space, as the ancients did.
For a long time, the modernists considered the body and mind to be liquid: ethers, biles and juices. Yet now we see the popular resurgence of 'memory palaces.' Once again we are starting to visualise the mind as a physical architectural space. After all, if the Internet behaves like a human brain and possesses architectural dimensions, it makes sense to assume that the brain is much the same.
And what does Google do with its memory palace? Well, it allows you to stroll through it, and kind of add to it. This is the Memories For The Future project, in which Google put the Street View images of Fukushima from before the tsunami out for people to stroll through and remember their old homes, places of work and so on. It also appeals to the morbidly curious, I suppose. But that possessive title is a bit of a giveaway, implying Google's self-appointed role of guardian of the collective memory. These aren't your memories you're strolling through; they're Google's. And as you stroll through them you feed them data about who you are, what you're looking at and what you do while you're there. They've constructed a dead town as a data playground.

Sure, I'm being cynical. Google most likely genuinely thought this would be a good idea for those that had lost so much, and for those people it probably is. But good intentions don't betray the underlying ideology-as-business-model. Google makes money from you doing stuff. It wants you to do stuff and will make money from all the stuff you do, including looking at images of your destroyed home.
Which kinda leads us on into the last section - Manufacturing Memories. The still there is from the remake of The Manchurian Candidate, which I consider superior to the original. In the film, Liev Schreiber undergoes brainwashing treatment during his time in the Gulf War to become a brain-dead assassin - his memories, and those of his comrades, entirely fabricated in the lab.
So DeepDream is really interesting. By now we've all seen the weird, kinda horrifying images created by it everywhere and enjoyed their novelty. There's something really human about us celebrating the aesthetic failures of technology; it's why Instagram is so popular. The best analogy I've heard for how DeepDream works is that it's like asking a child to draw a house. All children draw pretty much the same house because they've had limited exposure to images of houses from which to synthesise the idea of what a house is. Which is why children's houses look relatively generic - square, four windows, a door and a triangular roof.

DeepDream is kind of like that but it's only ever seen pictures of dogs.
So when you ask it to draw the aliens meme guy, it just draws it in dogs. The purpose of this exercise is two-fold. Firstly, computers can't recognise discrete objects in images well. Humans are blessed with the ability to visually comprehend, describe and represent a cup; a computer needs that data codified. DeepDream allows Google to recognise cups in pictures by comparing them to dogs (in a way). Secondly, much like a child, that analysis allows DeepDream to create new images of cups. Which is where it gets interesting.
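To make the mechanism a bit more concrete, here's a minimal sketch of the DeepDream idea. This is not Google's code (theirs used a different network and adds octaves, jitter and smoothing); the choice of PyTorch and VGG16 here is mine, just to show the core trick: run an image through a trained classifier, pick a layer, and nudge the image by gradient ascent so that whatever the layer already 'sees' gets amplified.

    import torch
    import torchvision.models as models
    import torchvision.transforms as T
    from PIL import Image

    # A trained classifier; its convolutional layers hold the 'memory'
    # of everything it has seen (in the analogy: lots of dogs).
    model = models.vgg16(pretrained=True).features.eval()

    def deep_dream(img_path, layer_index=20, steps=50, lr=0.05):
        img = Image.open(img_path).convert("RGB")
        x = T.Compose([T.Resize(224), T.ToTensor()])(img).unsqueeze(0)
        x.requires_grad_(True)
        for _ in range(steps):
            out = x
            for i, layer in enumerate(model):
                out = layer(out)
                if i == layer_index:
                    break
            # Gradient *ascent*: increase the layer's activation, so the
            # image drifts towards what the network already recognises.
            loss = out.norm()
            loss.backward()
            with torch.no_grad():
                x += lr * x.grad / (x.grad.abs().mean() + 1e-8)
                x.grad.zero_()
        return x.detach()

Run it for enough steps and the cups start sprouting eyes and snouts: the network's memory leaking back into the image.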




Deep Stereo is a spin-off of DeepDream using similar neural network technology. Deep Stereo can interpolate between still images to create and understand 3D space and movement. Now, this may just seem like great technology which would make Street View a lot more fun. But think back to our house-drawing child. They might grow up to be an architect and really study houses to understand how they work and then synthesise that knowledge into new types of houses. Or they might design film sets. Or they might run a Ponzi scheme in property. It's all based on the same basic understanding and analysis of the raw data of what a house is.

Think back to the drone and Ikea renders. Those things are innocuous. Deep Stereo implies a future in which Google can quite convincingly write and manufacture memories that never happened to an incredible degree of accuracy. 
Milan Kundera's quote is quite literal. Of course some people do rewrite history. But the companies harvesting data are doing it because of the inflated fiction of the value of data. We value our past and so we invest our data back into it. Buying surveillance cameras, more aggressively experiential smart phones and more and more accounts. We share stuff because we're vain and want to be remembered or noticed. This gives value to those who guard this endless stream of stuff and in turn makes them more valuable. A feedback loop exists between your future memories and the rapid expansion of technological power today. Google probably do just want to get better and better at capturing the world, but in their quest to do so, they're developing tools to rewrite future history.
I guess the pithy takeaway here is our relationship with the things that remember us. We're not remembered by people so much as by things. Objects and devices constantly busy remembering us, some we asked to, some we didn't. And they send all these memories off to places that we may not know about or understand or want to have them. But like I said, we're vain like that and we don't have as much control as we'd like. Your Facebook account is not your own; it's everyone's and Facebook's. It is a database of advertising targets, not a family photo album. They're both databases and they both use similar design to convince us to share and record, but their purposes are vastly different.

And in this desperation to constantly harvest and grab memories, we're making the cameras and eyes that record our experiences ever smaller, more ubiquitous and easier to use. In doing so, we unintentionally invite other actors to control our perception of our experiences. A GoPro, for instance, just happens to fit in a seagull's beak.


Sight, Scanning, Screens, Sophistication

Desktop Panorama: Google.com

A little while back I had a go at shooting what I called 'desktop panoramas' using my iPhone and laptop. I didn't really make much of a deal of them at the time, beyond posting some to Twitter. They're actually remarkably hard to capture, requiring an almost acrobatic skill to turn the computer 180 degrees while simultaneously sliding the phone along the screen at some semblance of a regular speed without tilting the screen back or forth too much. I wouldn't recommend it. I'm surprised either device survived.

There was something quite beautiful about the images. (Some of them anyway.) They have an almost painterly quality: the sharp edges of pixels and typefaces become a smudgy, shaky mess as the screen, phone, hand and body work in a complex dance of decoding and recoding the information being scanned into a jumbled form of colour and shape.


Desktop Panoramas: Facebook

There are obvious physical limitations to the taking of desktop panoramas - I only have two hands, so most of them were done with the phone resting at the bottom of the screen, which fixes the line of the camera across the screen at 11.6cm, roughly halfway up the visible screen. In the second Facebook attempt I tried to 'free hand' the panorama to get a more interesting capture, with predictable but equally interesting results - the panorama appears to double back on itself and possess rendered depth.

Panoramic shooting technology was obviously designed to capture huge, sweeping landscapes in dramatic fashion. It achieves deep views with complex focal length distortion that lends a grandeur and sense of scale and depth by elongating the frame so substantially. Panoramas also bear two other remarkable artifacts. The first is a representation of time. I'm not a photographer, and most folk would probably rattle on about the relationship between photography and time, but I'm being more literal here: you start at the left and turn to the right, so the left of the image is earlier in time than the right. This has all sorts of horrid consequences, but also more subtle ones. The second artifact is interference: the panorama below manages to catch the beat between the camera's shutter speed and the flicker of my desktop lamp - something unseen by human eyes. (Again with that lovely painterly effect caused by my human hands.)


Again, there's something fascinating about the decoding and recoding of all the technologies and media that combine into the shot, my hands, the text, the flickering of the light, the movement of the camera. All of those ambiguities are codified into a single artifact where the compromises made show themselves as 'glitches.'

The point about ambiguity is, I think, particularly important. I'm currently reading (slowly, it's annoyingly verbose) Evil Media by Matthew Fuller and Andrew Goffey. Early on they make a point about how ambiguity is a form of power in a system that requires discrete and confined coding of information. Using language ambiguously but confidently can provide plausible deniability and offset responsibility in a human interaction (see also Graeber's The Utopia of Rules on the direction in which this interaction takes place; clue: top to bottom), and ambiguity gives an individual the power and opportunity to assert their interpretation as the most truthful or objective. There's no scientific progress without contention, and power vacuums tend to arise in periods of uncertainty. As the chapter suggests - Leverage[d] Anxiety. This is also what the authors term 'sophistication': some ability to leverage ambiguity to your own meaning.

In the panorama, the camera tries to reconcile light levels, colour, flickering inputs, my shit hand and so on to build as objective a view of reality as possible for me. (Obviously within the confines of the biased engineering of the machine in the first place, which, for instance, makes it small enough to fit in a man's trouser pocket at the cost of perhaps some ability.) It's a system designed to disambiguate what it sees, where perhaps in reality we would embrace the ambiguity. The human eye simply can't see 180 degrees in focus all the time and we're happy with that uncertainty; the camera isn't.

This theme of images trying to resolve themselves through machine thinking has been brought up by Boris Anthony in Puppyslugs R Us over there, where he explains the logic of how we construct descriptions (images) out of memory. Some of these memories are shared and almost all of them are definitely ambiguous. When I type 'house' you'll see a very different house to me. Even something as specific as 'Tobias' shoes' will elicit different and ambiguous thoughts. When you say 'house' to Google's neural net, it constructs an image out of very specific, unambiguous images of dogs and so on that form its own memory. The compromise - the lack of sophistication - is in the Puppyslugs, the sympathetic imagery where we can see the outline and structure of a house but get something that is clearly not a house: an assemblage of puppies.

If Google or Alphabet or whatever do decide to go evil with neural nets, then it'll be in turning our ambiguity around the shared memory of 'house' into a more objective and truthful vision of 'house.' In this world, our versions (no matter which) of 'house' are definitely wrong because they're formed in ambiguity, while Google/Alphabet's is formed in specific definition. Similar, I guess, to what they're doing to maps and territories. Intuition, metis and opportunities for sophistication would go out of the bay window as Google told us what the definitive image of a house is.

This ties into larger stuff around the construction of objective reality which I won't go into. The last thing to tie in here is Joanne McNeil's excellent piece - again, over there - on screengrabs as POV shots for social networks and the Internet.

I'm right with her when she says, 'I can’t remember taking screenshots until about five or six years ago.' I was 'trained' to take them in university as a way of keeping evidence of my largely digital work, but quickly dropped it as an annoying interruption. Now I find myself taking screenshots all the time. No explanation needed here beyond better hardware, better software and more generous data plans.

In almost the opposite way to a desktop panorama - which imposes limitations to increase ambiguity and give a new type of sight to the screen - screenshots allow users to smash through the post-optimal limits of the technology. There's an element of cunning and sophistication in doing so, a sense of hijacking the media or platform and turning its imposed limitations to new uses while thoroughly working within its bounds. The 140-character rule is the most obviously imposed limitation; McNeil points out that it was first defined by early interactions with Twitter largely being over SMS. Now it is a nostalgic post-optimal quirk used to create value for the platform by simulating limitations.

A limit of 140 characters forces new and sophisticated behaviours, from the invention of '@,' the increasingly common use of numbered tweets to make long points, screenshots, and even the sophisticated understanding of how handles work in embedded tweets:



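Those numbered tweets are themselves a tiny algorithm, and sketching it shows how mechanical these 'sophisticated behaviours' really are. This is a throwaway illustration, not anything Twitter ships: wrap the text to leave room for an 'n/' prefix, then number the chunks.

    import textwrap

    def tweetstorm(text, limit=140):
        # Leave room for a prefix like "12/ " at the start of each tweet.
        room = limit - len("99/ ")
        chunks = textwrap.wrap(text, width=room)
        return [f"{i}/ {chunk}" for i, chunk in enumerate(chunks, start=1)]

    for tweet in tweetstorm("A long point that refuses to fit in one tweet " * 10):
        assert len(tweet) <= 140
        print(tweet)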
Screengrabbing provides its own form of sophisticated manipulation - McNeil mentions folk screengrabbing Snapchat and Tinder, but consider also the invention of 'regrams', a feature still not purposely integrated by Instagram. But she also types about the incredible contextualisation that screengrabs offer, not only in the content being screengrabbed, which in itself reveals so much, but in that thin bar of metatext at the top: Why is this person screengrabbing Daily Mail comments at 4 in the morning with only 3% battery? Oh, they've got wi-fi, they must be at home.

Whole stories can be built out of that thin blue bar. This is almost opposite to what James Bridle was searching for in the disembodied camera - the screengrab tells you so much about a person's proclivities, position, time, state of mind and interests. It is one of our best inward-looking cameras. As McNeil says:
'Like old GoPro footage of an afternoon cycling, these screenshot images bring you back to where you were looking at that minute.'
The sophisticated manipulation of sight, scans and screens provides a space for new narratives, as well as for leveraging power over how these images are interpreted, what makes them absolute, and what that even means. After all, faking screenshots isn't hard.
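How not-hard? A few lines of Python and Pillow will paint a plausible status bar onto any image. The layout, colours and values here are invented for the sake of the example, not any real OS's rendering.

    from PIL import Image, ImageDraw

    def fake_status_bar(path, carrier="giffgaff", time="04:13", battery="3%"):
        img = Image.open(path).convert("RGB")
        draw = ImageDraw.Draw(img)
        # Paint over the top of the image with a fake metatext bar.
        draw.rectangle([0, 0, img.width, 40], fill=(25, 25, 25))
        draw.text((10, 12), f"{carrier}  wi-fi", fill="white")
        draw.text((img.width // 2 - 20, 12), time, fill="white")
        draw.text((img.width - 50, 12), battery, fill="white")
        return img

    fake_status_bar("comments.png").save("4am_screenshot.png")

The whole story told by that thin bar - the 4am insomnia, the dying battery, the home wi-fi - can be written by anyone with ten minutes and an image library.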

I used to play a trick when I worked in a shop where I'd screengrab the entire desktop of the computer and then set it as the desktop background and move the icons off-screen. Hilarious I know, but folk had a comprehension of that desktop, they understood the iconography and how the image translated to action and I understood how easy it was to destroy that rigid comprehension.

(Below are some scans I made when trying to fix my printer the other day. They didn't end up being particularly relevant but they're too damn pretty to just leave out.)



Haunted Machines: An Origin Story (Long)

Way back in September last year I was sat in a residency studio in Helsinki when I read the news that BERG, in its final incarnation as BERG Cloud, was winding up. This was, and judging by conversations that still happen, is, pretty significant. Especially because BERG were one of the first groups of folks to attempt rolling out a commercially viable Internet of Things thing - Little Printer. I've never been a fan of the Little Printer, or 99% of Internet of Things ideas in general, for various reasons, many of which will be expounded upon here, but Little Printer stands as a pretty remarkable device. Anyway, it got me thinking about what happens to all these devices once the thing that supports them - cloud, server, whatever - goes away. To awkwardly and inaccurately throw some Kant around, the object (Little Printer) isn't the 'thing in itself' but only the portal to a network (BERG Cloud). And once it's stripped of that portal capability it is, essentially, just an expensive lump of plastic. In one definition, it is a haunted object/media/machine, full of our memories and perceptions of its past but unable to perform them - we look at it remembering what it used to be.

At the time, in Helsinki, I was putting together a talk for Web Directions 2014 in Australia, which was a month down the road. Walking over the bridge to Aalto University and back over the next few days, this vision of a domestic landscape of 'haunted machines' started to form. Machines whose 'thingness' is hidden from us, either because they're dead or because we're not told enough, so that we become distant from them, assigning meanings to meaningless and impenetrable behaviours. This line of thought became the Haunted Machines keynote I gave at Web Directions and subsequently at KIKK and numerous other places. Later, with my friend and colleague Natalie Kane, it became the ongoing Haunted Machines project and conference, where we expanded the scope to include all sorts of haunted and magical goings-on in technology and design.

So, well, ten months later the video for the talk was finally released and I reckoned, because I used to write up my talks and it always seems an effective way to begin discussion, I would write this one up. I'm not going to go verbatim; this writeup will be different from the original talk (which you should still check out). Almost a year has passed and my interpretation has probably changed somewhat since I originally wrestled the idea together. I guess you could imagine this as... what if I had the same slides and was doing the talk a year later.

(post-writing note: I have edited a handful of things out for flow.)


I'm an artist and designer from London. I generally work and think around an area called 'technopolitics', which is, broadly, the technologies of politics and the politics of technologies. Luckily for me, that encompasses most of human civilisation so I get a pretty wide range of stuff I'm able to talk about. I was going to come here and show some pictures of protests and be all 'oh my god, Facebook.' But today is Halloween, and I'm a huge fan of the horror genre, so I couldn't resist using it as the gimmick that runs through what the current, near-future weird of the connected world is.

I'm not just throwing horror tropes around arbitrarily. It's about more than the visceral thrill of a good ghost story: I love the horror genre because it's about us, about our fears and misunderstandings. Horror stories are one of the tools we use to tackle the unknown, and it seems like we're surrounded by more and more of an unknown, a 'technological sublime', all the time. In such times, horror becomes particularly relevant.


So some time ago, around the time of the Snowden revelations, I started to realise my computer was doing lots of things that weren't apparent on the surface. It was having conversations with other entities and channelling foreign energies well beyond my control. When the motherboard started to pack in I took it to the store, only to find that the folk there could access all sorts of parts of it and do all sorts of processes that I wasn't allowed to.

I realised that my relationship with it was less the kind of smug-and-snug work-buddy aesthetic promulgated by its manufacturers and more an occult seance. My relationship with it is mediated by a ritualistic interface (that is, an interface which is, in itself, constructed of rituals distant from their practical effect), both in the actual GUI itself and with the company that built it and the permitted level of literacy I'm allowed to have. The actions I and others perform around it aren't having any technical effect; the whole thing is a spectacle to distract from the reality. My machine is haunted by the ghost of my perception of it.


The idea of the 'ghost in the machine' is hardly new. This is a pretty cliched image to use, but the Mechanical Turk, more contemporaneously known as just 'The Turk' or 'The Automaton', was a seemingly mechanical chess player that at various times contained some of the world's greatest chess players. Crowds would gather to see the Turk play such luminary opponents as Napoleon, all the while unaware that the Turk was in reality 'piloted' by a human hidden in the cabinet. To this day there are various competing theories on the workings of the Turk, and it's notable that Edgar Allan Poe, one of the horror genre's greats, began his method of 'literary analysis' with a piece deconstructing the workings of the Turk, Maelzel's Chess Player (1836):
No exhibition of the kind has ever elicited so general attention as the Chess-Player of Maelzel. Wherever seen it has been an object of intense curiosity, to all persons who think. Yet the question of its modus operandi is still undetermined. Nothing has been written on this topic which can be considered as decisive--and accordingly we find every where men of mechanical genius, of great general acuteness, and discriminative understanding, who make no scruple in pronouncing the Automaton a pure machine, unconnected with human agency in its movements, and consequently, beyond all comparison, the most astonishing of the inventions of mankind. And such it would undoubtedly be, were they right in their supposition.

The Turk alludes to another cliche, Arthur C. Clarke's third law, which I probably don't need to bring out. There are various ways to interpret this law, but the simplest one is that once technology attains a sufficient level of advancement, such that its workings are impossible for someone to mentally reverse-engineer, it may as well be magic. By this definition, magic is just technology (technique) whose effects are observable and understandable but whose method is unknown or misinterpreted: the technology loses its 'techniqueness.'


We can look to the Pacific cargo cults as an interesting artifact of this disconnect. These were particularly prevalent in the post-war period but are now largely just tourist attractions, as in the one above. During the Second World War, the US Army placed airbases all over the Pacific islands during the campaign against Japan. They shipped supplies and food into these bases to support the forces based in the area. Inevitably, some of these supplies ended up in the hands of the islanders. At the end of the war most of the bases were shut down; the planes stopped coming and with them the supplies.

The islanders, in an attempt to summon back the supplies, began to re-enact and reconstruct the artifacts and rituals they had observed amongst the soldiers, resulting in scenes like this image - native islanders dressed in blue jeans, with 'US' painted on their chests, parading with carved 'rifles.' The islanders didn't understand the concept of a nation-state, let alone one with an army or flight. To them, this was magic, and so they performed the rituals they had observed to achieve the effect of food. The rituals are devoid of the technique that actually brought in the food, but to someone alien to this technology, that is far from clear.


Like I said, most of these cargo cults are just tourist attractions now and it seems somewhat ridiculous to us but look at, for instance, this convincing Photoshop advert from 4chan last year. 4chan managed to convince loads of (mostly) American teenagers that upgrading to the new iOS would make your phone waterproof. YouTube is littered with videos of teens dumping their iPhones in the sink.

They did it again this year, except with an update called 'Wave' allowing you to charge your iPhone in the microwave. Again, with predictable results.

This is funny and kinda stupid, but it speaks volumes about technological literacy in pop culture when folk think a firmware update makes electronics waterproof, or circumvents the popularly held understanding that metal plus microwaves equals explosions. A large part of this is the authenticity of the fake ads. They look convincing, they carry the same visual language - think again of the cargo cults and how it's all about perception and not technique.


So the first section of this talk is called 'Black Boxes and Demon Runes.' For those that don't know, this is a Lemarchand's Box from the Hellraiser series. In the films a hapless and usually greedy Faust-wannabe solves the puzzle box to open a gateway to hell, with... unenviable results. In the spirit of using the horror genre as a running gag, this is a stand-in metaphor for your phone.


In a tech-theory sense, what we're talking about here is the Black Box. Bruno Latour came up with this idea that basically as technology becomes more advanced, it becomes less and less knowable:
...the way scientific and technical work is made invisible by its own success. When a machine runs efficiently, when a matter of fact is settled, one need focus only on its inputs and outputs and not on its internal complexity. Thus, paradoxically, the more science and technology succeed, the more opaque and obscure they become.
So, for instance, you put 'kittens' into Google and you instantly get kittens, yet have very little understanding of how that happened. Nonetheless, Google is the number one source for kittens for a reason; it's very good at delivering on kittens.


But now it's more than just you putting 'kittens' in and getting kittens back. All sorts of other things happen. Google takes that data and uses it to figure out how popular kittens are (not hard). Other organisations request that data and get it sent off, your searches and browser activity might be logged with ISPs, you might have spyware, the data gets fed back into targeted advertising. Basically, the Black Box is so black that you can't actually see that other people are taking part in the process, that there are other forces who do know the workings of the black box, who do have access to it and are using that to their advantage. The Black Box in your pocket is doing all sorts of things without your knowledge, request or explicit permission.


Here's a recent and pretty explicit example: in the Yosemite update, Apple loaded up an opt-out feature that sent everything you typed into Spotlight back to Apple. This really should have been an opt-in feature, and of course Apple covered for it with the Glomar response of the tech world: 'improving consumer experience.' The point is that there is no way you would know about this stuff; it's buried so deep in layers of stuff we as consumers are either not allowed to know about or simply aren't told about. Plausible deniability-chic.


We now have to talk about the fact that there's a very different orientation and set of definitions around possession, ownership and control in the connected world. These three concepts are now very separate and very differently understood and legislated for. The 'modern' world of physically mass-produced objects, where possession was nine-tenths of the law and the three were largely bundled together, is dead - the value of consumers is now in data, not in objects. And that has shifted these principles around.


A classic example, in the instance of Apple and Amazon, is eBooks: you don't actually own an eBook from Amazon. You own a licence to access an eBook. Now, on an everyday basis this isn't such a notable distinction. You can't easily lend it to your friends or peruse it on a bookshelf, but we're kinda used to that now. It only really comes up if, for instance, you go to another country where the book is banned (whereupon it suddenly disappears from your library) or it is stealth-edited by the legal owners. Here you have purchased a licence for an eBook that you don't own and ultimately can't control except within the architecture of the platform you've signed up for. It isn't yours outright; you merely possess it.


The reason why so much of this granular detail is lost on us is that most of it is buried deep in the terms of service and terms and conditions of these services and platforms. This is an example of a bike rental place in Canada that wanted the user to read through 128 pages of terms of service before they were allowed a bike. It's well known that we don't read this stuff, and that's resulted in many amusing anecdotes. Now, this is probably largely a UI problem someone could solve if they took the time, but no-one has. And there's a reason for that. This legalese is where the power dynamics are established, where your position and legal rights as a consumer are laid out relative to the supplier and the legal jurisdiction of the state. Ultimately, this legalese is the language of power and the place where that power establishes its legitimacy. So of course it needs to be protected, even if that protection is via the stage magic of spectacle to distract from the mechanics of the wider sociocultural system - the terms and conditions.


This is one of the classic horror tropes, particularly memorable in the Scream series and regrettably killed off by Caller ID. The victim receives a threatening call from the murderer, but they're calling from inside the house the whole time. And I'm using this to talk about the 'smart home', OK?


Earlier this year a couple of infosec folk managed to replace the firmware on a home printer with the game Doom, over wi-fi. It's just office stationery; so what? But it exposes a massive flaw in the Internet of Things rhetoric: the system upon which we're ostensibly meant to be trusting our lives as biological animals is fundamentally untrustworthy. What if this wasn't just a printer but your car or fridge? Things that could actually be used to kill you. Or something more prosaic: a disagreeable and obnoxious toaster? When I talk about 'haunted machines' that begin to perform inexplicable and impenetrable actions without our control, this is a prefiguration of that.


So we have to revisit that Arthur C. Clarke law. Which, you know, at this stage is over fifty years old and rephrase it for our specific situation here: Any sufficiently advanced hacking is indistinguishable from a haunting. In the same way that many Internet of Things objects are referred to as 'enchanting' or 'magical,' with an intervention, they can very quickly become haunted.


Nest, which is kind of the poster child of this future, has been hacked a couple of times now, with all the good humour of hackers everywhere. What's more significant is that last year marked the first time a Nest was used as a zombie node in a DDoS attack. In the same way that these devices can be used to haunt you, they can be hijacked and used to haunt others without you knowing. Think again of that modified Black Box, connected to all the other black boxes, and none of us (bar the privileged) able to see what they're talking about.


Facebook pulled out the 'improving consumer experience' card when they were caught with their hands in the till earlier this year. Turns out they were conducting psychological experiments on users. Now, again, they got away with this because, medical ethics aside, it was all laid out in the terms and conditions of use. But being gaslit by the social network through which you interpret and visualise the world is always going to come at a human cost. Facebook and others like it are the systems upon which the whole narrative of 'Smart' is based. If Facebook can psychologically experiment on you then don't even doubt that your smart home will; psychological experimentation will be boot-loaded into the operating system.



The Smart Fridge is like the great resounding myth of technology. It's the kind of proving ground: once we have the smart fridge, then we have 'smart.' But this is a device you need to live. At the moment only you can access your fridge. You decide what food to put in it, what food to take out and when. It's your fridge. Think of those eBooks - there's a sheer insanity in relinquishing that control to a company that is going to decide what you eat and how you live based only on alienated data. I'm going to touch on the existential elements of this later, but in our haste to smart everything we're forgetting that, given the choice of people you'd trust to keep you alive, in most cases you'd probably pick yourself first. And that's fine, and works.


The friendly ghost is kinda the flipside of the haunting poltergeist. Think here of Casper or Obi-Wan or Patrick Swayze (who played a very friendly ghost). The friendly ghost is usually that unnoticed, unremarked-upon presence that you only really notice when it's not there, like a silent guardian or helper.


Nest again is a great example of this. It's a truly awesome idea - this device that learns your habits to best optimise your energy use. Good for you, good for the planet. But my problem with it is one of existentialism. Much like the fridge, looking after your home and your environment is really your problem. Nest offsets the existential burden of being human onto an algorithm. Sure, that's probably more efficient than you constantly running back to your thermostat with incremental degree changes but you then lack the crucial awareness of how everything you do impacts the world. You're laying that burden on a machine to make your life easier and guilt free. On some level, if we're going to be better humans then we need to practise being better humans, not invent machines that do it for us. In a purely hauntological interpretation, we are haunting Nest.



Incidentally, Nest just released this 'Works With Nest' promo. They're kind of doing the Internet of Things right in building partnerships and collaborations rather than just doing takeovers, even though they themselves are owned by Google now. Otherwise you just end up stuck in some horrifically proprietary architecture that has full run of your house.

I just like the idea of this house where the smoke alarm goes off and in your bleary-eyed semi-woken state the walls are flashing red with blood.

In April 2014, Microsoft stopped updating Windows XP. No big deal, right? XP is 12 years old. Except actually XP is a real cult favourite. Over those 12 years, Microsoft had actually managed to make XP a stable and light operating system that a lot of Windows aficionados stuck to doggedly.


More importantly, it transpired that Bloomberg Businessweek reckoned 95% of cash machines were running on XP at the time. This resulted in a massive scramble to update the operating systems of cash machines which would suddenly stop receiving security updates. Unsurprisingly, the same thing had happened a few years previously with Windows 95 as well. This is a classic example of where progress and advancement force out something that was actually working quite well, but also where the webs of dependency - technological, legal and financial - go into brief conflict with each other after years of general amenability. This exposes another problem with the connected vision: objects like smart fridges, thermostats or whatever work on two cycles. There's your own life cycle, where you replace white goods maybe every five to ten years, and then there are business models that expect new products, services and obsolescence every one to two years. This web of dependencies is not suited to keeping you alive; it's suited to spectacle and profit. We rely on friendly ghosts that might be snatched away at any time.


HAL 9000 has kind of become the benchmark of how we think about the visual aesthetic of machine intelligence - the sinister glowing eye of the machine represented in the film 2001: A Space Odyssey. (Clearly no-one at Nest had ever actually seen 2001.) 2001 is often pointed at by designers as just an incredibly well-designed film that really considered how these interactions with machines and devices would start to look, and it was amazingly prescient in many ways.


Moon follows 2001 in this tradition. It's really hard to design a machine intelligence without paying respect to HAL 9000, so it's to Duncan Jones' credit that he managed to slightly prefigure the popularity of emojis with Kevin Spacey's voice. GERTY uses emojis because we get so much information from faces, and emojis are specifically designed to communicate quite complex subliminal meanings in a simple symbol, making them a lot more effective than the written or spoken word. This was at the root of some of the sinisterism of the disembodied voice of HAL 9000: it's impossible to interpret the meaning of what HAL is saying without a face to go with it, it's just a voice.


GERTY obviously went on to inspire something of BAXTER, the production line robot. BAXTER is endowed with simple facial expressions in order to make it seem less sinister and more human to other workers on the production line. (Less sinister?) The truth is that BAXTER is still operated by engineers who understand its inner workings and code and understand how it really works. The workers next to it are presented with The Turk - the spectacle of a human-like presence that cheats them of the reality of the machine. Whilst probably fucking terrifying them. BAXTER presents a disparity between reality and reality-as-experienced. I'm heavily reminded here of the Chinese Room problem - how do you get two groups, with fundamentally different languages, to talk to each other? At some stage there has to be a concession, and so far the machines have made the most concessions, building a picture of machine interaction which is predicated on a constructed facade of communication.


Paro is another example of this disparity. Paro was developed in response to Japan's ageing crisis. For years, Japanese robotics companies worked on the rich market potential of care robots for elderly people and in the process created a whole host of appalling monstrosities. The only one to achieve real success was Paro.

Paro serves no practical function except one of companionship. It purrs when stroked, vibrates, warms up, wiggles around and basically simulates the effect of being a real seal-pup. This has remarkably therapeutic and calming effects on the people with whom it interacts but it is, essentially, an algorithm in a seal suit. Think back to Nest: what happened that, instead of encouraging greater human interaction or even working with real animals, we invented a robot to bring emotional joy to others? There are all sorts of circumstances where this is a good idea and, like I said, it does have positive effects, but we're essentially lying to old people. A lot of the value of the experience is based on the assumption that it's two-way, that this seal genuinely enjoys your company. It doesn't; it's an assemblage of sensors programmed to respond in a certain way - a disparity between reality and reality-as-experienced in a cute furry object - a constructed fantasy.


Our robots are starting to come in all sorts of exciting forms now, far beyond that image of the 'robot' we might find in the classic literature. Horse eBooks was this incredible bot that scraped eBooks about horses and just blurted them out on Twitter. If the Internet did poetry, this would be it. It was, and is, a hugely important cultural artifact.

Except that it wasn't a bot. It was a guy from Buzzfeed. When Horse eBooks stopped tweeting, he did a load of interviews and people were pretty annoyed and somewhat bemused. People loved this thing. But it goes to show you how the interface can deceive. How this twisted, mutilated language presented by something claiming to be a bot convinced us that this was the Internet talking to us, not Jacob from Buzzfeed.

There's something about the fidelity of how the machines talk to us here as well, whether we assign faces or interpret machine babble from carefully constructed click-bait. We bundle in expectations and meanings in order to make that concession to mutual incomprehension work.


It's this ease of deception and unaccountability that forms the basis of JTRIG's Gambits for Deception - a guidebook for spies on how to deceive and emotionally manipulate people online. Think back to our haunted machines: just as these platforms and services can be used to bring you great joy and happiness, so too can they be used to manipulate and destroy you, as has happened uncountable times at the hands of individuals, and probably states and corporations. The distance that the web allows us doesn't just make us all dogs on the Internet; it makes us all both the gaslit and potential gaslighters.



This is the fascinating story of the robot cold-call service. Essentially, this cold-call line appears to use voice recognition software to generate responses to the people it speaks to. Except it's only got a limited number of pre-recorded responses, so it's quite easy to force it to loop and repeat, as shown in the clip.

But. But the reality is, it's not a robot call centre. It is in fact a real person sat behind a keyboard with a set number of pre-recorded responses they can trigger based on what they hear you saying. You're talking to a real person through the mediation of a limiting machine. Think back to Paro: it's pretty sad that we're at a stage where machine-mediated human contact is preferable to just talking to each other, even if it's for something pithy and annoying like cold-calls. And, again, it's indicative of our cultural assumptions that we hear these looping responses and assume 'machine!', or see mindless twittering about horses and think 'machine!', whilst celebrating one and condemning the other. We're haunted by these assumptions into twisted and deceptive relationships.
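As a sketch of how little 'robot' there is in the robot: the whole thing can be modelled as a soundboard, a human ear and a handful of buttons. The clips and prompts below are invented for illustration.

```python
# A sketch of the cold-call 'robot' as it really seems to work: a
# soundboard driven by a human listener. Clip text is invented.
SOUNDBOARD = {
    "1": "Hi, this is Samantha, can you hear me okay?",
    "2": "Yes, I am a real person, ha ha!",
    "3": "We're offering a great deal on holiday packages today.",
    "4": "I'm sorry, could you repeat that?",
}

# Three turns of the call: the human operator hears you, picks a clip,
# and the machine does the talking. Pick '4' repeatedly and you get
# exactly the looping that convinces callers they've caught a machine.
for turn in range(3):
    key = input("Operator, press a key (1-4): ").strip()
    print("PLAYING:", SOUNDBOARD.get(key, SOUNDBOARD["4"]))
```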


Not really a horror trope so much as a Queen lyric, but I've been talking a lot about the confusion of realities and I just kind of wanted to bang on about it a bit more because it's important.


I found out that 26% of Ikea catalogue images are renders. This is hardly surprising - CAD, CGI and simulation technology is now so good that it's far cheaper to hire some folk to knock up a controlled environment on screen, from which you can make your perfect image, than to hire studios, get a crew together and take the time to do a shoot. But I find it a weird disparity that, more and more, the world is filled with renders that we're meant to aspire to, with imagined realities that were never real in the first place - with second-order simulacra.


So I want to revisit Clarke again and say that any sufficiently advanced render is indistinguishable from reality. In other words, any VR, 3D render or CGI effect done to a sufficiently high degree that the seams become invisible is indistinguishable from stuff that actually happened. This becomes particularly important when these renders are perceived as realities in situations where that distinction is vital to the wider context.


Here's an example. This was one of the earliest and most prevalent images of a drone, pasted everywhere when the mainstream media began to talk about drone warfare. Search Google Images and it still comes up pretty high in the ranking. But after a bit of investigation, the artist James Bridle discovered this wasn't a drone at all but a rendering of a drone. It wasn't done for any sinister reason - there was no deception intended; the creator was simply an avid 3D artist. But on the Internet, devoid of context and distinction, it became a real drone for the purposes of images of drones. It became an objective reality of a thing that never happened - a drone that never existed, never flew, never went over those Afghan mountains.

When we're talking about something as contentious as drone warfare, the use of unrealities to talk about potential truths is problematic. This image was never presented as legal evidence (as far as I know) but it serves to construct a reality in the minds of those who encounter it which, even if only in a slight way, is false.


I'm not entirely sure why this is relevant but it's too good not to talk about. This is the amazing subculture of folk who construct computers in Minecraft. Minecraft is such a thing of the network, all based on voxels that do simple things according to dependable rules, with very little in the way of variation or randomness. This is a 16-bit computer with a 128-byte hard drive. These things advance at something like the speed of Moore's law as well, and while looking at this (a lot) I came across a load of folk who want to build a computer good enough to play Minecraft, in Minecraft.

The other thing about these is how beautiful they are. If the Internet had a physical landscape, it would be Minecraft computers.
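The dependable rule doing all the work here is that a redstone torch inverts its input, which gives you a NOR gate - and NOR is functionally complete, so everything up to a 16-bit CPU follows from it. A sketch of that logic, minus the several million blocks:

```python
# Everything below is derived from a single primitive, just as in-game
# circuits derive everything from the inverting redstone torch.
def NOR(a: bool, b: bool) -> bool:
    return not (a or b)

def NOT(a):    return NOR(a, a)
def OR(a, b):  return NOT(NOR(a, b))
def AND(a, b): return NOR(NOT(a), NOT(b))
def XOR(a, b): return AND(OR(a, b), NOT(AND(a, b)))

def half_adder(a, b):
    """Add two bits: the first building block of any in-game CPU."""
    return XOR(a, b), AND(a, b)  # (sum, carry)

for a in (False, True):
    for b in (False, True):
        print(int(a), "+", int(b), "=", tuple(map(int, half_adder(a, b))))
```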


We often underestimate how realities-as-imagined start to shape realities-as-experienced. A good example of this is the entanglement between pop culture - particularly science fiction - and technology. Minority Report is a prime instance. Like 2001 and Moon, it had a great team of designers who really thought about what kind of technologies and interfaces might populate that world and how they would work. Despite the fact that Tom Cruise got pretty significant injuries from using his hand-wavey interface thing, it inspired a whole generation of UI designers to make it a reality. (If you build it, they will come.)


And it went on to inspire over a decade of tech journalism: ten years of CES fairs full of almost-Minority Report technology. (Not to mention the huge military-industrial investment in pre-crime software.) This imagined reality was constructed almost 15 years ago and yet technological innovation is still heading toward it. These visions have a powerful hold over those who go on to shape the world. When a reality-as-imagined is so good, so convincing, so goddamn tasty, people will try and build it.


And this complex entanglement between realities is the reason why when I see Google's Street View camel dutifully documenting the Abu Dhabi desert...


...I can only wonder how much of the inspiration for it came from Dino-Riders. We all grew up on the same stuff, and form has a tendency to repeat itself.


So how do we start to challenge these ghosts and visions and hauntings? How can we start to expose and understand the inner workings of the machine? To see past the spectacle and the magic and into the gears of the automaton?


Well, language is pretty important. I talked earlier about how jealously guarded the legalese of terms of service is. That's where the power happens. But even an understanding of how words are used to obfuscate opens us up to a critical view of the systems at play. The 'Cloud', for instance - this mythical haven of data - is actually a shit-ton of massive server centres using 2-4% of the world's energy, all heavily guarded by private armies and propped up by a surveillance state. Question your ownership of the cloud: it's limited, and you give away a lot when you trust a company which has a vastly different plan for your data than you do.

Some folk won't even use the term 'Internet' any more. It harks back to the image of a global village of connected horizontal democracy that the early hippy pioneers dreamt of. The series of increasingly expensive walled gardens where we're spied on isn't 'Internet', it's 'Shitnet.'


We can begin to grasp the physicality of this stuff. This is my website; it's in a building in San Antonio. All my stuff is in there, being looked after by people I've never met and don't know. A friend of mine calls it my horcrux.
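You can go looking for your own horcrux, too. A minimal sketch: resolve your domain to an address, then hand it to the standard whois tool to see whose building it lives in. 'example.com' is a stand-in for your own domain, and whois(1) is assumed to be installed on your machine.

```python
import socket
import subprocess

# Resolve the domain to the address of the actual machine serving it.
ip = socket.gethostbyname("example.com")
print("My website lives at", ip)

# whois reports the network's registered owner, which usually narrows
# the horcrux down to a hosting company and a city.
subprocess.run(["whois", ip])
```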


And it's only one part of the vast piece of giga-infrastructure that constitutes the Internet. There's no magic here, just a lot of engineering.


And that super-structure leaves its scars. Vast open-pit mines like this one. The stuff that comes out of here - the copper and the gold - goes into your phones and laptops. It leaves huge physical scars on the surface of the planet which are directly connected to the very digital things you build...




...and I'm tempted to say it's worth it. The very stuff of the network - a cat, on a Roomba, in a shark costume, chasing a duck, uploaded to YouTube and watched millions of times. All that from an open-pit mine.


A lot of the stuff I've talked about here has been sinister, dark, alarming and sometimes downright scary. The Internet is scary; a lot of the things happening on it are terrifying. But there are no ghosts. We are them. We are the ghosts. It's just that we're so separated by fallen curtains, interfaces, literacy and distance that when we see the curtain twitch or something slither out of the corner of our eye, we call it a ghost. It's how we've dealt with the unknown for 20,000 years. The Internet is just another ghost story so far. But behind every interface, every emoji, every robot is just another person.

I'm also deeply aware that I come to a conference of web devotees as a jaded cynic. Rest assured, I love the Internet, and some of the stuff here has been completely amazing and awe-inspiring. But I must leave you with one very dark thought before we all start drinking. I am sorry. But: you control the perception of objective reality.

You control the perception of objective reality. It's a meaty and difficult one: the people who visit your apps, your websites, your services and platforms construct their understanding of the world through them. And so you have to figure out where you sit on the sliding scale between making magic - a fun spectacle that might bring people joy but also, potentially, horror - and making sense, which might not be as enchanting but is, in some ways, more real.

Happy Halloween.