I recently came across Alan Warburton's new short video essay Spectacle, Speculation, Spam. It's a pretty comprehensive and understandable breakdown of how software itself is a unifying point of theory and practice: using the tool as the means of production invites you to theorise on what it means to be using that tool in a critical way. It's worth watching because there's a lot of nuance in the argument, and it's something I spend a lot of time thinking about in my practice. I'm kind of into opening up and talking about the tools used in art and design production, not in an open-source way but because I think those tools have an interesting relationship of shaping, and being shaped by, their application. The project I'm working on at the moment is a simulation of a simulation of a machine inside a gallery. I made the conscious decision to very visibly reveal what it is to the audience - none of the workings will be hidden, nor will the fact that at the highest level it's essentially just a simulation running on Unity and not a 'real' machine. This is a way of opening up the layers of production to the audience, not presenting a spectacle (though I hope it will be spectacular) but producing a kind of Pompidou Centre effect.
Art-Labour
Warburton also raises the vexed problem of labour. Though this new project is built in Unity, I didn't build it. I've dabbled in Unity but am nowhere near experienced enough to build what I envisioned to the degree of production I'd be happy with, so I've employed someone else and taken a more directorial role. This is a first for me; I've always done 95% of the labour on my projects, barring extremely specialist services like industrial 3D printing or cases where assembling a team is necessary, like film-making. I was faced with the problem of wanting to do something new and ambitious that was outside my skill set while not having the time to actually learn those new skills, so I had to bite the bullet and bring someone in.
Rather than a one-way relationship, the result has been a lot of interesting critical conversations between myself and the Unity developer, arising from our different understandings of how software packages work. My 3D expertise is in Blender (where I would say I'm at a high level of skill) while his is in Unity. We've ended up spending a lot of time discussing the different ways these packages work out things like physics, particles and even basics like colours. It's exactly these kinds of base-level operating structures that I find great points of critical enquiry when working with software: why does Unity render particles one way and Blender another? I'm not going to go into a big software comparison, this isn't that type of blog, but it leads to interesting questions about who these packages are for, why they were designed that way, and by whom. They're both free, but in different ways; they both offer similar functions, but for different outcomes, and so on.
Another interesting point on the problem of art-labour is that I'm working with a developer normally used to games. Games need to be made quick and dirty but look slick and polished. They go straight to users' phones and devices, so inconsistencies and bugs need to be ironed out. In my work I'm trying to bring out exactly these software flaws and allow the audience to see the simulation fail and break, which I think creates a cognitive dissonance in the employment relationship, reminding me of Jeremy Hutchinson's Err. project: 'I want you to make something for me but it's fine if it doesn't work.' I've been inviting the developer to leave unintentional flaws in the simulation setup; if it gets stuck, or glitches and resets early, that tells us much more about these technologies than faking it would.
Protorenderer
I've never been able to draw (everybody says that, and everyone else says that everyone else can draw). I suppose what I mean is: I was always frustrated that my drawing couldn't properly represent my ideas at any more than the most basic, diagrammatic level. When I was at the RCA I decided to play around with 3D printing, and a classmate showed me Blender as a way to quickly knock up a model. After that I started using it to make my own models of things, not as renders in themselves but as ways to think about objects and their physicality. I enjoyed rotating around them, zooming in, catching different angles and trying to get the thing on screen to fit with what I imagined. This new project, before even hitting Unity, had me building a dozen iterations of the setup and its functional behaviour in a model of the exhibition space (see above) as a way to think about audience impressions and experience, as well as to work out very technical constraints like projector throws.
My first render from 88.7 Stories From The First Transnational Traders - 2011-ish?
This is less a product-design process than a cinematic one. I've always thought about things cinematically and have previously written somewhere or other about how cinematic visuals slip so easily into popular culture, making them a powerful vehicle for designers. The purpose of rendering for me is to get an impression of the thing with one order of reality removed, as in cinema. The shot above is probably the first render I ever did. It's terrible. But at the time I was overjoyed. It contains millions of faces and only about three materials. I teach students Blender, and I think one of the hardest conceptual jumps to make is that you're not trying to model reality - you're faking it. Game engines use this technique all the time, with things like clipping distance and object cutoffs reducing the processing spent on things that are far away. You can't tell from this image, but in that glass room at the back are dozens of hand-modelled office chairs that no one can ever see but which suck up valuable processing time. I had yet to learn my own lessons.
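The clipping trick is simple enough to sketch in a few lines. This is a minimal illustration of the idea, not any engine's actual code: the function names, the scene data and the 100-unit clip distance are all my own illustrative assumptions.

```python
import math

def visible_objects(camera_pos, objects, clip_distance=100.0):
    """Return only the objects near enough to be worth rendering.

    Anything beyond clip_distance is culled: it simply never reaches
    the renderer, so it costs no processing time at all.
    """
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return [o for o in objects if dist(camera_pos, o["pos"]) <= clip_distance]

# A toy scene: one object near the camera, one far beyond the clip plane.
scene = [
    {"name": "machine", "pos": (5, 0, 0)},        # near: gets rendered
    {"name": "office_chair", "pos": (500, 0, 0)}, # far: culled entirely
]
print([o["name"] for o in visible_objects((0, 0, 0), scene)])
# prints ['machine'] - the distant chair is never drawn
```

My hidden office chairs were the opposite case: geometry the renderer faithfully processed even though no camera angle could ever reveal it.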
So rendering became a method of prototyping. Most of my projects now involve Blender at some stage of their production; some involve it for their entirety. The remake of 88.7 was an entirely rendered 28-minute film and performance. With that project I didn't want to do something that 'represented' the fiction I'd come up with but something that augmented it, offering a more interpretive eye into the characters whose stories I was reading. 'The Manager' is all gold and clockwork, 'The Engineer' a ghostly schematic-like first-person viewpoint, 'The Trader' has a sea of data, and for the scientist we only ever see flickering fluorescent lights as she gazes at the ceiling wondering about her situation.
Mephitic Air, made with Wesley Goatley, who's also doing the sound for the new project (another skill and theory set I haven't time to become expert in), is just over 30 minutes of rendered pollution data: taking something that is classically 'immaterial' (in popular discourse, not reality) and giving it an illusion of materiality that is totally simulated. This influenced our decision to project on thin dust sheets rather than hard projector screens. Walking around it you could never fully grasp the shape of what was happening, just a vague sense of motion and change with the occasional explosion of visuals and sound. The materials are close representations of the substances we were aestheticising at room temperature: things never normally seen in isolation because they're too small, volatile and diluted, suddenly made hard and irreal.
-
It's also a little bit about comfort. Through my practice I've found the tautology that 'you end up doing what you do' really applies. I once spent a year writing, and all I got was people asking me to do writing. I now find rendering so comfortable that I seem to turn to it whenever anything new comes along. That's fine for prototyping, but part of this new project is pushing it aside when it comes to production: instead of producing a render, which can only ever live on screen or paper, producing an 'artwork' in the fullest sense - made with the optimum tools for the ideas, so that it's less about the methods of production and what they represent than about what the thing itself is. I think my attachment to rendering would otherwise make that impossible. The benefit of unifying practice and theory through the critical use of software (or whatever tools) only goes so far before the tools start to obfuscate the ideas you're trying to talk about in the first place - yet another difficult balance to strike.
I'll probably write and think more about why I render as time goes on; I've already done endless little bits of writing on it. It's a hard skill - I wouldn't say there's a rendering package, apart from maybe SketchUp, that you could pick up and get running with in a day - and there are some really interesting conceptual barriers to get around when working with 3D on a 2D screen, which lead to difficulties that are more than just learning functionality.