Tuesday, 8 July 2014

Punk Rock and Fighting


Patrick Stickles, frontman of Titus Andronicus, did a great podcast interview in January with the comedian Marc Maron (whose WTF podcast is well worth listening to in general; I'm currently catching up). They cover a lot about music over an hour or so, about rock and roll as a dying art form and how that might be a good thing if it wants to remain any kind of music of protest. And there's a moment around 47 minutes in that just jumped out, where Stickles talks about becoming a punk: being 9 years old and his sister bringing home Green Day's album Dookie (the first record I ever bought too), and within 2 days she's got green hair and she's started this war with her parents that Stickles just wished he was fighting himself. And it wasn't a war she could win, she was fighting against the very state of the world, of all that was expected; "she might as well have been waging a war with God as far as I was concerned." And then this beautiful line: "your parents' authority is absolute, but my sister found this CD which somehow gave her the strength to take them on."

And I know that seems like a trivial fight by trivial means, but that first fight, that acknowledgement that things could be other than they are and that you could resist, and that art, not that it could bring about change, but that it could make you able to take on the fight, or a fight, that realisation I remember so profoundly, but I don't think I've ever quite heard it articulated. It's a cool moment.

Sunday, 6 July 2014

What Was It I Was Writing About Again...?


While we're on the subject of book writing, I've been trying to put together some (ugh...) "mission statements," paragraphs for me to try and conceptualise what it is that I'm attempting to do with the work, central theses, and paragraphs that might, in some form, make their way into the introduction or conclusion. It's a kind of eidetic reduction: when I strip everything down what is this book trying to accomplish, what is its essence?

It's something that I try and do with conference papers and articles too, but it's been essential for longer work where I can't keep every facet of the argument in mind all at once. When I'm drowning in notes and ideas and diversions I can come back to these two or three paragraphs and think "does what I'm getting sidetracked by right now actually help with what I'm trying to do?"

I have no idea if it's of use to anyone (maybe you don't need such crutches, maybe you can remember your damn argument!), but I thought I'd offer up a couple that have been on my mind this morning. I'm trying to explain to myself why I'm focussing on individual technologies and individual users rather than exploring the webs of "technical systems."

1. Those objects which represent catastrophic global risk (e.g. (and arguably) nuclear reactors, nano- and biotechnology), that are unpredictable, that no human can feel a sense of mastery with and through, are a new type of thing that has nothing to do with the history of technology and its impact on human experience. They need to be theorised and understood, of course, urgently. But when we put mobile phones and e-readers in the same bracket, when it's all just “complex modern technologies,” we make a profound error. We neglect the continued lineage of expertise with mundane and impactful devices simply because what’s inside the box is more complicated than it has been before – the outside of the technology that we actually encounter and use is no more challenging than a bicycle or a spear or a butcher’s knife or a violin. It's a strange hubris to suggest that when you microwave a meal you are engaged in something more complex than a concert performance that is 25 or more years of training in the making; that when you struggle to set the clock on your DVD player you rightly wish for the simplicity of older or more “primitive” societies where you simply had to craft and deploy the tools of the hunt and the butchery of its outcome.

2. We need a name for the objects encountered in the uniquely intimate and powerful fashion that we ascribe to our expert use, and a name which describes those objects outside of the complex systems which bring them into being and in which they sit. I.e. when we say "technological system" we shouldn't then lose the term "technology" to describe the thing we encounter. Phenomenologically, I encounter an object as a special thing outside of its systems even as, philosophically, I realise the importance of those myriad networks. The history and philosophy of technology has us covered in terms of technical systems – my project is to talk about the individual things themselves, how they affect us, how we affect them, how the ways in which we intermingle with one another need to be analysed at the individual event of use as well as the society-wide deployment. By better understanding individual use I believe that we set a better stage for understanding an object’s political implications and the considerations we may need to make when looking at the boundaries of technical systems. In some ways I’m discomfited by this seemingly rampant individualism, but I hope that it can make me, and the reader, more sensitive to the origins of vital collective and intersubjective political concerns. That it’s not my project to analyse them here should not be read as my refutation of the significance and importance of networks.

Saturday, 5 July 2014

Hyperlinks as Punctuation and Possibility


I’m finishing up the book I have to deliver to Palgrave in September (I’ll post a more thorough breakdown soon, but it’s about what technology is in the popular imagination; by definition; in phenomenological experience; and as an embodiment of knowledge. It uses e-reading, and people’s resistance to e-reading devices, as a case study for the discussion of how we skilfully use equipment as cognisers who spread our cognition over brain, body, tools, and environment). The book is based, in part, on my thesis (LINK) which explored the resistance to the then-new e-readers (mostly the early Kindle and iPad models). I found the following in my notes and it probably won’t make it into the final draft, but I still like the general idea: hyperlinks are a kind of punctuation and they do things to the way that we conceive of words on the page.

If hyperlinking is going to be a part of e-reading from the start, let's return to it, and to what it might mean. A hyperlink, most typically represented as an underlined blue word, when clicked takes the reader from the page that they are on to somewhere else, known or unknown. The author of the document sets the hyperlink marker, which word or image is clickable, and they set the destination; the reader chooses whether or not they are going to click the link. This doesn't mean, however, that an unclicked link has no meaning. Steven Johnson describes hyperlinks as an entirely new linguistic element, “the first significant form of punctuation to emerge in centuries” (Everything Bad Is Good For You 111), and this is an apt description; hyperlinks do not change the words themselves, at the level of letters, but instead augment and alter their meaning and capacity to mean. In early writing systems pictographic script represented spoken words; the spoken “bird,” in the simplest pictogram, would have a representational or symbolic parallel with the image of a bird. A text was accurate if the interpretations of each image matched some value of what the author intended. A chirographic or typographic written word is different: it is more precise, and part of its ability to better capture specific meaning comes from its representing, or coming to represent, a spoken word inscribed many times with its own history and context. For instance,


[l]inguists classify English as a morphophonemic writing system because it represents both morphemes (units of meaning) and phonemes (units of sound) in its spelling...[T]he linguists Noam Chomsky and Carol Chomsky use words like ‘muscle’ to teach the way our words carry an entire history within them …For example, the silent ‘c’ in ‘muscle’ may seem unnecessary, but in fact it visibly connects the word to its origin, the Latin root musculus, from which we have such kindred words as ‘muscular’ and ‘musculature.’ In the latter words the ‘c’ is pronounced and represents the phonemic aspect of our alphabet. The silent ‘c’ of ‘muscle,’ therefore, visually conveys the morpheme aspect of English. In essence, English represents a 'trade-off' between depicting the individual sounds of the oral language and showing the roots of its words (Maryanne Wolf, Proust and the Squid 42-43).


To look at a pictogram of a muscle, it would always mean the concept of “muscle”; whatever the culture dictated that concept to be, the image would always suggest to the reader their current interpretation of that conventional concept. But if we look at the word “muscle”, with its silent “c,” then we get the full morphophonemic character of English coming to the fore: the Latin root, with its pronounced “c,” hides within, a conceptual trace, a history more or less known, and more or less affective to the reader. But now paint that word blue and underline it, put it on a screen and it becomes imbued with possibility. This contraption now means the interpreted cultural concept of the spoken or inscribed “muscle,” like the pictogram; it contains “musculus” and a history of use, like the inscribed word; but it also reminds us, without our even clicking it, in fact without, now, it even being a hyperlink, of everywhere it might take us: anatomical diagrams, bodybuilding, bodyguards, seafood even, or somewhere we have yet to learn. Hyperlinks bring a personal aspect to every underlined word, of choices made to access (or not) a unique link or combination. They are hypermorphophonemic: conceptual, historical, possible.

But if every e-reading space is tied to hyperlink-inflected reading then suddenly any particular word need not even be a visible link, instead every word carries this new weight. Hyperlinks exist to remind us that we can head out into other texts, out into the world, that where we are is not the final say, and that the boundary lines we have revered in print are blurred at best, and potentially inconsequential. In the webs of text online, hyperlinks chart an authored path, whilst simultaneously reminding us that with Google only ever a few clicks away we can always break out from the document we’re reading to wash ourselves in information whose connections are of a much more arbitrary and idiosyncratic variety. That promise of hyperlinks now exists in all digital texts, whether they appear online or not, and this weaves a gentle magic, existing as a fundamental, conscious or unconscious breakdown of the privileging of the boundaries set by the author or typesetter, and the immutability of bound paper text.

Thursday, 8 May 2014

Making IT Beautiful - TEDxExeter Video and Transcript


I haven't blogged here in way too long. This has been the busiest six months of my life, I think, and something had to give; unfortunately it was posting here (normal service will resume, I think: I've got some ideas for how I can put things up more regularly again). The good news though is that the hiatus should be paying off: as it stands I have three book chapters coming out in edited collections over the next few months, I'm editing a couple of collections myself, and my first monograph is moving into the final stages of drafting. I also managed to keep on doing my teaching job (which sadly comes to a close in September) and I'm pursuing the next stage of my career (hopefully a little more research focussed while keeping up plenty of the teaching that I love). It's been an exciting time.

In amongst this I managed to do a TEDx talk in Exeter, which was a terrifying and wonderful experience. I've never had to memorise anything before, and I may never attempt it again to be honest: I like having notes! But it was great to get a taste of delivering a performance. Below are the video and a transcript. The experience of seeing myself from multiple angles giving a talk that I can now barely remember (adrenaline is a weird thing) is profoundly strange, but it does look like a TED talk, a genre all of its own. That might be what's strangest of all, like suddenly finding myself in a soap opera or on the pages of a novel.



A Kind of Progress: How Boring Technologies Change Our Minds

So I’m sitting near the front of a bus. It’s pretty packed, and every seat is taken. We get to the next stop, and another big group get on, and I give my seat to a youngish mum and a baby. So now I’m standing up in a scrum of people, and the baby, it looks only a couple of months old, but it already knows how to be angry, it’s an angry angry baby, deep down, and it looks up at me, and it decides that I’ve somehow wronged it in some really fundamental way because it starts to just bawl while looking me right in the eye. And the mother, she looks down at her infant child, and I’m guessing she really loves this kid, and she follows its eyeline, and she tracks up and she looks at me, who’d just given up his seat, and she glares and looks away, out the window, like her child’s... integral fury was somehow my fault. So this bus journey has just gone from kind of uncomfortable to just awful, for everyone on board. And that’s when this group of teenagers in a huddle next to me start to play music on their phones.

There’s 5 of them, they’re probably about 14 years old, old enough to have stopped caring what adults think, but not old enough to have decent taste in music, and you can actually hear the passengers on this bus collectively sigh, because it’s not a long journey, and no one’s going to cause enough of a fuss to push through the crowd and tell them to stop. But, in this crappy public transport moment, while the older couple behind me start to grumble about how rude teenagers are today, I’m no longer frustrated, no longer feeling guilty for trying to stare down an overly aggressive baby, because one of these teenagers and her friend start doing this mundane thing, they start putting on their makeup. But it seems weird because I think that one of them has this big mirror that she’s pulled out of her bag, and then I realise that it’s an oldish iPad, with the front camera turned on, and it’s being used just like a mirror.

Its case was covered in stickers and the Tipp-Exed names of various bands, but what was so striking was just how normal this advanced plastic and glass thing had become. It was a trivial moment, but it was also one of those oddly vertiginous moments of modern life where you realise that experience has changed in some small but significant way. A 15-minute journey and I saw this thing used as a mirror; a stereo; to send an email; to look something up; to play a game. Every time I glanced over it was being put to some new use; it had been built, for this girl, into the practice of being a teenager. In the same way that that baby was angry, right at its core, this girl was switched on.

It’s simple to say that we live in a moment of profound and rapid technological progress. This has become a truism, a less than interesting fact about our world. We're told that we don't have a choice about progress, that we can be either for it or against it, and TED speakers, and attendees, for the most part, are meant to be for the advancement of our material culture, of the things, the objects that we surround ourselves with. “Progress” is a hard thing to define though.  What does it mean to say that things are getting better? The American essayist and poet Ralph Waldo Emerson wrote in the 1860s
“Tis too plain that with the material power the moral progress has not kept pace. It appears that we have not made a judicious investment.”
I think that I agree; I have little faith in an idea of progress that doesn't include us getting better too, as a species, and as an increasingly globally coherent group, but that's another, much longer discussion. The science fiction writer Neal Stephenson, however, makes the simpler, but related point in a talk called “On Getting Big Stuff Done,” where he notes that:
In the first 70 years of the 20th century, we went from not believing that heavier than air flight was possible to walking on the moon.
Stephenson argues that if you took an American from 1900, put them in a time machine, and sent them to 1968, then when they went home again they wouldn't really even have the vocabulary to say what they had seen. But if we took someone from 1968 and sent them to now then their articulation of the change would be far easier. Stephenson says that in that case they would likely think that the internet was cool, and typewriters had become computers, but what had happened to supersonic air travel, what had happened to the national investment in space exploration? And he also has this fictional time traveller say things on their return which are haunting:
“Diseases that we [in 1968] can easily treat with antibiotics have become intractable and are making a comeback. And even diseases that can easily be snuffed out by vaccines are coming back, simply because parents aren't getting their kids vaccinated because they don't believe in science anymore.”
I guess my point is simply that "things are getting better faster" is a more complicated idea than we often give it credit for, and dramatically so if we consider the state of the entire world.
But there is a way that rapid change does occur through technology, and most often for the better I think. And that’s when a technology becomes boring. Cars will probably always be too exciting; we get drawn to them, we can’t really talk about them, everyone wants one regardless of how much they mess things up. We’re just not going back to horses, and, in the UK at least, we’re doing a really sterling job of trying to give up on trains too. But tablets and e-readers and smartphones are maybe starting to become just the right kind of dull.

When you’ve used them for a while you can make them do a few things really well and reliably, and you can lower the cost, and you can start to make sure that everyone has one; and only then do we start to find out their potential. Radio was like this, and TV, and telephones, and books, and photography. It was when each of them became kind of boring that they became really really powerful, and we never stopped being able to have conversations about them, about what they are and what they can be. They are not aspirational totems; they take on the same reliability and rhythms as sunrises and tap water.

What tablets and e-readers and smartphones do, what they are doing, like the great boring technologies before them, is starting conversations about how we work; how we relax; how we learn; and how we view the world, and the worst possible thing to do would be to shut down these conversations and to not see where they go. Which is why every time I see articles about the death of reading, and videogames destroying our kids’ minds, and why can’t we all just curl up with War and Peace rather than investigate the potentials of virtual reality, then I want to scream a little, because… give it a chance. We’re still all learning what TV can be, and ubiquitous high quality photography; we have no idea, for instance, what carrying a small powerful computer around with us at all times might yet do.

We should be exploring great new devices and responsibly seeing what they can offer, not damning innovation based on the strength of our old mythologies. By “old mythologies” I mean the ways in which we develop a way of looking at things that ends up making them seem to have always been true, so that when they get changed, even slightly, we experience this profound sense of wrongness. But this shows how much we care.

We place a lot of weight upon our objects and endow them with a life of their own. It's obvious that anyone who truly loves books, for instance, knows that they are much more than words on a bundle of pages. But they're not, of course: the bundle is exactly what they are; we just bring something else, something better, do our best to attach it, and, with practice, do. We make things special. Physical books allow us to play with paper and bring it to life: half-turning pages so that they pass by quicker; running a nail under an important line; dog-earing corners; doodling and making notes; mourning and then relishing the bangs and bumps and creases of the cover as they accumulate. It's hard to pinpoint the psychological effects of all these little things beyond a broad notion of adding importance, but that people mourn their loss suggests the pleasures, and maybe the necessity, of physical interactions in daily life. But if this is the case then the outlook for things like e-readers and tablets is actually very hopeful: rather than these being mass-produced lumps of plastic and glass, so homogenous and so featureless that we cannot possibly fall in love with them, users will, instead, always work to adapt new human practices. This is part of our relationship with mundane objects: that in our bid to find the boundaries of the things that we use every day we also give something of ourselves back to them.

I think that some commentators doubt that the users of new technologies will find a way to place importance onto their digital things in the same way that they have with older and seemingly more sensuous technologies like print. But, to my mind, that’s what being human is all about: making things special, making things more than just things. We should continue, and in some instances start, long-overdue conversations about vital issues such as conflict minerals and the types of social, political, and environmental impact that occur whenever a technology becomes essential. But we’re also allowed to marvel at how adaptable we are as a species, at what power we can wield when we become experts with the very items that the last generation said would threaten to destroy us.

Every stickered laptop; every annotated electronic text; every emoticon-ed instant message; every nail-varnished mobile; every comedy home movie; every tagged photo; every lovingly curated blog is testament to the fact that people have, once again, worked with these things until they are beautiful. We are building the history for our digital devices that on a long enough timescale will imbue screens with the same richness as paper pages. They traverse the same path: we make the objects, or cause them to be made; we use them; we establish what makes them work; and they get made again; and we become one with them; and we make them sing.

There is a profound bravery to letting the next generation try something new. We will always find ourselves in a state of consternation because one thing that’s always true of young people is that they will insist on the experiment, but the question always then becomes: Do we have the guts to allow them to explore what scares us, let alone to support them, let alone to follow them?

Sunday, 15 September 2013

When Technology Melts Away


Here's a copy of the talk I gave at this year's Marginalised Mainstreams conference at Senate House. I'm working through some ideas about transhumanism and our attitudes towards objects. Sorry for the weird layout: these are the notes I use as I talk, so the paragraphs are pretty short to help me keep my place!



When Technology Melts Away:
The Representation of Friction-Free Tools and the New Human Aspiration

Today, I want to talk about how popular culture affects our reception of new technologies, and how essential, and perhaps inevitable, pop cultural forms are likely to be to the continued development of enhancements to the body and to cognition.
I want to be clear about what I’m trying to claim, so the simple idea is this: that popular culture prepares us for the future; it doesn’t just reflect our ideologies and prejudices and faiths and hopes, it can drive them. This isn’t that new an idea, but I think it can often be forgotten.
What I’d like to emphasise though, is that by paying specific attention to the role of technological artefacts in mainstream discourse we can get a greater sense of what people’s attitudes are liable to be, and for the radically new technologies that will start to alter human somatic and cognitive abilities at speeds eclipsing the iterations of evolution, we need to become increasingly sensitive to the values that people are establishing, and how those values might also be manipulated.

I’m not going to be arguing that any technology is good or bad, simply taking for granted that technological change will continue to occur, probably, perceptually at least, more rapidly, and almost certainly under the aegis of a dominant Whig-historical faith in progress.
And I’ll talk a bit about Sherlock Holmes and John Luther and Iron Man.

At the moment I’m at that weird stage of research where it feels like I’m between two projects, but actually I’m deeply into both.
I’m writing my first book, but it’s based on the last 6 years of my research so it feels done in my head, even though I’m still deep into working out its final shape.
And then there’s the second book, which I haven’t started at all, and yet it’s what I’m thinking about most of the time. So that’s kind of being written and not being written.
And what I want to say today comes out of that in-between space, out of the two projects overlapping.

So the first project, the project being written, is about technology, about what technology is, and what it does to us.
I’m really interested in how we conform to our tools at the same time as we shape them with our use.
We see the changing shapes of things over time, cars say, or computers, or stereos, knives, or kettles, and in that change there’s this combination of the improving state of the art and the mobile state of aesthetics, and this combination is a real writing of our knowledge and our tastes and our commitments onto the bodies of these things that we surround ourselves with.
But then there’s also how much our lives, our bodies, our ways of thinking, have been changed, moulded, by the presence of phenomena like driving; computation; the changing formats of music; cheap, mass produced blades; and quickly boiling water.
Things push back.
In part, what the bodies of our artefacts code, what is written into their shapes, is how much importance we place on them and the tasks that they help us to achieve, they code how much they’ve become valued.
But if things push back, do our bodies also have that same or similar encoding?
Foucault, and Foucauldians, talk about the “docile body,” the body taking its place and being shaped in the structures and strictures of society, that’s familiar enough, but there’s far less work on our mundane domestication by our things.
And there’s a real worry about this, I think. A worry that we can read about and hear repeated over and over, but one that’s often best captured in lay and amateur media forms, and that’s part of what I write about.
The case study I keep coming back to is e-reading.
Around the time of the first Kindle e-readers you couldn’t go a week without seeing an article called “The Death of Books?”
And always with this dumb, implicitly redundant question mark that’s meant to stand in for all of your concerns with the state of the modern world.
What will happen to our children when they read off screens rather than paper?
What will happen to the novels?
What will happen to us?
What’s happening? And why is it happening so fast?
Just think what it must all be doing…
I hate that question mark, it’s far more insidious than a bold declarative you can interrogate.
Anyway I wanted to try and understand where these kinds of resistant discourses come from, to see if there’s a common thread running through them.
Technologies have been resisted for a long time, and I’m fascinated by the popular discussion, which can sometimes be very nuanced, very sensitive, deeply aware of what we might face.
By using that word, “resistance,” I absolutely intend to invoke a political, moral, or ethical claim to avoiding or repudiating the move toward new technologies or new norms of use, in the case of e-reading: to allowing a new generation to grow up reading from screens rather than paper pages.

Take one of the most prominent works on the subject of resisting e-reading, Sven Birkerts’ The Gutenberg Elegies. At the height of his argument Birkerts tells us that

“What [codex] reading does, ultimately, is keep alive the dangerous and exhilarating idea that a life is not a sequence of lived moments, but a destiny. That God or no God, life has a unitary pattern inscribed within it” (Birkerts, The Gutenberg Elegies (1996) 85)

There is an ethics here, but an ethics tied to resisting a move away from the natural, or, worse, a drift from metaphysical rightness.
Though this is an extreme position, to associate the driving lines of text with a pattern in our lives, there is a real sense amongst many resisters of e-reading that there is very much a right way to do things.
And this sense of naturalness is almost always rooted in embodiment, a sense that reading is always-already perfectly aligned with the human body and that a move to the screen drags us away from ourselves.
Again, this is nothing new to technological criticism, but instead a playing out in the popular culture of established ideas to a far wider audience.
The early-to-mid-twentieth-century technological critics, Mumford, Heidegger, and Ellul, all saw technology as a potentially corrupting influence on a particular humanness.
Earlier, Marx argued that we are at our most human when we’re putting something of ourselves into our work out in the world, but also that the corruption by what we might now call technical systems had led us away from the purity of individual technical work.
The American Romantics also issued their own warnings: Thoreau told us that men had become the tools of their tools, and William Carlos Williams looked to the fields and saw technology getting in the way of lived experience:

Machines were not so much to save time as to save dignity that fears the animate touch. It is miraculous the energy that goes into inventions here. Do you know that it now takes just ten minutes to put a bushel of wheat on the market from planting to selling, whereas it took three hours in our colonial days? That’s striking. It must have been a tremendous force that would do that. That force is fear that robs the emotions: a mechanism to increase the gap between touch and thing, not to have contact (William Carlos Williams, In the American Grain 182-183).

This idea, that technology gets between us and the world, that it acts as a kind of visceral insulation, this is an idea which repeats and repeats.
At the end of the twentieth century, the anarcho-primitivist philosopher John Zerzan states it bluntly:

“It seems to me we're in a barren, impoverished, technicized place and that these characteristics are interrelated” (Zerzan, “Against Technology” 1).

And it’s this same trend, this same concern, that moves through much resistance to e-reading.
Blog posts from readers reviewing new devices can be oddly illuminating with regards to what people actually find important. We see a concern that printed books, in their particular form, are able to record in their materiality a rich history of use, and this ties them to a human physical world in a way that the clinical asceticism of plastic and glass simply can’t.
A blogger, Anna Dorfman, offers a representative argument for what is important to her in her interactions with print:

I don’t see the act of reading as a purely word-based experience. Reading is also tactile. Reading should involve interaction between you and the text in your hands. The speed at which you turn to the next page (or flip back to the one before) matters. That accidental glimpse you got of page 273 (while still only on page 32) while fishing around for your bookmark matters. The weight of the book in your bag - that subtle reminder that it’s waiting for you - matters. The paper stock matters! The font, the letter-spacing, the margin width! It all matters!...And don’t even get me started on the smell of old paper and fresh ink!

There’s a kind of folk-phenomenology at work in reports like this, an intuitive sense that something profound changes when we undertake effectively the same task, but with a new bodily pose, a new engagement, or a new apparatus.
But then you get someone like Baroness Susan Greenfield, with her pseudoscientific claims that the new Facebook phone is going to rewire or cannibalise children’s brains, or that the Xbox is responsible for the increase in autism diagnoses - both of which are abhorrent bits of parent-shaming by the way, that neglect the finer aspects of neural plasticity; the expansion and better implementation of diagnostic criteria; and the fact that autism manifests way before most children have the kinds of manual dexterity required to manipulate a controller – anyway, at that point a sense of nuance can often evaporate, and this ends up shaping the red-top debate.
But these kinds of angry, ideological rants are important – and I mean Greenfield’s rants, not mine - they’re important, they have importance, as they structure the ways in which people conceive of things, conceive of the new, and in this way they shape the emergent.

And this links to my next project, the one that’s not being written, but that I seem to be constantly writing.
I’ve become increasingly interested in transhumanism and how, as a discourse, it’s actually at work in professional and amateur scientific communities, and in the wider popular consciousness.
Very briefly, I'm siding with the definition of “transhumanism” as

“a general term designating a set of approaches that hold an optimistic view of technology as having the potential to assist humans in building more equitable and happier societies mainly by modifying individual physical characteristics” (Sky Marsen, “Playing by the Rules - or Not? Constructions of Identity in a Posthuman Future”).

I'm not pretending that this is a neatly established distinction, but it gives us a reference that, for now, I'm fairly persuaded by.

So let’s look at some contemporary examples of transhumanism:



Here are some existing and near-future proposals for drugs and wearable technologies designed to change us: a new range of smart watches that are a few months away; tDCS light electroshock stimulation; creativity and attention enhancing pharmaceuticals; the Google Glass project's promise of ubiquitous augmented reality; the Oculus Rift virtual reality unit; and the emerging appetite for the constant tracking of biometrics with products like the Nike FuelBand.
They all represent ways of continuing our manipulation of our conceptions of ourselves, our world, and our agency within it using technology.
We might think of these sorts of enhancements, these wearable communication devices, pills and monitors, as a light transhumanism perhaps; gateway drugs towards becoming other.
They're items certainly well worth considering on their own terms, but also as pointers toward things to come with their potential for building the public appetite for transforming the body and embedded mind chemically and surgically.
This preparation is something that any harder transhumanism will require, though public desire often gets left off the list of biohacking necessities in favour of more tangible technological requirements.
By a “harder” transhumanism I mean, for instance, the arguably murkier realms of elective surgery and permanent neural enhancement; murkier as they open up questions of who can afford what, who will have access, who will want access, who might be left behind, what will be expected of people before they’re old enough to make their own decisions, and similar challenging social questions.

So why might these seemingly softer technologies lead us down such a path?
Miniaturisation and normalisation are trends that mundane technologies have often taken.
Mobile phones with batteries in briefcases carried by businessmen become cheap clamshell devices spreading throughout developing countries.
Printed books started as Gutenberg bibles and proceeded to iterate toward, largely, better conforming to the hands which held them, becoming smaller, more robust, less decorative.
Sundials became clocks became watches which became minute elements of other devices.
Computers took up rooms, took up desks, took up laps, fitted into pockets, now they're set to become watches and glasses.
The line to a world of normalised implantation is perhaps simply another step away.
Now, this may seem initially far-fetched, but Wired reported in late February the creation of electronic temporary tattoos, powered by the movement of the wearer, and capable of sending and receiving information.
When your phone is as disposable as a nicotine patch, the ultimate in wearable tech, how far away does sub-dermal really seem?

One of my favourite discussions in the Digital and Cyberculture Studies class I started this year involved the work of amateur body modifiers, a subculture known as “grinders,” whose experiments with biohacking have led them to implant small free-floating magnets in silicone shells into their fingertips.
This hack allows them to feel the shape of electromagnetic fields and the stuttering of failing hard drives; it reveals an invisible layer of our built world, and users who have performed the surgery find that, after a couple of weeks of recovery, their brain starts to code information from their fingertips very differently, arguably forming a new sense as the magnets spin in response to the unseen and previously unfelt forces in the world.
A subdermal hack has altered their cognition and their phenomenal experience of the modern world in a way that we might imagine as being a step toward the strongly transhuman; it has the “ick” factor, or the fascination - depending on your tolerance - of a subtly new form of being.
But my students readily saw this kind of “hard” hack as existing on a continuum with search changing the way that they remember; Google Maps changing their sense of their lived space; and mobile phones and social media changing their relationship with time and with their friends.

So we have technologies iterating to be smaller, more complex, and more advantageous the more deeply that they are embedded within us and our practices.
We also seem to have a technological trend towards breaking the skin barrier, and some devices have started to do this, be they fingertip magnets, more traditional surgical implants, or bone-conducted sound in audio devices.
But this all appears against the backdrop of a very mainstream resistance to new technologies that already seem to go “too far.”
And as I said earlier, “too far” tends to mean “unnatural” with regards to our embodiment. The markers of such an unnaturalness are, surely, the justifiable fears of pain and infection, which are closely related, but distinct enough that we shouldn’t reduce one to the other; corruption is a different fear to pain.
Infection is actually probably more clearly linked to a fear of defacement, of ruining the only body we have – this, too, is a thread that runs through the e-reading debate: that by changing the devices that we use we might somehow be ruining ourselves and our experience of the world.
Surgical transhumanism necessarily relies on extinguishing such fears, and this neutering of concern, or its escalation in light of a new realism, requires the normalisation of the more extreme soft assemblages of always-on digital artefacts such as mobile phones, smart watches, and other, even more intimately wearable tech.

I’d like to finish up by looking at how pop cultural representations of transhuman devices can accompany this tendency and the discourse of resistance.
I said that I'd talk about Sherlock Holmes, who may seem like an odd figure to associate with transhumanism, but bear with me.

In the new American Holmes series, Elementary, set in contemporary New York and accompanied by a female Watson played by Lucy Liu, an in-recovery Sherlock, played by Jonny Lee Miller, is as brilliant as ever, able to establish the most arcane connections between events, spot the most minute of clues, and generally impress everyone around him with his cognitive abilities.
In the first episode, Holmes and Watson have only recently met, and he surprises her by reading a story of her life in her clothes, her phone, her demeanour.
One of the most astounding moments, for Watson at least, comes when Holmes sees an image of her parents on her phone, apparently happy, which leads him to state: “handsome woman your mother, it was very big of her to take your father back after the affair.”
“Ok, how could you possibly…?” and Watson tails off.
Later on she insists that Holmes reveal his methods.
“Google” Holmes replies, “not everything is deducible.”

Holmes is the embodiment of supreme cognitive skill, a very human refining of pattern recognition and critical thinking.
In short he's the poster boy for the power of natural cognition.
And yet throughout this series he continually supplements, augments, his innate and trained skills with the bolt-on efficiency inaugurated by mobile computing.
When Sherlock relies on Google, and it’s so obviously beneficial for him to do so, the show contributes to the making mundane of our changing attitudes towards knowledge and, importantly, its location.
Google’s function as a prosthetic memory has been debated, and it can be experienced by expert users of the system, but its representation at work in Holmes’ rarefied deductive process makes it not only normalised, but a part of an aspirant intelligence.
Similarly, Holmes uses his phone for all sorts of other support: a macro lens replacing his iconic magnifying glass; text speak, praised by Holmes for its brevity and precision, replacing a more taciturn manner.

We might compare this modern Holmes to someone like John Luther.
Luther has a distinctly Holmesian vibe, a detective who can see things that others can't, who sees the connections between things.
But Luther maybe has more in common with the classical Holmes than Miller’s portrayal in Elementary. Aside from getting others to search the various police databases, his most advanced use of technology is spreading paper case documents around himself in an effort to see all the facts.
His phone remains stubbornly turned off lest people contact him; it certainly isn't used to solve cases or augment his thought process.
So are these simply two representations of technological use? An enthusiastic adopter and a resister?
Perhaps, but the hyper-intelligent detective trope seems to have genuinely altered.



In the same way that the existence of mobile phones changes the kinds of plots that we can see in our films, Holmes’ use of cognitive supports is never questioned; it makes sense, it ties into his idiosyncrasies and becomes a part of his allure.
The same isn't true of the damaged Luther. His refusal of technology feels like a part of his misanthropy; everyone in the show thinks that he's weird for the way that he doesn't use the now elemental device; Luther is great despite his refusal of technology.
Two somewhat broken men, the hyperactive and perennially bored ex-junkie, and the righteous misanthrope, but only the latter has technology as a symptom, and it's a malady of refusal.

A more blatant popular transhuman figure is Tony Stark and his Iron Man suit.
Stark, with a chest-implanted electromagnet keeping a piece of shrapnel from piercing his heart, builds a suit of metal armour, an exoskeleton which responds perfectly to his every move and that’s equipped with an artificial intelligence which perfectly matches and predicts his requirements.
The Iron Man suit is the ultimate friction-free upgrade. It matches the needs of its user, amplifies their potential, and seems to promise no ill effects for its use. It doesn't seem to push back, it's very clean. In many ways it’s a high-end phone that you can ride in.
The Iron Man suit undoubtedly overplays the potential to offer an incredibly intimate segue between the human and the machine without risk, but in this regard it also represents a coherent fantasy for human support.



Exoskeleton devices, for instance - for military, rehabilitative, and disability support - are often compared to Iron Man, and we can see here another facet of how popular media might be deployed to affect our expectations.
The postphenomenologist Don Ihde talks about how our fantasies can become culturally primed:

“in an already technology-familiar culture, fantasies can easily take...technofantasy forms...Technofantasies include many sorts of desires...[,] technologies which will give us powers usually beyond our bodily, sensory, sexual, intellectual, or for that matter any or all dimensions of human embodiment. But while we imagine technologies which could do this, we also want them to be transparent, without effort, enacted with ease, as if our enhancements were part of a well trained ‘sports body’” (Ihde, Embodied Technics 10-11).

The technologies that we want to be frictionless often reveal our various horizons of experience. Their aspirational presence in popular media not only speaks to our desires, but actively begins to cultivate them, particularly in areas that we haven’t previously been forced to conceive of as possible.
Analysing the mundane representation and discussion of tools in media might be one of our best primers for the future state of acceptance of things, and it is also a discourse of value to, but also able to be manipulated by, experimental science, engineering, the military, and the military-industrial complex.
If scientists, engineers, and designers want dramatic implantable technology to take off then they will have to ally with or shape pop cultural discourse, and we can maybe discuss this later, but it’s already arguably happening more now than any other time in history, and this is something we have to watch.

Discussions of media as being simply reflective of the culture aren't enough when we have iterations of devices in our pockets that have equalled or exceeded many creative visions of the future, and an iPhone or BlackBerry could soon start to look like merely the tame beginnings of what we became.