Sunday, 15 September 2013

When Technology Melts Away


Here's a copy of the talk I gave at this year's Marginalised Mainstreams conference at Senate House. I'm working through some ideas about transhumanism and our attitudes towards objects. Sorry for the weird layout: these are the notes I use as I talk, so the paragraphs are pretty short to help me keep my place!



When Technology Melts Away:
The Representation of Friction-Free Tools and the New Human Aspiration

Today, I want to talk about how popular culture affects our reception of new technologies, and how essential, and perhaps inevitable, pop cultural forms are likely to be to the continued development of enhancements to the body and to cognition.
I want to be clear about what I’m trying to claim, so the simple idea is this: that popular culture prepares us for the future; it doesn’t just reflect our ideologies and prejudices and faiths and hopes, it can drive them. This isn’t that new an idea, but I think it can often be forgotten.
What I’d like to emphasise, though, is that by paying specific attention to the role of technological artefacts in mainstream discourse we can get a greater sense of what people’s attitudes are liable to be. And for the radically new technologies that will start to alter human somatic and cognitive abilities at speeds eclipsing the iterations of evolution, we need to become increasingly sensitive to the values that people are establishing, and to how those values might also be manipulated.

I’m not going to be arguing that any technology is good or bad, simply taking for granted that technological change will continue to occur, probably, perceptually at least, more rapidly, and almost certainly under the aegis of a dominant Whig-historical faith in progress.
And I’ll talk a bit about Sherlock Holmes and John Luther and Iron Man.

At the moment I’m at that weird stage of research where it feels like I’m between two projects, but actually I’m deeply into both.
I’m writing my first book, but it’s based on the last 6 years of my research so it feels done in my head, even though I’m still deep into working out its final shape.
And then there’s the second book, which I haven’t started at all, and yet it’s what I’m thinking about most of the time. So that’s kind of being written and not being written.
And what I want to say today comes out of that in-between space, out of the two projects overlapping.

So the first project, the project being written, is about technology, about what technology is, and what it does to us.
I’m really interested in how we conform to our tools at the same time as we shape them with our use.
We see the changing shapes of things over time, cars say, or computers, or stereos, knives, or kettles, and in that change there’s this combination of the improving state of the art and the mobile state of aesthetics, and this combination is a real writing of our knowledge and our tastes and our commitments onto the bodies of these things that we surround ourselves with.
But then there’s also how much our lives, our bodies, our ways of thinking, have been changed, moulded, by the presence of phenomena like driving; computation; the changing formats of music; cheap, mass produced blades; and quickly boiling water.
Things push back.
In part, what the bodies of our artefacts code, what is written into their shapes, is how much importance we place on them and on the tasks that they help us to achieve; their forms code how much they’ve become valued.
But if things push back, do our bodies also have that same or similar encoding?
Foucault, and Foucauldians, talk about the “docile body,” the body taking its place and being shaped in the structures and strictures of society, that’s familiar enough, but there’s far less work on our mundane domestication by our things.
And there’s a real worry about this, I think. A worry that we can read about and hear repeated over and over, but one that’s often best captured in lay and amateur media forms, and that’s part of what I write about.
The case study I keep coming back to is e-reading.
Around the time of the first Kindle e-readers you couldn’t go a week without seeing an article called “The Death of Books?”
And always with this dumb, implicitly redundant question mark that’s meant to stand in for all of your concerns with the state of the modern world.
What will happen to our children when they read off screens rather than paper?
What will happen to the novels?
What will happen to us?
What’s happening? And why is it happening so fast?
Just think what it must all be doing…
I hate that question mark, it’s far more insidious than a bold declarative you can interrogate.
Anyway I wanted to try and understand where these kinds of resistant discourses come from, to see if there’s a common thread running through them.
Technologies have been resisted for a long time, and I’m fascinated by the popular discussion, which can sometimes be very nuanced, very sensitive, deeply aware of what we might face.
By using that word, “resistance,” I absolutely intend to invoke a political, moral, or ethical claim to avoiding or repudiating the move toward new technologies or new norms of use, in the case of e-reading: to allowing a new generation to grow up reading from screens rather than paper pages.

Take one of the most prominent works on the subject of resisting e-reading, Sven Birkerts’ The Gutenberg Elegies. At the height of his argument Birkerts tells us that

“What [codex] reading does, ultimately, is keep alive the dangerous and exhilarating idea that a life is not a sequence of lived moments, but a destiny. That God or no God, life has a unitary pattern inscribed within it” (Birkerts, The Gutenberg Elegies (1996) 85)

There is an ethics here, but an ethics tied to resisting a move away from the natural, or, worse, a drift from metaphysical rightness.
Though this is an extreme position, to associate the driving lines of text with a pattern in our lives, there is a real sense amongst many resisters of e-reading that there is very much a right way to do things.
And this sense of naturalness is almost always rooted in embodiment, a sense that reading is always-already perfectly aligned with the human body and that a move to the screen drags us away from ourselves.
Again, this is nothing new to technological criticism, but instead a playing out in the popular culture of established ideas to a far wider audience.
The early-to-mid-twentieth century technological critics, Mumford, Heidegger, Ellul, all saw technology as a potentially corrupting influence on a particular humanness.
Earlier, Marx argued that we are at our most human when we’re putting something of ourselves into our work out in the world, but also that the corruption by what we might now call technical systems had led us away from the purity of individual technical work.
The American Romantics also issued their own warnings: Thoreau told us that men had become the tools of their tools, and William Carlos Williams looked to the fields and saw technology getting in the way of lived experience:

Machines were not so much to save time as to save dignity that fears the animate touch. It is miraculous the energy that goes into inventions here. Do you know that it now takes just ten minutes to put a bushel of wheat on the market from planting to selling, whereas it took three hours in our colonial days? That’s striking. It must have been a tremendous force that would do that. That force is fear that robs the emotions: a mechanism to increase the gap between touch and thing, not to have contact (William Carlos Williams, In the American Grain 182-183).

This idea, that technology gets between us and the world, that it acts as a kind of visceral insulation, this is an idea which repeats and repeats.
At the end of the twentieth century, the anarcho-primitivist philosopher John Zerzan states it bluntly:

“It seems to me we're in a barren, impoverished, technicized place and that these characteristics are interrelated” (Zerzan, “Against Technology” 1).

And it’s this same trend, this same concern, that moves through much resistance to e-reading.
Blog posts from readers reviewing new devices can be oddly illuminating with regards to what people actually find important. We see a concern that printed books, in their particular form, are able to record in their materiality a rich history of use, and this ties them to a human physical world in a way that the clinical asceticism of plastic and glass simply can’t.
A blogger, Anna Dorfman, offers a representative argument for what is important to her in her interactions with print:

I don’t see the act of reading as a purely word-based experience. Reading is also tactile. Reading should involve interaction between you and the text in your hands. The speed at which you turn to the next page (or flip back to the one before) matters. That accidental glimpse you got of page 273 (while still only on page 32) while fishing around for your bookmark matters. The weight of the book in your bag - that subtle reminder that it’s waiting for you - matters. The paper stock matters! The font, the letter-spacing, the margin width! It all matters!...And don’t even get me started on the smell of old paper and fresh ink!

There’s a kind of folk-phenomenology at work in reports like this, an intuitive sense that something profound changes when we undertake effectively the same task, but with a new bodily pose, a new engagement, or a new apparatus.
But then you get someone like Baroness Susan Greenfield, whose pseudoscientific claims are that the new Facebook phone is going to rewire or cannibalise children’s brains, or that the Xbox is responsible for the increase in autism diagnoses - both of which are abhorrent bits of parent-shaming, by the way, that neglect the finer aspects of neural plasticity; the expansion and better implementation of diagnostic criteria; and the fact that autism manifests way before most children have the kind of manual dexterity required to manipulate a controller. With claims like these a sense of nuance can often evaporate, and this ends up shaping the red-top debate.
But these kinds of angry, ideological rants are important – and I mean Greenfield’s rants, not mine – they have importance as they structure the ways in which people conceive of things, conceive of the new, and in this way they shape the emergent.

And this links to my next project, the one that’s not being written, but that I seem to be constantly writing.
I’ve become increasingly interested in transhumanism and how, as a discourse, it’s actually at work in professional and amateur scientific communities, and in the wider popular consciousness.
Very briefly, I'm siding with the definition of “transhumanism” as

“a general term designating a set of approaches that hold an optimistic view of technology as having the potential to assist humans in building more equitable and happier societies mainly by modifying individual physical characteristics” (Sky Marsen, “Playing by the Rules - or Not? Constructions of Identity in a Posthuman Future”).

I'm not pretending that this is a neatly established distinction, but it gives us a reference that, for now, I'm fairly persuaded by.

So let’s look at some contemporary examples of transhumanism:



Here are some existing and near-future proposals for drugs and wearable technologies designed to change us: a new range of smart watches that are a few months away; tDCS light electroshock stimulation; creativity and attention enhancing pharmaceuticals; the Google Glass project's promise of ubiquitous augmented reality; the Oculus Rift virtual reality headset; and the emerging appetite for the constant tracking of biometrics with products like the Nike FuelBand.
They all represent ways of continuing our manipulation of our conceptions of ourselves, our world, and our agency within it using technology.
We might think of these sorts of enhancements, these wearable communication devices, pills and monitors, as a light transhumanism perhaps; gateway drugs towards becoming other.
They're items certainly well worth considering on their own terms, but also as pointers toward things to come with their potential for building the public appetite for transforming the body and embedded mind chemically and surgically.
This preparation is something that any harder transhumanism will require, though public desire often gets left off the list of biohacking necessities in favour of more tangible technological requirements.
By a “harder” transhumanism I mean, for instance, the arguably murkier realms of elective surgery and permanent neural enhancement; murkier as they open up questions of who can afford what, who will have access, who will want access, who might be left behind, what will be expected of people before they’re old enough to make their own decisions, and similar challenging social questions.

So why might these seemingly softer technologies lead us down such a path?
Miniaturisation and normalisation are trends that mundane technologies have often taken.
Mobile phones with batteries in briefcases carried by businessmen become cheap clamshell devices spreading throughout developing countries.
Printed books started as Gutenberg bibles and proceeded to iterate toward, largely, better conforming to the hands which held them, becoming smaller, more robust, less decorative.
Sundials became clocks became watches which became minute elements of other devices.
Computers took up rooms, took up desks, took up laps, fitted into pockets, now they're set to become watches and glasses.
A world of normalised implantation is perhaps simply another step further along that line.
Now, this may seem initially far-fetched, but Wired reported in late February the creation of electronic temporary tattoos, powered by the movement of the wearer, and capable of sending and receiving information.
When your phone is as disposable as a nicotine patch, the ultimate in wearable tech, how far away does sub-dermal really seem?

One of my favourite discussions in the Digital and Cyberculture Studies class I started this year involved the work of amateur body modifiers, a subculture known as “grinders,” whose experiments with biohacking have led them to implant small free-floating magnets, in silicone shells, into their fingertips.
This hack allows them to feel the shape of electromagnetic fields and the stuttering of failing hard drives; it reveals an invisible layer of our built world. Users who have performed the surgery find that, after a couple of weeks of recovery, their brains start to code information from their fingertips very differently, arguably forming a new sense as the magnets spin in response to the unseen and previously unfelt forces in the world.
A subdermal hack has altered their cognition and their phenomenal experience of the modern world in a way that we might imagine as being a step toward the strongly transhuman; it has the “ick” factor, or the fascination - depending on your tolerance - of a subtly new form of being.
But my students readily saw this kind of “hard” hack as existing on a continuum from search changing the way that they remember; Google Maps changing their sense of their lived space; and mobile phones and social media changing their relationship with time and with their friends.

So we have technologies iterating to be smaller, more complex, and more advantageous the more deeply that they are embedded within us and our practices.
We also seem to have a technological trend towards breaking the skin barrier, and some devices have already started to do this, be they fingertip magnets, more traditional surgical implants, or bone-conducted sound in audio devices.
But this all appears against the backdrop of a very mainstream resistance to new technologies that already seem to go “too far.”
And as I said earlier, “too far” tends to mean “unnatural” with regards to our embodiment. The markers of such an unnaturalness are, surely, the justifiable fears of pain and infection, which are closely related, but distinct enough that we shouldn’t reduce one to the other; corruption is a different fear to pain.
Infection is actually probably more clearly linked to a fear of defacement, of ruining the only body we have – this, too, is a thread that runs through the e-reading debate: that by changing the devices that we use we might somehow be ruining ourselves and our experience of the world.
Surgical transhumanism necessarily relies on extinguishing such fears, and this neutering of concern, or its escalation in light of a new realism, requires the normalisation of the more extreme soft assemblages of always-on digital artefacts such as mobile phones, smart watches, and other, even more intimately wearable tech.

I’d like to finish up by looking at how pop cultural representations of transhuman devices can accompany this tendency and the discourse of resistance.
I said that I'd talk about Sherlock Holmes, who may seem like an odd figure to associate with transhumanism, but bear with me.

In the new American Holmes series, Elementary, set in contemporary New York and accompanied by a female Watson played by Lucy Liu, an in-recovery Sherlock, played by Jonny Lee Miller, is as brilliant as ever, able to establish the most arcane connections between events; spotting the most minute of clues; and generally impressing everyone around him with his cognitive abilities.
In the first episode, Holmes and Watson have only recently met, and he surprises her by reading a story of her life in her clothes, her phone, her demeanour.
One of the most astounding moments, for Watson at least, comes when Holmes sees an image of her parents on her phone, apparently happy, which leads him to state: “handsome woman your mother, it was very big of her to take your father back after the affair.”
“Ok, how could you possibly…?” and Watson tails off.
Later on she insists that Holmes reveal his methods.
“Google,” Holmes replies, “not everything is deducible.”

Holmes is the embodiment of supreme cognitive skill, a very human refining of pattern recognition and critical thinking.
In short he's the poster boy for the power of natural cognition.
And yet throughout this series he continually supplements, augments, his innate and trained skills with the bolt-on efficiency inaugurated by mobile computing.
When Sherlock relies on Google, and it’s so obviously beneficial for him to do so, the show contributes to the making mundane of our changing attitudes towards knowledge and, importantly, its location.
Google’s function as a prosthetic memory has been debated, and it can be experienced by expert users of the system, but its representation as part of Holmes’s rarefied deductive process makes it not only normalised, but a part of an aspirant intelligence.
Similarly Holmes uses his phone for all sorts of other support, a macro-lens replacing his iconic magnifying glass; text speak, praised by Holmes for its brevity and precision, replacing a more taciturn manner.

We might compare this modern Holmes to someone like John Luther.
Luther has a distinctly Holmesian vibe, a detective who can see things that others can't, who sees the connections between things.
But Luther maybe has more in common with the classical Holmes than Miller’s portrayal in Elementary. Aside from getting others to search the various police databases, his most advanced use of technology is spreading paper case documents around himself in an effort to see all the facts.
His phone remains stubbornly turned off lest people contact him; it certainly isn't used to solve cases or augment his thought process.
So are these simply two representations of technological use? An enthusiastic adopter and a resister?
Perhaps, but the hyper-intelligent detective trope seems to have genuinely altered.



In the same way that the existence of mobile phones changes the kinds of plots that we can see in our films, Holmes’s use of cognitive supports is never questioned; it makes sense, it ties into his idiosyncrasies and becomes a part of his allure.
The same isn't true of the damaged Luther. His refusal of technology feels like a part of his misanthropy; everyone in the show thinks that he's weird for the way that he doesn't use the now elemental device; Luther is great despite his refusal of technology.
Two somewhat broken men, the hyperactive and perennially bored ex-junkie, and the righteous misanthrope, but only the latter has technology as a symptom, and it's a refusal malady.

A more blatant popular transhuman figure is Tony Stark and his Iron Man suit.
Stark, with a chest-implanted electromagnet keeping a piece of shrapnel from piercing his heart, builds a suit of metal armour, an exoskeleton which responds perfectly to his every move and that’s equipped with an artificial intelligence which perfectly matches and predicts his requirements.
The Iron Man suit is the ultimate friction-free upgrade. It matches the needs of its user, amplifies their potential, and seems to promise no ill effects for its use. It doesn't seem to push back, it's very clean. In many ways it’s a high-end phone that you can ride in.
The Iron Man suit undoubtedly overplays the potential to offer an incredibly intimate segue between the human and the machine without risk, but in this regard it also represents a coherent fantasy for human support.



Exoskeleton devices, for instance - for military, rehabilitative, and disability support - are often compared to Iron Man, and we can see here another facet of how popular media might be deployed to affect our expectations.
The postphenomenologist Don Ihde talks about how our fantasies can become culturally primed:

“in an already technology-familiar culture, fantasies can easily take...technofantasy forms...Technofantasies include many sorts of desires...[,] technologies which will give us powers usually beyond our bodily, sensory, sexual, intellectual, or for that matter any or all dimensions of human embodiment. But while we imagine technologies which could do this, we also want them to be transparent, without effort, enacted with ease, as if our enhancements were part of a well trained ‘sports body’” (Ihde, Embodied Technics 10-11).

The technologies that we want to be frictionless often reveal our various horizons of experience. Their aspirational presence in popular media not only speaks to our desires, but actively begins to cultivate them, particularly in areas that we haven’t previously been forced to conceive of as possible.
Analysing the mundane representation and discussion of tools in media might be one of our best primers for the future state of acceptance of things; it is also a discourse of value to, but also able to be manipulated by, experimental science, engineering, the military, and the military-industrial complex.
If scientists, engineers, and designers want dramatic implantable technology to take off then they will have to ally with or shape pop cultural discourse, and we can maybe discuss this later, but it’s already arguably happening more now than any other time in history, and this is something we have to watch.

Discussions of media as being simply reflective of the culture aren't enough when we have iterations of devices in our pockets that have equalled or exceeded many creative visions of the future, and an iPhone or Blackberry could soon start to look like merely the tame beginnings of what we became.

Saturday, 2 March 2013

Whatever It Takes...


- what matters most is how well you walk through the fire - charles bukowski -

A few weeks back I had the opportunity to speak at Exeter's inaugural "Fruni" event (http://www.fruni.org.uk/). Lecturers were voted in to talk about their research to an audience of staff and students from a mixture of disciplines. Below is a transcription of the talk I gave with relevant slides; it starts off with some discussion of cognitive science and phenomenology, but then mostly settles into a discussion of dancing to dubstep, Tron, and skateboarding. The basic argument is about what technology is, and what kind of things in the world might be of interest to the field of Digital and Cyberculture Studies.

“Whatever it Takes to Understand: Why Studying English in a Digital Age Might Mean Not Being Scared of Science (or Philosophy, or Biology, or Neuropsychology, or Art History, or Computers, or Math, or Technology, or...)”

I'm never sure how to begin things like this, so I'll just say some thank yous to you guys: to Imogen for all of her hard work in getting Fruni off the ground; to the other lecturers who've spoken in this series who've set the bar so unimaginably high for this evening; to my parents for being curious enough to know what I'm up to that they thought they'd come along and find out; to friends and colleagues who I now owe a drink to for dragging them out of the house on a Monday evening; to any of my students who made it, my first year theorists in the making, and my third years who've blown me away this year, particularly my partners-in-crime in getting Digital and Cyberculture Studies onto the curriculum; and to everyone else, thank you for taking the time out of your day to come and see what my lecture might have to offer.

I guess it's worth saying upfront that I have to give a different kind of talk to the other lecturers you might have seen on the Fruni series. I can't tell you about my third or fourth book because I'm currently writing my first. And I can't talk about my research without mentioning pedagogy because, whilst I'm a lecturer under generous university naming policy, I'm what used to be called a teaching fellow - I spend more time in the classroom than I do working on my own research, and I've loved that for nearly two years now.

But as I do start to develop my work as a researcher, as I start the next bit of my career, I've found that what I want to write about and what I already have the pleasure of teaching aren't really that different.

So today I simply want to talk about what motivates my work, and that comes under three broad concerns:
  • What is technology and what does it do? - and this is the subject of my book.
  • Why is reading from a screen so different from reading from paper? - that was the subject of my PhD.
  • How might we study Digital and Cybercultural practices in English departments? - and this is what I work on in my teaching.
Everything I want to speak about today orbits around these three fundamental ideas, though it might seem as if we have to pass fairly far from them at times.

Hence my title: “whatever it takes to understand.”

As I've been discussing with my students all year, we need to find a way to take seriously our anecdotes about the new ways in which we find ourselves living our lives because of ubiquitous, mobile, and massively networked computing.

We need to take our stories and work out how we can discuss them with some academic rigour in the hopes that we might be able to work out what's going on, at least for ourselves, and become a part of shaping the things we use and how we use them, rather than simply feeling all of their diverse and often baffling effects.

I work in the English department, but I haven't written about a novel in a long time. I do write about books, but about books as a form, not any particular text by and large.

18 months ago I finished the PhD project that is now the basis for the book I'm in the process of writing, slowly, between classes. The PhD began with a question: why is reading from a screen so different to reading from paper?

Amazon's Kindle had been released during my MA, and Apple's iPad was just about to come out as I started the PhD project and you couldn't go a week without a newspaper proclaiming the end of books, or print, or literature.

Why is reading from a screen so different?

That one question became an investigation of three assertions that we're still hearing again and again: screens aren't natural; screens don't feel right; screens are making us stupid. And I guess that means that there are three assertions always in the background of these discussions: books are natural; books feel right; books make us smarter.

I wanted to know where these very familiar ideas came from, what made them so pervasive. Did they really tell us something about screens, or about our attitudes towards technology more generally?

So I started to ask what technology even meant.

Whilst we might often agree on the objects under discussion - computer: yes, coriander: no - the specifics of why this might be so are vague. What makes a hammer of the same order of objects as an industrial press? Why are they both technologies? And does everyone experience this mysterious parity in identical ways, allowing for a consensus, or does “technology” define items across a range of unrecognised and untheorised responses?

In trying to more rigidly define my own usage of the term I've come to see technology not as a class of objects that manages to find this parity between fire, mobile phones, and the Large Hadron Collider, but instead, at least in part, to consider technology as being the phenomenological experience of a particular kind of expert use.

Phenomenology, roughly, is the study of what makes the experience of something what it is, what makes it particular as a phenomenon, what, if possible, are the elements that are the same for everyone who experiences it, or, at least, why does it appear to an individual as it does?

Think of an apprentice carpenter with a saw, a tool that he doesn't yet know how to use efficiently. He sweats and the teeth catch and the wood splinters, the saw does the job, the piece of wood, eventually, achingly, laboriously, comes in two.

Now think of the expert craftsman with the same tool: the teeth glide, no sweat drips, the wood is cleanly split in no time at all. The saw and the expert carpenter become one machine for splitting wood, and that, to me, is maybe how we should define the distinction between a tool and a technology - a technology is something that we take onboard, something that we join with to do things better, something that changes the way that we can conceive of the world and our potential within it.

In thinking about these issues I've become fascinated by the notion of our mental representation of our physical selves, and how that conception might be modified. In a Neuroscience review paper by Angelo Maravita and Atsushi Iriki, “Tools for the Body (Schema),” the authors explore the effects on Japanese macaques of repeatedly using a rake to extend their reach to retrieve food out of arm's length.

Through studying particular kinds of neurons and what they fire in response to, researchers have shown that the monkeys, over time, start to code the rakes as physically being a part of their bodies.

Bimodal neurons are neurons that fire in response to two different kinds of input. In the case of those studied they fired either when the hand touches something or when something moves into the visual space surrounding the hand.

Over time researchers found that the same bimodal neurons of the macaques using the rakes started to fire in response to visual stimuli around the tool, as if the item had become an extension of the monkey's arm, which, of course, functionally it had.

As far as the macaques' mental image of themselves, as far as their body schema was concerned, they were a different machine for reaching when they were proficient at working with the tool, and maybe this is a simple example of the technological use I'm trying to describe. Expertise changed the way that they could think of themselves and what they could achieve.

We can imagine similar, if more complex, instances in humans: the way you drive a car or ride a bike without thinking; the way you learn how to play tennis and forget about the racquet and the adjustments you need to make in order to return a serve; the way a saw comes to fit the hand and the arm.

An example more clearly related to the macaques can be found in cases of patients displaying particular kinds of hemispatial neglect.

Neglect patients, typically after some neurological trauma, can't perceive certain regions of space. If they can no longer code the left-hand-side of their vision, for instance, then they might only draw half circles and see them as complete, or only eat the food on the right-hand-side of their plates, or only do half their makeup or brush half their hair, or not see a stain on one side of their clothes.

In some patients far space and near space, as well as space to the left and right, can be affected separately. For example a patient who can't code space that is on their left and far away might be perfectly able to draw a complete circle right in front of them, but not be able to see something to their left that is out of reach.

In these cases, similar to the macaques, some of these patients are able to reach out with a tool, simply a broom handle or some other way of extending their reach, and suddenly it's as if that previously unseen thing far away is coded as being close to them and therefore visible.

They are able to conceive of space differently through their use of a tool which is so simple that they are always already experts; a broom-handle, to an adult, is already a simple technology for reaching.

Maybe a young child with similar neglect issues, but with much poorer motor coordination, would take longer to achieve this same effect - they might have to turn the broom from cumbersome tool to usable technology through experience before those effects kicked in, in the same way as the monkeys had to take time to learn with the unfamiliar rakes.

All of this seems to resonate with Martin Heidegger's notion of “readiness-to-hand,” or, for any philosophers who might jump on this, with the canonical interpretation of the term, perhaps a little simplified, which is read as the melting-away of a tool during use.

The classic and often-repeated example is the use of a hammer. We don't focus on the hammer when we drive a nail, we focus on the work to be done, we focus on the act of hammering.

Heidegger took this as a given, but we might remember our own early carpentry efforts, where the hammer was hard to use, where we focussed on the thing rather than the work while we tried not to hit our fingers.

Like the apprentice carpenter and the saw, a hammer isn't always a technology - its use must be precise, and this makes it more complicated than a broom handle or a rake for reaching; it takes a while to code a hammer into our body schema, to stop it just being a tool for driving nails. Maybe Maravita and Iriki's review, or the case of neglect patients, offers a way of grounding this, and Heidegger's phenomenological observation, in Cognitive Neuroscience.

The philosopher Shaun Gallagher calls this “front-loading phenomenology,” the bringing of philosophical concerns about embodied experience into discussion with Cognitive Science through experimental design and the interpretation of results.

In 2010, a research group headed by Dobromir Dotov actually attempted to prove Heidegger's notion of readiness-to-hand experimentally, a perfect example of how there might be a phenomenological influence on scientific work, and I suspect that research like this will become increasingly vital if the Humanities are to actively engage with the Sciences rather than just drawing on them.

We need to ensure that the conversation goes both ways, that findings in multiple fields might be brought together, not just for an attempt at a more holistic understanding of a phenomenon, but also to ensure that the diversity of research methods and values are better known.

I apologise for the rush through these ideas about technology, philosophy, and cognition, but I just wanted to give a sense of how I'm starting to think about this central concern for the book project: Tools get things done, technologies get us to think about what we might do.

Timothy Taylor says something similar in his book The Artificial Ape, a work which sees technological use as what makes us human:

“it is not too much philosophy to say that the emergence of technology was and is intimately connected with the extension of the range of human intentionality. Without a car...I could not have intended to go fishing..., given the distance involved; without stone tool technology, our prehistoric ancestors could not have had the intention to kill big game...[T]he existence of objects, such as saucepans, not just allows actions but suggests them” (Timothy Taylor, The Artificial Ape 152).

Printed books, too, are technologies in this sense. As readers most of us are far closer to craftsmen than we are to apprentices, and as such, books change the way that we look at things, they change what we imagine we might be able to do. Information is accessible, located in a physical container that we can know our place in, there are tactile and visual reminders of what we've covered, it's all so familiar we don't think about it anymore.

But when e-readers first came out they made apprentices of us all again. Everything that we'd learned dropped away and the sweat started to drip as we worked out new routines, new practices, worked out how to bring a writing medium back into expert use again.

Some of us have done it already, started to read on Kindles in a technological fashion.
But I'm not one of them. I remain a novice, I just can't get used to it. I use my Kindle to read journal articles and I still feel that it's just a tool for passing on written information, not a technology - it just doesn't change how I look at the world because it doesn't give me a sense of being able to achieve more than I could without it.

So I absolutely understand why people might be aghast at the idea of losing printed books, and yet I'm not nearly so worried as some of the voices in the media.

If there are strong voices of resistance against such a change, and particularly if we ourselves want to be part of that voice, then we need to understand where our concerns come from and how valid they might be; not valid as in whether they should be listened to, they undoubtedly should, but valid as in whether they should be allowed to be the unedited guiding force in questioning the incursion of digital technology into the realm of reading.

We've mythologised printed books, raised them to the level where they seem like the only way of receiving written information, and if not the only way then certainly the best.

This might well be true, but we are yet to prove it, and if it's not the best reading mode then in our resistant voices we might begin to actively restrict people from engaging with a new form for a myth that we unquestioningly perpetuate rather than interrogate.

When people promote the page over the screen, trumpet the transcendent suitability of their favoured medium, then the privilege of that position can often be ignored.

The One Laptop Per Child project provides libraries to villages in developing countries through deploying rugged, cheap laptops which are provided via a combination of philanthropy and government investment.

These laptops are hand-cranked and solar-powered so that they can be charged without power sockets.

They have also become the main source of light after sundown in many areas, and reports have shown a happy accident: they can actually start to bring families together around their child's education.

When this initiative is combined with the World Reader project then it becomes even more exciting.

World Reader provides cheap and free ebooks and publishes local content from poor and often unheard communities, so that a true diversity of literary voices and contemporary national and local literatures can emerge and seem like a viable possibility.

Essentially I think it's just important to remember that we need to have incredibly good reasons for saying that these people would necessarily be better off with print, at least as it currently stands, and in that recognition we might also question our privilege and our assumptions closer to home.

What makes printed books so good?

It's easy to imagine that books as we know them aren't going anywhere, and this is another example of both privilege and myopia. Whenever I fall into the trap of thinking that things are much the same I think of a particular person.

For the sake of calling them something I've called them Jo, just as easily Joseph as Joanna, but I'll make my point with Jo as a girl, because it's slightly more shocking, and that in itself causes some introspection - I know that it's a more effective story when she's a she, but that comes out of a number of probably fairly unproductive cultural assumptions.

It's a story I've told a couple of you before, but Jo is about 12 years old, soon to turn 13.
Her parents weren't really big on reading. There were always magazines and cookbooks and a dictionary, but otherwise there weren't really books in the house, her mother mostly reading papers for work, and her father, maybe one of the UK's 5 million adults who experience a pronounced difficulty with literacy, simply not deriving much pleasure from the act of reading.

The National Literacy Trust released survey data in 2011 suggesting that 1 in 3 children in the UK lived in households which didn't own books, so Jo's situation, in her early years, while still relatively uncommon, is hardly rare. I think we can often forget how possible a relatively bookless house actually is.

And I don't want to suggest that we don't have real issues in this country when it comes to valuing reading and writing skills, and ensuring that children have appropriate access, but, in this instance, maybe Jo's story isn't a tragic one.

Her school, of course, provided her with books, but until she was 6 she never read for pleasure, and was very rarely read to at home. She actually learned to read a little late because of the lack of books in the house, and several longitudinal studies have shown that the presence of books in the home is one of the key early indicators of successful reading and attainment of advanced reading ages, particularly in pre-teens.

But otherwise Jo made normal progress, excellent even.

Anyway, in 2009, when Jo was 7 and starting to really govern her choices about how she spent her leisure time, her parents bought one of the first Amazon Kindles to be released outside of the U.S. They bought it out of curiosity, for her mother to access documents for train journeys, but also for her father to start to read a little more.

And he did read more, aligning himself with the frequent reports of readers who switch to digital devices substantially increasing the amount and complexity of the material that they're reading, and also with several recent demonstrations that readers with dyslexia or other reading difficulties can greatly benefit from being able to control typeface, font size, and letter spacing.

For Jo's father, as with the mother of a friend of mine, books also felt less intimidating when they were no longer tied to paper, something which maybe contradicts the comfortable image of books that many, maybe most of us probably have. For some, the printed book as a form still holds too much weight, representing a school system which, at a time when it was still hugely under-diagnosing dyslexia for instance, had told them that they were stupid.

The codex book has become a totem of intellectual life, and for many readers, adults and children, being able to escape that mythic weight actually opens up more possibilities for engagement.

Anyway, reading, seemingly paradoxically, became normalised in Jo's house by the removal of the last few books, and she clamoured to use this electronic device which, for whatever reason, maybe just a common connection with her father, resonated with her.

When she was 9, in the summer of 2010, she received a new third generation Kindle of her own, and as a family they later got an iPad, a device which they all use. Again, this is typical, iPads predominantly tend to be shared devices in family homes.

Most any book Jo would typically have gotten out of the school library she is able to get at home. Her parents are more than happy to buy her ebooks, and even better, as she gets older and begins to read classics for school, most of which are freely available, the family share a Kindle account, so the books are available on all of their devices, which has started to encourage Jo's mother to read more as well.

My point here is that Jo, in her bookless house, isn't strange, or deprived; in fact she lives in a supportive and presumably at least fairly affluent family, given that she's grown up around three different e-reading devices.

But Jo doesn't read books. Even though she's become a voracious reader she never handles paper.

Jo is turning 13. In 5 years time she'll be in my class, studying English, having got the highest A-Level grades, and yet also having an experience of what “book” means which completely differs from everything I knew while I was growing up, everything that defines the single pursuit that I've probably spent more of my life doing than any other.

She'll be in my class and I'm absolutely terrified of failing her, of not being able to talk about the different ways in which she looks at a different world.

I wonder whether English Studies can be complacent, at times, about its duty of relevance. I know that many English staff and students, myself included, have sometimes taken it as a badge of honour that what we do isn't useful in some quantifiable sense, and I still adore the fact that you can't measure in pounds and pence the effect that studying art and culture have on a society.

But this means that we do have to be able to defend ourselves - the Humanities have become increasingly lax at articulating exactly what they offer, and this resulted, during the swathes of swingeing cuts that we've seen to higher education budgets over the last few years, in a litany of public statements of why the Humanities still matter at a time where the Sciences, Technology, Engineering, and Mathematics can more readily articulate their worth to the State.

I guess we do ourselves no favours in that one of the main arguments for our necessity is that we analyse the language, discourse, and effects of power, of hegemony, of ideology - we've long questioned the sorts of policies which would attack us; we've often revelled in being heretical; and we like asking difficult questions of people in charge.

But I think that we do need to be able to have the conversation about keeping English Studies relevant in the 21st century without being accused of philistinism.

Can we be relevant and stay committed to understanding great and niche works, while investigating past influence, and critiquing forces which use the word “relevance” to mean “normative” or “appropriate for late capitalism”?

This year I began a course on Digital and Cyberculture Studies to start trying to consider what courses in English could start to include in order to begin to offer this kind of relevance without needless complicity to some short term agenda.

Many of you will have heard of the Digital Humanities, an increasingly important field, particularly in the U.S., which roughly deals with the use of Computer Science in Humanities work, whether that be building tools and databases for research; using the kind of “big data” analysis that can come from searching digitised corpora; or working out new approaches to reading “born-digital” texts, texts written on computers to be read from one screen or another.
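To give a concrete, if deliberately tiny, sense of what that corpus-searching work can look like, here's a toy sketch in Python; the folder name and the words being counted are my own illustrative assumptions, not any particular project's method, and real corpus analysis scales this basic idea up to millions of digitised pages.

```python
from collections import Counter
from pathlib import Path
import re

# Hypothetical words of interest; a real study would choose these carefully.
TARGET_WORDS = {"machine", "nature", "book"}

def word_counts(text):
    """Lower-case the text, split it into simple word tokens, and count them."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

results = {}
# Assumes a folder called "corpus/" full of plain-text files from a digitised collection.
for path in sorted(Path("corpus").glob("*.txt")):
    counts = word_counts(path.read_text(encoding="utf-8", errors="ignore"))
    results[path.name] = {word: counts[word] for word in TARGET_WORDS}

for name, freqs in results.items():
    print(name, freqs)
```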

The emphasis on getting involved with programming is often held up, particularly in the States, as being a large part of what the Digital Humanities are about, and the phrase “less yack, more hack” has become something of a rallying cry - stop theorising and get practising.

But there has also been a move towards what some have termed “Big Tent Digital Humanities,” an inclusive approach which doesn't seek to exclude those who don't program from discussions, but to also include work from those researchers who study the cultural effects of digital technology, the ways in which people are actually reading electronic texts, or maybe how one even writes for these environments without extensive technical skill.

I deliberately didn't name the course I wanted to teach “Digital Humanities” because I wanted to escape or problematise this necessity to program - I wanted to find a way that we could talk about digitisation in this Big Tent fashion.

But maybe there's something to keeping the term Digital Humanities specifically for those who are able or who want to create as part of their research methodology. Maybe we need a name which is distinctive for those who want to bring traditional Humanities skills to bear on new things, and as such I opted for Digital and Cyberculture Studies.

Not coincidentally this allowed me to actually teach the course to third years who had wildly varying expertise in content creation.

As I've already said, the whole point of the course was to ask “how do we talk about digital things? How do we work out what's actually new?” Just to give a quick sense of the course we had weeks based around:
  • Defining the Digital Humanities.
  • Asking “What is Electronic Writing?”
  • Reading Videogames and Hard Drives.
  • Looking at How Authors Have Responded to Digitisation.
  • Exploring Politics Online and Networked Communities.
  • Amateur Versus Expert Content Creation and the Effects of Access.
  • The Threat/“Threat” of Digital Technology.
  • Neuropsychology and Interfaces.
  • Sex Online and Subcultures.
We also have two class blogs, one for notes taken from the weekly student presentations on these topics (http://dandcs.wordpress.com/), and one for the music that we were listening to during the course (http://dandcsm.blogspot.co.uk/), and we also have a Twitter hashtag (#DandCS), which I'm really pleased is still being used.

For the assessments there was the standard presentation, an essay, and a longer essay, but the presentation and the first essay had to be based around something that each student had built.

In recognition of the importance placed on building in the Digital Humanities, even though very few of the students could program, they were still required to create something which had a computer component at some stage, and to use that creation as the basis for thinking through a particular idea.

So stories were written in social media, websites, and interactive fiction; centuries-spanning love letters were hyperlinked together by theme; music was written on mobile phones and out of poetry; memes were released, Wikipedia entries attacked, and blogs begun; a novel about space and place was mapped using Google Maps; a student house was wired for sound and video and became somewhere between a panopticon and a publicity stunt; and straight and gay dating profiles were started and analysed.

Standing back from all of these fantastic projects, things I could never have predicted when I set the task, they resemble the psyche of some collective online unconscious: these were at least some of the things that fascinate and concern a generation of English students, and yet they previously had little outlet to think about them critically within their degree.

This seems odd if only because, by volume at any rate, the internet is the most significant form of written information of our age, and by some margin.

One of the interesting things about digitisation is that older standards of quality - in many cases set by a privatised art world rather than by some common consensus or measured use or psychic value - don't really reflect the impact that something will have online.

We can debate, and certainly should, to what extent this change in perception of quality is a good thing, and in particular in what special or common cases this might be so, but it gets to the heart of what's new about digital access I think - often what is most powerful is not what is best or most beautiful, but what is most seen.

Freud's theories of the unconscious, perspective painting, art and avant garde cinema and photography, Modernist poetry and novels, each of these things affected the culture around them in a way disproportionate to their actual audience, but this doesn't happen online.

Everything's so easy to see and access that if you don't see it then it might as well not exist. The vast majority of people aren't talking about things they haven't seen or couldn't see in two minutes time.

So these are some of the things that we might want to start having conversations about. And not just in specialist courses, they really should only be the start I think.

The Digital Humanities might well remain as a key and distinct field uniting programming and Humanities initiatives, but Digital and Cyberculture Studies should probably, after we learn its lessons from our students, just become part of the landscape of English Studies, part of a discipline which has, for a long time, wildly spanned genre and medium and approach in trying to understand the effects of written and visual work.

I spoke at the beginning about using the word “technology” to describe those things which change what we might be able to conceive of.

So I'd like to look at a couple of technologies which might be of interest to a Digital and Cyberculture Studies, and to show that what makes them worth studying is what's new about them; even though we already have established strategies for discussing them, even though they might seem familiar in kind to what we already know, it is the particularities of what they change in our conception that makes them new.

Let's start with cheap visual media. I'd like to show you two videos of people dancing.
The first is from a stage performance in Japan with a team of dancers and some amazing triggered lighting, and I want, in particular, for you to look out for a couple of occasions where those lights are turned on in a kind of ripple through the performers.


Now, this piece is a digital product in a couple of ways. Most obviously the programming of the lights and the wireless connection which is able to trigger them is a particular product of this moment, it simply couldn't have been done even 10 years ago.

Then there's also a clear influence from the costumes in the film TRON, a cult classic from the early 80s with a recently made sequel.

The original movie was predominantly set inside a computer simulation, and was one of the first films to extensively use computer generated sets.

Ken Perlin won an Academy Award for Technical Achievement for the creation of “Perlin noise,” a pseudo-random noise generator developed out of his work on the movie, which enables the digitisation of seemingly natural textures in computer graphics.
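For anyone curious about how a few lines of maths can fake something "natural," here's a minimal sketch of gradient ("Perlin-style") noise in Python with NumPy. It is not Perlin's published implementation - it skips the permutation table and the stacking of multiple octaves - and the function and parameter names are my own illustrative choices.

```python
import numpy as np

def fade(t):
    # Perlin's fade curve (6t^5 - 15t^4 + 10t^3) smooths the interpolation
    # so that the underlying lattice lines don't show in the final texture.
    return t * t * t * (t * (t * 6 - 15) + 10)

def gradient_noise(width, height, cells, seed=0):
    """Return a (height, width) array of smooth pseudo-random noise.

    Each corner of a cells-by-cells lattice gets a random unit gradient;
    every pixel interpolates the dot products of those gradients with its
    offset from the four surrounding corners.
    """
    rng = np.random.default_rng(seed)
    angles = rng.uniform(0, 2 * np.pi, (cells + 1, cells + 1))
    grads = np.stack([np.cos(angles), np.sin(angles)], axis=-1)

    ys, xs = np.mgrid[0:height, 0:width]
    fx = xs / width * cells   # pixel positions in lattice coordinates
    fy = ys / height * cells
    x0, y0 = fx.astype(int), fy.astype(int)
    tx, ty = fx - x0, fy - y0

    def corner(ix, iy, ox, oy):
        # Dot product of the corner's gradient with the pixel's offset from it.
        g = grads[iy, ix]
        return g[..., 0] * ox + g[..., 1] * oy

    n00 = corner(x0,     y0,     tx,     ty)
    n10 = corner(x0 + 1, y0,     tx - 1, ty)
    n01 = corner(x0,     y0 + 1, tx,     ty - 1)
    n11 = corner(x0 + 1, y0 + 1, tx - 1, ty - 1)

    u, v = fade(tx), fade(ty)
    top = n00 * (1 - u) + n10 * u
    bottom = n01 * (1 - u) + n11 * u
    return top * (1 - v) + bottom * v

# Values hover around zero; rescaled to 0-255 the result looks like soft
# clouds or marble, the kind of "natural" texture the technique was built to fake.
texture = gradient_noise(256, 256, cells=8)
```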

But, besides TRON, there's also the apparent influence of digital culture and film editing on the choreography.

This is a dance sequence which not only couldn’t have been made before, but couldn’t have been conceived of in the effects it deploys, because those effects come from the cultural baggage of 40 years of digitally tampered-with visual imagery.

When the dancers fight like videogame characters towards the end of the clip I showed, with a punch sending them reeling through the stages of a staggered fall, each image slightly overlapping the last in a series of captured stills, we know what this is referencing, even though it's not one thing.


The overlaying of images in this fashion has become a staple of sports photography and some particularly kinetic advertising campaigns. The technique is simply called “sequence photography,” and it's been around for a long time, but its frequent use, and particularly its use in moving images, is much more recent.
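For the mechanics, the effect can be approximated very simply. This is a toy sketch in Python using Pillow and NumPy, assuming a handful of same-sized stills exported from a video (the filenames here are hypothetical): a per-pixel “lighten” blend keeps the brightest value at each position, which is what leaves a bright moving subject trailing its overlapping poses across a darker background.

```python
import numpy as np
from PIL import Image

# Hypothetical filenames: in practice these would be same-sized stills
# exported from a video of a subject moving across a fairly static scene.
frame_files = ["frame_01.png", "frame_02.png", "frame_03.png", "frame_04.png"]

frames = [np.asarray(Image.open(f).convert("RGB"), dtype=np.uint8)
          for f in frame_files]

# A per-pixel "lighten" blend keeps the brightest value at each position,
# so a bright subject leaves a trail of overlapping poses in one composite.
composite = np.maximum.reduce(frames)

Image.fromarray(composite).save("sequence_composite.png")
```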




This is just one of the visual effects that these dancers are drawing on, but who knows how consciously? Sequence photography just became part of our visual grammar, a wholly unnatural way of looking at the world that's been made natural.




Ok, so here's a second video of a dancer called Nonstop:


Again, this is a digital cultural product in several ways.

Most obviously, this time, is that it's meant to be a YouTube video, and one that looks pretty good, taken on an affordable home camera. It's a clip that has been watched over 1.5 million times, and this is one of Nonstop's least-watched videos; his latest video has been up 12 days and has 3.5 million views.

So this production touches millions of people, and there's something new in that reach, and what it might inspire in others.

But I'd like to focus on how, again, Nonstop exploits a particular visual grammar. There are the popping and locking movements here that come from early breakdancing that he's too young to have seen originally, that have been passed down through other dancers, but also through VHS and DVD performances.

And there's also the clear influence of slow motion video techniques - the effect of seeing something slowed down without technology is what gives this performance some of its uncanny edge; we wonder if it's been doctored, and again we can see a change in conception of what the body in motion can look like.

A history of things being slowed down in order to amaze us haunts both of these videos.

The most significant touchstone in this regard, something that re-energised what could be shown on screen by taking time to observe, was the original Matrix movie's use of “bullet time.”

Here's a quick clip of it in action:


At the end of the video, first we see Neo shoot at the agent, who moves, to us, the audience, impossibly quickly, and it looks ugly. But then we're shown the reverse shot, literally, of the agent shooting at Neo, and we suddenly get to enter Neo's perceptual experience of time, and it's beautiful. We're also given this god-like position of being able to move around him, not quite stopping time, but intently observing: a whole new way of looking at the world, like turning something round in our hands to look at how its facets glint.

This way of looking with digital film found its way into everything for a few years, and we still feel its effects in those dances, and in a huge variety of other visual media. It's become part of the landscape of our expectation.

Of course The Matrix isn't the only thing influencing the movements in the dances; it's just part of a whole culture of visual cues that form a collective digital unconscious, an unconscious that is only partly ours, as in only partly that of the people in this room, but one that to someone like Jo has been a part of her whole life - YouTube began when she was 5; The Matrix came out the year before she was born.

And since that makes me feel very old, and I want to take a few of you down with me: Jurassic Park is now 20 years old, and my and Sian's favourite, The Little Mermaid, was made closer to the moon landing than to the present day, which is just terrifying.

We've all here lived in a time of extensive visual media, but anyone born around the turn of the millennium has always lived in a world where that culture could be played with and shared, and where influence moves around at a rate of millions of hits in a few days.

This is new - the content is much the same, but the effects are different.


There has been an attempt over the last 18 months or so to describe a “New Aesthetic” in the visual arts, one centred around digitisation and the intrusion of computing into visible real-world experience.

Here are some examples: http://new-aesthetic.tumblr.com/

Note the use of pixels, glitches, wireframes, and computer-readable rather than human-readable symbology.

This is a New Aesthetic of things we either can't read as humans, or things that we shouldn't see. These pixels and glitches also remind me of the flickering we saw in the first dance video with the illuminated costumes, the costumes that flickered in and out of life because we recognise such flickering as part of our common visual experience - it's how those cartoon shapes should come in and out of existence somehow.




As Cézanne's paintings of Mont Sainte-Victoire represented a new way of capturing subjective experience - increasingly trying to capture the movements of his head as he painted, his intimate personal experience of colour in that moment, even his eyes' saccades, the almost imperceptible side-to-side dance of the eyes as we build our seen-world - so we see a New Aesthetic emerging which brings out the almost-imperceptible influences of our cyberculture, a New Aesthetic which I would argue has also made it into dance.











And, again, this isn't new. All of this is similar to Duchamp's “Nude Descending a Staircase”.












An image which can be hard to process until we've seen examples of stroboscopic photography, similar to sequence photography, and here used in a visual pun with Duchamp emulating his painting in a work entitled “Marcel Duchamp Descending a Staircase.”

Duchamp wanted to capture another way of looking at the world.

But there's something terrifically nostalgic about all of this, a real awareness that our visual experience - our sensory experience in general, that bodily stuff that's meant to avoid influence - is just as shaped by our past as our thoughts and speech and sense of self.

All of these seemingly new ways of looking at the world are really the coming to consciousness of the ways that we've already been warily looking, but that have now become normalised.

Maybe we can become expert observers in this mode - maybe these sorts of things are what emerge when certain visual tools become so normal that they function as technologies, when they become a part of who we are rather than just a way to convey information.

Lasting experience and expertise changes the ways in which we look and act.

When we become good at something our lines of potential are written differently in the world around us. Anyone who sketches or takes photographs will be familiar with this: those times when you can wander around an environment seeing it in terms of forms and lines, geometric planes and shades and vectors, rather than lived spaces and people.

Skateboarders and free runners report similar effects, seeing in the city landscape lines of travel, spots of danger, jumps that they've never done before but know they can hit first try. And in fact the development of expertise in skating communities with access to digital equipment and shared files is fascinating, and ties into the way we can think of the dancers we just saw.


This, for instance, is a significant trick in the winning run of a vert ramp skate competition in the early 1980s. The skater is about 4 feet off the top of the ramp; he approached the jump facing backwards, and held the board between his legs as he rotated 360 degrees in the air.
And this is a significant trick in 2011's X-Games finals in the same vert event. The skater also approached the ramp with his body facing backwards to the direction of travel, made about 10 feet of clearance from the ramp, and turned 540 degrees in the air whilst kicking his board away from him in a kickflip, rotating it independently of his body, before grabbing it, completing the rest of his own rotation, and placing it back under his feet to skate away.

The incredible progression in skateboarding proficiency at both amateur and professional level over the last 30 years comes in part from the refined boards being used and the ramps being designed, both the products of digital laser cutting, and both in greater and greater demand due to videogames' promotion of the sport in the mid-90s and early 2000s.

But it also comes from magazines' use of sequence photography to show in incredible detail the exact body positions required at each stage of a trick and, maybe most significantly, videos of tricks being easily and cheaply, functionally freely, recorded and shared in vast numbers.

This allows for amateur skaters to have tricks broken down for them, to see the shapes their bodies should be making, and to visualise them ahead of time, and then to record their own performances and play them back to see where they went wrong, a technique which has been used extensively throughout professional sports and also the dance world, again at both professional and amateur levels.

And I wonder how many of Nonstop's movements in his slow-motion dancing came from his being able to see himself from all sides in a video, rather than as a front-on two dimensional shape in a mirror?

Such ubiquitous exchange of visual information also raises expectations of what counts as a basic level of proficiency.

The kickflip - a move where the skater jumps into the air, kicking the board out beneath them so that it rotates 360 degrees around its longitudinal axis - was considered a highly technical trick in the 1970s and even the early 80s. Now, though it's still just as hard to master, it's considered a relatively basic amateur trick.

The drive to achieve the movement comes much earlier in a skater's learning, with very determined young teenagers mastering it in a matter of months before moving on.
In the 70s you'd rarely see the trick performed well; today, if you follow skating, you see it everywhere, at every level.

This has led to skaters being able to conceive of such tricks, and to conceive of them as starting points and not the pinnacle of a career; video moves from a tool for recording action to becoming a technology which changes experience.

A last example of an avenue for Digital and Cyberculture Studies.

John Perry Barlow, a poet, essayist, founding member of the digital rights group the Electronic Frontier Foundation, and former lyricist for The Grateful Dead, recently said that “getting pornography out of the Internet is like getting food colouring out of a swimming pool.”

And there really is no use denying that sex online is a big deal, and will remain part of any responsible Digital and Cyberculture studies work.

Porn is a great example of what makes digital technologies new because we are trained to be acutely aware that it might be having negative effects.

English Studies, and the Humanities more generally, already have a suite of existing ways-in to discussing pornography: sex and gender relations; power dynamics; the imagery deployed; the commodification of sex; changing attitudes towards sexual expression and taboos; etc. etc.

But do these questions get to the heart of what's new about online pornography?

Ease of access; near comprehensive variety; the lowering of all costs in production and distribution; the rise of amateur content; incredibly rapid changes in the structure and standing of the industry - these are the things that are truly novel about modern porn, it's comparatively rare that any newness comes from the content that we might discover.

It's also interesting that what's changing in the publishing models for print, music, photography, and film after the internet are the exact same issues - ease of access; near comprehensive variety; the lowering of all costs in production and distribution; the rise of amateur content; incredibly rapid changes in the structure and standing of these industries.

And, again, the content is very rarely what's exciting or new, though something like the New Aesthetic goes some way to suggesting how that, far more slowly, might also be changing.

Discussing the issues surrounding porn online with the Digital and Cyberculture students, we ended up talking about the psychology of addiction and escalation, about legal issues and the philosophies behind obscenity, about the biology and neurobiology that might play a part in pornography's effects.

All of these things are absolutely in play when pornography is in magazines on a shelf in a newsagent, of course, but the normalising of access drastically changes things, implicating almost everybody within a couple of generations.

One of my students told me about the problems of studying the effects of porn's rapid, if not acceptance, then expectation: A researcher hoping to study exactly these effects needed to find men in their 20s who hadn't seen pornography to act as control subjects. These subjects proved impossible to find.

So, beyond analysing content and comparing against controls, where else might we turn to look at digital pornography's effects?

Here's one answer, I wonder if anyone knows what this is?


I thought it was a tardigrade when I first saw it, a waterbear.

Tardigrades are amazing, by the way; they're the only animal, that we know of, that can survive in the vacuum of space without protection, and we know this because we've shot them out of the atmosphere and then collected them again a few times now.



But this isn't a tardigrade I'm afraid.

This little guy's a pubic louse.

And he probably couldn't survive in space, because there's a rumour he can't even survive here.

A story's been doing the rounds, again, that, presumably due to the massive distribution of mainstream pornography, there has been a rise in what I'll euphemistically call intimate topiary.

And the effect of this has been that the main habitat for the common crab... has disappeared to such an extent that it's in danger of going extinct.

So contemporary pornography might, fascinatingly, have an ecocritical component - an extinction level event born out of our repeated choice of viewing material, and one which raises all sorts of questions about, if nothing else, conservation and who's volunteering for that project.

But then it turns out that this story might be apocryphal, and it's certainly anecdotal, based on records from G.U.M. clinics - records which could be shaped as much by treatment awareness and non-reporting as by a genuine threat to the species.

So here's a nice example of how Humanities scholars may increasingly need to be alert to statistical data like never before.

The internet frequently calls for the analysis of huge numbers of data points in order to even attempt to map its truly interesting effects. It often involves statistics and interpretation.

And yet, if you don't think that that sounds like Humanities work, can you imagine how much we'd love access to similarly extensive data that allowed us to try and map the effects of the introduction of printed books, or sheet music, or the novel, or oral poetry, or even early photography?

My point is simply this: for the Humanities scholar who wants to work on pornography today - and it's vital work that we should absolutely be engaged in, considering a genre that is consumed like absolutely no other on the planet - the boundaries of what needs to be included in that kind of study as evidence, as prompts for what people consider important, for what kinds of effects we need to look out for, are at once dramatically expanded, complexified, and made incredibly difficult to handle; but this expansion also comes at a time when it's incredibly easy to find research in any number of fields and to move towards engaging with experts from those disciplines.

I'll finish with a quick idea about disciplinarity.

When it comes to digitisation, researchers in English need to become more aware that it's an inherently material affair.  Digitisation is all about bodies, objects, and changes in embodiment, and if English Studies wants to be truly engaged, and truly useful in the conversation then we need to face up to the nature of its size and complexity, and this might mean asking if our current strategies are enough by themselves.

Phenomenology, and an awareness of bodily experience, to say nothing of Cognitive Neuroscience, is often missing from debates about digitisation, and this seems strange to me. Voices which deal with how we encounter things in the world, how we are affected by objects in our environment, how we respond to changes in stimuli during language and pattern recognition - these seem like the right voices to respond to the questions which are most important for digitisation in the popular consciousness: What effect do screens have? What can I do with them? Should my kids use this? Is this hurting me? What does this offer? What are we losing?

The answers people give to these questions - the anecdotes stored in blogs, reviews, editorials, and conversations - are a living repository of folk-phenomenological reports of what it takes to acclimatise to new devices, to become experts again.

The digitisation of texts, and the rise of the Digital Humanities, have produced numerous approaches to discussing materiality whilst working with written materials, and represent a genuine demonstration of not only the power of interdisciplinarity, but also the potential for Humanities scholars to move well outside of the previously fixed boundaries of their disciplines and to add immense value not only to their own scholarly background, but also to the new fields that they encounter.

The new databases, languages, artworks, games, tools, and entire disciplines that are emerging out of this work are often the products of synthesis, not just of juxtaposition, and as such they can be better equipped to tackle novel complex problems and narratives.

The Digital Humanities offer a good model, but not the only one.

The way that digital things spiral out away from us whenever we try and pin down their effects should make us realise a little more of how everything is enmeshed, how our disciplines are always an artificial carving up of the world, though one that has enabled an incredible amount of progress.

Robert Sapolsky, writing in an article discussing how Wikipedia functions as a material metaphor, says that:

“in just a few years, a self-correcting, bottom-up system of quality, fundamentally independent of authorities-from-on-high, is breathing down the neck of the Mother of all sources of knowledge [The Encyclopedia Britannica]...It strikes me that there may be a very interesting consequence of this. When you have generations growing up with bottom-up emergence as routine...people are likely to realize that life, too, can have emerged, in all of its adaptive complexity, without some omnipotent being with a game plan” (Robert Sapolsky, “Weirdness of the Crowd”).

Sapolsky suggests here that Wikipedia's life as a self-regulating and emergent system of knowledge makes it easier to conceive of self-regulating and emergent systems; using it and knowing how it works can change the way we look at the world.

And I wonder if studying digitisation in English departments does the same thing. In all of its messiness we learn about the inherent messiness of its effects on our lives, and maybe realise that there's nothing so new about that; we just don't yet have a discipline which can give it a convenient boundary.

That it keeps escaping out from under us, that we don't have the tools to comprehend it, should be taken as a provocation:

In this area we need to do better; we need to establish what our duty is and who it's to; we need to revel a little more in being unsettled; and to try, inspired by our students' online lives, to think a little differently.