- what matters most is how well you walk through the fire - charles bukowski -
A few weeks back I had the opportunity to speak at Exeter's inaugural "Fruni" event (http://www.fruni.org.uk/). Lecturers were voted in to talk about their research to an audience of staff and students from a mixture of disciplines. Below is a transcription of the talk I gave with relevant slides; it starts off with some discussion of cognitive science and phenomenology, but then mostly settles into a discussion of dancing to dubstep, Tron, and skateboarding. The basic argument is about what technology is, and what kind of things in the world might be of interest to the field of Digital and Cyberculture Studies.
“Whatever it Takes to Understand: Why Studying English in a Digital Age Might Mean Not Being Scared of Science (or Philosophy, or Biology, or Neuropsychology, or Art History, or Computers, or Math, or Technology, or...)”
I'm never sure how to begin things like this, so I'll just say some thank yous to you guys: to Imogen for all of her hard work in getting Fruni off the ground; to the other lecturers who've spoken in this series who've set the bar so unimaginably high for this evening; to my parents for being curious enough to know what I'm up to that they thought they'd come along and find out; to friends and colleagues who I now owe a drink to for dragging them out of the house on a Monday evening; to any of my students who made it, my first year theorists in the making, and my third years who've blown me away this year, particularly my partners-in-crime in getting Digital and Cyberculture Studies onto the curriculum; and to everyone else, thank you for taking the time out of your day to come and see what my lecture might have to offer.
I guess it's worth saying upfront that I have to give a different kind of talk to the other lecturers you might have seen on the Fruni series. I can't tell you about my third or fourth book because I'm currently writing my first. And I can't talk about my research without mentioning pedagogy because, whilst I'm a lecturer under generous university naming policy, I'm what used to be called a teaching fellow - I spend more time in the classroom than I do working on my own research, and I've loved that for nearly two years now.
But as I do start to develop my work as a researcher, as I start the next bit of my career, I've found that what I want to write about and what I already have the pleasure of teaching aren't really that different.
So today I simply want to talk about what motivates my work, and that comes under three broad concerns:
- What is technology and what does it do? - and this is the subject of my book.
- Why is reading from a screen so different from reading from paper? - that was the subject of my PhD.
- How might we study Digital and Cybercultural practices in English departments? - and this is what I work on in my teaching.
Everything I want to speak about today orbits around these three fundamental ideas, though it might seem as if we have to pass fairly far from them at times.
Hence my title: “whatever it takes to understand.”
As I've been discussing with my students all year, we need to find a way to take our anecdotes about the new ways in which we find ourselves living our lives because of ubiquitous, mobile, and massively networked computing.
We need to take our stories and work out how we can discuss them with some academic rigour in the hopes that we might be able to work out what's going on, at least for ourselves, and become a part of shaping the things we use and how we use them, rather than simply feeling all of their diverse and often baffling effects.
I work in the English department, but I haven't written about a novel in a long time. I do write about books, but about books as a form, not any particular text by and large.
18 months ago I finished the PhD project that is now the basis for the book I'm in the process of writing, slowly, between classes. The PhD began with a question: why is reading from a screen so different to reading from paper?
Amazon's Kindle had been released during my MA, and Apple's iPad was just about to come out as I started the PhD project and you couldn't go a week without a newspaper proclaiming the end of books, or print, or literature.
Why is reading from a screen so different?
That one question became an investigation of three assertions that we're still hearing again and again: screens aren't natural; screens don't feel right; screens are making us stupid. And I guess that means that there are three assertions always in the background of these discussions: books are natural; books feel right; books make us smarter.
I wanted to know where these very familiar ideas came from, what made them so pervasive. Did they really tell us something about screens, or about our attitudes towards technology more generally?
So I started to ask what technology even meant.
Whilst we might often agree on the objects under discussion - computer: yes, coriander: no - the specifics of why this might be so are vague. What makes a hammer of the same order of objects as an industrial press? Why are they both technologies? And does everyone experience this mysterious parity in identical ways, allowing for a consensus, or does “technology” define items across a range of unrecognised and untheorised responses?
In trying to more rigidly define my own usage of the term I've come to see technology not as a class of objects that manages to find this parity between fire, mobile phones, and the Large Hadron Collider, but instead, at least in part, to consider technology as being the phenomenological experience of a particular kind of expert use.
Phenomenology, roughly, is the study of what makes the experience of something what it is, what makes it particular as a phenomenon, what, if possible, are the elements that are the same for everyone who experiences it, or, at least, why does it appear to an individual as it does?
Think of an apprentice carpenter with a saw, a tool that he doesn't yet know how to use efficiently. He sweats and the teeth catch and the wood splinters; the saw does the job; the piece of wood, eventually, achingly, laboriously, comes in two.
Now think of the expert craftsman with the same tool: the teeth glide, no sweat drips, the wood is cleanly split in no time at all. The saw and the expert carpenter become one machine for splitting wood, and that, to me, is maybe how we should define the distinction between a tool and a technology - a technology is something that we take onboard, something that we join with to do things better, something that changes the way that we can conceive of the world and our potential within it.
In thinking about these issues I've become fascinated by the notion of our mental representation of our physical selves, and how that conception might be modified. In a Neuroscience review paper by Angelo Maravita and Atsushi Iriki, “Tools for the Body (Schema),” the authors explore the effects on Japanese macaques of repeatedly using a rake to extend their reach to retrieve food out of arm's length.
Through studying particular kinds of neurons and what they fire in response to, researchers have shown that the monkeys, over time, start to code the rakes as physically being a part of their bodies.
Bimodal neurons are neurons that fire in response to two different kinds of input. In the case of those studied, they fired either when the hand touched something or when something moved into the visual space surrounding the hand.
Over time researchers found that the same bimodal neurons of the macaques using the rakes started to fire in response to visual stimulus around the tool, as if the item had become an extension of the monkey's arm, which, of course, functionally it had.
As far as the macaques' mental image of themselves, as far as their body schema, was concerned, they were a different machine for reaching when they were proficient at working with the tool, and maybe this is a simple example of the technological use I'm trying to describe. Expertise changed the way that they could think of themselves and what they could achieve.
We can imagine similar, if more complex, instances in humans: the way you drive a car or ride a bike without thinking; the way you learn how to play tennis and forget about the racquet and the adjustments you need to make in order to return a serve; the way a saw comes to fit the hand and the arm.
An example more clearly related to the macaques can be found in cases of patients displaying particular kinds of hemispatial neglect.
Neglect patients, typically after some neurological trauma, can't perceive certain regions of space. If they can no longer code the left-hand-side of their vision, for instance, then they might only draw half circles and see them as complete, or only eat the food on the right-hand-side of their plates, or only do half their makeup or brush half their hair, or not see a stain on one side of their clothes.
In some patients far space and near space, as well as space to the left and right, can be affected separately. For example a patient who can't code space that is on their left and far away might be perfectly able to draw a complete circle right in front of them, but not be able to see something to their left that is out of reach.
In these cases, similar to the macaques, some of these patients are able to reach out with a tool, simply a broom handle or some other way of extending their reach, and suddenly it's as if that previously unseen thing far away is coded as being close to them and therefore visible.
They are able to conceive of space differently through their use of a tool which is so simple that they are always already experts; a broom-handle, to an adult, is already a simple technology for reaching.
Maybe a young child with similar neglect issues, but with much poorer motor coordination, would take longer to achieve this same effect - they might have to turn the broom from cumbersome tool to usable technology through experience before those effects kicked in, in the same way as the monkeys had to take time to learn with the unfamiliar rakes.
All of this seems to resonate with Martin Heidegger's notion of “readiness-to-hand,” or, for any philosophers who might jump on this, with the canonical interpretation of the term, perhaps a little simplified, which is read as the melting-away of a tool during use.
The classic and often-repeated example is the use of a hammer. We don't focus on the hammer when we drive a nail, we focus on the work to be done, we focus on the act of hammering.
Heidegger took this as a given, but we might remember our own early carpentry efforts, where the hammer was hard to use, where we focussed on the thing rather than the work while we tried not to hit our fingers.
Like the apprentice carpenter and the saw, a hammer isn't always a technology - its use must be precise, which makes it more complicated than a broom handle or a rake for reaching. It takes a while to code a hammer into our body schema, to stop it being just a tool for driving nails, and maybe Maravita and Iriki's review, or the case of neglect patients, offers a way of grounding this, and Heidegger's phenomenological observation, in Cognitive Neuroscience.
The philosopher Shaun Gallagher calls this “front-loading phenomenology,” the bringing of philosophical concerns about embodied experience into discussion with Cognitive Science through experimental design and the interpretation of results.
In 2010, a research group headed by Dobromir Dotov actually attempted to prove Heidegger's notion of readiness-to-hand experimentally, a perfect example of how there might be a phenomenological influence on scientific work, and I suspect that research like this will become increasingly vital if the Humanities are to actively engage with the Sciences rather than just drawing on them.
We need to ensure that the conversation goes both ways, that findings in multiple fields might be brought together, not just for an attempt at a more holistic understanding of a phenomenon, but also to ensure that the diversity of research methods and values is better known.
I apologise for the rush through these ideas about technology, philosophy, and cognition, but I just wanted to give a sense of how I'm starting to think about this central concern for the book project: Tools get things done, technologies get us to think about what we might do.
Timothy Taylor says something similar in his book The Artificial Ape, a work which sees technological use as what makes us human:
“it is not too much philosophy to say that the emergence of technology was and is intimately connected with the extension of the range of human intentionality. Without a car...I could not have intended to go fishing..., given the distance involved; without stone tool technology, our prehistoric ancestors could not have had the intention to kill big game...[T]he existence of objects, such as saucepans, not just allows actions but suggests them” (Taylor, 152).
Printed books, too, are technologies in this sense. As readers most of us are far closer to craftsmen than we are to apprentices, and as such, books change the way that we look at things, they change what we imagine we might be able to do. Information is accessible, located in a physical container that we can know our place in; there are tactile and visual reminders of what we've covered; it's all so familiar we don't think about it anymore.
But when e-readers first came out they made apprentices of us all again. Everything that we'd learned dropped away and the sweat started to drip as we worked out new routines, new practices, worked out how to bring a writing medium back into expert use again.
Some of us have done it already, started to read on Kindles in a technological fashion.
But I'm not one of them. I remain a novice, I just can't get used to it. I use my Kindle to read journal articles and I still feel that it's just a tool for passing on written information, not a technology - it just doesn't change how I look at the world because it doesn't give me a sense of being able to achieve more than I could without it.
So I absolutely understand why people might be aghast at the idea of losing printed books, and yet I'm not nearly so worried as some of the voices in the media.
If there are strong voices of resistance against such a change, and particularly if we ourselves want to be part of that voice, then we need to understand where our concerns come from and how valid they might be; not valid as in whether they should be listened to, they undoubtedly should, but valid as in whether they should be allowed to be the unedited guiding force in questioning the incursion of digital technology into the realm of reading.
We've mythologised printed books, raised them to the level where they seem like the only way of receiving written information, and if not the only way then certainly the best.
This might well be true, but we are yet to prove it, and if it's not the best reading mode then in our resistant voices we might begin to actively restrict people from engaging with a new form for a myth that we unquestioningly perpetuate rather than interrogate.
When people promote the page over the screen and trumpet the transcendent suitability of their favoured medium, the privilege of that position can often be ignored.
The One Laptop Per Child project provides libraries to villages in developing countries through deploying rugged, cheap laptops which are provided via a combination of philanthropy and government investment.
These laptops are hand-cranked and solar-powered so that they can be charged without power sockets.
They have also become the main source of light after sundown in many areas, and reports have shown a happy accident: they can actually start to bring families together around a child's education.
When this initiative is combined with the World Reader project then it becomes even more exciting.
World Reader provides cheap and free ebooks and publishes local content from poor and often unheard communities, so that a true diversity of literary voices and contemporary national and local literatures can emerge and seem like a viable possibility.
Essentially I think it's just important to remember that we need to have incredibly good reasons for saying that these people would necessarily be better off with print, at least as it currently stands, and in that recognition we might also question our privilege and our assumptions closer to home.
What makes printed books so good?
It's easy to imagine that books as we know them aren't going anywhere, and this is another example of both privilege and myopia. Whenever I fall into the trap of thinking that things are much the same I think of a particular person.
For the sake of calling them something I've called them Jo, just as easily Joseph as Joanna, but I'll make my point with Jo as a girl, because it's slightly more shocking, and that in itself causes some introspection - I know that it's a more effective story when she's a she, but that knowledge comes out of a number of probably fairly unproductive cultural assumptions.
It's a story I've told a couple of you before, but Jo is about 12 years old, soon to turn 13.
Her parents weren't really big on reading. There were always magazines and cookbooks and a dictionary, but otherwise there weren't really books in the house, her mother mostly reading papers for work, and her father, maybe one of the UK's 5 million adults who experience a pronounced difficulty with literacy, simply not deriving much pleasure from the act of reading.
The National Literacy Trust released survey data in 2011 suggesting that 1 in 3 children in the UK lived in households which didn't own books, so Jo's situation, in her early years, is hardly rare. I think we can often forget how possible a relatively bookless house actually is.
And I don't want to suggest that we don't have real issues in this country when it comes to valuing reading and writing skills, and ensuring that children have appropriate access, but, in this instance, maybe Jo's story isn't a tragic one.
Her school, of course, provided her with books, but until she was 6 she never read for pleasure, and was very rarely read to at home. She actually learned to read a little late because of the lack of books in the house, and several longitudinal studies have shown that the presence of books in the home is one of the key early indicators of successful reading and attainment of advanced reading ages, particularly in pre-teens.
But otherwise Jo made normal progress, excellent even.
Anyway, in 2009, when Jo was 7 and starting to really govern her choices about how she spent her leisure time, her parents bought one of the first Amazon Kindles to be released outside of the U.S. They bought it out of curiosity, for her mother to access documents for train journeys, but also for her father to start to read a little more.
And he did read more, aligning himself with the frequent reports of readers who switch to digital devices substantially increasing the amount and complexity of the material that they're reading, and also with several recent demonstrations that readers with dyslexia or other reading difficulties can greatly benefit from being able to control typeface, font size, and letter spacing.
For Jo's father, as with the mother of a friend of mine, books also felt less intimidating when they were no longer tied to paper, something which maybe contradicts the comfortable image of books that many, maybe most of us probably have. For some, the printed book as a form still holds too much weight, representing a school system which - still hugely under-diagnosing dyslexia, for instance - had told them that they were stupid.
The codex book has become a totem of intellectual life, and for many readers, adults and children, being able to escape that mythic weight actually opens up more possibilities for engagement.
Anyway, reading, seemingly paradoxically, became normalised in Jo's house by the removal of the last few books, and she clamoured to use this electronic device which, for whatever reason, maybe just a common connection with her father, resonated with her.
When she was 9, in the summer of 2010, she received a new third generation Kindle of her own, and as a family they later got an iPad, a device which they all use. Again, this is typical, iPads predominantly tend to be shared devices in family homes.
Almost any book Jo would typically have borrowed from the school library she is able to get at home. Her parents are more than happy to buy her ebooks, and even better, as she gets older and begins to read classics for school, most of which are freely available. Because the family share a Kindle account, the books are available on all of their devices, which has started to encourage Jo's mother to read more as well.
My point here is that Jo, in her bookless house, isn't strange, or deprived; in fact she lives in a supportive family, and presumably an at least fairly affluent one to own three different ereading devices.
But Jo doesn't read books. Even though she's become a voracious reader she never handles paper.
Jo is turning 13. In 5 years' time she'll be in my class, studying English, having got the highest A-Level grades, and yet also having an experience of what “book” means which completely differs from everything I knew while I was growing up, everything that defines the single pursuit that I've probably spent more of my life doing than any other.
She'll be in my class and I'm absolutely terrified of failing her, of not being able to talk about the different ways in which she looks at a different world.
I wonder whether English Studies can be complacent, at times, about its duty of relevance. I know that many English staff and students, myself included, have sometimes taken it as a badge of honour that what we do isn't useful in some quantifiable sense, and I still adore the fact that you can't measure in pounds and pence the effect that studying art and culture have on a society.
But this means that we do have to be able to defend ourselves - the Humanities have become increasingly lax at articulating exactly what they offer, and this resulted, during the swathes of swingeing cuts that we've seen to higher education budgets over the last few years, in a litany of public statements of why the Humanities still matter at a time where the Sciences, Technology, Engineering, and Mathematics can more readily articulate their worth to the State.
I guess we do ourselves no favours in that one of the main arguments for our necessity is that we analyse the language, discourse, and effects of power, of hegemony, of ideology - we've long questioned the sorts of policies which would attack us; we've often revelled in being heretical; and we like asking difficult questions of people in charge.
But I think that we do need to be able to have the conversation about keeping English Studies relevant in the 21st century without being accused of philistinism.
Can we be relevant and stay committed to understanding great and niche works, while investigating past influence, and critiquing forces which use the word “relevance” to mean “normative” or “appropriate for late capitalism”?
This year I began a course on Digital and Cyberculture Studies to start trying to consider what courses in English could start to include in order to begin to offer this kind of relevance without needless complicity to some short term agenda.
Many of you will have heard of the Digital Humanities, an increasingly important field, particularly in the U.S., which roughly deals with the use of Computer Science in Humanities work, whether that be building tools and databases for research; using the kind of “big data” analysis that can come from searching digitised corpora; or working out new approaches to reading “born-digital” texts, texts written on computers to be read from one screen or another.
The emphasis on getting involved with programming is often held up, particularly in the States, as being a large part of what the Digital Humanities are about, and the phrase “less yack, more hack” has become something of a rallying cry - stop theorising and get practising.
But there has also been a move towards what some have termed “Big Tent Digital Humanities,” an inclusive approach which doesn't exclude those who don't program from discussions, but also draws in work from those researchers who study the cultural effects of digital technology, the ways in which people are actually reading electronic texts, or maybe how one even writes for these environments without extensive technical skill.
I deliberately didn't name the course I wanted to teach “Digital Humanities” because I wanted to escape or problematise this necessity to program - I wanted to find a way that we could talk about digitisation in this Big Tent fashion.
But maybe there's something to keeping the term Digital Humanities specifically for those who are able or who want to create as part of their research methodology. Maybe we need a name which is distinctive for those who want to bring traditional Humanities skills to bear on new things, and as such I opted for Digital and Cyberculture Studies.
Not coincidentally this allowed me to actually teach the course to third years who had wildly varying expertise in content creation.
As I've already said, the whole point of the course was to ask “how do we talk about digital things? How do we work out what's actually new?” Just to give a quick sense of the course we had weeks based around:
- Defining the Digital Humanities.
- Asking “What is Electronic Writing?”
- Reading Videogames and Hard Drives.
- Looking at How Authors Have Responded to Digitisation.
- Exploring Politics Online and Networked Communities.
- Amateur Versus Expert Content Creation and the Effects of Access.
- The Threat/“Threat” of Digital Technology.
- Neuropsychology and Interfaces.
- Sex Online and Subcultures.
We also have two class blogs, one for notes taken from the weekly student presentations on these topics (http://dandcs.wordpress.com/), and one for the music that we were listening to during the course (http://dandcsm.blogspot.co.uk/), and we also have a Twitter hashtag (#DandCS), which I'm really pleased is still being used.
For the assessments there was the standard presentation, an essay, and a longer essay, but the presentation and the first essay had to be based around something that each student had built.
In recognition of the importance placed on building in the Digital Humanities, even though very few of the students could program, they were still required to create something which had a computer component at some stage, and to use that creation as the basis for thinking through a particular idea.
So stories were written in social media, websites, and interactive fiction; centuries-spanning love letters were hyperlinked together by theme; music was written on mobile phones and out of poetry; memes were released, Wikipedia entries attacked, and blogs begun; a novel about space and place was mapped using Google Maps; a student house was wired for sound and video and became somewhere between a panopticon and a publicity stunt; and straight and gay dating profiles were started and analysed.
Standing back from all of these fantastic projects, things I could never have predicted when I set the task, they resemble the psyche of some collective online unconscious: these were at least some of the things that fascinate and concern a generation of English students, and yet they previously had little outlet to think about them critically within their degree.
This seems odd if only because, by volume at any rate, the internet is the most significant form of written information of our age, and by some margin.
One of the interesting things about digitisation is that older standards of quality - in many cases set by a privatised art world rather than by common consensus, measured use, or psychic value - don't really reflect the impact that something will have online.
We can debate, and certainly should, to what extent this change in perception of quality is a good thing, and in particular in what special or common cases this might be so, but it gets to the heart of what's new about digital access I think - often what is most powerful is not what is best or most beautiful, but what is most seen.
Freud's theories of the unconscious, perspective painting, art and avant garde cinema and photography, Modernist poetry and novels, each of these things affected the culture around them in a way disproportionate to their actual audience, but this doesn't happen online.
Everything's so easy to see and access that if you don't see it then it might as well not exist. The vast majority of people aren't talking about things they haven't seen or couldn't see in two minutes' time.
So these are some of the things that we might want to start having conversations about. And not just in specialist courses, they really should only be the start I think.
The Digital Humanities might well remain as a key and distinct field uniting programming and Humanities initiatives, but Digital and Cyberculture Studies should probably, after we learn its lessons from our students, just become part of the landscape of English Studies, part of a discipline which has, for a long time, wildly spanned genre and medium and approach in trying to understand the effects of written and visual work.
I spoke at the beginning about using the word “technology” to describe those things which change what we might be able to conceive of.
So I'd like to look at a couple of technologies which might be of interest to a Digital and Cyberculture Studies, and to show that what makes them worth studying is what's new about them; even though we already have established strategies for discussing them, even though they might seem familiar in kind to what we already know, it is the particularities of what they change in our conception that makes them new.
Let's start with cheap visual media. I'd like to show you two videos of people dancing.
The first is from a stage performance in Japan with a team of dancers and some amazing triggered lighting, and I want, in particular, for you to look out for a couple of occasions where those lights are turned on in a kind of ripple through the performers.
Now, this piece is a digital product in a couple of ways. Most obviously the programming of the lights and the wireless connection which is able to trigger them is a particular product of this moment, it simply couldn't have been done even 10 years ago.
Then there's also a clear influence from the costumes in the film TRON, a cult classic from the early 80s with a recently made sequel.
The original movie was predominantly set inside a computer simulation, and was one of the first films to extensively use computer generated sets.
Ken Perlin, who worked on the film's computer imagery, later won a technical achievement Oscar for his creation of “Perlin noise,” a pseudo-random noise generator for computer graphics enabling the digitisation of seemingly natural textures.
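Perlin's idea is easiest to see in one dimension: assign a pseudo-random gradient to each integer lattice point, then smoothly interpolate between the contributions of the two nearest points. What follows is a minimal, single-octave illustrative sketch, not Perlin's actual implementation (which works in three dimensions and uses a fixed permutation table):

```python
import math
import random

random.seed(0)
# A pseudo-random gradient (slope) at each integer lattice point.
_gradients = [random.uniform(-1.0, 1.0) for _ in range(256)]

def fade(t):
    # Perlin's fade curve, 6t^5 - 15t^4 + 10t^3: eases interpolation so the
    # result has no visible seams at the lattice points.
    return t * t * t * (t * (t * 6 - 15) + 10)

def noise1d(x):
    """Smooth, deterministic 1D gradient noise in roughly [-1, 1]."""
    i0 = math.floor(x) % 256        # left lattice point
    i1 = (i0 + 1) % 256             # right lattice point
    t = x - math.floor(x)           # position between them
    g0 = _gradients[i0] * t         # contribution of the left gradient
    g1 = _gradients[i1] * (t - 1)   # contribution of the right gradient
    f = fade(t)
    return g0 + f * (g1 - g0)       # smooth interpolation
```

The key property, and the reason it reads as "natural" texture, is that the output varies smoothly but unpredictably; sampling `noise1d` along a line and using the values to perturb colour or displacement is enough to roughen a perfectly clean computer-generated surface.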
But, besides TRON, there's also the apparent influence of digital culture and film editing on the choreography.
This is a dance sequence which not only couldn't have been made before, but couldn't even have been conceived of in the effects it deploys, because those effects come from the cultural baggage of 40 years of digitally tampered-with visual imagery.
When the dancers fight like videogame characters towards the end of the clip I showed, with a punch sending them reeling through the stages of a staggered fall, each image slightly overlapping the last in a series of captured stills, we know what this is referencing, even though it's not one thing.
The overlaying of images in this fashion has become a staple of sports photography and some particularly kinetic advertising campaigns. The technique is simply called “sequence photography,” and it's been around for a long time, but its frequent use, and particularly its use in moving images, is much more recent.
This is just one of the visual effects that these dancers are drawing on, but who knows how consciously? Sequence photography just became part of our visual grammar, a wholly unnatural way of looking at the world that's been made natural.
Ok, so here's a second video of a dancer called Nonstop:
Again, this is a digital cultural product in several ways.
Most obviously, this time, it's meant to be a YouTube video, and one that looks pretty good for being taken on an affordable home camera. It's a clip that has been watched over 1.5 million times, and this is one of Nonstop's least-watched videos; his latest video has been up 12 days and has 3.5 million views.
So this production touches millions of people, and there's something new in that reach, and what it might inspire in others.
But I'd like to focus on how, again, Nonstop exploits a particular visual grammar. There are the popping and locking movements here that come from early breakdancing, which he's too young to have seen originally, that have been passed down through other dancers, but also through VHS and DVD performances.
And there's also the clear influence of slow motion video techniques - the effect of seeing something slowed down without technology is what gives this performance some of its uncanny edge; we wonder if it's been doctored, and again we can see a change in conception of what the body in motion can look like.
A history of things being slowed down in order to amaze us haunts both of these videos.
The most significant touchstone in this regard, something that re-energised what could be shown on screen by taking time to observe, was the original Matrix movie's use of “bullet time.”
Here's a quick clip of it in action:
At the end of the video, first we see Neo shoot at the agent who moves, to us, the audience, impossibly quickly, and it looks ugly, but then we're shown the reverse shot, literally, of the agent shooting at Neo, and we suddenly get to enter Neo's perceptual experience of time, and it's beautiful, but we also get given this god-like position of being able to move around him, not quite stopping time, but intently observing, a whole new way of looking at the world, like turning something round in our hands to look at how its facets glint.
This way of looking with digital film found its way into everything for a few years, and we still feel its effects in those dances, and in a huge variety of other visual media. It's become part of the landscape of our expectation.
Of course The Matrix isn't the only thing influencing the movements in the dances; it's just part of a whole culture of visual cues that form a collective digital unconscious, and an unconscious that is only partly ours, as in only partly that of the people in this room, but an unconscious that to someone like Jo has been a part of her whole life - YouTube began when she was 5; The Matrix came out the year before she was born.
And as that makes me feel very old and I want to take a few of you down with me, Jurassic Park is now 20 years old, and, mine and Sian's favourite, The Little Mermaid was made closer to the moon landing than it was to the present day, which is just terrifying.
We've all here lived in a time of extensive visual media, but anyone born around the turn of the millennium has always lived in a world where that culture could be played with, shared, and influence moves around at a rate of millions of hits in a few days.
This is new - the content is much the same, but the effects are different.
There has been an attempt over the last 18 months or so to try and describe a “New Aesthetic” in the visual arts, one centred around digitisation and the intrusion of computing into visible real-world experience.
Here are some examples: http://new-aesthetic.tumblr.com/
Note the use of pixels, glitches, wireframes, and computer-readable rather than human-readable symbology.
This is a New Aesthetic of things we either can't read as humans, or things that we shouldn't see. These pixels and glitches also remind me of the flickering we saw in the first dance video with the illuminated costumes, the costumes that flickered in and out of life because we recognise such flickering as part of our common visual experience - it's how those cartoon shapes should come in and out of existence somehow.
As Cezanne's paintings of Mont Sainte-Victoire represented a new way of capturing subjective experience - increasingly trying to capture the movements of his head as he painted, his intimate personal experience of colour in that moment, even his eyes' saccades, the almost imperceptible side-to-side dance of the eyes as we build our seen-world - so we see a New Aesthetic emerging which brings out the almost-imperceptible influences of our cyberculture, a New Aesthetic which I would argue has also made it into dance.
Here's an image which can be hard to process until we've seen examples of stroboscopic photography, similar to sequence photography, and here used in a visual pun, with Duchamp emulating his own painting in a work entitled “Marcel Duchamp Descending Staircase.”
Duchamp wanted to capture another way of looking at the world.
But there's something terrifically nostalgic about all of this, a real awareness that our visual experience, our sensory experience in general, that bodily stuff that's meant to avoid influence is just as shaped by our past as our thoughts and speech and sense of self.
All of these seemingly new ways of looking at the world are really the coming to consciousness of the ways that we've already been warily looking, but that have now become normalised.
Maybe we can become expert observers in this mode - maybe these sorts of things are what emerge when certain visual tools become so normal that they function as technologies, when they become a part of who we are rather than just a way to convey information.
Lasting experience and expertise changes the ways in which we look and act.
When we become good at something, our lines of potential are written differently in the world around us. Anyone who sketches or takes photographs will be familiar with this: those times when you can wander around an environment seeing it in terms of forms and lines, geometric planes and shades and vectors, rather than lived spaces and people.
Skateboarders and free runners report similar effects, seeing lines of travel, spots of danger, jumps that they've never done before but know they can hit first try in the city landscape, and in fact the development of expertise in skating communities with access to digital equipment and shared files is fascinating, and ties into the way we can think of the dancers we just saw.
This, for instance, is a significant trick from the winning run of a vert ramp skate competition in the early 1980s. The skater is about 4 feet off the top of the ramp; he approached the jump facing backwards and held the board between his legs as he rotated 360 degrees in the air.
And this is a significant trick in 2011's X-Games finals in the same vert event. The skater also approached the ramp with his body facing backwards to the direction of travel, made about 10 feet of clearance from the ramp, and turned 540 degrees in the air whilst kicking his board away from him in a kickflip, rotating it independently of his body, before grabbing it, completing the rest of his own rotation, and placing it back under his feet to skate away.
The incredible progression in skateboarding proficiency at both amateur and professional levels over the last 30 years comes in part from the refined boards being used and the ramps being designed, both products of digital laser cutting, and both in greater and greater demand due to videogames' promotion of the sport in the mid-90s and early 2000s.
But it also comes from magazines' use of sequence photography to show in incredible detail the exact body positions required at each stage of a trick and, maybe most significantly, videos of tricks being easily and cheaply, functionally freely, recorded and shared in vast numbers.
This allows for amateur skaters to have tricks broken down for them, to see the shapes their bodies should be making, and to visualise them ahead of time, and then to record their own performances and play them back to see where they went wrong, a technique which has been used extensively throughout professional sports and also the dance world, again at both professional and amateur levels.
And I wonder how many of Nonstop's movements in his slow-motion dancing came from his being able to see himself from all sides in a video, rather than as a front-on two dimensional shape in a mirror?
Such ubiquitous exchange of visual information also raises the expectations of what actually is a basic level of proficiency.
The kickflip - a move where the skater jumps into the air, kicking the board out beneath them so that it rotates 360 degrees around its longitudinal axis - was considered a highly technical trick in the 1970s and even the early 80s. But now, though it's still just as hard to master, it's considered a relatively basic amateur trick.
The drive to achieve the movement comes much earlier in a skater's learning, with very determined young teenagers mastering it in a matter of months before moving on.
In the 70s you'd rarely see the trick performed well; today, if you follow skating, you see it everywhere, at every level.
This has led to skaters being able to conceive of such tricks, and to conceive of them as starting points and not the pinnacle of a career; video moves from a tool for recording action to becoming a technology which changes experience.
A last example of an avenue for Digital and Cyberculture Studies.
John Perry Barlow, a poet, essayist, founding member of the digital rights group the Electronic Frontier Foundation, and former lyricist for The Grateful Dead, recently said that “getting pornography out of the Internet is like getting food colouring out of a swimming pool.”
And there really is no use denying that sex online is a big deal, and will remain part of any responsible Digital and Cyberculture Studies work.
Porn is a great example of what makes digital technologies new because we are trained to be acutely aware that it might be having negative effects.
English Studies, and the Humanities more generally, already have a suite of existing ways-in to discussing pornography: sex and gender relations; power dynamics; the imagery deployed; the commodification of sex; changing attitudes towards sexual expression and taboos; etc. etc.
But do these questions get to the heart of what's new about online pornography?
Ease of access; near comprehensive variety; the lowering of all costs in production and distribution; the rise of amateur content; incredibly rapid changes in the structure and standing of the industry - these are the things that are truly novel about modern porn, it's comparatively rare that any newness comes from the content that we might discover.
It's also interesting that what's changing in the publishing models for print, music, photography, and film after the internet are the exact same issues - ease of access; near comprehensive variety; the lowering of all costs in production and distribution; the rise of amateur content; incredibly rapid changes in the structure and standing of these industries.
And, again, the content is very rarely what's exciting or new, though something like the New Aesthetic goes some way to suggesting how that, far more slowly, might also be changing.
Discussing the issues surrounding porn online with the Digital and Cyberculture students, we ended up talking about the psychology of addiction and escalation, about legal issues and the philosophies behind obscenity, about the biology and neurobiology that might play a part in pornography's effects.
All of these things are absolutely in play when pornography is in magazines on a shelf in a newsagent, of course, but the normalising of access drastically changes things, implicating almost everybody within a couple of generations.
One of my students told me about the problems of studying the effects of porn's rapid, if not acceptance, then expectation: A researcher hoping to study exactly these effects needed to find men in their 20s who hadn't seen pornography to act as control subjects. These subjects proved impossible to find.
So, beyond analysing content and comparing against controls, where else might we turn to look at digital pornography's effects?
Here's one answer, I wonder if anyone knows what this is?
I thought it was a tardigrade when I first saw it, a waterbear.
Tardigrades are amazing, by the way: they're the only animal, that we know of, that can survive in the vacuum of space without protection, and we know this because we've shot them out of the atmosphere and then collected them again a few times now.
This little guy's a pubic louse.
And he probably couldn't survive in space because there's rumour he can't even survive here.
A story's been doing the rounds, again, that, presumably due to the massive distribution of mainstream pornography, there has been a rise in what I'll euphemistically call intimate topiary.
And the effect of this has been that the main habitat for the common crab... has disappeared to such an extent that it's in danger of going extinct.
So contemporary pornography might, fascinatingly, have an ecocritical component - an extinction level event born out of our repeated choice of viewing material, and one which raises all sorts of questions about, if nothing else, conservation and who's volunteering for that project.
But then it turns out that this story might be apocryphal, and it's certainly anecdotal, based on records from G.U.M. clinics, records which could be shaped as much by treatment awareness and non-reporting as by a genuine threat to a species.
So here's a nice example of how Humanities scholars may need to be alert to statistical data like never before.
The internet frequently calls for the analysis of huge numbers of data points in order to even attempt to map its truly interesting effects. It often involves statistics and interpretation.
And yet, if you don't think that that sounds like Humanities work, can you imagine how much we'd love to access similarly extensive data that allowed us to try and map the effects of the introduction of printed books, or sheet music, or the novel, or oral poetry, or even early photography?
My point is simply this: for the Humanities scholar who wants to work on pornography today - and it's vital work that we should absolutely be engaged in, considering a genre that is consumed like absolutely no other on the planet - the boundaries of what needs to be included in those kinds of study as evidence, as prompts for what people consider important, for what kinds of effects we need to look out for, are a) at once dramatically expanded, complexified, and made incredibly difficult, but b) this also comes at a time when it's incredibly easy to find research in any number of fields and to move towards engaging with experts from those disciplines.
I'll finish with a quick idea about disciplinarity.
When it comes to digitisation, researchers in English need to become more aware that it's an inherently material affair. Digitisation is all about bodies, objects, and changes in embodiment, and if English Studies wants to be truly engaged, and truly useful in the conversation then we need to face up to the nature of its size and complexity, and this might mean asking if our current strategies are enough by themselves.
Phenomenology, and an awareness of bodily experience, much less Cognitive Neuroscience, is often missing from debates about digitisation, and this seems strange to me as voices which deal with how we encounter things in the world, how we are affected by objects in our environment, how we respond to changes in stimuli during language and pattern recognition, these seem like the right voices to respond to the questions which are most important for digitisation in the popular consciousness: What effect do screens have? What can I do with them? Should my kids use this? Is this hurting me? What does this offer? What are we losing?
Such anecdotes, stored in blogs, reviews, editorials, and conversations, are a living repository of folk-phenomenological reports of what it takes to acclimatise to new devices, to become experts again.
The digitisation of texts, and the rise of the Digital Humanities, have produced numerous approaches to discussing materiality whilst working with written materials, and represent a genuine demonstration of not only the power of interdisciplinarity, but also the potential for Humanities scholars to move well outside of the previously fixed boundaries of their disciplines and to add immense value not only to their own scholarly background, but also to the new fields that they encounter.
The new databases, languages, artworks, games, tools, and entire disciplines that are emerging out of this work are often the products of synthesis, not just of juxtaposition, and as such they can be better equipped to tackle novel complex problems and narratives.
The Digital Humanities offer a good model, but not the only one.
The way that digital things spiral out away from us whenever we try and pin down their effects should make us realise a little more of how everything is enmeshed, how our disciplines are always an artificial carving up of the world, though one that has enabled an incredible amount of progress.
Robert Sapolsky, writing in an article discussing how Wikipedia functions as a material metaphor, says that:
“in just a few years, a self-correcting, bottom-up system of quality, fundamentally independent of authorities-from-on-high, is breathing down the neck of the Mother of all sources of knowledge [The Encyclopedia Britannica]...It strikes me that there may be a very interesting consequence of this. When you have generations growing up with bottom-up emergence as routine...people are likely to realize that life, too, can have emerged, in all of its adaptive complexity, without some omnipotent being with a game plan” (Robert Sapolsky, “Weirdness of the Crowd”).
Sapolsky suggests here that Wikipedia's life as a self-regulating and emergent system of knowledge makes it easier to conceive of self-regulating and emergent systems; in using it and knowing how it works it can change the way we look at the world.
And I wonder if studying digitisation in English departments does the same thing. In all of its messiness we learn about the inherent messiness of its effects on our lives, and maybe realise that there's nothing so new about that; we just don't yet have a discipline which can give it a convenient boundary.
That it keeps escaping out from under us, that we don't have the tools to comprehend it, should be taken as a provocation:
In this area we need to do better; we need to establish what our duty is and who it's to; we need to revel a little more in being unsettled; and to try, inspired by our students' online lives, to think a little differently.