- it's my fingers i notice...when you've asked a really interesting question...it's a physical reaction, a gut feeling that i need to start manipulating (the latin root for 'hand,' manus, is in that word) the information...to find the data that will support a good answer...the sign of thinking is that i reach for the mouse...make sure the right browser is open, get a search window handy. my eyes and hands have already learned to work together in new ways with my brain...which really is a new way of thinking for me - james o'donnell -
The following is a rewrite of talks that I gave at the “Cognitive Futures in the Humanities” symposium at Northumbria University and the “Supporting Academic Practice in the Digital Age” event at the University of Exeter. Under the title I've embedded presentation slides, and whenever “slide” comes up in bold there's a new image to go with what's being said.
Minds That Know They Have Bodies Informing Bodies That Know They Have Minds.
SLIDE - Title.
I'd like to talk about a couple of things today, mostly provocations and preludes to further work.
The two points I want to make are 1) That there's good reason to bring Cognitive Neuroscience and phenomenology together in studying digital environments, cultures, and products, and 2) That technology offers us new ways of looking at and acting in the world, and the disciplines best suited to the preliminary discussions of this kind of change in experience are a Cognitive Neuroscience and a phenomenology that are aware of and influence one another.
The “minds that know they have bodies” of my somewhat awkward title are the psychologists who are exploring the body's role in the work of the mind, those studying Extended and Embodied cognition or who've felt their effects, and the title's bodies are the phenomenologists doing the same. The final call of my first words, then, is for an increasingly philosophically-aware Cognitive Neuroscience to be fed back into a Cognitive Neuroscience-aware phenomenology, and out into the broader Humanities thereafter.
Shaun Gallagher describes the process of informing Psychology experiments' design with phenomenological insight as “front-loading phenomenology.” What I'm most interested in then is the inverse, the front-loading of body-savvy Cognitive Neuroscience back into philosophical and theoretical work in the Humanities.
This seems like a moment of particular importance for such work in the field in which I situate myself: Digital and Cyberculture studies.
Phenomenology and an awareness of bodily experience, much less Cognitive Neuroscience, are often missing from debates about digitisation, and this seems strange to me. Voices which deal with how we encounter things in the world, how we are affected by objects in our environment, how we respond to changes in stimuli during language and pattern recognition, during memorisation, and in environments which demand different levels of attention: these seem like the right voices to respond to the questions which matter most for digitisation in the popular consciousness. What effect do screens have? What can I do with them? Should my kids use this? Is this hurting me? What does this offer? What are we losing?
These broad questions affect everything from the internet, to videogames, to television, to mobile phones, to e-readers and to the cultures and politics which surround each of these discrete technologies, and as such these are the questions of a Digital and Cyberculture studies. But such a study is incomplete without interdisciplinary voices, without in particular, evidence and provocations from experimental Psychology.
What's strange about this absence is that, for anyone interested in digitisation and contemporary culture, it is clear that digital culture as a whole is fundamentally, if naively, scientifically aware in general, and profoundly engaged with science in specific instances, and that this awareness certainly extends to Cognitive Science.
Reading popular internet culture, technology, and futurist online magazines and blogs like Wired, or Boing Boing, or io9, one can't help but be struck by the broad range of knowledge that articles and comments sections alike take for granted. An impending Singularity of greater-than-human artificial intelligence, with its concomitant fallout, is treated as banal knowledge in many arenas of mainstream internet culture, as are the uncanny valley of near-human imitation in robotics, the flow states of creativity, and a hundred other examples of popular and hard-scientific theory.
So not only is Cognitive Neuroscience a great way-in to discussing the changes that occur in a move from page to screen, or in the kinds of media and environments that we encounter there, it's also an essential part of the substrate of that culture that's to be investigated.
Here's another example:
SLIDE - DIY tDCS
This is an image of someone's home-made tDCS setup. tDCS, for anyone unfamiliar with the term, stands for Transcranial Direct-Current Stimulation. It's a process that's getting a lot of attention at the moment in both academic Psychology and internet and technology culture, in the latter in particular because of the popular reporting of DARPA's apparent success with using tDCS to halve the training time of drone pilots.
tDCS is the passing of a direct current through the brain between two externally attached electrodes. What makes it so appealing is that a) theoretically all it takes is a 9 volt battery and two wires, and b) it promises a safe way to hack your brain to perform better, far safer than, for instance, using drugs such as Adderall or Ritalin to boost performance.
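To see why the “9 volt battery and two wires” framing is both seductive and misleading, here's a back-of-the-envelope Ohm's-law sketch. The resistance figure below is purely an illustrative assumption of mine; real scalp resistance varies enormously with electrode size, moisture, and placement, which is exactly why purpose-built tDCS devices regulate current rather than relying on a bare battery.

```python
# Rough current estimate from a 9 V battery across the scalp,
# via Ohm's law: I = V / R.

VOLTAGE = 9.0             # volts: a standard PP3 battery
SKIN_RESISTANCE = 5000.0  # ohms: an assumed, illustrative figure only

# Convert amps to milliamps for comparison with clinical protocols.
current_ma = VOLTAGE / SKIN_RESISTANCE * 1000

print(f"Unregulated current: {current_ma:.1f} mA")
```

With this assumed resistance the figure lands near the 1–2 mA used in clinical protocols, but a sweatier scalp or larger sponges would lower the resistance and push the current well past that range, hence the wincing researchers mentioned below.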
In a 2008 poll of Nature readers, 20% of 1,500 respondents had used drugs for cognitive enhancement, and Wired magazine has dedicated two substantial articles to investigating the use of cognitive enhancement on American campuses and to reporting the drug regimens of its readers. With the tDCS literature suggesting the potential for painless and safe improvements in mood, attention, motor learning, and creativity across several thousand clinical trials, it's clear that if the procedure comes even close to its promise then it will find a growing audience outside of the lab, as drug-based neural enhancements have before it.
The theory of tDCS, as it's disseminated most frequently online, goes: by passing 9 volts through your brain at roughly these two points you'll get better at doing things.
SLIDE - tDCS (Guy)
This can lead to some cathode and anode placement which would make any tDCS researcher wince, and you really are spoilt for choice with online videos of people passing current through various points on their skulls.
The theory as it actually appears in the academic literature is more like: much evidence is still technically anecdotal, and many effects may be down to placebo. Where there has been demonstrable improvement in performance we're not sure why, we're not sure what's being acted upon, and we're not sure what the long-term effects might be.
More is known all the time, of course, and there's certainly a lot of promise for tDCS as a therapeutic or even an enhancement technique, but from a digital cultures standpoint two phenomena are particularly interesting: first, the continuation of hacking culture, passing from software, to hardware, to the wet-ware of the brain and body; and second, how information on this topic is being passed around.
The amateur tDCS literature is an amalgam of shared academic .pdfs, popular blog posts, tech magazine reports, YouTube videos, and internet mail-order parts. Aspects of the digital world from file-sharing to eBay come together in producing this phenomenon emerging from a previously academy-led field of research.
SLIDE - Isaac's Eye
And whilst the history of Victorian self-experimenters, or Isaac Newton's notes on what it's like to insert a needle between your eye and its socket to see the shapes and colours that you can produce (his diagram for this “experiment” is reproduced here), proves that people have long played with the limits of their experience, such discussion and experimentation are, I would argue, uniquely widespread at this moment through a democratised republic of digital letters.
tDCS certainly caught my attention through these channels, but rather than reaching for a spare battery I volunteered as a lab guinea-pig. Last week a great PhD researcher shot at my head with an electromagnet until she made my hand twitch, marked the point that she'd found on my scalp with her eyeliner, apparently having located my motor cortex, and then passed 1.5 mA of steady current between two saline-soaked sponges, one above my eye and one at this discovered point, while I tried to master a new motor skill.
And I only wanted to do all this because I read a blog post and then watched a video of an American guy in his early twenties try to gently electrocute himself in his front room.
My point is that the reporting of tDCS trials is changing the way that people see opportunities for improving themselves, what Cognitive Neuroscience has to offer neuro-typical healthy subjects, and how information on such topics should be disseminated.
New ways of looking at the world change what we think we can do, and what we think the world is, how it operates and how it can be operated in.
SLIDE - New Aesthetic
There has been an attempt over the last few months to describe a “New Aesthetic” in the visual arts, one centred on digitisation and the intrusion of computing into visible real-world experience. Here are some examples. Note the use of pixels, glitches, wireframes, and computer-readable rather than human-readable symbology.
As Cezanne's paintings of Mont Sainte-Victoire
SLIDE - Cezanne
increasingly tried to capture the movements of his head as he painted, his personal experience of colour at that moment, even his eyes' saccades, and as Duchamp's “Nude Descending a Staircase”
SLIDE - Duchamp
learnt from stroboscopic photography, and was itself later emulated in the photograph “Marcel Duchamp Descending Staircase,” so we see contemporary artists
SLIDE - New Aesthetic (repeat)
increasingly trying to capture their understanding of how the world is seen by machines, how computing increasingly underlies our most basic experiences, how we have become nostalgic for certain kinds of graphics, and how glitches remind us of the typically functionally invisible mechanisms behind our smoothly operating machines.
There is space for Cognitive Neuroscience in both the production and criticism of this kind of work, and artists' works based on visualisations of dendrites and fMRI scans are already emerging.
SLIDE - Neurons
Part of my next research project will look at how a better understanding of our changing psychological experience of space, memory, perception, and attention might play into discussions of these kinds of works, and I think this is a way of grounding the discussion.
If novel experiences change the way in which we conceive of the world and our potential within it, so do lasting experience and expertise. When we become good at something, our lines of potential are written differently in the world around us. Anyone who sketches or takes photographs will be familiar with this: the time when you can wander around an environment seeing it in terms of forms and lines, geometric planes and shades, rather than lived spaces and people. I think this can certainly be seen in terms of Extended Cognition.
Skateboarders and free runners report similar effects, seeing lines of travel, spots of danger, jumps that they've never done before but know they can hit first try in the city landscape, and in fact the development of expertise in skate communities with access to digital equipment and shared files is fascinating.
SLIDE - Skate trick 80s
This, for instance, is a significant trick in the winning run of a vert ramp skate competition in the early 1980s. The skater is about 4 feet off the top of the ramp, he approached the jump facing backwards, and held the board between his legs as he rotated 360 degrees in the air.
SLIDE - Skate trick 2011
And this is two parts of a significant trick in last year's X-Games finals in the same vert event. The skater also approached the ramp with his body facing backwards relative to the direction of travel, made about 10 feet of clearance from the ramp, and turned 540 degrees in the air whilst kicking his board away from him in a kickflip, rotating it independently of his body, before grabbing it, completing the rest of his own rotation, and placing it back under his feet to skate away.
The incredible progression in skateboarding proficiency over the last 30 years comes from a marriage of refined technology in the boards being used; magazine, and then online discussion and videogames stimulating demand for that equipment as the sport's popularity grew; and, maybe most significantly, videos of tricks being easily and cheaply, functionally freely, recorded and shared in vast numbers. This allows for amateur skaters to have tricks broken down for them, to see the shapes their bodies should be making and to visualise them ahead of time.
It also raises expectations of what actually constitutes a basic level of proficiency. The kickflip, a move where the skater jumps into the air, kicking the board out beneath them so that it rotates 360 degrees around its longitudinal axis, was considered a highly technical trick in the 1970s and 80s. Now, though it's still just as hard to master, it's considered a relatively basic amateur trick. The drive to achieve the movement comes much earlier in a skater's learning, with very determined young teenagers mastering it in a matter of months before moving on. Their being able to conceive of such tricks, and to conceive of them as starting points rather than pinnacles, has driven the sport in the same way that computer technology reaches speeds and prices that would have been unfathomable to most users of the previous decade.
The effects of expertise can also be read into early phenomenological work focussed on object interaction and the affordances granted to us by proficient tool use. Martin Heidegger's notion of “readiness-to-hand,” for instance, is most commonly read as the melting away of a tool during use, the classic and often-repeated example being the removal of focus on the hammer and instead concentrating on the work to be done, on the driving of the nail; the hammer melts away, but not for the novice user, a point I'll come back to.
Similarly Maurice Merleau-Ponty's near equally famous example is of a blind man experiencing the world through his cane, bringing it on board with his body, becoming a single expertly functioning unit. Philosophers interested in Cognitive Neuroscience such as Don Ihde and Shaun Gallagher later saw a ready link between such phenomenological work and emerging evidence from Cognitive Neuroscience regarding the nature of tool use.
SLIDE - Capuchins
Maravita and Iriki's hugely influential review paper “Tools for the Body (Schema)” discusses several single-neuron studies of capuchin monkeys' mental mapping of the shape of their bodies being altered to include a tool, a rake, after practicing reaching for food, and this incorporation of a tool into the body schema has also been demonstrated in humans with hemispatial neglect.
Discussions of such work now increasingly mention Heidegger and Merleau-Ponty, and in March 2010 a group of researchers led by Dobromir Dotov
SLIDE - Dotov et al
published an empirical study explicitly looking at the transition between readiness- and unreadiness-to-hand described by Heidegger in the 1920s.
Whilst such phenomenology-savvy Cognitive Neuroscience is being conducted we need to find ways in which its insights can be brought back into contemporary Humanities research and have an impact beyond a few select authors such as Gallagher, Ihde, and George Lakoff and Mark Johnson. And I think that Digital and Cyberculture studies is a natural fit for such discussions, grounding the philosophy in a cultural issue which affects all of our lives inside and outside of academia.
One area in which this might be put to work is in investigating people's resistance to reading from e-readers. Particularly around the launch of a new Kindle, you can barely go a week without an editorial extolling the virtues of how good paper smells, how awful reading from a screen is, and how you can't read your Kindle in the bath.
Such anecdotes, stored in blogs, reviews, editorials, and conversations, are a living repository of folk-phenomenological reports of what it takes to acclimatise to new devices, to become experts again. This resistance is also significant as a broader trend, sitting on a continuum with resistance to new technology more generally, technology often being criticised as “unnatural” or as somehow distancing people from true experience.
An e-reader is clearly an example of a technology, but many readers don't see books as being of the same order of things, and this draws into question what exactly we even mean by “technology.” There have been fewer attempts to formally define the word than you might imagine; what technology is, is usually taken for granted.
Whilst we might often agree on the objects under discussion - computer: yes, coriander: no - the specifics of why this might be so are vague. What makes a hammer of the same order of objects as an industrial press? And does everyone experience this mysterious parity in identical ways, allowing for the consensus, or does “technology” define items across a range of unrecognised and untheorised responses?
In trying to more rigidly define my own usage of the term I've come to see technology not as a class of objects that manages to find some parity between fire, mobile phones, and the Large Hadron Collider, but instead, at least in part, as the phenomenological experience of a particular kind of expert use.
If we use the words “tool” or “device” to describe an artefact that we use to accomplish a task, then we might reserve the word “technology” for the class of objects which melts away during use, which becomes ready-to-hand in Heidegger's terminology or, from Maravita and Iriki's perspective, becomes incorporated into the body schema. It seems fitting to deploy the term technology for those things which might have the greatest effect on our mediated experience.
For an apprentice carpenter the jarring back and forth of the saw as the teeth catch in the grain are a world away from the expert carpenter’s easy push and draw; with a phenomenological definition of technology we might say that the carpenter encounters saws as technologies, whereas the apprentice doesn't, the apprentice simply attempts to use an artefact. As the expert skateboarder sees the city differently from the novice, so the apprentice draws their own lines of potential differently to the master craftsman.
When we say “technology” in this way we might refer to either individual experience, or, more commonly, to the way that most users in a society encounter an object. This would render the Large Hadron Collider an artefact, a tool, but not a technology, and logically separates it from being of the same order as the mobile phone on which you've texted and texted and texted, and chatted and chatted and chatted, and when you feel it buzz in your pocket you don’t even think, your hand already knows what to do, and it fumbles the thing up to your face with the call already answered because you're used to avoiding that little delay between pressing the button and it actually connecting…
That is equipment incorporated, just as readily as Heidegger's hammers or Merleau-Ponty's cane, a technological interaction, and one which might, as a shorthand, lead us to describe the mobile phone, as a particular branch of artefacts, as a technology. Technologies, under this definition, are those things which are phenomenologically similar in their capacity to be incorporated and effect change in our perception of what we can achieve.
As Timothy Taylor puts it:
SLIDE - Tim Taylor
“it is not too much philosophy to say that the emergence of technology was and is intimately connected with the extension of the range of human intentionality. Without a car...I could not have intended to go fishing..., given the distance involved; without stone tool technology, our prehistoric ancestors could not have had the intention to kill big game...[T]he existence of objects, such as saucepans, not just allows actions but suggests them” (Timothy Taylor, The Artificial Ape 152).
In short, I've started to think of technologies as those artefacts which change the way we look at things. Maravita and Iriki's review suggests that the expert rake-using capuchins viewed the extent of their bodies differently through the use of that tool, and I want to explore whether this might equally clearly occur in the invisible use of our favourite reading technologies, be they Kindles or paperbacks, or if only paperbacks can legitimately melt away during use, something which would certainly validate those who resist e-reading for some nebulous sense of “unnaturalness.”
In a 2010 paper entitled “When Meaning Matters, Look But Don’t Touch: The Effects of Posture on Reading,” Christopher Davoli, Feng Du, Juan Montana, Susan Garverick, and Richard Abrams ask: if our hands are physically near written material, does this affect our textual interpretation skills? Does holding a text for reading have a different effect than when the hands are removed from the visual field, for example when words are up on a computer screen? The short answer appears to be “yes”: holding a text in the hands actually appears to be worse for comprehension.
The paper begins by citing numerous studies which demonstrate the brain's acute interest in visual and spatial information directly surrounding the hands. This makes intuitive sense if you consider tool use: we need incredibly precise information about our hands' position in space when we dextrously and accurately manipulate objects.
But this incredible precision comes at a cost, and this paper identifies that cost as manifesting in decreased semantic understanding: when our hands are near something the drain of producing a heightened spatial awareness interferes with the comprehension of information required by another activity.
SLIDE - Stroop Test
This conclusion comes from the team's Stroop tests, a standard tool in experimental Psychology for determining the speed of semantic processing. Test subjects indicated the congruence or incongruence of each example either by pressing a button to the left or right side of the screen (seen in picture A), with their hands near the text, or by pressing a button on their left or right leg (seen in picture B), with their hands away from the text.
The team found a statistically significant, and in some cases dramatic drop in response times when the answer was indicated with the hands by the sides of the screen over being indicated with button pushes on the legs. This is the kind of work which can be expanded on in order to further establish the particularities of the experience of the screen as opposed to the page, and, in a recursion, a phenomenology aware of phenomenology-influenced Cognitive Science could be front-loaded back into experimental design.
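For readers unfamiliar with the paradigm, here is a minimal sketch of how Stroop trials are constructed. This is illustrative Python only; the colour set and trial structure are my assumptions, not the Davoli team's actual materials.

```python
import random

COLORS = ["red", "green", "blue", "yellow"]

def make_trial(congruent, rng=random):
    """Build one Stroop trial: a colour word displayed in an ink colour.

    Congruent: the word "RED" printed in red ink.
    Incongruent: the word "RED" printed in, say, blue ink.
    The task is always to name the INK colour, so incongruent trials
    force the semantic reading of the word to be suppressed.
    """
    word = rng.choice(COLORS)
    ink = word if congruent else rng.choice([c for c in COLORS if c != word])
    return {"word": word, "ink": ink, "congruent": congruent}

def is_correct(trial, response):
    """A response is correct only if it names the ink, not the word."""
    return response == trial["ink"]

# A small balanced block: half congruent, half incongruent trials.
block = [make_trial(i % 2 == 0) for i in range(20)]
```

The interesting measure is not accuracy but the reaction-time gap between congruent and incongruent trials, since that gap indexes how strongly the word's meaning is being processed; in the Davoli experiments, hand position is the manipulated variable layered on top of this basic design.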
The placement of the hands materially affects how we look at the world; the use of objects and technologies affects how we see our potential. It strikes me that we need a language which is capable of dealing with the modern experience of looking and acting, with our perception of ourselves as hackable and augmentable by seemingly unnatural things which become as natural during use as hammers, books, and breathing, and that language might well be considered the work of a Cognitive Humanities.
Cognitive Neuroscience and phenomenology, particularly when turned to the task of exploring emerging digital cultures and products, sit naturally alongside one another; they are capable of producing a tremendous awareness of what occurs during use. By defining terms on an interdisciplinary basis, when minds that know they have bodies start to talk back and inform those bodies about the nature of their minds, we produce a language which attempts to be commensurate to the task of describing a new aesthetic, new potentials, new actions, a new wave of experimentation, and a culture where people increasingly know a little of everything.