Contemplative computing may sound like an oxymoron, but it's really quite simple. It's about how to use information technologies and social media so they're not endlessly distracting and demanding, but instead help us be more mindful, focused and creative.
My book on contemplative computing, The Distraction Addiction, was published by Little, Brown and Co. in 2013, and is available in bookstores and online. This 2011 talk is a good introduction to the project and its big ideas.
Nice turn of phrase from Oliver Burkeman.
I'm just getting around to Carl Wilkinson's recent Telegraph essay on writers "Shutting out a world of digital distraction." It's about how Zadie Smith, Nick Hornby and others deal with digital distraction, which for writers is particularly challenging. Successful writing requires a high degree of concentration over long periods, but the Internet can be quite useful for doing the sort of research that supports imaginative writing (not to mention serious nonfiction). Add in communicating with agents, getting messages from fans, and the temptation to check your Amazon rank, and you have a powerfully distracting device.
Unfortunately, the piece also has a couple of paragraphs featuring that mix of technological determinism and neuroscience that I now regard as nearly inevitable. Editors seem to require a section like this:
the internet is not just a distraction – it’s actually changing our brains, too. In his Pulitzer Prize-nominated book The Shallows: How the Internet is Changing the Way We Think, Read and Remember (2010), Nicholas Carr highlighted the shift that is occurring from the calm, focused “linear mind” of the past to one that demands information in “short, disjointed, often overlapping bursts – the faster, the better”….
Our working lives are ever more dominated by computer screens, and thanks to the demanding, fragmentary and distracting nature of the internet, we are finding it harder to both focus at work and switch off afterwards.
“How can people not think this is changing your brain?” asks the neuroscientist Baroness Susan Greenfield, professor of pharmacology at Oxford University. “How can you seriously think that people who work like this are the same as people 20 or 30 years ago? Whether it’s better or worse is another issue, but clearly there is a sea change going on and one that we need to think about and evaluate.... I’m a baby boomer, not part of the digital-native generation, and even I find it harder to read a full news story now. These are trends that I find concerning.”
As with Nick Carr's recent piece, Katie Roiphe's piece on Freedom, everything Sven Birkerts has written since about 1991, and the rest of the "digital Cassandra" literature (Christopher Chabris and Daniel Simons called it "digital alarmism"), I think the problem here is that statements like these emphasize the flexibility of neural structure in a way that ironically diminishes our sense of agency and capacity for change. The argument works something like this:

1. The world, and especially its technologies, is changing very quickly.
2. Our brains are plastic, and seek to mirror these changes or adapt to them.
3. Much of this adaptation happens without our realizing it, before we're aware of it.
4. These changes become self-reinforcing.
5. Therefore our minds are at the mercy of our technologies, and we can't help becoming whatever they make us.
I don't want to argue, pace Steven Poole, that this is merely neurobollocks (though I love that phrase). (Nor do I want to single out Baroness Greenfield, who's come in for lots of criticism for the ways she's tried to talk about these issues.)
All I want to argue is that 1-4 can be true, but that doesn't mean 5 must be true as well.
Technological determinism is not, absolutely not, a logical consequence of neuroplasticity.
It's possible to believe that the world is changing quickly; that our brains seek to mirror these changes, or adapt to them, in ways that we're starting to understand (but have a long way to go before we completely comprehend); and that lots of this change happens without our realizing it, before we're aware of it, and becomes self-reinforcing.
But-- and this is the important bit, so listen up-- we also have the ability to observe our minds, to retake control of the direction in which they develop, and to use neuroplasticity for our own ends.
Because we can observe our minds at work, we can draw on a very long tradition of practice in building attention and controlling our minds-- no matter what the world is doing. Yes, the great Jeff Hammerbacher line that "The best minds of my generation are thinking about how to make people click ads" is absolutely true*, but when all is said and done, even Google hasn't taken away free will.
We can get our minds back. It's just a matter of remembering how.
The Memory Network has published a new essay by Nick Carr on computer versus human memory. This is a subject I've followed with great interest, and when I was at Microsoft Research Cambridge I had the good fortune to be down the hall from Abigail Sellen, whose thinking about the differences between human and computer memory is far subtler than my own.
Carr himself makes points about how human memory is imaginative, creative in both good and bad ways, changes with experience, and has a social and ethical dimension. This isn't new: Viktor Mayer-Schönberger's book Delete: The Virtue of Forgetting in the Digital Age is all about this (though how well it succeeds is a matter of argument), and Liam Bannon likewise argues that we should regard forgetting as a feature, not a bug.
The one serious problem I have with the piece comes after a discussion of Betsy Sparrow's work on Internet use and transactive memory:
We humans have, of course, always had external, or “transactive,” information stores to supplement our biological memory. These stores can reside in the brains of other people we know (if your friend Julie is an expert on gardening, then you know you can use her knowledge of plant facts to supplement your own memory) or in media technologies such as maps and books and microfilm. But we’ve never had an “external memory” so capacious, so available and so easily searched as the Web. If, as this study suggests, the way we form (or fail to form) memories is deeply influenced by the mere existence of outside information stores, then we may be entering an era in history in which we will store fewer and fewer memories inside our own brains.
To me this paragraph exemplifies both the insights and shortcomings of Carr's approach: in particular, with the conclusion that "we may be entering an era in history in which we will store fewer and fewer memories inside our own brains," he ends on a note of technological determinism that I think is both incorrect and counterproductive. Incorrect because we continue to have, and to make, choices about what we memorize, what we entrust to others, and what we leave to books or iPhones or the Web. Counterproductive because thinking we can't resist the overwhelming wave of Google (or technology more generally) disarms our ability to see that we still can choose to use technology in ways that suit us, rather than using it in ways that Larry and Sergey, or Tim Cook, or Bill Gates, want us to use it.
The question of whether we should memorize something is, in my view, partly practical, partly... moral, for lack of a better word. Once I got a cellphone, I stopped memorizing phone numbers, except for my immediate family's: in the last decade, the only new numbers I've committed to memory are my wife's and kids'. I carry my phone with me all the time, and it's a lot better than me at remembering the number of the local taqueria, the pediatrician, etc. However, in an emergency, or if I lose my phone, I still want to be able to reach my family. So I know those numbers.
Remembering the numbers of my family also feels to me like a statement that these people are different, that they deserve a different space in my mind than anyone else. It's like birthdays: while I'm not always great at it, I really try to remember the birthdays of relatives and friends, because that feels to me like something that a considerate person does.
The point is, we're still perfectly capable of making rules about what we remember and don't, and of making choices about where in our extended minds we store things. Generally I don't memorize things that I won't need after the zombie apocalypse. But I do seek to remember all sorts of other things, and despite working in a job that invites perpetual distraction, I can still do it. We all can, despite the belief that Google makes it harder. Google is a search engine, not a Free Will Destruction Machine. Unless we act like it's one.
Chad Wellmon has a smart essay in The Hedgehog Review arguing that "Google Isn’t Making Us Stupid…or Smart."
Our compulsive talk about information overload can isolate and abstract digital technology from society, human persons, and our broader culture. We have become distracted by all the data and inarticulate about our digital technologies…. [A]sking whether Google makes us stupid, as some cultural critics recently have, is the wrong question. It assumes sharp distinctions between humans and technology that are no longer, if they ever were, tenable…. [T]he history of information overload is instructive less for what it teaches us about the quantity of information than what it teaches us about how the technologies that we design to engage the world come in turn to shape us.
It's something you should definitely read, but it also reminded me of a section of my book that I lovingly crafted but ultimately edited out, and indeed pretty much forgot about until tonight. It describes the optimistic and pessimistic evaluations of the impact of information technology on-- well, everything-- as a prelude to my own bold declaration that I was going to go in a different direction. (It's something I wrote about early in the project.)
I liked what I wrote, but ultimately I decided that the introduction was too long; more fundamentally, I was really building on the work of everyone I was talking about, not trying to challenge them. (I don't like getting into unnecessary arguments, and I find you never get into trouble being generous in giving credit to people who've written before you.) Still, the section is worth sharing.
Alexis Madrigal sighs, "Another day, another New York Times story about technology addiction." He's pointing to a Matt Richtel article about concerns about technology addiction-- right in the heart of Silicon Valley.
The concern, voiced in conferences and in recent interviews with many top executives of technology companies, is that the lure of constant stimulation — the pervasive demand of pings, rings and updates — is creating a profound physical craving that can hurt productivity and personal interactions.
“If you put a frog in cold water and slowly turn up the heat, it’ll boil to death — it’s a nice analogy,” said Mr. Crabb, who oversees learning and development at Facebook. People “need to notice the effect that time online has on your performance and relationships.”
Okay, first of all, yes it's a nice analogy, but the whole "frogs in pots of cold water don't notice that they're boiling to death when the heat goes up" thing is wrong. Try it yourself. See what happens.*
Second, where the Hell else is this kind of concern going to manifest itself? The people who are going to be worried about the downsides of being plugged in are… wait for it... people who are plugged in.
Everyone I interviewed for my book chapter on digital Sabbaths is an engineer, a professor, a writer, a consultant, a Web developer-- in other words, people who are seriously wired, and need to be to work. (They also, as Madrigal would point out, tend to work in industries that value overwork and the perception of busyness.) You wouldn't expect Buddhist monks to worry about these issues. (And indeed they tend not to, because they have a very sophisticated, self-empowering view of attention and distraction. See chapter 3 of my book, or my TEDxYouth talk.)
Third and finally, let me just quote this article:
In an era when the boss wants us available 24/7, and when the high priests of the new economy bombard us with ubiquitous marketing messages, some burnt-out survivors are taking another look at their cell phones, pagers, home satellite dishes and "constant connectivity" to the Internet….
"We seem to have no way to put a human handle on our ingenuity," he says. "Between 80 and 90 percent of the messages we get every day are marketing messages, designed to make us feel incomplete. This is having a terrible effect on our inner landscape."
This is from 2001. It's the first instance I can find of the use of the term "digital sabbath." Which is not to say that this conversation is unimportant, but that we've been having it for some time.
*Spoiler alert: you'll end up with a terrified / pissed off live frog and water all over your stove.
Via Big Think, I found this piece by Rob Horning of The New Inquiry on "Social Graph vs. Social Class." I'm not enough of an insider to be able to decode the whole thing-- academic Marxism is defined as much by its tangle of inside references as any academic field-- but I thought it worth linking to.
This interpretation of how society is organized — the one that anything labeled as “social” by the tech world helps sustain — precludes an interpretation that acknowledges the possibility of class, of concrete groups with shared interests that they work to construct and then use as the basis for forcing concessions from capital. In the network, you are on your own; its ideology suggests we are all equally points on the great social graph, no different from anyone else save for the labor we put in to establishing connections. This obviates the issues of pre-existing social capital and class habitus that facilitate the formation of better connections and the ability to reap their value instead of being exploited by them.
Since the social graph traces intricate constellations that are always becoming ever more complex, it requires massive computer power and elaborate algorithms to interpret and trace out underlying patterns of significance. Generally, only capital has the resources to summon such power, so the commonalities called into being through such analysis of network data are commercial ones. Retailers can figure out what demographic and lifestyle pattern you fit into, whether you know it or not, and then [target] you with advertising that reinforces your belonging and takes advantage of it.
But to forge a social class, a different sort of work is required, called forth by a different conception of society, based on antagonisms between blocs (and ongoing fights that require long-term strategies), not antagonisms between individuals (whose spontaneous skirmishes require more or less ad hoc tactics). Think E.P. Thompson’s The Making of the English Working Class, which treats class not as a statistical artifact but as something that’s as much forged deliberately by members as ascribed by outside forces. The social graph purports to passively record social arrangements that emerge organically and thus reflect some sort of true and undistorted account of how society works. That conception discourages the possibility of those plotted on the graph from making a social class. Social media users don’t take advantage of their connectedness to undertake the work of finding the bases by which they can see their concerns as being shared, being in some way equivalent. Instead, their connectedness drives them to preen for attention and personal brand enhancement. One must work against social media’s grain to use it to develop lasting, convincing political groupings.
I suppose the reply would be that if social media don't work to forge class identity, they have proven their worth as a tool for organizing other kinds of groupings: smart mobs, insurgencies, the Arab spring, and so on. And I'd be curious to know whether, if you eliminated the language of the social graph, Horning's objections would be answered.
One of the questions I've been working through in my book is this: how do you decide when it's okay to outsource a cognitive function? When is it okay to let your electronic address book remember all your phone numbers, for example? When should you try to memorize a street address, rather than let your GPS or iPhone remember it for you?
I think the simple answer is this: will memorizing the information help you survive a Zombie Apocalypse?
Let me explain through a couple examples.
I haven't memorized a Skype user name other than my own. Ever. And I don't worry about that one bit. Skype user names are useless outside of the Skype service. You can't go on AOL Instant Messenger and call someone's Skype username; you can't dial it from a phone; you can't use it to send an email. The only context in which that piece of information is really useful is when you're using Skype. (You might argue that if I know someone's address, and a different person asks for it, I could give it to them; but poor mental cripple that I am, I can't. My response is that Skype itself contains pretty powerful search functions that let you find pretty much any user who wishes to be found.)
Now, if there is a Zombie Apocalypse (or ZA), I assume that Skype is going to go down. The service might keep going for a little while, but eventually its sysadmins and developers are going to stop maintaining the system, and start eating brains.
Knowledge about usernames holds no value outside the context of the service. Given this, it makes sense to not bother to remember usernames.
More generally, knowledge that is useful in one specific context only can be safely left out of your memory, if it's easily retrievable in that context. For example: pretty much every weekend I make pancakes or waffles. I always use Bisquick to make them. The recipes for pancakes and waffles are on the side of the Bisquick box. So even though I've made these for ages, I still don't really remember the recipes. Why? Because if I don't have a box of Bisquick, knowledge about how to cook with Bisquick isn't really very useful. And I can rely on the Bisquick people to remind me of the recipes, by publishing them on the side of the box.
So, just as with Skype usernames, which are useless if the service doesn't operate, I don't memorize the Bisquick pancake or waffle recipes because they're on the side of the box. If I don't have the box, I don't have the mix; and if I don't have the mix, I don't need the recipe. The information and its utility always reside together in the same system.
In contrast, I feel uncomfortable if I don't know the phone numbers of close friends and family. (I confess I haven't memorized them all, but still I think I ought to.) Likewise, there are lines of poetry or quotes from the Bible, Stoic philosophers, and elsewhere that I memorize because knowing them makes me feel like a deeper person, and because they're useful during challenging times (like during a ZA). And while I take thousands of pictures a month at sports events, school functions, on trips, and so on, I still think it's very important to remember those events-- to construct an interior narrative that gives them some logic, and places them in a structure that helps explain them-- not just have records of them. As some neuroscientist said, our experience of reality is a vanishingly thin edge of the present, behind which stands a vast store of memory. Looked at this way, memory isn't just a function, or information. Memory is you.
Knowledge of how to fashion a tent: useful. Knowledge of how to assemble THIS particular exotic tent? Probably less so. The need to learn it if the instructions are stamped on the side of the tent? Zero. The ability to remember events in your own life, so you can make sense of yourself and the world? Always infinite.
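Reduced to a half-serious rule of thumb, the test might look something like this (a sketch of my own, with categories invented purely for illustration):

```python
# A half-serious sketch of the "zombie apocalypse test" described above.
# The categories and example calls are illustrative, not a real system.

def should_memorize(useful_outside_its_context: bool,
                    stored_where_you_use_it: bool,
                    matters_in_an_emergency: bool,
                    says_something_about_who_you_are: bool) -> bool:
    """Decide whether a piece of information is worth committing to memory."""
    # Information that only matters inside one service, and lives there anyway
    # (a Skype username, the recipe on the side of the Bisquick box), can be
    # safely left to the external store.
    if not useful_outside_its_context and stored_where_you_use_it:
        return False
    # Information you'd need when the external store is gone (family phone
    # numbers after you lose your phone), or that helps define who you are
    # (birthdays, lines of poetry, the events of your own life), belongs in
    # your head.
    return matters_in_an_emergency or says_something_about_who_you_are


# Examples from this post:
print(should_memorize(False, True, False, False))  # a Skype username -> False
print(should_memorize(False, True, False, False))  # the pancake recipe -> False
print(should_memorize(True, False, True, True))    # family phone numbers -> True
```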
UK-based futurists Andrew Curry, Victoria Ward and Sabine Jaccaud have recently completed a study of the future of the library. Ward's company Spark Now has a description of the project on its blog, and this jumped out at me:
The future of the library is, in some way, a paradox. So many of the long term trends are running against it that it is easy to assume that it is an anachronism of the 19th and 20th centuries. It is worth spelling this out. Such trends include the rise of digital technologies, and the accompanying rise of audio-visual culture; the long wave of individualism since the late 1960s; the shift from public provision to personal provision; the pressures on public expenditure; the emergence of the e-book and the digitization of books generally. It seems only a matter of time before the library withers away.
But look again, and some other, emerging, trends come into focus. Rising oil prices and greater work flexibility increase the value of the local; the rise of digital rights management fuels campaigns around openness; the number of books published every year continues to rise; issues of access and equity - and affordability - come into sharper focus as one austere year rolls into another; the relationship between the tangible and the digital object becomes increasingly complex; new attitudes to ownership (using, not having) make the library appear as a pioneer.
Look again, and you can start to think that if libraries did not exist, it would be necessary to invent them. But what sort of library would we invent?
They go on to describe a variety of functions that libraries past and present serve, and while it's a nice piece-- any defense of libraries that tries to take them seriously, and not just defend them on the grounds that they're Timeless Monuments of Civilization or whatever, is welcome-- like lots of discussion of libraries, I think it leaves out something important that most of us overlook, but which I've become convinced is very important.
The best libraries are public spaces that support focus and contemplation.
It's easy to make the argument that all these elements-- the public, the space, and the contemplation-- are anachronistic, or can be more efficiently supported elsewhere, or need to be combined in different ways-- and there are times when that might be appropriate. But predictions that the computer would lead us to a paperless office and future fell afoul of the affordances of paper and the role that those affordances play in supporting forms of work as different as corporate brainstorming, air traffic control, and treating hospital patients, and the wrong kind of library is no more likely to succeed in a more-digital future than a library without books would have survived in the past. (The concept of "affordances" comes from Sellen and Harper's The Myth of the Paperless Office, and is one I've discussed elsewhere and used in my own work.)
One of the things plenty of people argue now is that the library is a community or collaborative space. Forget the idea of the library as a place where people sit quietly, and frumpy women in sensible shoes go "shhh." Today's libraries are incubators, collaboratories, the modern equivalent of the 17th-century coffeehouse: part information market, part knowledge warehouse, with some workshop thrown in for good measure.
But libraries need to be recognized as institutions that have also supported another incredibly important, and very hard, kind of creative work: the work of the solitary reader, the person who has to lock themselves away with books and ideas in order to produce something worth sharing with others, to have ideas that are worth trading in the scholarly bourse. Of course it's important to have spaces that support serendipitous hallway conversations, shop talk, and creative brainstorming; but where do those ideas come from in the first place? The tacit assumption tends to be 1) they arise in those spaces, or 2) Starbucks or people's offices-- in other words, spaces that libraries don't really have to worry about any longer.
But for many people, or for people at certain stages in their work, there's nothing quite like libraries for supporting deep, contemplative work. For one thing, libraries have Lots of Books. This is a fact that we tend to overlook, but even my local public library (which is pretty good, but not huge) has several orders of magnitude more books than I could ever hope to own. And yes, while it's now possible to have thousands of books in a Kindle, reading scholarly work on a Kindle is awfully hard, because scholarly reading is a martial art.
There's another element of this publicness that makes libraries work. The example of others working can energize a space and the people in it. In a place like a library reading room, working scholars become public characters, figures who define the quality of a space, set its tone, enforce its formal rules and cultural norms, and serve as examples for each other. Of course, it's possible for people to be distracting and annoying, just as someone who talks in a movie theatre or forgets to turn off their cellphone during a concert can be annoying; but when it works, that sense of solidarity can be a great thing.
I think we underestimate the value of public contemplation, and particularly in an era when a lot of public discourse around libraries talks about them as community resources or trading zones, it's necessary to remind ourselves that good libraries can still be havens for thoughtful engagement with ideas.
It's also worth noting that this function remains an important one in good new libraries. For example, consider this description in the Guardian of the new Canada Water library in Southwark, which explains why it looks like an inverted pyramid:
The best form for a reading room is wide and horizontal, but there was not enough space for this at ground level, squeezed between the tube exit and the waterside. So the reading room is at the top, with the building widening as it ascends to make space for it, with the added benefit that the most important part of the building is placed high up – if not in the clouds, at least sufficiently far from the ground to feel removed and a little dreamy, as a library should.
Raised, it makes occasion for the spiral staircase, which in turn makes the business of going somewhere for a book into a little event or ceremony, rather than a sideways drift such as you might make into a supermarket.
It's a space I look forward to visiting on my next trip to London.
Slate contributor Katie Roiphe has a piece about Freedom, the turn-off-the-Internet app.
How many people made New Year’s resolutions to spend less time on the Internet? Yet another friend recently recommended that I try Freedom, the popular program that “locks” you off the Internet. The ubiquitousness of this program, which calls itself “a simple productivity application,” feels ominous to me….
The name of the program has to be part of its success; it plays on our hidden desires, the better self we are hoping for, links the program in our heads to revolutions, Arab springs, Thomas Jefferson. And yet the name also pleasantly and politely hints at another word: enslavement. What is frightening is the lack of control implied by this program, the total insufficiency of will when it comes to the Internet. Its generally upbeat vibe gestures toward a certain underlying desperation. I particularly like the comically Orwellian phrase on its website: “freedom enforces freedom.”
Having spent a few hours interviewing Freedom's creator, and people who use it, I think this analysis is a bit overdrawn and under-illuminating. However, as Nick Carr has shown, it's possible to write an entire book in which you take high-tech marketing seriously (Does IT Matter? is best read as a response to an Oracle sales rep's pitch), so fair play to Roiphe for keeping it to a couple of paragraphs.
But things get interesting when Roiphe mobilizes her own rhetorical resources in the last couple of paragraphs of the essay.
In fact the nostalgia for quiet, the elegant pieces extolling a lost peaceful world are a bit misleading. They suggest that if only we could turn off our devices, turn away, turn back to a little shack on a mountaintop, we could once again hear ourselves think. (Pico Iyer writes, for instance, “For more than 20 years, therefore, I’ve been going several times a year—often for no longer than three days—to a Benedictine hermitage … I don’t attend services when I’m there, and I’ve never meditated, there or anywhere; I just take walks and read and lose myself in the stillness.”)
This sounds lovely, of course, but the truth is that our minds have changed. We don’t use the Internet; it uses us. It takes our empty lives, our fruit fly attention spans, and uses them for its infinite glittering preoccupations. Solutions like Freedom or a couple of days at a Benedictine monastery can’t remake us into peaceful, moderate, contented inhabitants of the room we are in. If you ask any 60-year-old what life was like before the Internet they will likely say they “don’t remember.” How can they not remember the vast bulk of their adult life? The advent of our online lives is so transforming, so absorbing, so passionate that daily life beforehand is literally unimaginable. We can’t even envision freedom, in other words, the best we can hope for is Freedom.
Our minds have changed? The Internet uses us? Life before the Internet is "literally unimaginable?" This is like a petting zoo of every smart-sounding-but-actually-stupid thing people say about the Web: technological determinism, the special brand of technical fatalism masquerading as insider knowledge, an offhand invocation of brain changes, a declaration that the Internet breaks human history into Before and After.
None of this is true. None of it.
I'll grant that arguments along the lines of "Freedom is no longer available, we no longer have the ability to make choices about how and where we direct our attention, and our minds are no longer our own" have a compellingly dystopian and wonderfully self-fulfilling quality. But that doesn't mean that they're correct. It may be the case that our options are constrained by systems, that Facebook and Google struggle mightily to set our default preferences in life to settings that advantage them, and that in our everyday lives we often wonder how spouses or parents got along without cellphones.
But there's nothing that says we have to treat technologies or ourselves this way; that we have to think of ourselves as being used by the Internet; or that we cannot remember what life was like before the TCP/IP protocol stack. The Web is not a boot stamping on a human brain, forever.
We still have choices, even if Facebook and Twitter (and in her own way, Katie Roiphe) all argue that we don't, that we should just be happy and hit Refresh.
Okay, back to real writing.
The underlying point that today's parents (like me) grew up with MTV, video games, computers, and other weapons of mass distraction is a good one. For some reason, too many people who rail against kids being on The Facebook make it sound like they grew up in Sterling Library, reading Montaigne and Rilke. No, we were playing Xevious and listening to Journey.
The video for my Marseille talk went live today. I was getting kind of impatient, but I must admit, they did a nice job editing it!
Thinking about the Baroness Greenfield interview, and responses to warnings that computers are affecting our brains, I wonder if we're talking about "brains" when we should be talking about "minds."
By that I mean three things.
First, talking about the impact of digital technologies in terms of cognitive changes allows for the response, "Yes, but your brain changes every time you learn something." For example, a 2009 Seed article criticizing Greenfield and Nicholas Carr included this:
"Everything you do changes your brain," says Daphne Bavelier, associate professor in the Department of Brain and Cognitive Sciences at the University of Rochester. "When reading was invented, it also made huge changes to the kind of thinking we do and carried changes to the visual system."
This quote is great because it flattens the distinction between cognitive changes that we should worry about and those that we shouldn't; and it places our current problems in a long history-- in a way that suggests either that we don't need to worry about them because humans will adjust (we survived books and television, after all!), or that they're inevitable (how many preliterate cultures survived their encounters with Greece, Rome, Spain, Britain, etc.?). (Ben Goldacre's criticism of Greenfield's work is more substantive.)
Second, talking about "brains" versus "minds" suggests that there's no difference between the two, when arguably there is a very significant difference-- one that we can use to our benefit. Put briefly, most of us care about our minds; and it's useful to see the mind as not just contained within the brain-- not just the expression of a bunch of code written by neural pathways and synaptic connections-- but as including the senses, the body, and (if you buy Clark and Chalmers's argument in "The Extended Mind") technologies and objects as well. What we really care about are larger mental capabilities-- our ability to pay attention and concentrate when we need to, to be creative, to be resilient-- that are very complex, and probably not things that can be easily located in particular parts of the brain.
Third, you don't need to be a neuroscientist to actively work to change your mind-- to extend your attention, or your capacity for drawing connections between things, or your ability to listen to other people. It undoubtedly helps to have an understanding of how your brain works, but at this stage it's not decisively useful-- by which I mean, it can help make sense of things you want to do to improve your mind or mental abilities, but it probably won't provide a definitive course of action.
To hear Nicholas Carr and Sven Birkerts tell it, the Web makes you stupid in two ways. There's the performative sense: being on the Internet shrinks parts of your brain that deal with really complex reasoning. And there's the cultural sense: you have less exposure to, or interaction with, Serious Thoughts. In both cases, though, you're distracted by the infinite Internet Fun House from your original task: you sit down to read War and Peace or to look up the atomic weight of strontium, but end up watching a YouTube video of a monkey and a parrot playing ping pong.*
In this formulation, being focused and purposeful is good, while anything that leads you to watch teenage Japanese boys take whiffle ball hits to the crotch on a game show is bad. In the graphic below, imagine going from the bottom left corner to the top right.
But there's also an assumption that those are the only two modes-- focused and on-task, versus scattered and fractured. I think, however, that we need to recognize that there's a zone in the middle that consists of a variety of intellectual tasks, which are more or less directed or structured, and more or less "productive", but very much worth recognizing as part of our intellectual lives.
So what else is there between looking up atomic weights and YouTube videos of game-playing animals? You might think of structured procrastination as occupying part of another quadrant. Writing an article instead of grading papers, or writing another article instead of reviewing a grant proposal, are activities that are purposeful yet distracted: you know you should be doing something else, but you're putting it off by doing this other thing.
Other sorts of procrastination might be focused but aimless: reading a few more books so you can add more footnotes to the dissertation you've already finished would be an example.
And of course there are different levels of zoning out, some of which might be a little more engaging than others (e.g., playing a new Angry Birds level rather than replaying an old Angry Birds level).
Roughly in the middle, though, there's an area of activities that you could call exploratory. These don't begin with some specific question that needs an answer (e.g., what's the atomic weight of strontium). Here, you might ask a particular question, but your aim isn't to generate or acquire a piece of information, but to use the question as a probe into a bigger territory. You might browse the stacks, or the contents of a journal. You might spend some time looking at something you normally don't look at, because you want to be exposed to something different but potentially useful. Or you might think about how two different things you've been reading might fit together. (I've been reading Michael Lewis' The Big Short, for example, and am starting to think that the concept "attention economy" does for people's minds what the CDO did for people's homes. Not an obvious connection, but one I'm trying to flesh out.)
The point is, when you're in the middle of these activities, it's not really clear which ones are going to pay off, and which aren't; it's not just hard to figure out, it's completely impossible to do so. At some points in the creative process, it's useful to spend some time on the boundaries of order and randomness, or distraction and focus: it's a less "efficient" way of finding specific facts, but a potentially fruitful way to generate new ideas.
*Incredibly, there does not seem to be a video of a monkey AND a parrot playing ping pong. There's a parrot playing ping pong, however.
I've spent the last week working through the literature criticizing the effects of the Internet on our brains, the balance between our inward (private, contemplative) and outward (public, social) lives, and our ability to read novels; I wrapped up with arguments for unplugging. Next week I start constructing the response.
Fundamentally, while I'm sympathetic to these arguments, I think the way they've framed the problem and its solutions is seriously flawed. I've already talked about the problems with equating literary reading with thinking and intelligence more generally; three other issues concern me here.
For the last day I've been reading about the digital sabbath movement, using it to try to better understand critiques of digital culture and life: how people self-diagnose the problems of being connected, and how they frame their responses.
When I first got into arguments about "what the Web is doing to our culture and our brains," I was managing editor of the Encyclopaedia Britannica. It was an extraordinary personal experience and a fascinating time to be in reference publishing, but those arguments were also very much shaped by the interests of the time: in particular, debates over the impact of the Internet were taking place in the shadow of the culture wars and arguments about the future of literature and criticism. Reading the literature on how the Web is making us dumb, I'm struck by the fact that while we no longer talk about how Derridean the experience of reading on the Web is (even though publishers, authorities, and traditional ideas about grammar and quality are busy deconstructing themselves), the opponents of that view are still going strong. They're still using literature as a proxy for intelligence and education, and are still implicitly making the case for the importance of literature in the formation of the self.
A few weeks ago I read William Powers' book Hamlet's Blackberry, but with the move and everything couldn't really write much about it. Still, it's worth noting.
Powers' book mainly consists of a series of case studies of reactions to new media innovations in the past-- from the inevitable discussion of Phaedrus (does anyone read The Republic any more?), to the equally inevitable reading of Marshall McLuhan. The case studies are made in support of an argument that has three big parts.
Yesterday I read Nicholas Carr's The Shallows: How the Internet is Changing the Way We Think, Read, and Remember. Frankly, I was prepared to severely dislike it-- his first book, Does IT Matter? drove me around the bend-- but I was a lot more sympathetic to this effort.
The Shallows' main argument is easy enough to summarize: all the time we're spending online is making it harder for us to think deeply, to read intensively, and to remember things. These three changes reinforce each other: the kind of reading Carr talks about is intellectually strenuous; memory turns out to be an important resource for deeper cognitive functions; and our ability to think is implicated in memory and attention.
And losing the cognitive abilities we used to develop by reading novels-- the sine qua non of deep thinking-- is a big loss indeed: "The linear, literary mind has been at the center of art, science and society," Carr contends. "It may soon be yesterday's mind."
Via Duke professor Cathy Davidson, I just came across this L.A. Times piece by Christopher Chabris and Daniel Simons. (They're the authors of The Invisible Gorilla.) The essay takes aim at "digital alarmism," the argument that the Internet is making us stupider by "trap[ping] us in a shallow culture of constant interruption as we frenetically tweet, text and e-mail," both leaving us less time to read Proust, and rewiring our brains so we're incapable of paying serious attention to... anything.
Not true, they counter.
I write about people, technology, and the worlds they make.
My book on contemplative computing, The Distraction Addiction, was published by Little, Brown and Company in 2013. (It's been translated into Dutch (as Verslaafd aan afleiding) and Spanish (as Enamorados de la Distracción); Russian, Chinese and Korean translations are in the works.)
My next book, Rest: Why Working Less Gets More Done, is under contract with Basic Books. Until it's out, you can follow my thinking about deliberate rest, creativity, and productivity on the project Web site.
The Distraction Addiction
My latest book, and the first book from the contemplative computing project. The Distraction Addiction is published by Little, Brown and Co. It's been widely reviewed and garnered lots of good press. You can find your own copy at your local bookstore, or order it through Barnes & Noble, Amazon (check B&N first, as it's usually cheaper there), or IndieBound.
The Spanish edition
The Dutch edition
The Chinese edition
The Korean edition
Empire and the Sun
My first book, Empire and the Sun: Victorian Solar Eclipse Expeditions, was published by Stanford University Press in 2002 (order via Amazon).