(((See, the real problem with "Web 2.0" is that it
doesn't have enough professors of English in it.)))
Currently, for example, I am leading a collaborative project in the University of California system called "Transliteracies," whose goal is to "improve" online reading practices with an awareness (historical, social, aesthetic, and computational) of what "improvement" might mean. This project involves understanding what actually happens when students "surf the Net" and what kinds of new, untapped intelligences lurk in what might otherwise be called shallow, broad, casual, quick, or lateral browsing/searching. If we can understand better what happens when we surf the Net (specifically, when we read online text in adaptive relation to new media and networked environments), then perhaps we can build tools and skills that give users, including students, a better chance of surfing the Net to gain knowledge, as opposed to just doing knowledge work. Knowledge involves a self-reflexive circuit in which what we know is mediated by what we know about how we know. Today, universities should not only teach such recursive knowledge at a high, intellectual level ("no data without exposing the metadata" is my slogan) but also intervene at the level of the tools and source code that make knowledge possible.
Companies like Google, Amazon, and Adobe are innovating wonderfully in online reading, for example; but there is also a necessary role for the fully multi-disciplinary, historical, and social-good perspective that is only possible (among today's major social institutions) in the university. Keep in mind that the distance between research and end-user in the university is extremely short. The divide between research and teaching in the university is a cliché that is not really true. Every day, researchers in the university have to face that lecture hall of uncomprehending, bored, or suspicious students–end-users, in other words, at the most formative, vulnerable, yet (paradoxically) also shielded point in their lives. And so, ethically and pragmatically, the researcher-teachers of the university need to collaborate to create the tools that allow knowledge actually to work, which is to say, to be shared. (True knowledge work = knowledge sharing.)
I've been thinking about the issue of the university and society for a long time, and you pressed the hot button.
GL: Do you find it justified to talk about a Web 2.0 wave? What could be the theoretical tools, for humanities scholars, to analyse blogs and social networks such as Orkut, Flickr and MySpace? Most net artists and activists that I know can't deal with the subject formation that's happening inside those networks. They don't want to write a personal diary and don't feel like going to a dating site after work.
AL: Our Transliteracies project is beginning to collect for study in its Research Clearinghouse some of the tools that people have invented to analyze online social networking.
http://transliteracies.english.ucsb.edu/category/online-social-networks-tools-for-analyzing/
The social and collective dimension of online reading is one of the project's main concerns. In general, though, I am highly skeptical of the "Web 2.0" hype. There are two reasons for this. One goes back to the issue of history on which our interview started. "Web 2.0" is all about a generation-change in the history of the Web, but from a perspective that is looking at what is happening right now, as opposed to what was happening during the previous generational change (the "1980s" we discussed earlier). It's not
clear that we can really describe a generation change of this magnitude and complexity while we are in the midst of the change itself, except to say that "something" is happening that a future generation may decide is qualitatively different. After all, when people speak of Web 2.0, they are actually referring to a swarm of many kinds of new technologies and developments that are not all
necessarily proceeding in the same direction (for example, toward
decentralization, open content creation and editing, Web-as-service, AJAX, etc.).
It's not at all certain, for example, that open content platforms in the style of blogs, wikis, and content management systems align with a philosophy of decentralized or distributed control, since many such database- or XML-driven technologies require a priesthood of backend and middleware coders to create the underlying systems and templates for the new "open" communications. Just how many
people in the world, for example, can make one of the current generation of open-source content-management systems (which often start out as blog engines) do anything that isn't on the model of "post"-and-"category" or chronological posting? Even the more trivial exercise of re-skinning such systems (with a fresh
template) requires a level of CSS knowledge that is not natural to the user base. So saying that we are making the change from Web 1.0 to 2.0 is like saying that a swarm behavior is definitely moving in a single direction, when in fact it may be moving in several contradictory directions at once. (It's not accidental, by the
way, that many of the best known statements or conferences about Web 2.0 have relied on examples rather than generalizations. For example, Web 2.0 is "Flickr or MySpace.")
My second reason for being skeptical about "Web 2.0"–at least the hype about it–is more important. I think that people who make a big deal out of Web 2.0 are trying to take a shortcut to get out of needing to understand the real generation changes that are happening in the background and that underlie any change in the
Web. Those changes occur in social, economic, political, and cultural
institutions. Let's take the example of Facebook or MySpace, which (like other social networking systems) are often spoken of as exemplars of Web 2.0. These systems, of course, are deeply rooted in particular social scenes–especially at different levels of the educational system (even if MySpace started out in the music scene). There was recently a mini-scandal at my daughter's school (she's 13) when it was discovered that many in her class had lied about their age to set up MySpace pages, where they revealed unguarded details and characterizations about themselves without full awareness of what it meant to be online. What is happening
in such social scenes as the generations change?
Web 2.0 is just a high-tech set of waldo gloves or remote-manipulators that tries to tap into the underlying social and cultural changes but really requires the complement of disciplined sociological, communicational, cognitive, visual, textual, and other kinds of study that can get us closer to the actual phenomena.
That is, thinking that Web 2.0 is cool is just a shortcut because the real scene of cool lies underneath; and I don't think there are many developers of Web 2.0 technologies who have done the hard social and cultural studies to help them think about what they are developing. They make a neat system or interface that only
taps into some aspects of the social scene. Then, if there are a lot of hits or users, their system is said to be a paradigm. But it's hit or miss. There is no assurance that such technologies are the real, best, coolest, or even most useful "face," "book," or "space" of people–only that they are the face, book, or space allowed to surface through a particular lash-up of technologies.
What is happening underneath is history, in other words, and it is stupid to think that "Web 2.0" is any better as a formula for that than, for example, the even more stupid formula, "Generation Y." Ultimately, I guess, I don't believe in such concepts as Web 2.0 because I don't believe in People 2.0. (Go to the transhumanists for that). People live and change in relation to all that gave them
their history as people; and that history is swarming, overlapping, conflicted, and multidimensional.
GL: When I first read about the Transliteracies project I was surprised to see that it was focussing on online reading. That seems so passive, as if the computer is a mere extension, or hybrid, of the television and the book. There is this widely shared assumption that the computer is there to produce texts, images and sounds.
A high-quality consumption of the produced content will likely happen elsewhere: in the cinema, a magazine, the lounge, through your iPod when you're on the road. And then there is the cultural factor that US citizens spend a lot of time in front of PCs, whereas people in other cultures would rather do something more social, with family and friends, out on the street. What are the presuppositions and outcomes of Transliteracies so far in this respect?
AL: I disagree with you here, Geert, though usually we are–as they say–on the same page. The recent, explosive research in the related fields of "history of the book," "history of print culture," and "history of reading" shows that reading has never been a passive task–if by passive we mean the rote usage of information
distributed through well-understood, regulated channels. Consider, for example, William St Clair's recent The Reading Nation in the Romantic Period (2004), which is an astonishing work of archival recovery and methodological innovation that, for instance, demonstrates the tremendous variety and inventiveness in the relations between, on the one hand, publication systems and, on the other, what can only be called reading systems (including the many kinds of collective reading societies, book clubs, lending libraries, etc., of the time). Nor is it only the scene of collective reading that teemed with inventive activity in the past. The individual
reader was inventive as well. Much of the recent research on the transition from the age of orality to those of manuscript and then of print has been about the way reading changed as a psychological or cognitive activity. And this is not even to mention the tremendous ferment of "writerly" activity that readers have always undertaken, including the medieval culture of copying and glossing, the Early
Modern culture of the "commonplace book" (the precursor of today's sampling, aggregating, etc.), the long history of annotation practices, and so on. It's a hoax that reading has ever been a passive activity. And this is even more the case now in our current moment when we are changing our collective print-reading practices to adapt to online reading practices, and vice versa.
There is no "producer" today in any realm (scholarship, the film industry, the technology industry, journalism, you name it) who is not first of all a prolific and creative "reader." Granted that much of this reading occurs in ways that are quick, distracted, and superficial–that is, not through "deep" or "close" reading
but through scanning, browsing, searching, aggregating, etc. And granted that an increasing proportion of the works that are read belong to such genres as the memo, report, spreadsheet, email, Web page, blog post, text message, podcast audio, and so on. But the premise of Transliteracies is that there are hidden intelligences and social agendas within such contemporary "superficial" reading–especially in its network effects–that can be formulated and improved, both for private and social good. I don't see any reason why corporations like Google, Amazon, Adobe, and so on should have an oligopoly over developing the activities (not the passivities) of online reading practices. They do what they do well. But what are the under-researched and under-developed areas in online
reading technologies, tools, and systems that need to be explored by non-profit and other social sectors to make the overall framework of online reading more robust and diverse? I'd like to see universities, governments, NGOs, and others contribute to that research before everything is graven in stone by the big business of online reading. So, the short answer to your question is: perhaps only the corporations want you to believe that reading is passive. Take what they give you (their systems, their innovations, their file formats and protocols, etc.). But it ain't so.
And as regards "high-quality consumption": I don't actually think it is a done deal that high-quality consumption always means high-sensory consumption of the sort you suggest (cinema, glossy mags, iPod, etc.). For a significant part of the world, including people who are producers in the knowledge-work economy, it
continues to mean low-sensory text. We don't even need to go to the old-school Unix folks for witnesses (all those old postings in the comp.unix.user-friendly or comp.human-factors newsgroups, for example, about why the Unix command line is actually more friendly because it gives control instead of illusory ease-of-use). We just need to consult the "new media" crowd who I think is the immediate
audience of our present interview.
Real high-quality consumption for this crowd means source-code or script view, which is plain text. And that is not even to
mention the whole new plenum of machine readers (that is, RSS, adaptive aggregators, "Web services" of different kinds, etc.), which sip the fine wine of XML directly. These are also paradigms of active text-reading. Reading practices–individual, social, and machinic–are where the action is. Okay, so you can tell that I am an English professor who grew up reading (to the point where in childhood my parents had to make rules about how many hours I had to play
outdoors instead of curling up with a book). If Marshall McLuhan was an English professor who betrayed reading (the "Gutenberg galaxy") to prophesy the new mediaverse, then I am an English professor who sees a whole new, online textverse within the mediaverse.
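(((To make Liu's "machine readers" concrete: the sketch below, in Python with only the standard library, parses a small invented RSS 2.0 feed the way an aggregator would, reading the XML directly, with no human-facing page in between. The feed contents are hypothetical, for illustration only.)))

```python
# A minimal sketch of a "machine reader": a script that consumes an
# RSS 2.0 feed as raw XML, with no rendering step for human eyes.
# The feed below is an invented example, not a real source.
import xml.etree.ElementTree as ET

RSS = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Feed</title>
    <item><title>First post</title><link>http://example.org/1</link></item>
    <item><title>Second post</title><link>http://example.org/2</link></item>
  </channel>
</rss>"""

def read_feed(xml_text):
    """Return (title, link) pairs for every <item> in an RSS 2.0 document."""
    root = ET.fromstring(xml_text)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

for title, link in read_feed(RSS):
    print(title, link)
```

An aggregator does little more than this in a loop over many feeds; the "reading" is entirely structural, which is Liu's point about machinic reading practices.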
# distributed via <nettime>: no commercial use without permission
# <nettime> is a moderated mailing list for net criticism,
# collaborative text filtering and cultural politics of the nets
# more info: [email protected] and "info nettime-l" in the msg body
# archive: http://www.nettime.org contact: [email protected]