Social Media and Time

Sarah Perez at ReadWriteWeb provocatively asserts that Real People Don’t Have Time for Social Media. In doing so, she addresses a question posed by Nina Simon of Museum 2.0: How Much Time Does Web 2.0 Take? Each post is thoughtful and has drawn good comments.

Although both posts are about social media and the time it takes, they address very different aspects. Nina’s post is about depth of involvement, while Sarah’s is about breadth. Nina identifies three levels of involvement: participant (1-5 hours/week), content provider (5-10 hours/week), and community director (10-20 hours/week).

Sarah starts her post by identifying two broad levels of involvement.

Let’s be honest here: we’re all a bunch of social media addicts. We’re junkies. Whether it’s a new Twitter app, a new Facebook feature, or a new social anything service, we’re all over it. But we may not be the norm. The truth is, being involved in social media takes time, something that most people don’t have a lot of. So how can regular folk get involved with social media? And how much time does it really take?

One of the differences between “us” and regular folk is that, when it comes to social media tools (Twitter and its clients, the BlogIt Facebook application), we try out new stuff and are inclined to switch if the new stuff is better. In choosing tools, we maximize: we want the best, and we are willing to spend time to become aware of it, to explore it, and to switch to it.

Real people, on the other hand, satisfice. Satisficing is a decision-making strategy that aims to meet criteria for adequacy rather than to identify an optimal solution. People, real or otherwise, satisfice most of the time.

In the social media context, real people may well find and stick with a “good enough” service for, say, music discovery. We might use several, not because we need several, but because the discovery tools themselves interest us, and we want to compare them.
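
For the concretely minded, here is a minimal sketch of the difference in Python, using hypothetical music-discovery services and made-up “fit” scores: the maximizer evaluates every option before choosing, while the satisficer stops at the first option that clears an adequacy threshold.

```python
def maximize(options, score):
    """Evaluate every option and return the highest-scoring one."""
    return max(options, key=score)

def satisfice(options, score, threshold):
    """Return the first option that is 'good enough'; fall back to the best only if none qualifies."""
    for option in options:
        if score(option) >= threshold:
            return option
    return max(options, key=score)

# Hypothetical services and made-up scores, purely for illustration.
services = ["ServiceA", "ServiceB", "ServiceC"]
fit = {"ServiceA": 0.7, "ServiceB": 0.9, "ServiceC": 0.8}.get

print(maximize(services, fit))        # ServiceB -- keeps looking until the best is found
print(satisfice(services, fit, 0.6))  # ServiceA -- stops at the first "good enough" option
```

The point of the sketch is simply that the satisficer spends far less time evaluating options, which is exactly the trade-off real people make with social media tools.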

Question of the Day, and…

OK, so it’s two questions. What happens when the past becomes so prevalent it is no longer even considered “the past”? When the availability of the archive destroys the very concept of the archive?

To give some context: the question comes from a post on an English prof’s blog that embeds a Lemonheads video from 1992 and includes the following academic-speak.

For those of us who came of age when mass media had become a hegemonic force so pervasive that it practically needed no ideology to justify it, it’s harder and harder not to look back. The past is available everywhere today: from eBay to YouTube. It no longer needs to be “demystified” by theorists because its abutment to the contemporary interfaces that display it renders it ironic.

Lest it seem that the above refers only to the web, from which reality is a safe retreat where time behaves well, consider the following look forward from William Gibson.

One of the things our grandchildren will find quaintest about us is that we distinguish the digital from the real, the virtual from the real. In the future, that will become literally impossible. The distinction between cyberspace and that which isn’t cyberspace is going to be unimaginable… Now cyberspace is here for a lot of us, and “there” has become any state of relative nonconnectivity. “There” is where they don’t have Wi-Fi.

Putting the above together, the distinction between “here and now” and “everywhere and all the time” is blurring for us, and will cease to be visible or relevant later this century. I’m not sure I believe that, but it’s an interesting thought with which to start the week.

The Gibson quote is from a Rolling Stone interview, which I found via Glyn Moody. I found the question of the day via Liz Hand.