Sunday, September 6, 2009

Retweeting Evolution

The microblogging and social networking site Twitter took off last year and had more than 44.5 million users worldwide as of June. In the 140-character ecosystem of Twitter, users have evolved a language of their own, figuring out creative ways to filter the sometimes overwhelming stream of Twitter posts. Now, Twitter has announced that a user-generated communication technique called retweeting--reposting someone else's message, similar to quoting--will be formally incorporated into Twitter. Some experts say Twitter's approach will hinder the conversational aspect of retweeting; others predict that it will create a new way of communicating.



Twitter has incorporated other user-generated linguistic tools, such as using a hash symbol in front of a word to make it easily searchable (like "#conference09"). Another common technique is typing @ in front of a username to reply directly (but publicly) to the user, which Twitter also formalized after users adopted it. These linguistic tools have even trickled into other social media environments, including YouTube, Flickr, Facebook, and blogs.
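These conventions are simple enough to pick out mechanically. As a rough illustration (not Twitter's own parser, which handles many more edge cases such as punctuation and username length limits), a couple of regular expressions can pull hashtags and @-mentions out of a post:

```python
import re

def extract_tags_and_mentions(tweet):
    """Pull hashtags and @-mentions out of a tweet's text.

    Simplified sketch: real Twitter syntax has more edge cases
    (Unicode words, trailing punctuation, 15-character usernames).
    """
    hashtags = re.findall(r"#(\w+)", tweet)
    mentions = re.findall(r"@(\w+)", tweet)
    return hashtags, mentions

tags, users = extract_tags_and_mentions(
    "Heading to #conference09 -- thanks @alice for the tip!")
# tags == ['conference09'], users == ['alice']
```

Because the markers are purely textual, any platform (or search engine) can adopt the same convention without changes to its data model, which helps explain how hashtags spread to other services.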
Currently, there is no set format for retweeting, which loosely consists of reposting someone's tweet and giving due credit. The most common scheme for a retweet involves prefacing the post with the letters "RT," then the @ symbol, and the username of the person being quoted. The retweet rebroadcasts the information to a new set of followers, who see the retweet and have the option of retweeting themselves. In this way, ideas, links, and other information can be distributed--and tracked--fairly quickly.
But the retweeting format is much more inconsistent and complex than the targeted reply and hashtag conventions, according to Microsoft Research social media scientist Danah Boyd, who recently posted a paper on the behavior of retweeting. Variations include typing the attribution at the end and using "via," "by," or "retweet" instead of "RT." What's more, people often add their own comments before or after a retweet. This becomes a problem with Twitter's 140-character limit, explains Boyd. Typing "RT @username" takes up characters, and so does adding a comment. To deal with this, users will paraphrase or omit part of the original text, sometimes leading to incorrect quotes.
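The attribution variations Boyd catalogues can be sketched as a small pattern matcher. The marker list and regular expression below are illustrative assumptions, not a complete grammar of real-world retweet syntax:

```python
import re

# Attribution markers Boyd observed: "RT @user" is the most common,
# but "via @user" and "retweet @user" appear too, at the start or
# end of a post.  Illustrative sketch only.
RT_PATTERN = re.compile(r"\b(?:RT|via|retweet)\s*:?\s*@(\w+)",
                        re.IGNORECASE)

def find_attributions(tweet):
    """Return the usernames credited anywhere in the tweet."""
    return RT_PATTERN.findall(tweet)

find_attributions("Great read! RT @bob Check out this paper")  # ['bob']
find_attributions("Check out this paper (via @carol)")         # ['carol']
```

The inconsistency Boyd describes is exactly why such a matcher stays heuristic: commentary before or after the marker, paraphrased text, and ad hoc spellings mean no fixed pattern recovers every retweet.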
Last week, Twitter announced that it will soon implement a button that will let users automatically repost someone else's tweet. While this will make it quicker and easier for users to retweet accurately, the mockup of the new button does not appear to let users edit the retweet to incorporate commentary. Instead, the "retweet" button will add the image and name of the quoted person to the original tweet and post it for those who follow the retweeter.
The new retweet function "is not going to meet the needs of those who retweet. At the same time, I think it's going to bring retweeting to a whole new population," says Boyd. "Adding commentary is a huge element to why people retweet." Instead of just replying to one person with an opinion, users who retweet and add a comment can target a larger audience, sharing their opinions and inviting others to do the same, she says.
Boyd found that the percentage of Twitter users who retweet is fairly small, but she expects that number to increase once the retweet button is incorporated. In her research, Boyd found that 11 percent of the retweets examined contained commentary. But she says that number likely underestimates the phenomenon, as she only looked for comments at the beginning of the message.
"Retweeting is primarily used by the geeks and news folks," she says. "What's really starting to hit [Twitter] in large numbers... are those involved with the pop culture." Boyd expects that a retweet button will bring the practice to those millions of users who follow celebrities, such as Twitter fanatics Ashton Kutcher and Oprah Winfrey, for example. "We're going to see information spread from populations who haven't engaged in that way [before]. We'll see an evolution of the behavior," says Boyd. "It will become a way to validate or agree with other users' content."
Users often employ retweets to provide context in conversation, says Susan Herring, a professor of information science and linguistics at Indiana University and editor in chief of the Language@Internet journal. "I can't imagine that [the new Twitter tool] will be very satisfactory to Twitter retweeters," says Herring. "A retweet plus a comment is a conversation. A retweet alone could be an endorsement, but it's a stretch to view an exchange of endorsements as a conversation." Herring does agree that it will increase retweeting and broaden the range of users who retweet.
Retweets are not just of interest to users but also are valuable to companies and researchers who strive to keep track of how ideas spread. Retweeting "is this elegant viral mechanism," says Dan Zarrella, a Web developer who studies viral marketing in social media. "The scale and data you can extract from [retweets] has never been possible with [other] viral or word-of-mouth communications," says Zarrella, who claims to have a database of more than 30 million retweets.
"I think that having a button and supported structure of retweeting is definitely a good idea, but I disagree with the implementation," Zarrella says. He suggests a format like the one used by TweetDeck and other third-party Twitter tools: pressing their retweet button automatically copies the original tweet with the "RT" syntax, but still lets the retweeter modify the text.
By taking out the "RT @username," Twitter is making it impossible for users to search for retweets themselves, says Zarrella. "They're limiting how much you can analyze retweets." Zarrella speculates that the retweet button may have been designed so that, down the road, Twitter can charge for features such as extensive retweet tracking.
In addition to showing the original tweeter's image, the new Twitter button will also show the latest 20 retweets of a post. "If they show the breadcrumbs of the trail of everyone who retweeted, that's a good thing," says Steve Garfield, a new media advisor to several large companies and prolific video blogger. "I like to add value to my retweets by adding a comment, to tell people why I like it." If the new function doesn't allow for comments, Garfield says users will just design a new way or revert to the old way.
"People will continue to repurpose Twitter to meet their needs," predicts Herring. "I can't imagine that those who are passionate retweeters will discontinue their practices."

Adding Trust to Wikipedia


The official motto of the Internet could be "don't believe everything you read," but moves are afoot to help users know better what to be skeptical about and what to trust.
A tool called WikiTrust, which helps users evaluate information on Wikipedia by automatically assigning a reliability color-coding to text, came into the spotlight this week with news that it could be added as an option for general users of Wikipedia. Also, last week the Wikimedia Foundation announced that changes made to pages about living people will soon need to be vetted by an established editor. These moves reflect a broader drive to make online information more accountable. And this week the World Wide Web Consortium published a framework that could help any Web site make verifiable claims about authorship and reliability of content.
WikiTrust, developed by researchers at the University of California, Santa Cruz, color-codes the information on a Wikipedia page using algorithms that evaluate the reliability of the author and the information itself. The algorithms do this by examining how well-received the author's contributions have been within the community. They look at how quickly a user's edits are revised or reverted and consider the reputation of those who interact with the author. If a disreputable editor changes something, the original author won't necessarily lose many reputation points. A white background, for example, means that a piece of text has been viewed by many editors who did not change it and that it was written by a reliable author. Shades of orange signify doubt, dubious authorship, or ongoing controversy.
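The published WikiTrust work gives the precise formulas; the toy sketch below only illustrates the two ideas described above -- a revert by a low-reputation editor costs the author little, and text shades from orange toward white as it survives review by reputable editors. All function names and constants here are invented for illustration:

```python
def update_reputation(author_rep, reviewer_rep, edit_survived,
                      gain=1.0, penalty=1.0):
    """Toy reputation update in the spirit of WikiTrust (not its
    actual formula).  Surviving review raises reputation; being
    reverted lowers it, scaled by the reviewer's own standing, so
    a disreputable editor's revert costs the author little."""
    if edit_survived:
        return author_rep + gain * reviewer_rep
    return max(0.0, author_rep - penalty * reviewer_rep)

def trust_shade(text_rep, max_rep=10.0):
    """Map accumulated trust to a background color: white for
    well-vetted text, deeper orange for dubious passages."""
    trust = min(text_rep / max_rep, 1.0)
    return "white" if trust > 0.9 else f"orange({1 - trust:.0%})"

update_reputation(5.0, 2.0, True)    # survives review -> 7.0
update_reputation(5.0, 0.1, False)   # reverted by a nobody -> ~4.9
trust_shade(10.0)                    # fully vetted text -> 'white'
```

The key design choice the sketch mirrors is that reputation is earned implicitly from edit history rather than declared, which is what makes it hard to game by simply vandalizing a page.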
Luca de Alfaro, an associate professor of computer science at UC Santa Cruz who helped develop WikiTrust, says that most Web users crave more accountability. "Fundamentally, we want to know who did what," he says. According to de Alfaro, WikiTrust makes it harder to change information on a page without anyone noticing, and it makes it easy to see what's happening on a page and analyze it.
The researchers behind WikiTrust are working on a version that includes a full analysis of all the edits made to the English-language version of Wikipedia since its inception. A demo of the full version will be released within the next couple of months, de Alfaro says, though it's still uncertain whether that will be hosted on the university's own servers or by the Wikimedia Foundation. The principles used by WikiTrust's algorithms could be brought onto any site with collaboratively created content, de Alfaro adds.
Creating a common language for building trust online is the goal of the Protocol for Web Description Resources (POWDER), released this week by the World Wide Web Consortium.
Powder takes a simpler approach than WikiTrust. By using Powder's specifications, a Web site can make claims about where information came from and how it can be used. For example, a site could say that a page contains medical information provided by specific experts. It could also assure users that certain sites will work on mobile devices, or that content is offered through a Creative Commons license.
Powder is designed to integrate with third-party authentication services and to be machine-readable. Users could install a plug-in that would look for claims made through Powder on any given page, automatically check their authentication, and inform other users of the result. Search engines could also read descriptions made using Powder, allowing them to help users locate the most trustworthy and relevant information.
"From the outset, a fundamental aspect of Powder is that, if the document is to be valid, it must point to the author of that document," says Phil Archer, a project manager for i-sieve technologies who is involved with the Powder working group. "We strongly encourage authors to make available some sort of authentication mechanism."
Ed Chi, a senior research scientist at the Palo Alto Research Center, believes that educating users about online trust evaluation tools could be a major hurdle. "So far, human-computer interaction research seems to suggest that people are willing to do very little [to determine the trustworthiness of websites]--in fact, nothing," he says. As an example, Chi notes the small progress that has been made in teaching users to avoid phishing scams or to make sure that they enter credit-card information only on sites that encrypt data. "The general state of affairs is pretty depressing," he says.
Even if Web users do learn to use new tools to evaluate the trustworthiness of information, most experts agree that this is unlikely to solve the problem completely. "Trust is a very human thing," Archer says. "[Technology] can never, I don't think, give you an absolute guarantee that what is on your screen can be trusted at face value."