RT @mecredis RANT RANT

First, if you don’t like Twitter (I know, this blog is becoming a Twitter fan page, but hey, it’s my blog, right?) don’t read this post. It’ll just annoy you, so consider this your fair warning.

Last night I finally figured out how to change Tweetie for iPhone’s settings to allow me to post RT’s instead of via’s. The setting was buried in “Advanced -> Experimental -> RT-gurgitationability”, an obviously spiteful placement and label.

This means that my retweets look like this:

RT @creativecommons: June’s CC Salon NYC / @OpenVideo Conf Pre-Party: http://bit.ly/jAk1b Facebook RSVP: http://bit.ly/qJU3b

instead of this:

June’s CC Salon NYC / @OpenVideo Conf Pre-Party: http://bit.ly/jAk1b Facebook RSVP: http://bit.ly/qJU3b (via @creativecommons)

Why would Tweetie make it so difficult to use the RT convention over their suggested via convention?

The answer seems to be rooted in a minority view held by the creator of Tweetie. He doesn’t think the RT form is “cool” and thinks that it discourages people from “thinking for themselves”.

Or something.

The points raised against RT, followed by my thoughts:

I don’t know how to reply to this. Is the @ symbol in e-mail cool? It’s a convention; get over it.

So what? A massive amount of human creation is “me too”; there’s no reason to discourage this on a software level. Let people filter out the “me-too’ers” using their own agency and following habits. You’re not going to suddenly encourage people to be more original by breaking your own software and bucking a convention.

There are plenty of people I stopped following on Twitter because their output consisted only of RT’s, and I agree, they were spammy. But again, hiding a useful feature because you think it’s going to decrease spam is naive at best, and fascist at worst.

More importantly, however, there’s value in verbatim copying: you preserve the tone and the meaning of the source. How should I retweet something that Shaq says, if I want my followers to see it, supposing they don’t already follow him? Am I supposed to rewrite Shaq’s words? The curious way in which Shaq interprets the English language on Twitter is one of the best reasons to follow him. Rewriting Shaq’s tweets would kill the meaning, and so would linking to them.

I also fail to see how the claim that all retweets should be rewritten or linked differs from a claim that journalists must rewrite and link their sources instead of quoting them. The point of a quote is to make the source’s actual words available, right now, not through a link, and not through your lens.

I actually have sympathy for this, to a certain extent. Many friends were confused by RT when joining Twitter, but they asked questions and discovered the meaning. Same with e-mail.

You’re making my point for me!

One final point against Tweetie’s suggested convention: when you use (via @ … ) you’re adding two or three unnecessary characters compared to RT (depending on punctuation), characters that are precious against Twitter’s 140-character limit.
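A quick back-of-the-envelope check of that character cost (a minimal Python sketch; the exact difference depends on the punctuation each client inserts):

```python
# Compare the overhead of the two conventions, assuming the exact
# formats shown above: "RT @user: text" vs. "text (via @user)".
def rt_style(user, text):
    return "RT @%s: %s" % (user, text)

def via_style(user, text):
    return "%s (via @%s)" % (text, user)

user = "creativecommons"
text = "June's CC Salon NYC / @OpenVideo Conf Pre-Party"
print(len(rt_style(user, text)))   # the RT form
print(len(via_style(user, text)))  # the via form is a couple of characters longer
```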

Anyway, at the end of the day, the behavior of Tweetie’s developer represents a strong argument for software freedom. If you can view the source, modify it, and distribute a new version, why not just fork the project and “fix” the bug?

I suppose this is what I get for using closed source software. Too bad Tweetie works better than the open source clients.

Google is a sucker’s game that only serves the needs of a tiny elite

The following is a modified version of Seth Finkelstein’s Guardian column “Twitter is a sucker’s game that only serves the needs of a tiny elite”:

Let me start by confessing I do have a Google account. But I won’t be fooled again. That is, I refuse to once more play the attention-seeking game, where everyone enriches the contest runner and surrounding marketers for the privilege of aspiring to be one of the very few big winners.

Google is a “search engine”. Users can quickly search for web pages from a host of sources, though the pages are of dubious quality. Think USENET posts, but faster and less organized.

If someone finds your web page, they’re called a “visitor”, while someone who subscribes to a website’s feed is called a “reader”. The language is already revealing of the structure.

When I first heard of Google, I made the mistake of thinking it was like USENET, an old system that allows a group to exchange content among themselves. So I wondered why there was such a fuss over a variant of that ancient idea.

After I saw Google in use, I realised the difference was that, while USENET had all participants equal, Google implements a distilled version of many problematic aspects of posting to newsgroups. Namely, a one-to-many broadcasting system that serves the needs of high-attention individuals, combined with an appeal to low-attention individuals that the details of one’s life matter to an audience.

The “A-list” phenomenon, where a few sources with a large readership dominate the information flow on a topic, was particularly stark. Since the numbers of “visitors” and “readers” are visible, the usual steep ranking curve was immediately evident. A highly ranked site is free to attack anyone lower down the ranks, as there’s no way for the wronged party to effectively reply to the same readers.

Getting a significant readership and thus being socially prominent is also important. Hence, there are major incentives to churn out quick punditry that is pleasing to partisans.

And Google evangelism has gone down a path similar to USENET evangelism. There is the same two-step of arguing: roughly, it can be both diary/chat and journalism, thus a promoter can switch back and forth between those two concepts whenever convenient. The word “conversation” is contorted in a now familiar way, to mean mutual pontification among a tiny elite. The dream of potential stardom of a quasi-intellectual sort is dangled in front of the masses, though the only beneficiaries would be the data-mining companies profiting off the result.

When the entrepreneur Jason Calacanis offered $250,000 to have his product’s account be a “suggested link” for two years, saying, “Google has the ability to unleash a direct marketing business the likes of which the world has never seen”, that was a blunt illustration of the real dynamics at work. Though Google didn’t accept his offer, monetisation must eventually happen somehow. People aren’t being connected, they’re being bundled up and sold.

Recently, venture capitalists invested $35m (£23m) in Google (adding to an earlier $20m in funding). Such a sizable investment can buy a corresponding amount of hype. I suspect money is partially responsible for some (though by no means all) of the breathless media coverage Google has garnered.

Note the potential survivor’s bias effect. You may far more often hear from the rare person who has benefited from the service, than one who reports trying it and finding it a total waste of time. Some sceptical analysis by Nielsen Wire has pointed out that user retention is relatively low: “But despite the hockey-stick growth chart, Google faces an uphill battle in making sure these flocks of new users are enticed to return to the nest.”

Google is low-level celebrity for the chattering class. And the pathologies of celebrity are all on display, including the exploitative industries that prey on the human desire to be heard and noticed. My answer to Google’s slogan of “Are you feeling lucky?” is: “I’m not playing a sucker’s game.”


Aside from searching and replacing “Twitter” with “Google”, I also replaced “IRC” with “USENET”, replaced “follower” with “visitor” and “reader”, etc.

The effect is certainly entertaining if not surprising: most of Seth’s criticisms could be levied against Google and the web a decade ago. Admittedly, other parts don’t make any sense, but many of the overall criticisms still hold. Google’s ranking of sites is based on webmasters linking to those sites. The more links to a particular site, the more valuable Google deems it, the more exposed it becomes in Google search results, and the more likely that site will retain a larger number of readers.
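To make that mechanism concrete, here is a minimal sketch of the link-based ranking idea (a toy power iteration in Python; Google’s real system is vastly more elaborate):

```python
# A toy power-iteration sketch of link-based ranking: pages gain rank
# from the pages that link to them, weighted by the rank of the linker.
def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    rank = dict((p, 1.0 / len(pages)) for p in pages)
    for _ in range(iterations):
        new_rank = dict((p, (1 - damping) / len(pages)) for p in pages)
        for page, outgoing in links.items():
            if not outgoing:
                continue  # dangling pages leak a little rank in this sketch
            share = damping * rank[page] / len(outgoing)
            for target in outgoing:
                new_rank[target] += share
        rank = new_rank
    return rank

web = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
print(pagerank(web))  # "c", linked to by both other pages, ranks highest
```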

The reason Seth’s criticisms apply to both Twitter and Google is because they are not criticisms of a certain platform or service, but criticisms of how humans filter for value in social environments. We rationally aggregate things that we consider valuable regardless of whether they’re web sites or people’s status updates. When we have tools to help us organize and discover those things, those tools are going to beat out other less efficient ones.

Twitter is filled with heaps of suckers, as Seth would put it. But the web is also filled with heaps of suckers. Sturgeon’s law applies in all mediums. Google found a way to aggregate and organize the most valuable information in an efficient manner, and it became the standard tool for discovering better information. Twitter is now helping us discover it in another medium and context: the real-time short text message.

Seth seems to be worried that individual dissenting voices are being drowned out by the litany of “elite” or “celebrity” voices, which are rewarded with massive numbers of followers and will only be reinforced by Twitter. So what? Celebrity names and issues dominate Google’s search terms every year. This doesn’t diminish the general value of the platform.

Seth’s worry is premised on a false dichotomy. Even if there is plenty of junk on Twitter, that doesn’t mean we must ignore the platform altogether. The same criticism could have been levied against Google, and even the web itself. But Seth wouldn’t know — he doesn’t actually use Twitter; he follows exactly 0 people. As Mike Masnick said, debating the value of Twitter with a non-user is like debating the value of music with someone who’s deaf.

For those of you who don’t use Twitter, for the record: it’s not difficult at all to calibrate one’s Twitter feed into something useful. Just find the people you might be interested in hearing from occasionally, and ignore the rest. Just like Google.

When Does Facebook Stop Being a Startup and Start Being A Government?

A lot of my time is spent thinking about the Internet as a public place. That may seem like an obvious and intuitive concept to grasp, but it is practically difficult for a number of reasons. Some of these reasons are legal, such as copyright law, and others are technical.

Many of Facebook’s struggles are, at their core, symptoms of a public-vs.-private schizophrenia that massive centralized platforms are beginning to suffer from. Wikipedia is one solid counter-example: most decisions and policies are the result of decentralized consensus or vote.

The current row over whether Facebook should allow Holocaust deniers the right to organize at first appears as a freedom of speech issue. This is certainly how the Facebook team has justified allowing certain groups to stay online. But because it is all happening on Facebook’s servers, it is also (and perhaps singularly) a Facebook Terms of Service issue.

Facebook has the right to throw people off their service for reasons they deem appropriate, just as Club Penguin has the right to censor children from cursing at each other when playing a video game. Facebook is not the United States government, and it is therefore not subject to the same kind of First Amendment scrutiny when censoring speech.

But Facebook is a government of some kind. With over 175 million users, the site is now more populous than most countries. They’re also holding elections and convening debate over the rights and responsibilities of their users. It’s clear that they are governing users’ actions much in the same way that a government governs citizens’ actions, but it is now totally unclear what inalienable rights Facebook users have when engaging with their friends and colleagues in what has become a public space. It is my hope that projects like Autonomo.us will help shift the debate towards greater user freedom and data portability in the long run, but we aren’t there yet. More specifically, whether Facebook respects an external bill of rights (as drafted by Autonomo.us) is a separate issue from whether Facebook will ever legally be considered a public or private space.

This battle has occurred in the physical world, and the law seems conflicted over whether massive private spaces can be considered public. In Iowa, malls are considered private property, but New Jersey’s State Supreme Court disagrees, and in the 1980 Supreme Court decision Pruneyard Shopping Center v. Robins, the court decided that states like California could affirm free-speech rights in places like malls.

The ToS modification fiasco is another example of Facebook’s public-vs.-private schizophrenia. At the heart of the blow-up over the revised Terms of Service was a sentence claiming that users’ content “will survive” on Facebook even after said users delete their accounts. Consumerist rightly interpreted this phrase as allowing Facebook to exploit (if not behave as if they own) your content in perpetuity. This was a dire and cynical prediction, but not unfounded. Julius Harper did a masterful job of organizing the outrage over the modified ToS and was subsequently invited into the negotiations, which was certainly a step in the right direction.

A good-will interpretation of Facebook’s new phrasing was that the site’s administrators couldn’t be absolutely sure that all of your content would be gone once you deleted your account. Consequently, Facebook’s lawyers wanted to preclude liability (privacy, copyright, and otherwise) if your content happened to show up somewhere in a backup or internally archived version of the site. Anyone familiar with running a user platform (and backing it up) will be aware of the complexity involved in keeping track of user data across many servers, so do not dismiss this challenge as an easy task until you talk to a server administrator.
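For a sense of why this is hard, here’s a minimal sketch (hypothetical schema and names, not Facebook’s code) of the soft-delete pattern most large platforms use: content is flagged immediately but only physically purged later, and backups keep their own copies in the meantime:

```python
import datetime

# Hypothetical illustration of why "deleted" content lingers: the row is
# only flagged, not destroyed, so copies persist in replicas and backups
# until an asynchronous purge (and the backup rotation) catches up.
class Post(object):
    def __init__(self, post_id, body):
        self.post_id = post_id
        self.body = body
        self.deleted_at = None  # None means the post is still live

    def soft_delete(self):
        # Hide the post right away; physical removal happens later.
        self.deleted_at = datetime.datetime.utcnow()

def purge(posts, grace=datetime.timedelta(days=30)):
    """Drop posts whose grace period has expired. Any backup taken
    before this runs still contains the 'deleted' content."""
    cutoff = datetime.datetime.utcnow() - grace
    return [p for p in posts if p.deleted_at is None or p.deleted_at > cutoff]
```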

But there was also a feature-based reasoning behind Facebook’s ToS modification. Facebook did not want to be obligated to remove messages, wall posts, and photos from other users’ accounts and inboxes simply because one user deleted their account.

If Alice sent Bob a message on Facebook, and then deleted her account, should Facebook be obligated to remove Alice’s message from Bob’s Facebook Inbox? This is something the site could do very easily. We’ve all seen instances of our friends removing status updates, profile information, or photos, so there’s no question Facebook can unilaterally perform the same action without our permission. But our intuition says that they shouldn’t do this. Even though Bob may not own the copyright to reproduce Alice’s content, he should at least be afforded the dignity of perpetually retaining a record of his communication with her, despite her desire to remove her presence from Facebook.

This is how the Internet works: if Alice and Bob were communicating over e-mail, there would be no question as to whether Bob had the right to retain Alice’s e-mails even if she deleted her e-mail account.

But Facebook is not the public Internet, where users have no control over servers across the world. Quite the opposite: Facebook has control over everything and can actually unilaterally delete e-mails out of inboxes. This presents a unique liability and responsibility that the company’s lawyers were interested in attenuating. I wouldn’t be surprised if it was motivated by the threat of a lawsuit by an angry user wanting *all* of their content off the site, including messages sent to other users.

Ultimately, Facebook’s desire to retain the metaphors of Internet communication is at odds with the company’s power to unilaterally control that communication. While Facebook actually has the power to delete Alice’s e-mails from Bob’s Facebook Inbox, they choose not to, out of respect for norms established long ago on the public Internet. In other words, Facebook is attempting to behave like a public space while remaining a private company by crafting its own rules and laws.

There’s also the issue of public disclosure of private facts on Facebook. American law prevents me from disclosing private facts about Alice that are not newsworthy. However, if Alice had disclosed such private facts in a public space (perhaps in front of a large audience), I could pass the facts on to others and even publish them.

But what if Alice discloses her private fact on her Facebook profile? It remains private in the sense that only I and her friends can see it by logging into Facebook’s private service, but it is also arguably public in the sense that I and her friends are also an audience. Does it matter how many friends she has? What privacy settings did she have in place?

The public and private nature of Facebook feels very complicated.

In the end, I don’t think the phrase “walled garden” suits the scale and character of these kinds of issues anymore, as we’re no longer talking just about access to content. These issues are about government, control, public spaces, and censorship, so our freedom and laws should apply accordingly.

The Staggering Hypocrisy of the MPAA

MPAA shows how to videorecord a TV set from timothy vollmer on Vimeo.

This video was shot by my friend Timothy Vollmer at the current DMCA exemption hearings. The issue is whether the Copyright Office should grant educators and students the right to rip DVDs for educational purposes. Peter Decherney succeeded in establishing this right for film historians working at universities, and is now seeking to broaden it to all educators and students.

In the video, a representative from the MPAA is demonstrating that it is “easy” to access and compile content from a DVD without the need to rip it using decryption software. Their suggested technique? A camcorder pointed at a flatscreen, hooked into the audio signal.

This is evil and hypocritical for a number of reasons. First, the MPAA has positioned itself against camcording movies; here, they’re showing how easy it is to do. They’re also one of the main organizations to have successfully lobbied for criminal penalties against people bringing camcorders into movie theaters.

Second, the software used in the presentation is VLC. VLC disables the MPAA’s price-fixing scheme known as region encoding and can also decrypt DVDs, providing yet another example of the MPAA believing its own rules don’t apply to itself.

Third, the MPAA has been leading the pack in attempts to close the “analog hole” through legislation and collusion with hardware manufacturers. The analog hole is precisely the phenomenon demonstrated in this video: since audio and visual data must eventually be converted into an analog signal (our brains are not capable of decoding 1s and 0s into images and audio yet), there will always be an avenue through which to record media, so long as our computers obey us.

“Closing the analog hole” refers to forcing manufacturers to cripple hardware so that it is incapable of broadcasting analog signals and also incapable of recording them. It is the stuff of a dystopian science fiction plot, not technical reality.

Ultimately this video demonstrates the insidiousness of the MPAA’s strategy: they want to force educators to use a technique that they’re simultaneously lobbying to prohibit.

End result? The precise technique suggested by the MPAA, the analog hole, gets legislated away by the MPAA itself, and educators are left wasting money and time on multiple copies of crippled media.

UPDATE: Another way I’m thinking about this video: it proves that the MPAA knows closing the analog hole is impossible, thus exposing their attempts at legislation as disingenuous.

Props go to Tim for posting such an illustrative video (not to mention having the nerve to post clips of Harry Potter under fair use!)

What Would Twitter Have Looked Like on 9/11?

I spent the first week of college living through September 11th in and around New York City and have since endured recurring plane crash nightmares.

Which is why I was relieved to find out after the fact that today’s close call with Air Force One and two F-16s was a photo-op rather than another generation-defining tragedy.

Reading the New York Times’ extensive coverage of the episode on their blog had me wondering how the event unfolded on everyone’s-favorite-real-time-reporting-source: Twitter. What was the first tweet that observed the flyby? Was it panicked? How many people retweeted it? What would Twitter have looked like on 9/11?

We’ll never know, but I’ve done a bit of searching for terms related to today’s news (“nyc plane”)* and have discovered one of the first tweets, at around 10:30am (around the time of the first flyover), by @n8s8e asking @JetSetCD whether Obama was supposed to be in NYC.

Shortly after, @The_Pace asks a similar question, and then @hugoyles mentions that Goldman’s trading floor was evacuated. Then @ChicagoSooner reports that CNBC had confirmed the sightings. @Rithesh asks if there was a plane crash in lower NYC, and then @grapejamboy breaks the news that the Pentagon confirmed the flights as a photo-op. From then on, most tweets cover the story properly.

It’s clear that Twitter beat traditional news outlets today in relaying that something was happening with a plane over NYC’s downtown skies. However, as @Rithesh’s tweet demonstrates, there is potential for misinformation to be disseminated as well (there was no crash), so the system is not noise-proof.

There’s also a limit to what can be gleaned from Twitter search at any given moment, and a very real chance that all the signal will itself become noise. As commentators smarter than I have observed, this makes Twitter a fantastic “raw material” in a journalist’s process, but not a final product itself.

But really, what’s the difference between leaving a search open in Tweetdeck and leaving CNN on in the background?

UPDATE: Zander points out this great piece in the Nieman Journalism Lab breaking down today’s Twitter accounts in far greater detail than I did.

*This search is not scientific at all and is probably leaving out earlier sightings. I tried searching for “plane” but Twitter’s search is frustratingly limited to narrowing queries by day as opposed to hour and minute (which would be ideal here) and will only deliver a maximum of 1500 results for any term. There are obvious security reasons for this, but it presents a fantastic example of how Twitter can capitalize on search: I’m willing to shell out a couple of dollars for access to do more sophisticated searching.
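For the curious, here’s roughly what that searching looked like against the Search API of the time (a sketch; the endpoint and parameter names are my recollection, so treat them as assumptions rather than documentation):

```python
import json
import urllib

# Rough sketch of paging through Twitter's 2009-era Search API.
# rpp tops out at 100 and paging stops at 15, hence the ~1500-result
# cap; the since:/until: operators narrow only by day, not hour/minute.
BASE = "http://search.twitter.com/search.json"

def search(query, since, until):
    results = []
    for page in range(1, 16):  # pages beyond 15 return nothing
        params = urllib.urlencode({
            "q": "%s since:%s until:%s" % (query, since, until),
            "rpp": 100,
            "page": page,
        })
        data = json.load(urllib.urlopen("%s?%s" % (BASE, params)))
        if not data.get("results"):
            break
        results.extend(data["results"])
    return results

tweets = search("nyc plane", "2009-04-27", "2009-04-28")
print(len(tweets))  # capped around 1500, no matter how popular the term
```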

Information Overload, Facebook Fatigue, and Twitter’s Awesome Filter

I’ve been using less and less of Facebook recently, and I’ve started to wonder why. I primarily use it to organize events, keep track of contacts (once a month I need to reach someone whose e-mail I don’t have) and occasionally upload photos that I don’t want to put on Flickr and/or want to tag with people I know.

My loss of interest in Facebook is exemplified by my current infatuation with Twitter. I was deeply skeptical of Twitter when I first heard about it, but signed up and quickly forgot about it. After I stumbled across a couple of Twitter accounts and started following them, I decided to actually try it out.

Now I’m hopelessly addicted. As Mike Arrington said, I need Twitter more than Twitter needs me.

But I hadn’t given a lot of thought to moving on from Facebook until I came across Molly Schoemann’s post on “Why I left Facebook”:

Because every damn time I signed on to Facebook, my feed went like this:

[Girl you found distasteful in high school]: Has posted pictures from her wedding!

Click here to view her photos, while wondering if perhaps you misjudged her, back in the day.  Find photos distasteful, even for wedding photos.  Feel slightly depressed, if also vindicated.

You get the idea. Molly has perfectly articulated my Facebook fatigue. I’ve found that I’ve had trouble separating the signal from the noise. In fact, most of Facebook has just become noise to me. The useful parts are specific ones: I either receive an e-mail telling me that there is an event I want to go to (though I rarely RSVP correctly — either over- or under-obligating myself for months) or I search for someone’s e-mail or phone number.

The feed now both scares and bores me.

Facebook is now suffering from information overload, and we lack the resources to adequately deal with it. Sure, I can select “Less Information about [Guy I Barely Know]”, but the problem seems to be systemic to Facebook in general. I don’t think Facebook is objectionable because it publishes private or otherwise hard-to-find information; I think it’s objectionable merely because it publishes too much valueless information, period.

Creating adequate filters is the essential solution to this problem, and it is why Google was so successful. They created a filter to tame the info-glut of the late 90s on the web.

Google was solving a problem that was essentially an artificial intelligence one: how does a machine know what you are asking for? How can a machine understand what you want to find? Google’s solution was to leverage the collective intelligence of the web in order to infer meaning about its content.

Facebook has tried making the feed more interesting by showing me stories involving more than one friend. The system is making an educated guess about what stories I’ll find most interesting: it picks the ones that implicate multiple friends, and to some extent this works as a good indicator of whether I’ll find a particular item interesting.
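As a toy version of that guess (my reading of the heuristic, not Facebook’s actual algorithm), the scoring might look something like this:

```python
# Score each candidate story by how many of my friends it involves and
# surface the highest-scoring ones first (a toy heuristic, as described).
def rank_feed(stories, my_friends):
    """stories is a list of (story_text, people_involved) pairs."""
    def score(story):
        _, people = story
        return len(set(people) & my_friends)  # mutual involvement
    return sorted(stories, key=score, reverse=True)

my_friends = {"alice", "bob", "carol"}
stories = [
    ("Alice commented on Bob's photo", {"alice", "bob"}),
    ("A guy you barely know updated his status", {"dave"}),
]
print(rank_feed(stories, my_friends)[0][0])  # the multi-friend story wins
```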

But my interest is still waning.

Stop reading now if you hate Twitter, because you’re not going to enjoy this next part.

I think Twitter presents a better solution to taming the signal-to-noise ratio (SNR) of social networks. This is because Twitter’s inherent filter is better and more active. On Facebook, we’ve been brainwashed to mindlessly accept most relationships with people we know in real life (rejecting a friend request is serious business; most people just leave them queued up), but we haven’t actually taken into consideration the fact that we’ll be inundated with trivia about their lives.

With Twitter, the filter is better for a number of reasons.

First, relationships are asymmetrical, which removes the friend-hoarding incentive. In other words, there is no reason for me to follow you unless I’m interested in what you have to say; the fact that I follow you means nothing about me. Compare this to you and me being friends, symmetrically, on Facebook. Even if we aren’t that close, there is little incentive for me to deny your request if I’m interested in showing how popular I am; what human doesn’t want to show how popular they are? Facebook’s architecture rewards friend hoarding, and consequently information overload, in a way that Twitter’s doesn’t.

Second, if I begin to follow you on Twitter and you post boring, irrelevant, or uninteresting items, I will unfollow you. No hard feelings; I’m still probably friendly with you, we might even be good friends IRL, but what you are offering on this platform is not what I want from it. While unfollowing may sometimes precipitate unfriending, the former certainly does not necessitate the latter.

Third, if I miss posts on Twitter, it seems less personal and less of an issue. No big deal; I’ll read your next post.

Fourth, the whole point of Twitter is to Keep It Simple, Stupid. By limiting each post to 140 characters, the emphasis is on conveying as much meaning and value with as little content as possible. This dramatically increases the quality of the SNR, since users feel compelled not to waste characters or posts.

In short, Twitter has avoided the information overload problem, or perhaps I have avoided information overload on Twitter, because its architecture naturally yields a better filter. This is A Good Thing.

Facebook can overcome this, maybe, and I think it may still be useful as a friend-indexing social network for organizing events and looking up phone numbers, but getting past the info-glut will be a difficult challenge.

The RIAA’s Loss

The RIAA has announced that they have stopped suing individual file sharers for copyright infringement.

The suits were based on the questionable notion that making files available in directories through peer-to-peer software like KaZaa was a violation of the copyright of the owner of the works. The RIAA was not going after people downloading music, or even people who had sent a file to someone else, but rather the set of people who had shared directories with files that looked like music and were available for public perusal.

This was problematic because the copyright statute doesn’t actually say anything about “making available.” The right to control who distributes one’s work is one of the rights granted to authors by the statute, but the RIAA had no evidence that file sharers had actually distributed the files, just that they had made them available and that the files could potentially be distributed.

Consequently, the RIAA had to argue that “making available” was actually part of the copyright statute when it wasn’t. When the handful of suits (out of 35,000) made it to court, some judges started to realize this, and through the selfless and amazing work of Ray Beckerman, the legal community slowly turned against the prosecutions.

All in all, the RIAA’s campaign to sue their own customers was a disaster. CD sales continued to plummet and filesharing’s popularity only increased. This is not to mention the public relations catastrophe the industry now faces. Musicians hate being associated with large corporations that the public perceives as evil, and more substantively, musicians have not seen any of the settlement monies the RIAA has been collecting on their behalf.

THE RIAA TOOK MY MUSIC AWAY

The RIAA is claiming that the campaign “was successful in raising the public’s awareness that file-sharing is illegal”, which demonstrates a gross misunderstanding of the law and technology.

So what is next? The RIAA claims that they will be making deals with ISPs to institute something roughly similar to a three-strikes-and-you’re-out policy against file sharers. The terms and details of these agreements are not fleshed out (and will probably never be made available to the public), but on some level, this is a less vicious form of negotiation with the technical realities their industry is facing.

But there is already evidence of ISPs acting in haste to dismantle legal file sharing outfits. TorrentFreak has a story about an open source software tracker having its service revoked by its ISP because it was accused of hosting an illegal torrent of the game Command and Conquer. YouTube already engages in auto-takedowns of videos that are supposedly infringing.

I’m a huge fan of the site LegalTorrents.com and have used it for distributing the uncompressed (~1gb) versions of two Creative Commons videos, A Shared Culture and Media That Matters: A CC Case Study. Because everything on LegalTorrents is free to share (under an appropriate CC or similar public license), it is the perfect counterexample to the RIAA’s claim that file sharing is inherently illegal.

Put another way, file sharing in and of itself is not illegal (just as crowbars in and of themselves are not illegal) and sites like LegalTorrents demonstrate this. We should not let the RIAA use the fact that they’ve abandoned their campaign as a positive cover to privately intimidate ISPs into breaking the Internet.

More importantly, we should keep the pressure on ISPs to preserve network neutrality. When used on the public net by ISPs, deep packet inspection and application-layer filtering are violations of network neutrality, and if the RIAA is successful in pushing these technologies as “solutions” to the file sharing problem, we are going to have a much larger problem on our hands than 35,000 dispersed lawsuits.

The WSJ Showing Its Cards

By now everyone knows about the Wall Street Journal’s shoddy net neutrality hit piece.

The article went to great lengths to conjure the impression that net neutrality support was waning among its most ardent supporters — Google, Lessig, Obama and others had all said or done things “recently” that indicated they were no longer pushing as hard for the net to stay neutral.

In the last two days, virtually every individual mentioned in the piece has come out against the WSJ and argued that either their positions were misinterpreted or that their quotes were taken out of context.

The WSJ has claimed that their piece has “gotten a rise out of the blogosphere” and has not issued any retractions or corrections to the article.

Other bloggers are commenting on the particular misunderstandings and misinformation in the article, but I’m interested in analyzing the WSJ’s behavior as I believe it is symptomatic of a larger affliction of the newspaper.

Here are some things I think are noteworthy about the situation:

  • The WSJ initially discredited the blogosphere as a legitimate voice in this debate.
    Would they have said that they “got a rise out of the newspaper industry” if they had written an article that got the NYTimes, Washington Post and CNN complaining about inaccuracies? Rise probably isn’t the right word, as Jay Rosen said.
  • This seems to be an example of mainstream press trolling bloggers.
    Typically, bloggers are the ones accused of trolling the mainstream press.
  • Both the original article and the follow up posts are outside the WSJ’s paywall.
    Further evidence of the desire to troll the blog world.
  • The comment system for WSJ is plagued by spam.
    This indicates an immature and underdeveloped comment community. This is not to say that the WSJ should start heavily moderating their comments, just that they obviously don’t seem to care about them.
  • The general Us-vs.-The-Internet attitude of the article and its responses indicates a deep misunderstanding of conversations on the net.
    The net is no longer a community in and of itself; it holds digital representations of an infinite number of communities that exist in reality. Things used to be otherwise, but to still think so demonstrates a dated perspective.
  • WSJ’s technology writers are either vastly under-skilled for such reporting or are interested in remaining ignorant of the real issues.
    Even if one could make the specious argument that edge caching does violate network neutrality (and I don’t think anyone believes it does), it wouldn’t be doing so in the same way the telecommunications companies are interested in violating it. More specifically, Google’s move to place caches at the ISP level is not as controversial as the WSJ would like it to be. Despite having many opportunities to get the story right, the WSJ has repeatedly ignored the technological subtlety of the details and has misquoted others who were trying to set it straight.

Network neutrality is one of the primary reasons why digital journalism is viable, and the reason why newspapers are threatened online, so it is no surprise that the WSJ sees the principle as a threat: they think it is in their interest to do so.

As Gandhi put it:

“First they ignore you, then they ridicule you, then they fight you, then you win.”

Google Street View’s Revealing Error

Google Street View Blurs Faces in Advertisements, Too.

After receiving criticism for the privacy-violating “feature” of Google Street View that enabled anyone to easily identify people who happened to be on the street as Google’s car drove by, the search giant started blurring faces.

What is interesting, and what Mako would consider a “Revealing Error”, is when the auto-blur algorithm cannot distinguish between a face in an advertisement and a regular human face. For the ad, the model has been compensated to have his likeness (and privacy) commercially exploited for the brand being advertised. On the other hand, there is a legal grey area as to whether Google can do the same for random people on the street, and rather than face more privacy criticism, Google chooses to blur their identities to avoid raising the issue of whether it has the right to do so, at least in America.
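To see why the algorithm can’t tell the difference, consider what a generic detect-and-blur pipeline looks like (a purely illustrative OpenCV sketch with made-up filenames; Google’s actual system is unknown to me): the detector matches face-like pixel patterns, and nothing in it knows whether those pixels belong to a passerby or a billboard model.

```python
import cv2

# Generic face-blurring pass: detect face-like regions, then blur them.
# The detector operates on pixels alone, so an ad's face and a real
# pedestrian's face are indistinguishable to it.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def blur_faces(in_path, out_path):
    img = cv2.imread(in_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.3, 5):
        # Blur every detection, billboard or human alike.
        img[y:y+h, x:x+w] = cv2.GaussianBlur(img[y:y+h, x:x+w], (51, 51), 0)
    cv2.imwrite(out_path, img)

blur_faces("street_view_frame.jpg", "street_view_frame_blurred.jpg")
```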

So who cares that the advertisement has been modified? The advertiser, probably. If a 2002 case is any indication, advertisers do not like it when their carefully placed and expensive Manhattan advertisements get digitally altered. While the advertisers lost a case against Sony for changing (and charging for) advertisements in the background of Spider-Man scenes located in Times Square, it’s clear that they were expecting their ads to actually show up in whatever work happened to be created in that space. There are interesting copyright implications here, too, as the case demonstrates an implicit desire by big media for work like advertising to be reappropriated and recontextualized, because doing so serves the point of getting a name “out there.”

To put my undergraduate philosophy degree to use: I believe these cases raise deep ethical and ontological questions about the right to control and exhibit realities (Google Street View being one reality, Spider-Man’s Times Square being another) as they relate to the real reality. Is it just the difference between a fictional and a non-fictional reality? I don’t think so, as no one uses Google Maps expecting to retrieve information that is fictional. Regardless, expect these kinds of issues to come up more and more frequently as Google increases its resolution and virtual worlds merge closer to the real world.