Category: Data

Big Data Review in Emoji

I'm on the board of a great non-profit focused on technology and art. We do an event every year called Seven on Seven where we pair seven technologists with seven artists.

Saturday was the fifth anniversary of the event, and this year one of the teams paired NYT writer and author Nick Bilton with artist Simon Denny.

Around 5pm on Friday, I got an email from Nick offering to pay me $5 for an emoji version of the White House's report on Big Data:

Email from Nick

The entire report is 85 pages, but they asked for a summary of page 55, a chart showing how federal dollars are being spent on privacy and data research:


Here's what I came up with (click for a larger version):

Big Data Emoji

I'm particularly proud of my emoji-fication of homomorphic encryption:

Homomorphic Encryption

I highly recommend watching the whole event, but Nick and Simon's presentation of the other reports they solicited begins around the 3-hour-25-minute mark of the live stream.

Nick, I know you said you'd pay cash, but I'd really prefer to accept the $5 in DOGE.

Please send 10,526.32 DOGE to DKQJsavxSdF381Mn3qZpyehsBzCX3QXzA2. Thanks!

Visualizing CitiBike Share Station Data

CitiBike Share launched yesterday. I finally got my fob activated, but since it's raining, I haven't had a chance to take the bikes out for a spin, so I decided to take their data for a spin instead.

Jer RT'd Chris Shiflett's link to the CitiBike Share JSON data, asking for a "decent visualization" within half an hour.

So I fired up R and after a couple of minutes of JSON-munging, threw together this graph plotting the # of available bikes per station against the total number of docks of that station:
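The munging is simple enough to sketch. Here's a Ruby version (the original was a few lines of R), assuming the schema of the since-retired station feed -- a `stationBeanList` array with `availableBikes` and `totalDocks` fields -- and made-up numbers apart from the 60-dock, 0-bike station at E 33rd St:

```ruby
require "json"

# Hypothetical sample in the shape of the old feed's "stationBeanList";
# the E 33 St numbers are the ones mentioned in the post.
sample = <<~JSON
  {"stationBeanList": [
    {"stationName": "E 33 St & 1 Ave",     "availableBikes": 0,  "totalDocks": 60},
    {"stationName": "Park Pl & Church St", "availableBikes": 25, "totalDocks": 27}
  ]}
JSON

stations = JSON.parse(sample)["stationBeanList"]

# One (available, total) pair per station -- the two axes of the scatter plot.
points = stations.map do |s|
  { name: s["stationName"], available: s["availableBikes"], total: s["totalDocks"] }
end

points.each { |p| puts format("%-22s %2d available / %2d docks", p[:name], p[:available], p[:total]) }
```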

Not surprisingly, there's a positive correlation between the number of docks a station has and the number of bikes available at that station.

But some of the outliers represent stations that are popular and have fewer bikes available, places where CitiBike Share might consider adding capacity. For example, East 33rd Street and 1st Avenue had 60 total docks but 0 available bikes.

In contrast, stations like Park Place & Church Street (middle of the graph) lie on the identity line and represent stations where close to (or exactly) 100% of the docks hold available bikes. They may be examples of over-provisioned stations.

I also colored the name of each station based on its latitude to give a very rough proxy for how "downtown" the station was. This glosses over the fact that Brooklyn has a downtown distinct from what people normally consider NYC's downtown, but it is interesting to note that some uptown stations (lighter blue) appear to cluster towards the right of the graph, indicating uptown stations have been granted more total docks overall. More space in midtown, I guess.

I'm not proud of the R code I used to hack this together, but I spent about 10 minutes on it:

The Data Behind My Ideal Bookshelf

My girlfriend / lady partner, Thessaly La Force, recently published a book with the artist Jane Mount called "My Ideal Bookshelf." In it, Thessaly interviews over 100 people and Jane paints their bookshelves:

The books that we choose to keep --let alone read-- can say a lot about who we are and how we see ourselves. In My Ideal Bookshelf, dozens of leading cultural figures share the books that matter to them most; books that define their dreams and ambitions and in many cases helped them find their way in the world. Contributors include Malcolm Gladwell, Thomas Keller, Michael Chabon, Alice Waters, and Tony Hawk among many others.

As I observed Jane and Thessaly compile the book over the last year, I couldn't help but think about all the fun opportunities I could have exploring the data behind the shelves.

Contributor Neighbors

Each of the 101 contributors Thessaly interviewed picked however many books they felt represented their ideal bookshelf, and I knew some of them would pick identical books.

So what would a taste graph linking contributors to each other using the books on their shelves look like?

Previously, I had worked with Cytoscape to render network graphs, but this seemed like a good opportunity to make something interactive and also a perfect first project to really use d3. I can't wait to do more with it.

Hover over each node to see the contributor's neighbors.

The layout of the graph is done using d3's force-directed graph layout implementation and each node represents a contributor's shelf and is colored by the contributor's profession.

Each active link is then colored by the neighbor's profession. The nodes at the center of the graph have the most neighbors and exert the most pull over the rest of the graph. Try clicking and dragging a node to get a good feel for its centrality.

Genres and Professions

click for larger

The distribution of contributors' professions skews heavily toward writers, so the professions aren't evenly represented: 41 of the 101 bookshelves were from writers, and there were a total of 35 unique professions represented.

click for larger

In the graph above, I picked books chosen by professions represented by two or more contributors. Each circle is sized proportionally to the share of books chosen by the contributor's profession.

For example, fiction books make up the plurality -- 31% -- of books chosen by writers. Similarly, 55% of the books chosen by photographers were classified as photography.
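The share computation itself is just two nested group-bys. A Ruby sketch (the real analysis was done in R) with hypothetical rows:

```ruby
# Hypothetical (profession, genre) rows standing in for the reconciled data;
# the nested group-by is the point, not the numbers.
books = [
  { profession: "writer",       genre: "fiction" },
  { profession: "writer",       genre: "fiction" },
  { profession: "writer",       genre: "poetry" },
  { profession: "photographer", genre: "photography" },
]

# For each profession, the share of its picks falling in each genre.
shares = books.group_by { |b| b[:profession] }.transform_values do |picks|
  picks.group_by { |b| b[:genre] }
       .transform_values { |in_genre| in_genre.length.to_f / picks.length }
end

puts shares.inspect
```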

This next graph is a violin plot showing the distribution of page counts of the books chosen by each profession:

click for larger

Excluding the Oxford English Dictionary which comes in at 22,000 pages, legal scholars (Larry Lessig & Jonathan Zittrain) earned the highest average page count of 475, and all the books chosen by the two photographers had page counts under 500.

Year vs. Page Count

Taking it a step further, here is almost every book (again, excluding the OED) plotted by the year it was published (X-axis) against the log10 of its page count (Y-axis).

The size of each point represents the number of ratings the book had accumulated on Google Books, and its color represents the book order the contributor placed it on their shelf.

click for larger

The darker the point, the farther to the left of the shelf the book was ordered. Conversely, small teal dots represent books with relatively fewer ratings which were placed towards the right of the contributor's shelf.

Unfortunately, not all the books had publishing dates that made sense -- Google reported the dates of Shakespeare's works anywhere from 1853 to 1970 to 2010, and when would you say the Bible was published? 1380? 1450?

Excusing these erroneous points, I think the graph still works -- check out the cluster of books chosen by the writers in the upper left of the graph: they represent popular early-20th-century books with page counts between 200 and 500.

Summary Stats

  • Number of shelves: 101
  • Number of Books Chosen: 1,645
  • Unique Books According to Google's API: 1,431
  • Average number of books chosen: 16.28
  • Average Pagecount: 381.2
  • Average Year of Publication: 1992
  • Top 5 Chosen Books:
    1. Lolita chosen by 8 contributors
    2. Moby Dick (chosen by 7)
    3. Jesus' Son (chosen by 5 contributors)
    4. The Wind-Up Bird Chronicle (chosen by 5 contributors)
    5. Ulysses (chosen by 5 contributors)
  • Top 5 Authors:
    1. William Shakespeare (10 different books)
    2. Ernest Hemingway (7 different books)
    3. Graham Greene (7 different books)
    4. Anton Chekhov (6 different books)
    5. Edith Wharton (6 different books)
  • Contributor with the most number of books: James Franco
  • Contributor with the most number of shared books: James Franco
  • Longest Book: The Oxford English Dictionary, Second Edition: Volume XX* chosen by Stephin Merritt, 22,000 pages.
  • Shortest Book: Pac-Mastery: Observations and Critical Discourse by C.F. Gordon chosen by Tauba Auerbach, 12 pages

* Jane was only able to paint one volume of Stephin's OED for his shelf, but the authors agreed it could stand as a synecdoche for his choice of the entire edition.


The data I cobbled together from My Ideal Bookshelf is far from perfect, but I think it does a good job of illustrating some of the larger themes and relationships behind the book.

For example, the two legal scholars tended to pick, on average, some of the longest books in the set, and professionals tend to pick books related to their jobs (check out the large proportion of photography books chosen by photographers). Also: if you're James Franco and pick a ton of books, you're gonna have a lot of neighbors in a network graph.

I also discovered how skewed the dataset was towards the choices of writers -- I jumped in expecting a diverse set of contributors which might be useful for representing an ideal ideal bookshelf -- but that would have been a difficult case to make when 40% of the contributors were from one profession.

This might sound like an obvious observation (and something I should have known, having spent so much time thinking about the book), but it wasn't something I was able to really observe until looking at a simple histogram of their professions. So remember: it's always worth thinking critically about whether your samples are representative of the underlying distribution, and simple exploratory data analysis can really help you out there.

Bigger picture, I think this skew demonstrates the nature of coming up with an ideal list of anything: no matter who you ask, the task is essentially a subjective one. Here, it's biased towards the network Thessaly and Jane were able to tap to make the book.

Cleaning and Reconciling the Data

While creating the book, Thessaly and Jane had carefully compiled an organized spreadsheet listing each contributor and their chosen books, but I knew there could be subtle typos here and there. These typos could throw off the larger analysis: if the titles of two books were even slightly different or missing some punctuation, then aggregating based on titles would be problematic. I also wanted additional data about the books (data which Thessaly and Jane didn't record), such as the year each book was published or its number of pages.

So I figured I'd kill two birds with one stone and look for an API which I could automatically search using the title they had entered into the spreadsheet, and get a best-guess at the "true" book it represented. The API would also hopefully return a lot of useful metadata that I could use down the line.

It turns out Google's Book API is the perfect tool for such a job. I could send a book title to it and get back the book that Google thought I was looking for. This allowed me to lean heavily on Google's excellent search technology to reconcile book titles that might have had typos, while also retrieving the individual book's metadata. While I could have used something like Levenshtein distance to try to find book titles that were close to each other in the original dataset, I wouldn't have been able to retrieve any additional metadata.
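As a sketch of that reconciliation step, here's roughly what a lookup might look like in Ruby. The `volumes` endpoint and the `volumeInfo` fields match Google's public Books API, but the canned response and helper names here are illustrative, not the actual script:

```ruby
require "json"
require "uri"

# Build the lookup URL for a (possibly typo'd) title. The volumes endpoint
# is Google's public Books API; everything else here is illustrative.
def books_query_url(title)
  "https://www.googleapis.com/books/v1/volumes?q=#{URI.encode_www_form_component(title)}"
end

# Take Google's top hit as the best guess at the "true" book.
def best_guess(response)
  info = response.dig("items", 0, "volumeInfo") or return nil
  {
    title:     info["title"],
    pages:     info["pageCount"],
    published: info["publishedDate"],
    genre:     (info["categories"] || []).first, # missing for most books
  }
end

# The real script fetched each URL with Net::HTTP; a canned, abbreviated
# response stands in here.
canned = JSON.parse('{"items":[{"volumeInfo":{"title":"Lolita",' \
                    '"pageCount":317,"publishedDate":"1955","categories":["Fiction"]}}]}')
guess = best_guess(canned)
```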

A quick side note for copyright nerds: the Google Books API played into the recent HathiTrust case, and I'd like to imagine use cases like this were part of the reasoning behind declaring Google's use of copyrighted materials a fair use.

Google's Book API allows 1,000 queries a day, but since the list contained thousands of titles, I had to write in and ask for a quota extension -- thanks to whoever at Google granted that -- I now get to hit it 10,000 times a day, which was enough to iterate on the script that compiled the data.

Not surprisingly, Google's API returns a ton of metadata. Everything from a thumbnail representing the book's cover, to the number of pages, to the medium it was originally published in, to its ISBN10 and ISBN13 ... the list goes on. I tried to choose fields that I knew would be interesting to aggregate on, but also ones that would help me uniquely identify the books.

One particular piece of metadata that was often missing was the genre of the book -- only 28% of the books returned from Google had category information. One option would have been to set up a Mechanical Turk task asking humans to determine the books' genres. This kind of book ontology is actually a very difficult and somewhat subjective problem. Just think of how complicated the Dewey Decimal system is.

Finally, not all data is created equal -- I've manually corrected a handful of incorrect classifications from Google where the search results clearly did not return the right book, but it's certainly possible not all books were recognized or reconciled properly.

The tools

Aside from d3 for the interactive plot at the top of this post, I used R and ggplot2 to create the static graphs.

The script I used to query the Google Books API was written in Ruby, and exported the data to a CSV which I then loaded into MySQL and Google Docs to manually review and spot check.

Here's the query I used to generate the data necessary for the force-directed graph:

  SELECT
    ideal_bookshelf_one.contributor_id as source,
    ideal_bookshelf_one.google_title as book,
    ideal_bookshelf_two.contributor_id as target
  FROM
    ideal_bookshelf as ideal_bookshelf_one,
    ideal_bookshelf as ideal_bookshelf_two
  WHERE
    ideal_bookshelf_one.google_book_id = ideal_bookshelf_two.google_book_id
    AND ideal_bookshelf_one.contributor_id != ideal_bookshelf_two.contributor_id

Sequel Pro's "Copy as JSON" was extremely helpful here -- it took relatively little effort to munge the SQL results into an array of nodes and links required by d3's force layout.
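In case it helps anyone doing something similar, the munging amounts to reshaping the joined rows into d3's node/link structure. A Ruby sketch with hypothetical rows:

```ruby
require "json"

# (source, book, target) rows from the self-join, reshaped into the
# {nodes, links} structure d3's force layout expects. Rows are hypothetical.
rows = [
  { source: "contrib_a", book: "Lolita",    target: "contrib_b" },
  { source: "contrib_a", book: "Moby Dick", target: "contrib_c" },
]

ids   = rows.flat_map { |r| [r[:source], r[:target]] }.uniq
index = ids.each_with_index.to_h # d3 links reference nodes by array position

graph = {
  nodes: ids.map { |id| { name: id } },
  links: rows.map { |r| { source: index[r[:source]], target: index[r[:target]], book: r[:book] } },
}

puts JSON.pretty_generate(graph)
```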

If you liked this post...

Pick up a copy of Thessaly and Jane's book today!

Kickstarter Fulfillment and Product Development: A story of Dogfood and Data Validation

You could think of this post as telling the story of two Kickstarter projects. Since it's a long post, here's a quick summary:

  1. I recently ran a Kickstarter project.
  2. I wanted to share all the financials and details of how I shipped my rewards.
  3. I discovered we could do a better job helping creators process their backers' addresses.
  4. We recently deployed a change to backer surveys that should do just that.

So I hope this post will educate Kickstarter creators on how to smoothly fulfill their rewards, but also shed a little light on how we do product development at Kickstarter.

The first Kickstarter project was pretty simple -- I FOUGHT SOPA AND ALL I GOT WAS THIS STUPID T-SHIRT -- and the other project was actually a feature built by our product team, which we (and I use "we" loosely; Jed, Tieg, Daniella, and Meaghan did all the work) shipped last week:

The address confirmation tool helps backers validate their addresses when filling out reward surveys from creators.

Just as Netflix or Amazon ask you to confirm your shipping address when it doesn't exactly match a known address, Kickstarter will now ask backers to confirm a more precise one so that creators can feel more confident about shipping their rewards off. This feature also has the benefit of cutting down on some of the data correction creators might face when shipping rewards.

The development of this feature is a good example of the value of "dogfooding" your own product, which is software jargon for actually using the tools you've built.

Dogfooding is one of the best possible ways to understand and improve your product, so I'm always interested in getting feedback from people that have run their own Kickstarter projects.

The SOPA shirt project was pretty straightforward. It only really had two reward tiers, but as with all Kickstarter fulfillment, there were a lot of details to get right.

Getting one of those details right -- backer addresses -- made me realize we needed a better way to ensure we were delivering valid backer mailing addresses to project creators at the most crucial part of their project: reward fulfillment.

I also want this post to serve as a bit of a guide for fulfilling Kickstarter projects on a similar scale. So first, some details on how I planned to fulfill my rewards.

Estimating Income and Costs

To determine my actual net dollars after Kickstarter's fee and the credit card fees charged through Amazon, I had to download a CSV of my account activity from Amazon's FPS platform, sum the pledges, and subtract the credit card fees for each pledge.

The credit card fees came to 4.8% and Kickstarter's fee was 5%, so I netted 90.2% of actual pledges, or $3,497.
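That math also lets you back out the gross pledges, for anyone budgeting a similar project:

```ruby
# 4.8% credit card fees + 5% Kickstarter fee leaves 90.2% of every pledge.
CC_FEE  = 0.048
KSR_FEE = 0.05

net   = 3497.0                        # dollars actually received
gross = net / (1 - CC_FEE - KSR_FEE)  # implied total pledged, about $3,877
```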

I had a chicken and egg problem trying to estimate which shirts to even offer in the survey.

Since different shirt sizes and colors had different prices (the cost per shirt varied from $6.32 for a Grey Small to $12.95 for a White XXL), it was crucial to make a rough estimate of what the shirt size and color distribution would be.

I knew a lot of people were going to order large and medium shirts, but if I had too many XXLs, it might not have been affordable to offer the white shirts. And if it turned out that I couldn’t afford the white shirts, I didn’t want to even offer the white shirts in the survey.

So I took a look at Yancey’s shirt distribution. For his project he had the following distribution of sizes:

  • S: 13%
  • M: 26%
  • L: 30%
  • XL: 21%
  • XXL: 10%

These ratios enabled me to estimate what the distribution of sizes would be for my shirts. I guessed that 2/3rds of my backers would want the grey shirt, and the rest would want the white shirt.

Using these numbers and the bulk costs, I was able to determine that I could afford to offer both the grey and white shirts in my survey.
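The estimate itself is just an expected value over the size and color splits. A sketch: the Grey S and White XXL prices are the ones quoted above; the rest are hypothetical stand-ins:

```ruby
# Yancey's size ratios and my 2/3-grey guess.
size_dist  = { S: 0.13, M: 0.26, L: 0.30, XL: 0.21, XXL: 0.10 }
color_dist = { grey: 2.0 / 3, white: 1.0 / 3 }

# Only the Grey S ($6.32) and White XXL ($12.95) costs appear in the post;
# the others are made up for illustration.
costs = {
  grey:  { S: 6.32, M: 6.50, L: 6.50, XL: 7.00,  XXL: 8.00 },
  white: { S: 9.00, M: 9.25, L: 9.25, XL: 10.50, XXL: 12.95 },
}

# Weight each per-shirt cost by the probability of that size/color combo.
def expected_cost_per_shirt(size_dist, color_dist, costs)
  size_dist.sum do |size, p_size|
    color_dist.sum { |color, p_color| p_size * p_color * costs[color][size] }
  end
end

per_shirt = expected_cost_per_shirt(size_dist, color_dist, costs)
```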

Here’s what the actual distribution of sizes turned out to be:

  • S 13.85%
  • M 29.44%
  • L 32.90%
  • XL 15.58%
  • XXL 4.33%

My numbers weren't far off from Yancey's -- though I ended up with fewer XXLs than he did, which surprised me. The grey / white distribution ended up being 77% grey, 23% white.

I threw together a quick graph using ggplot2:

One observation is that the distribution of white shirts looks like it skews smaller. Anecdotally, that might be explained by women preferring the white shirt (again, anecdotal), and since women's sizes tend to run smaller, the distribution of white shirts had proportionally more smaller sizes.

Buying the Shirts

ApparelSource seems to operate multiple websites with basically the same layout and offerings. They had the best prices on CANVAS shirts I could find, though oddly, the prices varied between their different sites by a couple of cents. Some sites had them in stock and others didn't. This made no sense to me because they're all being shipped from the same warehouse.

I ordered one of each shirt style to confirm they were the style I wanted, and then prepared to make the bulk order.

My order was extremely specific (13 Small White, etc.) so I think it raised some flags on ApparelSource's side. I had to spend some time working out the best way to pay them. It turned out paying them via PayPal was the easiest way.

This is a version of the sheet I used to calculate costs. I think working with something like this sheet is crucial for staying sane when fulfilling any Kickstarter project.


Yancey had also suggested I check out Kayrock so I dropped them a line to get a quote. Kayrock had a source they recommended for buying the actual shirts which would have saved me over $1,000, but I wanted to stick with my choice of CANVAS shirts from ApparelSource because I had already tested their quality.

In the original email Kayrock had quoted the print job as “plastisol ink, no shirt rolling, no double hit, no handling fee, no rush fee, no production prepress, no color change fee“, and gave me the quote of $479 + $80 for printing and setup.

What they didn't tell me was that if I didn't use their apparel quote and instead had the shirts shipped to them directly, they were going to charge me $0.40 per shirt for handling. This added $100 to the total cost, bringing my Kayrock fees to $660.

Despite this issue, their work and responsiveness over e-mail was very good, and I would recommend checking them out if you're interested in some great NY screen printing.

Once Kayrock finished printing the shirts that I had shipped directly to them, I headed to Greenpoint to pick them up myself so I could start packing and sending t-shirts from Kickstarter HQ.

Backer Addresses and Postage

In general, backer addresses tended to be malformed in a couple of ways. The first was that people tended to confuse "Line 1" and "Line 2" -- they either put their Apt # first, then the street, or their street first, then the company.

I also ran into problems with 20+ addresses from the Northeast because their zip codes had their leading zeros removed.

Zip codes like 06897 became 6897 when opened in Google Docs or Excel because they were converted to numbers.
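The failure is easy to reproduce, and for 5-digit US zips, mechanical to undo:

```ruby
# The failure in miniature: treat a zip code as a number and the leading
# zero is gone.
zip     = "06897"
coerced = zip.to_i # => 6897

# For 5-digit US zips the damage can be reversed after the fact by
# zero-padding back to five digits.
restored = format("%05d", coerced)
```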

In addition, someone put “Meow, Meow” as their address. They didn't get their shirt.

To save on shipping, I did my research and chose First Class mail, which let me specify the ounces for each package. After weighing each style of shirt in an envelope (I correctly assumed the labels weren't going to add any more weight), I had three tiers:

  • S / M white - 5oz - $1.98 postage
  • M / L shirts - 6oz - $2.15 postage
  • XL, XXL shirts - 7oz - $2.31 postage

Sorting by whose shirts had already been delivered (all KSR staff, some friends and family), and removing those people from my CSV yielded 183 packages, summing to $370. I'd recommend keeping careful track of the rewards you ship vs. the rewards you intend to hand deliver.

The cost to ship the same shirts using flat rate envelopes would have been $942.45, so I managed to save a lot by carefully weighing and picking First Class mail instead.
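For reference, those numbers line up with the then-current $5.15 flat-rate envelope price:

```ruby
# The flat-rate comparison from the post, reconstructed.
packages         = 183
first_class_cost = 370.00
flat_rate_total  = 942.45

flat_rate_each = flat_rate_total / packages         # per-envelope flat rate
savings        = flat_rate_total - first_class_cost # saved by going First Class
```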

When I had lunch with my mom one Saturday, she offered to help me stuff envelopes and ship the shirts. This was actually a lot of fun, and I always forget how rewarding manual labor can be when the majority of my day-to-day work involves keyboards and LCD screens.

My mom suggested coming up with a color-coded system to manage shirt sizes and colors (envelopes marked with blue represented a Large White shirt, for example). This worked well, but only because I had access to many different colored Sharpies.

Fulfillment would have been chaos without some kind of organized system for tracking sizes and colors.

Tyvek envelopes are great, until you realize they’re a little pricey; Staples sold 50 for $28, so $0.56 per envelope, which added to the shipping cost. I probably could have ordered them for free from USPS if I had planned the shipping session in advance. In general, I highly recommend using Tyvek envelopes as they’re much lighter, waterproof, and probably more durable than the cardboard flat-rate envelopes.

Endicia Software

I’m not sure how I would have addressed and processed all the envelopes without Endicia, a piece of batch-mailing software. I get the feeling many other creators have used it to fulfill their rewards. Endicia is free to use for 30 days, and then $15 a month afterwards.

The software allows you to import a CSV, but the requirements are very strict. The fields must be (in the following order):

  • Name
  • Company
  • Address1
  • Address2
  • City
  • State
  • Zip

In order to correlate each person's label and envelope with a t-shirt, I merged their size and color selection into the name field. So it looked like this for an XXL Grey shirt:

John Smith XXL/G
3105 Sturges Ridge
Santa Rosa, California 95401

This might have been a little odd for backers (are there privacy implications to exposing their shirt size to the postmaster?), but Endicia didn’t allow any other fields, so there was nowhere else to put this information on the label.
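Generating the import rows is simple once the size/color code is folded into the name column. A sketch using Ruby's stdlib CSV with a hypothetical backer, following Endicia's field ordering:

```ruby
require "csv"

# A hypothetical backer record; "XXL/G" is the size/color code that rides
# along in the name field since Endicia's CSV has no spare columns.
backer = { name: "John Smith", size: "XXL", color: "G",
           company: "", address1: "3105 Sturges Ridge", address2: "",
           city: "Santa Rosa", state: "CA", zip: "95401" }

# One CSV line in Endicia's required order:
# Name, Company, Address1, Address2, City, State, Zip
row = CSV.generate_line([
  "#{backer[:name]} #{backer[:size]}/#{backer[:color]}", # Name (+ code)
  backer[:company], backer[:address1], backer[:address2],
  backer[:city], backer[:state], backer[:zip],
]).chomp

puts row
```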

I bought $500 worth of postage, but only ended up using $370, and was able to get a refund for the leftover balance when I canceled my Endicia account.

Printing Labels

Printing labels from Endicia is very scary. You are shown this dialog box (but replace 7 with 180, worth $370):

I did one test print of a label and it seemed OK, so I went through with the big batch. Afterwards, Endicia automatically opened a file inside my /tmp/private directory listing a number of rejected addresses, most of which had the ZIP code problem. This was a time-consuming and stressful process because any mistake would mean lost money on postage.

After I was done printing the postage from Endicia, I realized I could have sent the batch to a PDF instead of the actual printer, which would have mitigated some of the risk. Without storing my labels somewhere, if my printer had gone offline or my computer had crashed, it's unclear how I would have recovered the labels still stuck in the queue. I hated trusting the printer software this much, but the overall process went pretty smoothly.

Internal Kickstarter Post-Mortem

Once I shipped my shirts, I wrote up an internal email to Kickstarter staff detailing my fulfillment process. That email became the kernel for a number of discussions about how we could improve the data processing creators face when delivering their rewards to backers.

One solution to the dirty-address problem was to use an external service to validate the addresses supplied by backers.

Using an external service to validate backer addresses turned out to solve two problems.

First, it would ensure backers had the chance to confirm a valid address, which is always a good thing. Second, it would add a hyphen and the 4-digit add-on to US zip-codes, converting zip-codes like 11217 to strings such as 11217-1142.

That extra hyphen would prevent applications like Google Docs and Excel from converting zip-codes like 06897 into numbers like 6897 (technically, the application would recognize the cell as containing a string so it wouldn't attempt integer coercion).

Preventing zip-codes from losing their leading zero would mean software like Endicia wouldn't reject addresses from the Northeast. So we decided to give it a shot.

First, Jed did research on what external services we might like to use. Strike Iron emerged as one of the best options, so we began sending legacy addresses to their API in order to evaluate how many they'd be able to validate and correct. This back testing showed that Strike Iron would be able to supply valid addresses for the vast majority of the sample of the surveys we tried. It also showed that Strike Iron would even be able to suggest corrections for many otherwise undeliverable addresses.

Since then, it's been really exciting to watch Jed and Tieg build the actual data flow, and I'm super proud of how it turned out.

So even though this is a somewhat behind-the-scenes change in the way Kickstarter processes data, it's exciting to know that it will make the lives of many creators just a little bit easier.

Visualizing SOPA on Twitter

When I heard that Tyler Gray at Public Knowledge was looking for someone to do some analysis on tweets that mentioned SOPA, I thought I might try Cytoscape (an open source tool used for biomedical research, but handy for large scale data visualization) to show some of the relationships between people discussing the controversial bill on Twitter.

The result is a graph of the most active users referencing SOPA.

Public Knowledge worked with the Brick Factory to set up their slurp140 tool to record approximately 1.5 million tweets, which Tyler sent me in the form of a 350MB CSV file. I first used Google Refine to clean and narrow the set down to only tweets which were replies to someone else. This left approximately 80,000 tweets, which I then imported into R. I then ranked all of the usernames by how often they appeared both as senders and recipients, and picked roughly the top 1,000 users.

Since replies are sent from one user to another, the graph is directed: each edge has an origin and an arrow pointing at the recipient. There are 1,021 nodes identified by their Twitter usernames, and 1,757 edges, a good portion of which are labeled with the content of their tweet.
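The ranking step boils down to counting each username's appearances on either end of a reply. A Ruby sketch (the real work happened in R) with stand-in rows:

```ruby
# Hypothetical reply rows standing in for the ~80,000 reply tweets.
replies = [
  { from: "alice", to: "BarackObama" },
  { from: "bob",   to: "BarackObama" },
  { from: "alice", to: "bob" },
  { from: "alice", to: "carol" },
]

# Count every appearance, whether as sender or recipient.
counts = Hash.new(0)
replies.each do |r|
  counts[r[:from]] += 1
  counts[r[:to]]   += 1
end

# Most active users first; the real cut kept roughly the top 1,000.
top = counts.sort_by { |_, n| -n }.map(&:first)
```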

Visualizing networks this large is more of an art than a science

I've tried to strike a balance between visual complexity, aesthetics and readability of tweets, but you'll find that this isn't always successful. Sometimes tweets run into nodes, sometimes edges run into labels, and sometimes the graph feels like a total mess. But that messiness is part of what made the SOPA debate on Twitter so interesting over the last month.

Thousands of people participating with plenty of cross talk.

The colors and sizes of the nodes and edges are coded in the following ways:

  • A node's size and its label's size map to the number of tweets posted by a user plus the number of tweets mentioning that user. (Ex: @BarackObama is a huge node because so many people were tweeting at him about SOPA.)
  • Node color represents the number of outgoing tweets. The greener the node, the more replies a user posted. (Ex: @Digiphile sent a lot of tweets mentioning SOPA.)
  • Edge thickness represents "edge betweenness," the number of "shortest paths" that run through it. This is a rough measure of how central a given tweet is in the network. (Ex: @declanm and @mmasnick have a thick line connecting them because many other nodes are connected to the two through that tweet.)
  • Edge color represents the language of the tweet. (Ex: Tweets in English are blue, Spanish are yellow.)

The nodes are positioned using a "force-directed" algorithm, which is typically designed for undirected graphs, but I found it to be the most visually compelling of Cytoscape's layout options. To learn more about force-directed graphs, take a look at this d3 tutorial visualizing the characters in Victor Hugo's Les Misérables.

To really browse the graph visit GigaPan where I've uploaded a 32,000 x 32,000 pixel version.

I highly recommend GigaPan's full screen mode. I've also created a couple snapshots on GigaPan that highlight interesting nodes: @BarackObama, @GoDaddy, and @LamarSmithTX21 and @DarellIssa.

If you really want, you can also download the 36mb gigapixel file, the Cytoscape source file, and the PDF vector version of the network graph.

Thanks again to Public Knowledge and The Brick Factory for providing the infrastructure to record the tweets, and to everyone who has helped fight against SOPA and PIPA over the last couple of months, especially those who tweeted about it.