Subredditor

Over Thanksgiving vacation I decided I wanted to see how various subreddits were connected and what their relative sizes were. A few existing projects seemed to tackle this goal, but I didn’t like their interfaces or how old their datasets were, so I started to build my own. I began by building a scraper that would look at a specific subreddit, find the related subreddits section (for this I used PRAW), parse that section for subreddit links, and build a map of the connections. The application would then visit each subreddit it found until every known subreddit had been visited. The code for the crawler is here: https://github.com/cdated/reddit-crawler
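At its core the crawl is a breadth-first walk over the sidebar links. Here is a minimal sketch of that loop, assuming the current PRAW API (the crawler in the repo predates it and differs in the details); the credentials, regex, and seed subreddit are illustrative:

import re
from collections import deque

import praw

# Illustrative credentials; PRAW needs a registered script app to read Reddit.
reddit = praw.Reddit(client_id="...", client_secret="...",
                     user_agent="subreddit-crawler")

def related_subreddits(name):
    # Pull /r/<name> links out of the subreddit's sidebar markdown.
    sidebar = reddit.subreddit(name).description or ""
    return set(re.findall(r"/r/([A-Za-z0-9_]+)", sidebar))

def crawl(seed):
    # Breadth-first walk: visit every subreddit reachable from the seed.
    graph = {}                     # subreddit -> set of related subreddits
    queue = deque([seed])
    while queue:
        current = queue.popleft()
        if current in graph:
            continue               # already visited
        graph[current] = related_subreddits(current)
        for neighbor in graph[current]:
            if neighbor not in graph:
                queue.append(neighbor)
    return graph

connections = crawl("programming")

Error handling for banned or private subreddits is omitted here; the real crawler has to cope with those as well as with rate limits.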

Since crawling the entire site (with rate-limiting) took a couple of days, I eventually updated the crawler to insert its additions into MongoDB. This ensured progress would not be lost if the application crashed or the internet connection was interrupted. Once the dataset was generated I wanted to make an interactive graph anyone could access on the internet, so first I needed a simple web server that would accept a few parameters: subreddit, graph depth, and nsfw. Without much trouble I got a Flask server to return a static graph image using Python’s graphviz library. I had a little experience with Heroku, so I decided to put my current work up there.
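The server itself is small. A sketch of the idea, assuming pymongo for the edge store and the graphviz Python package for rendering; the route, collection layout, and parameter names here are illustrative rather than the exact code in the repo:

import graphviz
from flask import Flask, Response, request
from pymongo import MongoClient

app = Flask(__name__)
# Hypothetical collection of edges written by the crawler, e.g.
# {"source": "python", "target": "learnpython", "nsfw": False}
edges = MongoClient()["subredditor"]["edges"]

def neighbors(name, nsfw):
    query = {"source": name}
    if not nsfw:
        query["nsfw"] = False
    return [doc["target"] for doc in edges.find(query)]

@app.route("/graph")
def graph():
    # The three parameters: subreddit, graph depth, and nsfw.
    root = request.args.get("subreddit", "programming")
    depth = int(request.args.get("depth", 2))
    nsfw = request.args.get("nsfw", "false").lower() == "true"

    dot = graphviz.Digraph()
    seen, frontier = {root}, {root}
    for _ in range(depth):
        next_frontier = set()
        for node in frontier:
            for target in neighbors(node, nsfw):
                dot.edge(node, target)
                if target not in seen:
                    seen.add(target)
                    next_frontier.add(target)
        frontier = next_frontier

    # Render the graph to a PNG in memory and return it as the response body.
    return Response(dot.pipe(format="png"), mimetype="image/png")

On the crawler side, checkpointing can be as simple as one upsert per edge, so an interrupted run picks up where it left off.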

Having a public interface to my project, I was emboldened to improve its usability and wanted to try out D3.js. From the D3.js homepage I found an interactive graph example that suited my needs. After altering the graph data to match the D3.js format, I was able to get what I wanted working in JavaScript. This opened up a lot of options: making the nodes draggable, turning the nodes into links, sizing the nodes by subscriber count, and dynamically coloring the nodes and links to make the graph more attractive.
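Reshaping the crawl output into what the D3.js force-directed example consumes is a small step. A hedged sketch in Python; the nodes/links field names follow the classic force-layout example, and the helper name is mine:

import json

def to_d3(graph, subscribers):
    # graph: {subreddit: set of related subreddits}
    # subscribers: {subreddit: subscriber count}
    names = sorted(set(graph) | {t for targets in graph.values() for t in targets})
    index = {name: i for i, name in enumerate(names)}

    nodes = [{"name": name, "size": subscribers.get(name, 0)} for name in names]
    links = [{"source": index[src], "target": index[dst], "value": 1}
             for src, targets in graph.items() for dst in targets]

    return json.dumps({"nodes": nodes, "links": links})

On the JavaScript side the node radius can then be scaled from the size field; something like a log scale keeps the largest subreddits from dwarfing the rest.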

[Image: Subredditor sample graph]

I still have a lot of changes I want to make to the project when I have time. The database currently uses MongoLab’s free tier, which makes the deployment a lot slower than my development environment. I eventually want to update the crawler to use PostgreSQL’s hstore so I can leverage Heroku’s PostgreSQL support. Likewise, while deploying on Heroku is very convenient, it also imposes many constraints. Migrating to a VPS would force me to work at all levels of the deployment.

The code: https://github.com/cdated/subredditor

Live instance: http://subredditor.com

13 Years of Reading (Stats)

My Grandmother, when she was alive, was a voracious reader. No matter how thick the tome, she’d be through with it in a matter of hours. Having struggled with reading in grade school, I was skeptical of her pace; I still won’t finish a short story without at least spending some time to stop and digest the content. She was a happy speed reader, but I knew there was no way I would ever be one. Out of curiosity I wanted to know how her pace affected her retention, so I quizzed her on past readings: authors, titles, subjects. Her recollection was very limited; she had forgotten not only plots but authors and titles as well. While it should have been no surprise to me that books she read over 40 years ago were long gone, it was horrifying to think that a book I would literally spend months trying to finish could be forgotten to the point where the entire experience is lost. Billy Collins, in his poem Forgetfulness, eloquently described how what I had just realized was an inevitability:

The name of the author is the first to go
followed obediently by the title, the plot,
the heartbreaking conclusion, the entire novel
which suddenly becomes one you have never read,
never even heard of,

as if, one by one, the memories you used to harbor
decided to retire to the southern hemisphere of the brain,
to a little fishing village where there are no phones.

Long ago you kissed the names of the nine Muses goodbye
and watched the quadratic equation pack its bag,
and even now as you memorize the order of the planets,

something else is slipping away, a state flower perhaps,
the address of an uncle, the capital of Paraguay.

Whatever it is you are struggling to remember,
it is not poised on the tip of your tongue,
not even lurking in some obscure corner of your spleen.

It has floated away down a dark mythological river
whose name begins with an L as far as you can recall,
well on your own way to oblivion where you will join those
who have even forgotten how to swim and how to ride a bicycle.

No wonder you rise in the middle of the night
to look up the date of a famous battle in a book on war.

No wonder the moon in the window seems to have drifted
out of a love poem that you used to know by heart.

After the conversation with my Grandmother, I became paranoid about losing what I was investing so heavily in and started keeping records of the books I finished. I made sure that just after reading the last page I would append some information about the book to a spreadsheet. I have been adding to it for 13 years now, making it one of the few habits I keep.

I decided it’d be fun to visualize this small amount of data I’ve been slowly compiling, so I synced my spreadsheet up with Goodreads and exported a CSV to play with. Since I wanted to delete most of the columns, which held no valuable information, I needed something quick and dirty to edit the sheet. Out of curiosity I checked if there was a way to get Vim to parse a spreadsheet well enough, which led me to the csv.vim plugin. After a quick install I browsed the sheet and decided I only wanted the dates I read each book, the page counts, and the publication dates.

In the spirit of doing things the “hard way,” I wrote this one-liner:

for year in {1999..2012}; do cat goodreads_truncated.csv | grep "$year/" | cut -f2 -d'"' | awk -v year=$year '{S += $1} {count += 1} END {print year "\t" count "\t" S}'; done

to give me this table (columns: year, books read, total pages):

1999 1  272
2000 6  1184
2001 15 3874
2002 25 7470
2003 19 4277
2004 31 6917
2005 7  2157
2006 6  1243
2007 3  1143
2008 5  1362
2009 1  450
2010 1  247
2011 14 3644
2012 5  1133

That is, the for loop greps for each of the years from 1999 to 2012 (the dates are formatted YYYY/MM/DD), cut returns the field with the page count, and awk, with the year passed in as a variable, sums the pages for that year, counts the books, and formats everything into a row of the table.
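For anyone allergic to awk, the same aggregation in Python looks roughly like this; the column names are the standard Goodreads export headers, which may not match the truncated file exactly:

import csv
from collections import defaultdict

books = defaultdict(int)
pages = defaultdict(int)

with open("goodreads_truncated.csv", newline="") as fh:
    for row in csv.DictReader(fh):
        date = row.get("Date Read", "")            # formatted YYYY/MM/DD
        count = row.get("Number of Pages", "")
        if date and count:
            year = date.split("/")[0]
            books[year] += 1
            pages[year] += int(count)

for year in sorted(books):
    print(year, books[year], pages[year], sep="\t")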

Lastly, I put the data into a Google Docs spreadsheet (leaving out 2012) and got the following chart:

I used pages despite their inconsistency because they still reveal more than a raw book count; consider Moby Dick versus The Importance of Being Earnest. The red bars are the average pages per book for each year, so the number of red bars you could stack up against a blue bar tells you how many books I read that year and how long they were on average. I also graphed the publication dates against the read dates, but all it showed was that I read mostly contemporary literature.

Being a slow and picky reader, I think my data set is rather small, but it still reveals some interesting details about the last 13 years of my life. I attended high school between 2000 and 2004, so I had books I needed to read as part of my course work, I was beginning to read seriously for the first time, and I spent a fair amount of time waiting on buses. I took AP English between 2003 and 2004, and college was the following year, so a lot was crammed in before I had to worry about college courses. Between 2004 and 2008 my reading (free time) dropped off substantially. From 2008 on I have been working full-time as a software engineer, and spare time is more often spent on projects and technical books that don’t meet my cover-to-cover criterion for being added. In November 2010 I purchased a Kindle and have been trying to read like I used to.

Hopefully, in another 13 years I’ll have more data to play with and a few better ideas of what to do with it.