Total Recall – Collecting & Storing My Digital Memories


A few years ago I read Total Recall. It opened my eyes to what is possible when it comes to saving our memories in the digital age.

This was reinforced when the TV series Caprica came out.  One of the characters in Caprica, Zoe, was killed and then recreated from her digital detritus: bits of information about her that had been collected, organized, and mined over time.

While Zoe’s story is about the future, the concept is intriguing and is the logical conclusion of Gordon Bell’s work into e-memory.

Today, we leave lots of digital bits of ourselves all over the Internet and, while it's becoming harder to collect these bits, it's not impossible.

When you examine what all of our digital data consists of, it really only comes in three forms: text (or a format that can become text), images, and video (which is really just complex images).  To save all of this data, you need something that will accept those three data types, preferably easily.

But storing it is only half the battle.  It has to be easy to get information in, get it out, make it searchable and make it last.

Making it last is a difficult problem, and a good deal of Total Recall is spent discussing the problem of future-proofing digital data.

For me, the perfect system is Evernote. The main reason for this, besides having an API that virtually everything works with, is that I can easily export all of my data from Evernote into XHTML, a future-proof and machine-readable format.

One of the things I have decided to do this year is automate the flow of data from various places I publish on the Internet into Evernote.

This seems like a monumental task but I have found some items that make it easier.

First, I conducted an inventory of every place I publish data. This included the obvious ones, like Twitter, Google, and Facebook, but also services like Goodreads and Withings.

Instead of turning immediately to my favorite text editor to start coding exporters and importers for the various APIs, I use the IFTTT service as a kind of middleware.  IFTTT works on channels, and each channel represents a service and its API.  If a service I use has a channel that exposes the data I need, great: it becomes the This.  Evernote is always the That.

Sometimes a service doesn't have a channel, or the channel doesn't expose the data I am interested in, so I have to dig into the API documentation. I still use IFTTT as my importer, but the exporter becomes custom code I write against the API, outputting an RSS feed that then serves as the This part of the equation.

For instance, YouTube has a channel in IFTTT, but it will only access the public data from YouTube.  I want to capture my Watch History, and that is private data not exposed in the channel.  No problem.  Google has a number of SDKs for various languages and decent documentation, so I was able to automate pulling the Watch History and generating an RSS feed, which I then feed into IFTTT to push the data into Evernote.
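As a sketch of what the exporter half looks like: assuming the Watch History entries have already been fetched (the actual Google SDK calls vary by SDK version, so they are stubbed out with stand-in data here), generating the RSS feed that IFTTT consumes takes only the standard library:

```python
# Sketch: turn a fetched watch history into an RSS 2.0 feed for IFTTT.
# The history entries below are hard-coded stand-ins; in practice they
# would come from the Google/YouTube SDK (an assumption -- the exact
# API calls depend on the SDK and its version).
import xml.etree.ElementTree as ET
from email.utils import format_datetime
from datetime import datetime, timezone

def build_rss(title, link, entries):
    """entries: iterable of (title, url, watched_at datetime) tuples."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = title
    ET.SubElement(channel, "link").text = link
    for item_title, item_url, watched_at in entries:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = item_title
        ET.SubElement(item, "link").text = item_url
        # RSS expects RFC 2822 dates, which email.utils produces.
        ET.SubElement(item, "pubDate").text = format_datetime(watched_at)
    return ET.tostring(rss, encoding="unicode")

history = [
    ("Example video", "https://www.youtube.com/watch?v=example",
     datetime(2014, 1, 1, tzinfo=timezone.utc)),
]
feed = build_rss("My Watch History", "https://www.youtube.com/", history)
```

IFTTT's Feed channel then treats each `<item>` as a new trigger event, and the Evernote That turns it into a note.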

Once the data is in Evernote, there is a ton you can do with it.  I already mentioned the portable, future-proof archive, but you can also run all kinds of searches to bubble up interesting data.  How often do I post on Facebook? What did I post today?  What did I post this time a year ago?  The sky is the limit.
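Those questions boil down to simple aggregations once the note metadata is in hand. A minimal sketch, assuming each note carries a creation date and a source tag (the tuples below are hypothetical stand-ins, not Evernote's actual metadata format):

```python
# Sketch: answer "how often do I post?" over exported note metadata.
# The (created, source) tuples are hypothetical stand-ins for whatever
# per-note metadata your export actually carries.
from collections import Counter
from datetime import date

notes = [
    (date(2014, 1, 1), "Facebook"),
    (date(2014, 1, 1), "Twitter"),
    (date(2014, 1, 2), "Facebook"),
]

# Posts per day, across all services.
posts_per_day = Counter(created for created, _source in notes)

# How often do I post on Facebook?
facebook_posts = sum(1 for _created, source in notes if source == "Facebook")
```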

If you are really interested in a set of data, you could export it to XHTML to enable easy parsing and begin to mine it for even more interesting information.
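Because the export is well-formed XHTML, the standard library's HTML parser is enough to start mining it. A minimal sketch that pulls out note titles (the one-`<h1>`-per-note layout is my assumption about the export's structure, not a documented Evernote format):

```python
# Sketch: extract note titles from an XHTML export with the stdlib
# HTML parser. The <h1>-per-note layout is an assumption about the
# export's structure, not a documented Evernote format.
from html.parser import HTMLParser

class TitleExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_title = False
        self.titles = []

    def handle_starttag(self, tag, attrs):
        if tag == "h1":
            self.in_title = True

    def handle_endtag(self, tag):
        if tag == "h1":
            self.in_title = False

    def handle_data(self, data):
        if self.in_title:
            self.titles.append(data.strip())

export = """<html><body>
<h1>Tweet from 2014-01-01</h1><p>Hello world</p>
<h1>Facebook post</h1><p>Another memory</p>
</body></html>"""

parser = TitleExtractor()
parser.feed(export)
```

From there, the titles (or dates, or bodies) can feed whatever analysis you like.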

The key is to start collecting data, automate the collection, and have the ability to get it into a portable, future-proof format.

Evernote is great at this.