I thought the readings for this week were an interesting look into the “big data” world: the sheer amount of data that is collected about all of us, all of the time, the many ways in which it is used, and by whom.

In Tuesday’s class we discussed both sides of the “big data” issue: tailored movie genres and personally targeted ads benefit us in many cases, but tracking our actions on the computer and combining them with offline information (census data, gender, age, educational info, etc.) can eventually harm us, through things like health insurance decisions, inflated airfares, and leaked information. There’s also the viewpoint that, “There’s not much I can easily do to avoid it, and it hasn’t been a problem so far…” which many of us can relate to.

As I was reading the Foreman piece, however, I felt that his argument about humanity being completely broken down by the presence of big data was a bit dramatic. “But while they increase our happiness, these companies may be doing nothing short of destroying humanity as we know it.” I agree with Sophia’s post: humans cannot be perfectly modeled by algorithms, as this author seemed so determined to argue. Foreman states that, with the information gathered about us, we become almost completely predictable, losing our worth as humans as we are reduced to fat pieces of meat and “sad robots.”

He also asks, “in the future will we know our own mind enough to choose our attitudes? Or will the disingenuous arguments directed at us be so powerful that it will become impossible to know our own mind?” I can’t bring myself to believe that personalized content will eliminate our ability to make decisions, though. Sure, maybe it takes less effort to find a Netflix show you’ll really enjoy, or that coffee shop you’ve been meaning to try but didn’t know the name of. But not all of our decisions are made over internet ads and Netflix movies (although those are very visible examples). Things happen in the real world that computer algorithms and models just can’t predict. We are human, not digital beings. The Google Flu article from Thursday’s readings reinforces this. Google Flu Trends was so confident in its ability to “predict” flu outbreaks before anyone else, and the model did an “almost perfect job predicting the flu for previous years.” Yet according to the Lazer et al. study, its predictions were rarely correct.

Humans can’t be predicted by a combination of equations, and our minds cannot be stripped of deciding power just by being fed information someone thinks we want. I just don’t think that’s how humans work.


This is a rather odd lyric video for a song, but I thought it illustrated some of the types of information we’ve been talking about, often collected as Big Data.