This lab was more difficult than the previous two, primarily because it involved learning a new language. In learning to use R to work with the plain-text documents included in the Jockers exercises, I found myself struggling to understand the particular syntax that makes the whole thing work. I had little trouble with the first two exercises: following the instructions led me to the arguments I needed, and it was a matter of common sense to put them together in a scheme that would call up the appropriate plot. Things got difficult in exercise 3.3, however, where the arguments and their composition no longer mapped onto cognates from my pre-existing understanding of linguistics. I had to look at the provided solutions just to understand how to use the unique() function, and I still do not fully understand why those lines of code are arranged as they are. I cannot help but feel that I need a lot more time and practice with R before I come close to understanding it. I am particularly confused by how the c() and which() functions are supposed to be applied to strings that reference more than one text, and the exercise using %in% was also quite taxing. Overall I would rate my competency with the language quite poorly, though I think I could, on my own, pull up and partition a plain-text document in the way the second chapter calls for.
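For readers puzzling over the same functions, here is a minimal sketch of what unique(), which(), and %in% each do, using an invented toy word vector rather than the actual Jockers data:

```r
# A toy character vector standing in for a tokenized text
# (the word vectors in the Jockers exercises are built the same way, with c())
words <- c("call", "me", "ishmael", "call", "me", "maybe")

# unique() drops repeated tokens, leaving one copy of each word type
unique(words)              # "call" "me" "ishmael" "maybe"

# which() returns the positions at which a condition is TRUE
which(words == "call")     # 1 4

# %in% tests membership against a whole set of words at once,
# which is how a single expression can reference more than one target word
words[words %in% c("call", "ishmael")]   # "call" "ishmael" "call"
```

The same three calls scale up unchanged when the vector holds an entire novel instead of six words.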
Just as a way of starting the conversation on applicability: I performed these exercises sitting next to my wife. She saw my consternation and asked what this could possibly be good for. It was about this time that I got to the rel.freqs.t portion of the lab, and I was able to tell her how many times the word “him” appeared compared to the word “her” in Moby Dick. She remained incredulous, but when I compared that data to Sense and Sensibility, her interest was piqued. This is relevant because, as a sociologist specializing in gender relations, my wife makes use of data like this all the time (just not textually). All of which is to say that these techniques are likely a great way to work with other disciplines to develop hypotheses (this coming from a literature-centric perspective) that combine the analytical nature of literary criticism with the statistics-driven objectivity of disciplines like history and sociology.
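The him/her comparison above can be sketched in a few lines. This is a toy version with an invented eight-word vector, not the book's exact rel.freqs.t code (which, if I recall, also scales the result to a percentage), but the idea is the same: divide raw counts by the total token count so that texts of different lengths become comparable:

```r
# Toy token vector; in the lab this would be the full Moby Dick word list
words <- c("him", "her", "him", "the", "whale", "him", "her", "sea")

# table() gives the raw count of every word type
word.freqs <- table(words)

# Dividing by the total number of tokens yields relative frequencies
rel.freqs <- word.freqs / sum(word.freqs)

rel.freqs["him"]   # 0.375
rel.freqs["her"]   # 0.25
```

Running the same lines over a second novel's word vector is what makes the cross-text comparison possible.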
I think it is also worth asking whether these post-human analytics can be used in any traditional way to examine literature. To elaborate: how can we make these sorts of claims about texts that were never meant to be read this way, or to be viewed as a body rather than as individuals? As Moretti says, we are dealing with “invisible objects.” My instinct is to question the fundamental humanity of the author. Is there something about writing that takes the author beyond the realm of the human? If we can apply R to text in a meaningful way, can something akin to the reverse be said about the writing process? That brings the discussion back to language and the way texts are constructed in the first place. Programming languages are governed by their textual antecedents and designed to act on the same. But they also follow a logic that is extra-lingual. Perhaps authors do as well.