Attempting to work with text analysis, and with computers in general, is first and foremost a challenge of learning a new language. Even if I had a mastery of the theory behind analyzing texts at such a large scale (which I don't), the chasm between knowing what I wanted to use the R program for and finding the appropriate combination of commands, symbols, and terms remained deep and difficult to navigate. Though the Jockers piece generally provided instructions in manageable portions, there were moments when I could not even come up with a first step toward accomplishing the practice problems. While I didn't have any issues with step three (again, the steps were outlined and examples of the proper code were given), I struggled mightily to get the Sense and Sensibility graph to look anything like the one provided in the solutions. I even went so far as to type out each line individually from the solutions sheet, and I was still only able to achieve the graph that I posted on the Box website (hint: it's still wrong). This brings me back to the issue of language: though I thought that I was copying the correct commands in order, the end result still wasn't consistent with what I thought it should have been. Without any prior experience working with such tools, and with my only guidance being the solutions sheet I was attempting to copy, I knew I had made a mistake at some point but had no idea how to locate and rectify it. Working with a different medium can be disorienting in a manner that transcends the simple confusion that accompanies learning new materials and methods.

That being said, working with the R program and being able to create graphs that help visualize otherwise abstract concepts was a valuable experience. Determining which words comprise what percentage of a given novel is a feat that would otherwise have been nearly impossible. These changes in the scale of analysis can certainly provide a framework for new types of studies and new bodies of knowledge within the humanities that traditional close reading could not uncover on its own. It should be noted, though, that digital text analysis on its own accomplishes nothing with regard to generating knowledge. After embarking on a herculean effort to wrangle a computer into manipulating a text in a certain way, the user is still left with a collection of pictures and numbers of undetermined significance. Clement did an exemplary job of showing how complicated texts can be broken down and reconstituted in a different form, and she went further to show how the challenge of making meaning, drawing conclusions, and using different types of data to form a coherent argument relies heavily on one's ability to sift through an overwhelming array of graphs and code to find what is relevant and useful. A digital text analysis is only as useful as its interpreter, and the new scales and types of data that the digital humanities can produce are only as useful as their openness to interpretation.
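For what it's worth, the word-percentage computation described above is mechanically quite short in R. The sketch below is my attempt at the general pattern the Jockers exercises walk through, not his exact solution code; a two-line excerpt of Pride and Prejudice stands in for a full novel, which in practice would be read from a plain-text file with scan().

```r
# A hardcoded excerpt stands in for a full novel; with a real file one would use
# something like: text.v <- scan("novel.txt", what = "character", sep = "\n")
text.v <- c("It is a truth universally acknowledged, that a single man",
            "in possession of a good fortune, must be in want of a wife.")

# Lowercase everything and split on non-word characters to get a word vector
text.s <- tolower(paste(text.v, collapse = " "))
word.v <- unlist(strsplit(text.s, "\\W+"))
word.v <- word.v[word.v != ""]

# Tabulate raw counts, sort, and convert to percentages of the whole text
freqs.t <- sort(table(word.v), decreasing = TRUE)
rel.freqs.t <- 100 * (freqs.t / sum(freqs.t))

# Plot the most frequent words against their share of the full text
top.n <- min(10, length(rel.freqs.t))
plot(as.numeric(rel.freqs.t[1:top.n]), type = "b", xaxt = "n",
     xlab = "Most frequent words", ylab = "Percentage of full text")
axis(1, at = 1:top.n, labels = names(rel.freqs.t[1:top.n]))
```

Seeing that the whole pipeline is only a tokenize, a table(), and a division makes it easier to guess where a non-matching graph went wrong: usually in the tokenizing or sorting step rather than in the plot command itself.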