In Module 3, we used TEI to mark up primary documents. Melodee Beals has been using TEI to mark up newspaper articles, creating the Colonial Newspapers Database (which she shared on GitHub). We then used GitHub Pages and an XSLT stylesheet to convert that database into a table of comma-separated values: https://raw.githubusercontent.com/shawngraham/exercise/gh-pages/CND.csv. We are now going to topic model the text of those newspaper articles, to see what patterns of discourse may lie within.
First we need to set up our workspace. On my machine, I’ve created a folder on my desktop called ‘Beals’ to keep all of this neat and tidy.
setwd("C:\\Users\\Shawn Graham\\Desktop\\Beals")
# give yourself as much memory as you've got
options(java.parameters = "-Xmx5120m")
#install.packages('rJava')
library(rJava)
## from http://cran.r-project.org/web/packages/mallet/mallet.pdf
#install.packages('mallet')
library(mallet)
The first line sets our working directory to that new folder. The next line increases the memory available to Java (since our topic modeling tool, MALLET, is written in Java). Then we tell RStudio to use the rJava and mallet libraries. If you were running the code above for the first time, you would remove the # in front of each ‘install.packages’ command (use ctrl+f to find every line with ‘install’ on it, and remove the #). Copy the code above into a new R script document in RStudio. Put the cursor in the first line; hit ctrl+enter to run that line through the console.
Now we want to tell RStudio to grab our data from our GitHub page. The thing is, R can easily grab materials from websites where the URL is http; when it is https (as it is with GitHub), things get a bit more fussy. So we use a special package, RCurl, to grab the data, and then shove it into a variable that we can tease apart for our analysis.
#install.packages('RCurl')
library(RCurl)
## Loading required package: bitops
##
## Attaching package: 'RCurl'
##
## The following object is masked from 'package:rJava':
##
## clone
x <- getURL("https://raw.githubusercontent.com/shawngraham/exercise/gh-pages/CND.csv", .opts = list(ssl.verifypeer = FALSE))
#on a mac, the final bit of that line beginning with .opts was not necessary
documents <- read.csv(text = x, col.names=c("Article_ID", "Newspaper Title", "Newspaper City", "Newspaper Province", "Newspaper Country", "Year", "Month", "Day", "Article Type", "Text", "Keywords"), colClasses=rep("character", 3), sep=",", quote="")
Remember how we used ‘curl’ in one of our scripts in Module 3 to grab data from the Canadiana.org API? RCurl is its doppelganger within R. So, we create a variable called ‘x’ and pass it the Colonial Newspaper Database as csv. Then we read the contents of that csv file and tell R what the structure of the data is. All of this gets passed to a variable called ‘documents’. Incidentally, if we then wanted to look only at the keywords in this table, we would do that by querying ‘documents$Keywords’. The $ is quite handy!
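For instance, a minimal sketch of poking at individual columns with $ (remember that read.csv converts the spaces in our column names to dots, so ‘Newspaper Title’ becomes Newspaper.Title):
# peek at the first few keywords
head(documents$Keywords)
# or at the newspaper titles
head(documents$Newspaper.Title)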
Now, let’s take a look at our data.
counts <- table(documents$Newspaper.City)
barplot(counts, main="Cities", xlab="City", ylab="Number of Articles")
Clearly, we’re getting an Edinburgh/Glasgow perspective on things. And somewhere in our data, there’s a misspelled ‘Edinbugh’. Do you see any other errors in the plot? How would you correct them?
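One way to fix that kind of error in place is sketched below (assuming the misspelled value is exactly ‘Edinbugh’; adapt as needed for whatever errors you spot):
# recode the misspelled city name, then re-tabulate before re-plotting
documents$Newspaper.City[documents$Newspaper.City == "Edinbugh"] <- "Edinburgh"
counts <- table(documents$Newspaper.City)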
years <- table(documents$Year)
barplot(years, main="Publication Year", xlab="Year", ylab="Number of Articles")
There’s a lot of material in 1789, another peak around 1819, and again in the late 1830s. We can ask ourselves now: is this an artefact of the data, or of our collection methods? This would be a question a reviewer would want answered. Let’s assume for now that these two plots are ‘true’ - that, for whatever reasons, only Edinburgh and Glasgow were concerned with these colonial reports, and that they were particularly interested during those three periods. This is already an interesting question that we as historians would want to explore. Try making some more visualizations like this of other aspects of the data. What other patterns do you see that are worth investigating? Here’s some extra help on plotting in R.
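For instance, a quick sketch of the same approach applied to another column (here, the type of article; any of the columns we named in read.csv would work):
# the same table-and-barplot pattern, applied to article types
types <- table(documents$Article.Type)
barplot(types, main="Article Types", xlab="Type", ylab="Number of Articles")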
This quick distant look now necessitates some close reading - and back again!
Let’s look at the words themselves in our data (and, as an aside, you might read this piece by David Mimno).
mallet.instances <- mallet.import(documents$Article_ID, documents$Text, "C:\\Mallet\\stoplists\\en.txt", token.regexp = "\\p{L}[\\p{L}\\p{P}]+\\p{L}")
#when you direct to a path on a windows machine, you need to use \\ rather than one \.
That line above passes the article ID and the text of our newspaper articles to the MALLET routine. Note also the path to my stoplist. That’s a generic one I use all the time when I’m first playing with topic models. You might want to create your own stoplist for each project, given that project’s particulars. Here’s a stoplist you can use: http://bridge.library.wisc.edu/jockersStopList.txt. Download that, and save it into your working directory. Then, in the block above where I’ve got the location of my stoplist, just put “jockersStopList.txt” and RStudio will look for it in your working directory. Note that Jockers compiled this stoplist for his research in the literary history of the 19th century - your mileage may vary! Finally, the last bit after ‘token.regexp’ applies a regular expression to our newspaper articles, defining what counts as a token and cleaning the text up.
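If you go the jockersStopList.txt route, the import line would look something like this (a sketch, assuming the file sits in your working directory):
# the same import call as above, but pointing at the stoplist in the working directory
mallet.instances <- mallet.import(documents$Article_ID, documents$Text, "jockersStopList.txt", token.regexp = "\\p{L}[\\p{L}\\p{P}]+\\p{L}")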
#set the number of desired topics
num.topics <- 50
topic.model <- MalletLDA(num.topics)
Now we’ve told MALLET how many topics to search for; this is a number you’d want to fiddle with, to find the ‘best’ number of topics for your corpus. The next line creates a variable ‘topic.model’ which will eventually be filled by MALLET using the LDA approach, for 50 topics. Let’s load our documents into the model and get some information about the distribution of words in these newspaper articles.
topic.model$loadDocuments(mallet.instances)
## Get the vocabulary, and some statistics about word frequencies.
## These may be useful in further curating the stopword list.
vocabulary <- topic.model$getVocabulary()
word.freqs <- mallet.word.freqs(topic.model)
head(word.freqs)
## words term.freq doc.freq
## 1 commerce 8 4
## 2 canada.extract 1 1
## 3 letter 122 83
## 4 population 69 41
## 5 canada 137 48
## 6 reckoned 1 1
write.csv(word.freqs, "cnd-word-freqs.csv" )
As the comment indicates, if we take a look at word frequencies right now, before we generate the model, we can see if there are words that would be better added to our stopword list. The final line writes these word frequencies to a new csv file. Do you see how you might create a bar plot of word frequencies?
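One possible sketch: sort the table by term frequency and plot the top handful of words (the cut-off of 25 and the plot labels are just illustrative choices):
# order the words by overall term frequency, most frequent first
sorted.freqs <- word.freqs[order(word.freqs$term.freq, decreasing=TRUE),]
# plot the 25 most frequent words
barplot(sorted.freqs$term.freq[1:25], names.arg=as.character(sorted.freqs$words[1:25]), las=2, main="Most Frequent Words", ylab="Term Frequency")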
Now we do the heavy lifting: generating a topic model:
## Optimize hyperparameters every 20 iterations,
## after 50 burn-in iterations.
topic.model$setAlphaOptimization(20, 50)
## Now train a model. Note that hyperparameter optimization is on, by default.
## We can specify the number of iterations. Here we'll use a large-ish round number.
topic.model$train(1000)
## Run through a few iterations where we pick the best topic for each token,
## rather than sampling from the posterior distribution.
topic.model$maximize(10)
## Get the probability of topics in documents and the probability of words in topics.
## By default, these functions return raw word counts. Here we want probabilities,
## so we normalize, and add "smoothing" so that nothing has exactly 0 probability.
doc.topics <- mallet.doc.topics(topic.model, smoothed=T, normalized=T)
topic.words <- mallet.topic.words(topic.model, smoothed=T, normalized=T)
Let’s look at some of our topics.
## What are the top words in topic 7?
## Notice that R indexes from 1, so this will be the topic that mallet called topic 6.
mallet.top.words(topic.model, topic.words[7,])
## words weights
## 1 country 0.02917193
## 2 great 0.02661624
## 3 made 0.01938980
## 4 time 0.01930167
## 5 place 0.01639347
## 6 present 0.01551220
## 7 part 0.01436654
## 8 state 0.01286838
## 9 good 0.01084145
## 10 day 0.01022456
Now we’ll write the distribution of the topics by document (i.e., by newspaper article) to a csv file that we could explore or visualize with other tools. Then, we’ll take a look at the key words describing each topic.
topic.docs <- t(doc.topics)
topic.docs <- topic.docs / rowSums(topic.docs)
write.csv(topic.docs, "cnd-topics-docs.csv" )
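Before reaching for other tools, you can also explore that distribution inside R. A minimal sketch: ask which topic takes the largest share of each article (the topic numbers here are R’s 1-based indices):
# for each article (row of doc.topics), find the topic with the largest share
dominant.topic <- apply(doc.topics, 1, which.max)
# how many articles does each topic dominate?
table(dominant.topic)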
## Get a vector containing short names for the topics
topics.labels <- rep("", num.topics)
for (topic in 1:num.topics) topics.labels[topic] <- paste(mallet.top.words(topic.model, topic.words[topic,], num.top.words=5)$words, collapse=" ")
# have a look at keywords for each topic
topics.labels
## [1] "citizens united states congress information"
## [2] "river miles governor bathurst mountains"
## [3] "quakers round newspaper dance thought"
## [4] "meeting lord john members committee"
## [5] "duke military death son henry"
## [6] "bill hear house hon committee"
## [7] "country great made time place"
## [8] "companies james emigrants ireland political"
## [9] "edinburgh understand rev long congregation"
## [10] "cape good colony hope colonial"
## [11] "house fire evening miles week"
## [12] "esq company amount capital parties"
## [13] "years age wife marriage husband"
## [14] "country land dollars miles good"
## [15] "states united america york acres"
## [16] "ditto june april sept john"
## [17] "canada upper province quebec montreal"
## [18] "scene maraino piece consistent fighting"
## [19] "coast cape port miles readers"
## [20] "wheat flour corn bushels barrels"
## [21] "islands american british timber britain"
## [22] "wool bales market wools sales"
## [23] "church commission presbytery council motion"
## [24] "general army men fort regiment"
## [25] "gold bank coin silver house"
## [26] "west company north fort river"
## [27] "persons settlers family children government"
## [28] "peru lima town water produce"
## [29] "boat boats whale squadron sea"
## [30] "public government subject house object"
## [31] "state means legislature legislative constitution"
## [32] "ship board captain vessel passengers"
## [33] "king chief natives people crew"
## [34] "ladies fired offer carriage steam"
## [35] "indians killed men war wounded"
## [36] "number year emigrants population british"
## [37] "slaves free negroes whites coloured"
## [38] "people men property common paper"
## [39] "general community labourer subject class"
## [40] "convicts society punishment transportation prisoners"
## [41] "received town arrived letter days"
## [42] "british missionaries france americans king"
## [43] "remarks men promise find bad"
## [44] "wild woodland president full chorus"
## [45] "friend iron piece ordering jail"
## [46] "south land colony wales emigration"
## [47] "spaniards spanish cherokee chiefs cherokees"
## [48] "labour colony means situation produce"
## [49] "states united vessels act colonies"
## [50] "hospital medical edinburgh court motion"
write.csv(topics.labels, "cnd-topics-labels.csv")
Some interesting patterns suggest themselves already! But a list of words doesn’t capture the relative importance of particular words in particular topics. A word might appear in more than one topic, for instance, but really dominate one rather than the other. The topic.words matrix lets you check this directly (a quick sketch follows); then the word cloud, that much-maligned technique - actually rather useful in this regard - will let us visualize these topic words.
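A minimal sketch of that check: topic.words has one row per topic and one column per word in the vocabulary, so you can pull out a single word’s weight in every topic (‘emigrants’ is just an illustrative choice; substitute any word that actually appears in your vocabulary):
# the weight of a single word in each of the 50 topics
word.of.interest <- "emigrants"
topic.words[, which(vocabulary == word.of.interest)]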
### do word clouds of the topics
#install.packages('wordcloud')
library(wordcloud)
## Loading required package: Rcpp
## Loading required package: RColorBrewer
for(i in 1:num.topics){
  topic.top.words <- mallet.top.words(topic.model,
                                      topic.words[i,], 25)
  print(wordcloud(topic.top.words$words,
                  topic.top.words$weights,
                  c(4,.8), rot.per=0,
                  random.order=F))
}
Note that when you run all this in RStudio, your plots will appear in the ‘plots’ window in the bottom right quadrant of the interface, and you can export them as png or pdf. If you export as pdf, you can then open them in a vector graphics program like Inkscape or Illustrator, and modify the individual elements to make a more pleasing visual.
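You can also save the plots directly from your script rather than through the plots window - a sketch, assuming you’re happy to dump all fifty clouds into one multi-page PDF (the filename is just illustrative):
# open a multi-page pdf device, redraw the clouds into it, then close the device
pdf("cnd-topic-wordclouds.pdf")
for(i in 1:num.topics){
  topic.top.words <- mallet.top.words(topic.model, topic.words[i,], 25)
  wordcloud(topic.top.words$words, topic.top.words$weights,
            c(4,.8), rot.per=0, random.order=F)
}
dev.off()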
Finally, let’s do some clustering. We can cluster topics based on how they share words across our corpus:
## cluster based on shared words
plot(hclust(dist(topic.words)), labels=topics.labels)
The plot is a bit crowded; in RStudio you can open it in a new window to see the dendrogram more clearly.
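You could also write the dendrogram out to a deliberately oversized file so the fifty labels have room to breathe - a sketch (the dimensions and filename are just illustrative):
# a wide pdf gives the topic labels room to be legible
pdf("cnd-topic-dendrogram.pdf", width=25, height=10)
plot(hclust(dist(topic.words)), labels=topics.labels)
dev.off()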
Now, if we want to get really fancy, we can make a network visualization of how topics interlink due to their distribution in documents. The next bit of code does that, and saves the result in .graphml format, which packages like Gephi (http://gephi.org) can read.
topic_docs <- data.frame(topic.docs)
names(topic_docs) <- documents$Article_ID
#install.packages("cluster")
library(cluster)
topic_df_dist <- as.matrix(daisy(t(topic_docs), metric = "euclidean", stand = TRUE))
# Change row values to zero if less than row minimum plus row standard deviation
# keep only closely related documents and avoid a dense spaghetti diagram
# that's difficult to interpret (hat-tip: http://stackoverflow.com/a/16047196/1036500)
topic_df_dist[ sweep(topic_df_dist, 1, (apply(topic_df_dist,1,min) + apply(topic_df_dist,1,sd) )) > 0 ] <- 0
#install.packages("igraph")
library(igraph)
g <- as.undirected(graph.adjacency(topic_df_dist))
layout1 <- layout.fruchterman.reingold(g, niter=500)
plot(g, layout=layout1, edge.curved = TRUE, vertex.size = 1, vertex.color= "grey", edge.arrow.size = 0, vertex.label.dist=0.5, vertex.label = NA)
write.graph(g, file="cnd.graphml", format="graphml")
There are many ways of visualizing and transforming our data. This document only captures a small fraction of the kinds of things you could do. Another good exploration is at http://bridge.library.wisc.edu/hw1a-Rcoding-Jockers.html. Ben Marwick does really fun things with the Day of Archaeology blog posts (https://github.com/benmarwick/dayofarchaeology), and indeed some of the code above comes from Marwick’s explorations. Keep your R scripts in your open notebook, and somebody might come along and use them, cite them, improve them, share them! Keep all of your data, too. Here’s an example from my own work: https://github.com/shawngraham/ferguson.