DM Answers

Data Mining with STATISTICA

Extending STATISTICA Text Mining Capabilities using R Integration

12th of April, 2014

Last summer I kicked off my blog with a series of posts about text mining PubMed journal articles using STATISTICA Text Miner.  In the fall of 2013 a reader asked a question about adding phrases to the text analysis.  STATISTICA allows phrases to be specified, but it has no provision for counting phrases, so there is no way within the text miner to know which phrases are worth including in the analysis.  I suggested to the reader that R could be used to generate n-grams, which in turn could be specified as phrases in STATISTICA.

I didn’t give it much thought for a few months, until another text mining project came along and I set out to find a method for getting the n-grams into STATISTICA using R.  The following video shows the results of that research.  I found that the tm and RWeka packages could be used together to get the desired results.  This of course presumes that you have WEKA installed on your computer so that it can be accessed through the RWeka package.
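To see what the RWeka tokenizer actually returns, here is a minimal sketch you can run on its own in R (assuming RWeka and its Java dependency are installed); the sample sentence is just an illustration:

library(RWeka)

# Return every two- and three-word phrase (bigram and trigram) in the text
NGramTokenizer("text mining journal articles with STATISTICA and R",
               Weka_control(min = 2, max = 3))

Each phrase it returns is a candidate for STATISTICA's phrase list; the macro below applies the same tokenizer across a whole column of documents.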

If you would like to try the code with a text mining project within STATISTICA, save the following text in a text editor such as Notepad with a .R extension.  When the file is opened in STATISTICA, it will automatically be recognized as an R macro.

# Work around rJava start-up problems by clearing JAVA_HOME if it is set
if (Sys.getenv("JAVA_HOME") != "") Sys.setenv(JAVA_HOME = "")

library(RWeka)  # NGramTokenizer and Weka_control
library(tm)     # corpus handling and term-document matrices

# Pull the active STATISTICA spreadsheet and keep its text column
pubmed <- ActiveDataSet
pubmed <- pubmed[, 1]

# Build a tm corpus from the text column
doc.vec <- VectorSource(pubmed)
doc.corpus <- Corpus(doc.vec)

# Tokenizer that returns two- and three-word phrases (bigrams and trigrams)
BigramTokenizer <- function(x) NGramTokenizer(x, Weka_control(min = 2, max = 3))

# Remove common English stop words (plus "epub") from the documents
skipWords <- function(x) removeWords(x, c("i","me","my","myself","we","our","ours","ourselves",
"you","your","yours","yourself","yourselves","he","him","his","himself","she","her","hers","herself",
"it","its","itself","they","them","their","theirs","themselves","what","which","who","whom",
"this","that","these","those","am","is","are","was","were","be","been","being","have","has","had",
"having","do","does","did","doing","a","an","the","and","but","if","or","because","as","until","while",
"of","at","by","for","with","about","against","between","into","through","during","before","after",
"above","below","to","from","up","down","in","out","on","off","over","under","again","further",
"then","once","here","there","when","where","why","how","all","any","both","each","few","more",
"most","other","some","such","no","nor","not","only","own","same","so","than","too","very","epub"))

# Clean the corpus: strip extra whitespace, drop stop words and punctuation,
# convert to lower case, and stem the remaining words
funcs <- list(stripWhitespace, skipWords, removePunctuation, tolower, stemDocument)
y <- tm_map(doc.corpus, FUN = tm_reduce, tmFuns = funcs)

# Term-document matrix built with the bigram/trigram tokenizer
tdm <- TermDocumentMatrix(y, control = list(tokenize = BigramTokenizer))

# List the phrases that occur at least 20 times across the corpus
findFreqTerms(tdm, lowfreq = 20, highfreq = Inf)
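Running the macro prints the frequent phrases to the R output.  If you would rather have them in a file to copy into STATISTICA's phrase specification, a minimal addition (assuming the tdm built by the macro above and an arbitrary file name) would be:

# Capture the phrases appearing at least 20 times and write them to a
# text file, one phrase per line, for pasting into STATISTICA
phrases <- findFreqTerms(tdm, lowfreq = 20, highfreq = Inf)
writeLines(phrases, "frequent_phrases.txt")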
