Word processing: How can I get token numbers from a document?
I'm trying to tokenize a huge document (Wikipedia) so that I can convert it to word vectors. I want to convert the giant char array into a numeric array of token IDs (indexing into a dictionary I have), in word order. I was able to write code for this using a loop of regexp() calls, but it takes days to run. It looks like tokenizedDocument() might be a good alternative, except that I can't figure out how to get the document back as a list of numeric token IDs.
Has anyone successfully tokenized a document in this way? If so, how?
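One possible approach, sketched below under the assumption that you have the Text Analytics Toolbox and that your raw text is in a variable `rawText` and your dictionary in a string array `myVocabulary` (both names are placeholders, not from the original question): tokenize with tokenizedDocument(), then either build a wordEncoding and call doc2sequence() to get numeric token IDs directly, or pull the tokens out as a string array and map them into your own dictionary with ismember().

```matlab
% Tokenize the raw char array / string (Text Analytics Toolbox)
doc = tokenizedDocument(rawText);

% Option 1: let the toolbox build the vocabulary for you.
% doc2sequence returns a cell array of numeric token-ID vectors,
% one vector per document, in word order.
enc = wordEncoding(doc);
ids = doc2sequence(enc, doc);

% Option 2: map tokens into an existing dictionary.
% string(doc) returns the tokens of the document in word order;
% ismember gives the index of each token in myVocabulary
% (0 for out-of-vocabulary tokens, which you may want to handle).
tokens = string(doc);
[inVocab, ids2] = ismember(tokens, myVocabulary);
```

Either way this avoids the per-token regexp() loop entirely, which is where the days of runtime were likely going; the tokenizer and the vectorized lookup each make a single pass over the text.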