ITP Blog

RWET - #5 :: Use Predictive Models To Generate Text

>>>>>>Github Link<<<<<<

Behind Tisch Building 

Screen Shot 2018-03-30 at 1.17.17 PM.png
WhatsApp Image 2018-03-23 at 8.30.26 PM.jpeg

The notebook I wrote tries to create fake medicine names by recognizably combining things that soothe my anxiety with drug stems.

I don't think an RNN makes much sense here as a way to convey the idea to the audience; however, playing with an RNN is a lot of fun.


Some basic takeaways:

The RNN output usually follows the pattern of the input.

But the drug stems I use contain dashes (-), which I don't think work very well with the data the model was pre-trained on. Sometimes the RNN returns a short phrase or sentence, which makes it hard to fit into a Tracery template unless the output is processed first.
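One way to handle this is a quick post-processing pass that filters the raw RNN output down to single words before it goes into a Tracery rule. This is a minimal sketch; the function name and filtering rules are my own illustration, not code from the notebook:

```python
def clean_outputs(lines):
    """Filter raw RNN output down to single words a Tracery rule can use."""
    words = []
    for line in lines:
        # strip surrounding whitespace and trailing punctuation
        token = line.strip().strip(".,;:!?\"'")
        # keep only single tokens; allow internal dashes, since drug stems use them
        if token and " " not in token and token.replace("-", "").isalpha():
            words.append(token.lower())
    return words

# Example: phrases and sentences are dropped, single words survive
print(clean_outputs(["anxicillin ", "it was a long day", "Calmodine."]))
```

Anything the filter rejects (multi-word phrases, tokens with stray symbols) simply never reaches the grammar, so the template only ever sees word-shaped output.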

The Tracery grammar modifiers, to my surprise, work on the generated words. I tested capitalizing words and transforming them to plurals, and both work.
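For reference, the two modifiers I tested (`.capitalize` and `.s` in Tracery's base_english set) behave roughly like this pure-Python approximation; this is a sketch of the pluralization rules, not Tracery's actual implementation:

```python
def capitalize_mod(word: str) -> str:
    """Approximation of Tracery's .capitalize modifier."""
    return word[:1].upper() + word[1:]

def pluralize_mod(word: str) -> str:
    """Rough approximation of Tracery's base_english .s modifier."""
    if word.endswith(("s", "h", "x")):
        return word + "es"
    # consonant + y -> ies
    if word.endswith("y") and word[-2:-1] not in "aeiou":
        return word[:-1] + "ies"
    return word + "s"

print(capitalize_mod("anxitol"))   # Anxitol
print(pluralize_mod("anxitol"))    # anxitols
```

Because these modifiers only look at the surface string, they work just as well on invented words as on real English, which is why they handle the RNN's made-up medicine names without complaint.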


My thoughts:

What I'm trying to do is a formula:

"Solution Item" + "Drug Stem" = "Fake Medicine"
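The formula can be sketched in a few lines of Python. The sample lists here are placeholders I made up for illustration, not the actual inputs from the notebook:

```python
import random

# Hypothetical sample inputs -- the real lists live in the notebook
solution_items = ["nap", "tea", "walk"]
drug_stems = ["-vir", "-mab", "-olol", "-prazole"]

def fake_medicine(rng: random.Random) -> str:
    """'Solution Item' + 'Drug Stem' = 'Fake Medicine'."""
    item = rng.choice(solution_items)
    stem = rng.choice(drug_stems).lstrip("-")  # drop the stem's leading dash
    return capitalize(item + stem)

def capitalize(word: str) -> str:
    return word[:1].upper() + word[1:]

print(fake_medicine(random.Random(0)))  # e.g. "Napolol" or "Teaprazole"
```

This naive concatenation keeps the two sources cleanly separated, which is exactly what the RNN version struggles with: once both sources go through the model together, they start to pollute each other.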

I think the biggest problem, which I haven't solved yet, is how to feed in two sources without them polluting each other while still keeping some connection between them.

In the end, I only fed the drug stems to the RNN, and based on the output I got, I would describe the result as "polluted": most of the output no longer has the "texture" of drug stems, and some of it comes out as phrases and sentences. The cause may be the corpus used by the pre-trained model.

The RNN feels more unpredictable with single words. When I tried feeding it a list of sentence strings instead, the results made more sense.



When I added more epochs, the results got better and the format of the output became more structured as well.

Screen Shot 2018-03-30 at 2.43.25 PM.png