Can AI write the next 'War and Peace'?
Sure, science fiction is an exciting genre. But the next time you pick up a book about a dystopian artificial-intelligence future, consider: how would you feel if that book had actually been written by an AI?
On the 7th of November, OpenAI, a start-up venture funded by tech tycoons like Elon Musk, released GPT-2, a text-generating system with a truly impressive capacity to produce a verisimilitude of original text, and from very short inputs at that. Moving forward, it is imperative for us to remember that GPT-2 has no actual cognition behind its behind-the-curtains functioning. The system produces its output from a transformer neural network trained on a tremendously large corpus of text. GPT-2 has its shortcomings, such as an inability to sustain coherent writing for much more than a line or two. Yet GPT-2 isn't all that unintelligent: OpenAI staged its release in multiple segments because it feared the model might be used to spread fake news and other malicious text, so it remains capable of some rather dark magic.
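To see what "predicting the next word from the words so far" means in practice, here is a toy sketch of that autoregressive loop. This is not OpenAI's code: a hand-written table of word pairs stands in for the trained transformer, and the vocabulary is invented for illustration. Only the sampling loop itself is the part GPT-2 shares.

```python
import random

# Toy stand-in for a trained language model: for each word,
# a list of words that may plausibly follow it.
NEXT = {
    "the": ["painter", "street"],
    "painter": ["laughed", "said"],
    "laughed": ["and"],
    "said": ["the"],
    "street": ["and"],
    "and": ["the"],
}

def generate(prompt, n_tokens, seed=0):
    """Autoregressive generation: repeatedly append a word
    sampled from the choices the 'model' allows after the
    last word, just as GPT-2 samples its next token."""
    rng = random.Random(seed)
    tokens = prompt.split()
    for _ in range(n_tokens):
        choices = NEXT.get(tokens[-1])
        if not choices:  # dead end in the toy table: stop early
            break
        tokens.append(rng.choice(choices))
    return " ".join(tokens)

print(generate("the painter", 6))
```

The real model replaces the lookup table with a neural network that scores every word in a 50,000-token vocabulary given the entire preceding context, but the generation loop is the same shape: predict, sample, append, repeat.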
Ross Goodwin, a former ghostwriter for the Obama administration and a self-proclaimed "writer of writers," remarked of one of the texts produced by the system that "there are characters in it, which is really strange." A mysterious painter, for instance, appears in the third line to ask, "What is it?" and continues to show up throughout: "A body of water came down from the side of the street. The painter laughed and then said, I like that, and I don't want to see it." It is as if the system has worked out, on its own, the concept of characters and their recurring role in a story. Something a little sinister, or maybe just intriguing, about GPT-2 is that this painter character seems aware of being inside a model and of what that model does, raising the fanciful possibility of a self-referential loop its creators never intended.
It is safe to say, though, that there is plenty of time before Summit, by IBM, the world's most powerful supercomputer, replaces Leo Tolstoy. Sure, GPT-2 cannot currently write anything remotely similar to Tolstoy, but this hasn't stopped it from becoming the soul of more than 80 NaNoGenMo projects. What is NaNoGenMo, you ask? Well, it grew out of a tweet by Darius Kazemi a few years ago: participants around the world write code that generates a 50,000-word novel within a month. Every new project means more public experimentation with the model. Janelle Shane, a programmer most noted for her creative experiments with AI, remarked that at times the sentences GPT-2 generated were so beautiful that she wondered if they had been copied and pasted straight from the training dataset.
If all this talk about GPT-2 has fueled your curiosity to try it for yourself, be warned that the model can be tricky to download and run locally from its sizeable GitHub repository. You're not completely out of luck, though: TalkToTransformer, a far more comfortable and accessible website, lets you enter an input of your choice and generates an output based on the full GPT-2 release. GPT-2 is currently, no doubt, more of a talking point than a practical tool (for malicious purposes or otherwise), but the wide variety of experiments, from the generation of complete academic papers to the generation of poetry, has earned this branch of language modelling broad coverage and significant heed. Here is your food for thought today: a few hundred years from now, when GPT or something similar is far more intelligent and developed, will it matter whether you are a Tolstoy or a John Carmack?