We Taught a Neural Network to Write a Blog
- On Sunday, September 30, 2018
Our main company blog has over 8 million words and our technical blog is currently sitting at 114,000 words (though I just added another 2500 words through this post and the two generated posts!).
Initially, I didn’t have a specific project in mind but as I started my research I thought it’d be funny to use ML to create blog posts.
tl;dr I used Google’s Colaboratory, Max Woolf’s textgenrnn, and Jeremy Singer-Vine’s markovify to create a couple of funny blog posts by training it against our existing posts.
This article would be way too long if I covered what an RNN is and how it works, so if you’d like more information check out this great article by Andrej Karpathy.
When I tried to train the network on G Adventures’ main blog posts it was taking 9 hours per iteration!
I wrote a quick script that generated a blog post title and 7 paragraphs of text and moved on.
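The script was essentially a loop around the trained model's generate call. A minimal sketch of that structure, with a stand-in generator in place of the trained textgenrnn model (the stub, its word lists, and the function names are illustrative, not the original code):

```python
import random

# Stand-in for the trained model's generate() call; in the real script
# this would be textgenrnn producing learned text, not canned fragments.
def generate_line():
    openers = ["Travel tips for", "Why we love", "A local's guide to"]
    places = ["Peru", "Iceland", "Vietnam"]
    return f"{random.choice(openers)} {random.choice(places)}."

def build_post(num_paragraphs=7, sentences_per_paragraph=4):
    """Generate a title plus the requested number of paragraphs."""
    title = generate_line()
    paragraphs = [
        " ".join(generate_line() for _ in range(sentences_per_paragraph))
        for _ in range(num_paragraphs)
    ]
    return title, paragraphs

title, body = build_post()
print(title)
for paragraph in body:
    print(paragraph)
```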
As I was waiting for the initial RNN training to be completed I researched how others managed to train their networks on relatively small datasets.
9 hours per iteration (90 hours total) seemed like a really long time and I thought there had to be a better way.
Markov chains basically pair up all of the words in your text corpus (blog posts in this case) and choose the next word in a sentence based on the probability of word pairs seen in the source text.
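That pairing can be sketched in a few lines of plain Python. This is a toy illustration, not markovify's implementation (which supports larger state sizes, among other things); repeated pairs in the corpus naturally act as weights, so common continuations are picked more often.

```python
import random
from collections import defaultdict

def build_pairs(text):
    """Map each word to every word that followed it in the corpus.
    Duplicates act as weights: frequent pairs are chosen more often."""
    words = text.split()
    pairs = defaultdict(list)
    for current, following in zip(words, words[1:]):
        pairs[current].append(following)
    return pairs

def generate(pairs, start, length=8):
    """Walk the chain: from each word, pick a random observed follower."""
    out = [start]
    while len(out) < length:
        followers = pairs.get(out[-1])
        if not followers:
            break  # dead end: the last word never had a follower
        out.append(random.choice(followers))
    return " ".join(out)

corpus = "we travel the world and we travel together and we learn together"
chain = build_pairs(corpus)
print(generate(chain, "we"))
```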
By default, markovify's make_sentence method tries a maximum of 10 times per invocation to make a sentence that doesn't overlap too much with the original text.
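That retry-and-reject behavior can be sketched as follows. This is a simplification of markovify's actual check (which also caps overlap by an absolute word count); the 0.7 ratio mirrors its documented default, but the helper names here are mine:

```python
def overlaps(sentence, corpus, max_overlap_ratio=0.7):
    """True if a long-enough run of consecutive words from the candidate
    sentence appears verbatim in the original corpus."""
    words = sentence.split()
    window = max(1, round(len(words) * max_overlap_ratio))
    for i in range(len(words) - window + 1):
        if " ".join(words[i:i + window]) in corpus:
            return True
    return False

def make_sentence(generate_candidate, corpus, tries=10):
    """Mimic make_sentence's retry loop: try up to `tries` candidates,
    returning None if every one overlaps too much with the corpus."""
    for _ in range(tries):
        candidate = generate_candidate()
        if not overlaps(candidate, corpus):
            return candidate
    return None

corpus = "the quick brown fox jumps over the lazy dog"
print(make_sentence(lambda: "quick brown fox jumps", corpus))  # rejected: None
print(make_sentence(lambda: "dogs jump over foxes", corpus))
```

Returning None rather than raising means a generation loop can simply skip a failed invocation and try again, which is why generated posts stay mostly free of verbatim corpus text.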
The generated blog posts won’t win any Pulitzers – they don’t even pass for English – but I found it funny and the company got a kick out of it.