Welcome to Dynosaur, a machine learning project I started to learn all about reinforcement learning and beat the Google Dinosaur Jumper Game.


Neuroev
Neuroev is the first version of Dynosaur. Borrowing principles from evolution, it modifies neural network weights and biases using genetic operators. Try it out here.
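The repo's exact operators aren't shown here, but the idea can be sketched in a few lines. This is a minimal, hypothetical version that assumes a genome is a flat list of weights: mutation perturbs weights with Gaussian noise, crossover mixes two parents gene by gene, and each generation keeps a few elites and breeds the rest from the fitter half.

```python
import random

def mutate(weights, rate=0.1, scale=0.5):
    """Gaussian-perturb each weight with probability `rate`."""
    return [w + random.gauss(0, scale) if random.random() < rate else w
            for w in weights]

def crossover(a, b):
    """Uniform crossover: each gene comes from either parent."""
    return [x if random.random() < 0.5 else y for x, y in zip(a, b)]

def evolve(population, fitness, elite=2):
    """One generation: keep the elites, breed the rest from fit parents."""
    ranked = sorted(population, key=fitness, reverse=True)
    next_gen = ranked[:elite]
    while len(next_gen) < len(population):
        p1, p2 = random.sample(ranked[:len(ranked) // 2], 2)
        next_gen.append(mutate(crossover(p1, p2)))
    return next_gen
```

Because the elites survive unchanged, the best fitness in the population never decreases from one generation to the next.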
NEAT
Similar to Neuroev, NEAT (short for NeuroEvolution of Augmenting Topologies) modifies neural network weights and biases, while also mutating the topology of the network itself. Try it out here.
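NEAT's signature structural mutation is "add node": an existing connection is split in two by a new hidden node. A minimal sketch, assuming a genome is a list of connection genes tagged with innovation numbers (the `Gene` class and helper below are illustrative, not the repo's API):

```python
import random

class Gene:
    """One connection gene: src -> dst with a weight and innovation id."""
    def __init__(self, src, dst, weight, innovation, enabled=True):
        self.src, self.dst = src, dst
        self.weight, self.enabled = weight, enabled
        self.innovation = innovation

def add_node(genome, next_node, next_innov):
    """Split a random enabled connection with a new hidden node."""
    gene = random.choice([g for g in genome if g.enabled])
    gene.enabled = False
    # The incoming link gets weight 1 and the outgoing link keeps the old
    # weight, so the new node initially preserves the network's behavior.
    genome.append(Gene(gene.src, next_node, 1.0, next_innov))
    genome.append(Gene(next_node, gene.dst, gene.weight, next_innov + 1))
    return next_node + 1, next_innov + 2
```

Innovation numbers let NEAT line up genes from different genomes during crossover, even after their topologies have diverged.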
Parallel NEAT
Parallel NEAT was designed to make evolution run faster. Instead of simulating the dinosaurs sequentially, a whole generation of dinosaurs is simulated at once, drastically reducing simulation time. Try it out here (Refer to the documentation to start).
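The scheduling change can be sketched as follows: rather than running each dinosaur's game to completion before starting the next, every live game advances one tick per loop iteration. The `Game` class here is a toy stand-in for the real browser game, not the project's actual interface.

```python
import random

class Game:
    """Toy stand-in for one dinosaur's game instance."""
    def __init__(self):
        self.dead = False
        self.distance = 0
    def observe(self):
        return random.random()        # e.g. distance to the next obstacle
    def step(self, jump):
        self.distance += 1
        if not jump and random.random() < 0.05:
            self.dead = True          # failed to clear an obstacle

def simulate_generation(policies, max_ticks=500):
    """Advance every live game one tick per loop, instead of finishing
    one dinosaur's whole run before starting the next."""
    games = [Game() for _ in policies]
    alive = set(range(len(policies)))
    for _ in range(max_ticks):
        if not alive:
            break
        for i in list(alive):
            games[i].step(policies[i](games[i].observe()))
            if games[i].dead:
                alive.discard(i)
    return [g.distance for g in games]
```

Wall-clock time per generation now tracks the longest-lived dinosaur rather than the sum of all lifetimes.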
Continuous NEAT
Continuous NEAT improves on Parallel NEAT by making each simulation independent of the others. This reduces both the time spent per generation waiting for higher-fitness dinosaurs to finish and the variability in evolution time. Try it out here (Refer to the documentation to start).
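One way to read "independent simulations" is steady-state evolution: when a dinosaur dies, it is immediately replaced by a child bred from fit survivors, so no game ever waits on another. This is a hypothetical sketch of that replacement step, again treating genomes as flat weight lists:

```python
import random

def steady_state_step(population, fitnesses, dead_index, mutate):
    """On a single death, breed a replacement from two fit survivors
    instead of waiting for the whole generation to finish."""
    ranked = sorted(range(len(population)),
                    key=lambda i: fitnesses[i], reverse=True)
    p1, p2 = random.sample(ranked[:max(2, len(ranked) // 2)], 2)
    child = [a if random.random() < 0.5 else b
             for a, b in zip(population[p1], population[p2])]
    population[dead_index] = mutate(child)
    fitnesses[dead_index] = 0   # the child starts its run from scratch
```

Since replacements happen one at a time, evolution proceeds at a steady rate regardless of how long any individual dinosaur survives.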
Backprop
Backprop utilizes an LSTM network to learn when to jump and duck, using the user's game decisions as labeled data. Try it out here.
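To make the LSTM part concrete, here is the cell mechanics for the simplest possible case: scalar input and scalar state. This is not the project's network (which is trained by backpropagation on recorded player decisions); it only shows the forward pass the gates compute. `W` maps each gate to assumed `(input, recurrent, bias)` weights.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h, c, W):
    """One LSTM step: forget (f), input (i), and output (o) gates decide
    what the cell memory c keeps, adds, and exposes."""
    gates = {}
    for name in ("f", "i", "o", "g"):
        wx, wh, b = W[name]
        pre = wx * x + wh * h + b
        gates[name] = math.tanh(pre) if name == "g" else sigmoid(pre)
    c = gates["f"] * c + gates["i"] * gates["g"]   # update cell memory
    h = gates["o"] * math.tanh(c)                  # expose hidden state
    return h, c

def predict_jump(features, W):
    """Run a sequence of game features through the cell; squash the final
    hidden state into a jump probability."""
    h = c = 0.0
    for x in features:
        h, c = lstm_step(x, h, c, W)
    return sigmoid(h)
```

Because the cell state carries across timesteps, the network can react to how obstacles approach over time, not just a single frame.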
Q
Q is a branch of Dynosaur that uses Q-learning, a reinforcement learning technique, to train the dinosaurs to jump over obstacles. Try it out here (Refer to the documentation to start).
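The core of Q-learning is the tabular value update, sketched below with a hypothetical two-action dinosaur (`run` or `jump`) and an epsilon-greedy policy. The `env` interface (`reset`, `step`) is an assumption for illustration, not the repo's API.

```python
import random
from collections import defaultdict

def train_q(env, episodes=200, alpha=0.1, gamma=0.9, epsilon=0.1):
    """Tabular Q-learning: learn Q(state, action) from game rewards."""
    Q = defaultdict(float)
    actions = ("run", "jump")
    for _ in range(episodes):
        state = env.reset()
        done = False
        while not done:
            # Epsilon-greedy: usually exploit, occasionally explore.
            if random.random() < epsilon:
                action = random.choice(actions)
            else:
                action = max(actions, key=lambda a: Q[(state, a)])
            next_state, reward, done = env.step(action)
            # Bootstrap from the best next action (zero at episode end).
            best_next = 0.0 if done else max(Q[(next_state, a)]
                                             for a in actions)
            Q[(state, action)] += alpha * (reward + gamma * best_next
                                           - Q[(state, action)])
            state = next_state
    return Q
```

After enough episodes, states where failing to jump ends the game carry a much higher value for `jump` than for `run`, which is exactly the behavior the dinosaur needs.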
Parallel Q
Parallel Q uses Q-learning, but in parallel. Each dinosaur's inputs are fed to a central network, which makes decisions for every game. Try it out here (Refer to the documentation to start).
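The central-decision idea can be sketched with a shared Q-table standing in for the central network: one agent answers every game's query and folds every game's experience into the same value estimates. Class and method names here are illustrative, not the repo's.

```python
from collections import defaultdict

class CentralAgent:
    """One shared Q-table decides for every parallel game, so experience
    from all dinosaurs updates the same value estimates."""
    def __init__(self, actions=("run", "jump"), alpha=0.1, gamma=0.9):
        self.Q = defaultdict(float)
        self.actions, self.alpha, self.gamma = actions, alpha, gamma

    def act(self, states):
        """Pick the greedy action for every game in one pass."""
        return [max(self.actions, key=lambda a: self.Q[(s, a)])
                for s in states]

    def learn(self, transitions):
        """Apply the Q update for each game's (s, a, r, s') tuple."""
        for s, a, r, s2 in transitions:
            best = max(self.Q[(s2, x)] for x in self.actions)
            self.Q[(s, a)] += self.alpha * (r + self.gamma * best
                                            - self.Q[(s, a)])
```

Sharing one table means a lesson learned in any game (for example, "jump when an obstacle is close") immediately benefits every other game.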


Here's a quick demo of Continuous Dynosaur at work. The video is sped up in the middle because it covers about two hours of training.
View Source