jngannon.github.io


Fun and Failures

Here are some things I have worked on for fun. Some were long shots that were probably never going to work; others were programming exercises that gave me a better understanding of neural networks.


Adadad Optimizer

I tried building a couple of optimizers based on stochastic gradient descent with momentum. I found that they ran slightly faster than SGD, but performed nowhere near as well as Adam, which I also implemented in the software library.
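The post doesn't spell out the Adadad update rule, but here is a minimal sketch of the SGD-with-momentum baseline it builds on: the velocity is a decaying sum of past gradients, and the parameter steps along the velocity. The learning rate and momentum values are illustrative defaults, not the ones used in the library.

```python
def sgd_momentum_step(w, grad, v, lr=0.01, momentum=0.9):
    # velocity accumulates a decaying history of gradients;
    # the parameter then moves along the velocity
    v = momentum * v - lr * grad
    return w + v, v

# toy problem: minimise f(w) = w**2, whose gradient is 2*w
w, v = 5.0, 0.0
for _ in range(200):
    w, v = sgd_momentum_step(w, 2 * w, v)
# w spirals in toward the minimum at 0
```

Momentum helps because consecutive gradients that point the same way compound in the velocity, while directions that flip sign cancel out.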


Subnet finder

An experiment with the lottery ticket hypothesis: trying to find subnetworks within a randomly initialized neural network without training the weights. I have not been able to run it for long enough to get a result (GPU overheating in the Brisbane summer).
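The post doesn't describe the search method, so the following is only a toy illustration of the idea, not the actual experiment: the weights of a small random network stay frozen, and a random search over binary masks looks for a subnetwork that already fits a toy regression task. The architecture, data, and search strategy here are all assumptions for the sake of a runnable sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny fixed random network: the weights are never trained.
W1 = rng.standard_normal((2, 16))
W2 = rng.standard_normal((16, 1))

# Toy regression data: y = x0 - x1
X = rng.standard_normal((64, 2))
y = (X[:, 0] - X[:, 1]).reshape(-1, 1)

def loss(mask1, mask2):
    # Forward pass with weights zeroed out wherever the mask is 0.
    h = np.maximum(X @ (W1 * mask1), 0.0)   # masked first layer + ReLU
    pred = h @ (W2 * mask2)                 # masked output layer
    return float(np.mean((pred - y) ** 2))

# Random search over binary masks: keep the best subnetwork found.
best = (np.ones_like(W1), np.ones_like(W2))
best_loss = loss(*best)
for _ in range(500):
    m1 = (rng.random(W1.shape) < 0.5).astype(float)
    m2 = (rng.random(W2.shape) < 0.5).astype(float)
    l = loss(m1, m2)
    if l < best_loss:
        best, best_loss = (m1, m2), l
```

Real approaches in this area (e.g. learning per-weight scores and keeping the top-k) scale far better than random search, which is why running long enough on a hot GPU matters.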