In this work we propose a method for discovering neural wirings. We relax the typical notion of layers and instead allow channels to form connections independently of one another, which opens up a much larger space of possible networks. The wiring of our network is not fixed during training: as we learn the network parameters, we also learn the structure itself.
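To make this concrete, below is a minimal sketch of one way to learn weights and wiring jointly: every candidate edge between nodes carries a real-valued weight, the forward pass keeps only the k highest-magnitude edges, and a straight-through estimator lets gradient reach the pruned edges so they can re-enter the graph. The class name `LearnedWiring`, the node layout, and the exact top-k rule are illustrative assumptions for this post, not a reference implementation of the method.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LearnedWiring(nn.Module):
    """Nodes connected by a learned, rather than fixed, set of edges.

    Candidate edges run from earlier to later nodes (i -> j with i < j),
    keeping the graph acyclic. The forward pass uses only the k edges
    with the largest weight magnitude; a straight-through estimator lets
    gradients reach pruned edges too, so the wiring can change as the
    weights are trained. Illustrative sketch, not the paper's code.
    """

    def __init__(self, num_nodes: int, num_inputs: int, num_outputs: int, k: int):
        super().__init__()
        self.num_nodes = num_nodes
        self.num_inputs = num_inputs
        self.num_outputs = num_outputs
        self.k = k
        self.weight = nn.Parameter(0.1 * torch.randn(num_nodes, num_nodes))
        # Only strictly-upper-triangular entries are candidate edges.
        self.register_buffer(
            "candidates", torch.triu(torch.ones(num_nodes, num_nodes), diagonal=1)
        )

    def effective_weight(self) -> torch.Tensor:
        # Score candidate edges by magnitude; non-candidates get -1 so
        # they can never clear the (non-negative) top-k threshold.
        scores = torch.where(
            self.candidates.bool(),
            self.weight.abs(),
            torch.full_like(self.weight, -1.0),
        )
        threshold = scores.flatten().topk(self.k).values.min()
        mask = (scores >= threshold).float()
        # Forward value is weight * mask; the detach makes the backward
        # pass behave as if every candidate edge were present.
        return self.weight + (self.weight * mask - self.weight).detach()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = self.effective_weight()
        # The first nodes hold the input features, one feature per node.
        states = [x[:, i] for i in range(self.num_inputs)]
        # Propagate through the remaining nodes in topological order.
        for j in range(self.num_inputs, self.num_nodes):
            incoming = torch.stack(states, dim=1)          # (batch, j)
            h = incoming @ w[:j, j]                        # (batch,)
            is_output = j >= self.num_nodes - self.num_outputs
            states.append(h if is_output else F.relu(h))
        # The last nodes are read out as logits.
        return torch.stack(states[-self.num_outputs :], dim=1)
```

Because pruned edges still receive gradient, an edge whose magnitude climbs above the current threshold is swapped back into the wiring on the next forward pass; this is what lets the structure keep evolving during training, as in the demo below.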
The demo above shows a small network being trained on the MNIST dataset. Move the slider to see how the structure changes as training progresses.