While attending the University of Tennessee, Alex Chaloux and I took a class on Biologically Inspired Computation and worked on the labs together. While I cannot say that this area of research is my passion, I cannot deny that the ideas behind the algorithms are fascinating, and their potential to solve large and complicated problems tremendous.
Genetic Algorithm
Genetic algorithms work, in a way, very similarly to selective breeding. A variety of prospective solutions are randomly generated and then scored by a fitness function. The more fit a prospective solution, the more likely its properties will be passed along to successive generations, until a sufficiently fit, and ideally near-optimal, solution emerges.
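The loop above can be sketched in C++. This is a minimal illustration, not the class implementation: the fitness function here is the standard "OneMax" toy problem (count the 1 bits in a bit string), and the population size, mutation rate, and tournament selection scheme are all illustrative choices.

```cpp
// Minimal genetic algorithm sketch: evolves bit strings toward all ones
// ("OneMax"), using tournament selection, one-point crossover, and mutation.
#include <algorithm>
#include <cassert>
#include <random>
#include <vector>

// Toy fitness function: the more 1 bits, the fitter the genome.
int fitness(const std::vector<int>& genome) {
    int f = 0;
    for (int bit : genome) f += bit;
    return f;
}

std::vector<int> evolve(int genomeLen, int popSize, int generations) {
    std::mt19937 rng(42);
    std::uniform_int_distribution<int> bit(0, 1);
    std::uniform_int_distribution<int> pick(0, popSize - 1);
    std::uniform_int_distribution<int> cut(1, genomeLen - 1);
    std::uniform_real_distribution<double> u(0.0, 1.0);

    // Randomly generate the initial population of prospective solutions.
    std::vector<std::vector<int>> pop(popSize, std::vector<int>(genomeLen));
    for (auto& g : pop) for (auto& b : g) b = bit(rng);

    for (int gen = 0; gen < generations; ++gen) {
        std::vector<std::vector<int>> next;
        while ((int)next.size() < popSize) {
            // Tournament selection: the fitter of two random candidates wins,
            // so fitter solutions are more likely to pass on their properties.
            auto tourney = [&]() {
                const auto& a = pop[pick(rng)];
                const auto& b = pop[pick(rng)];
                return fitness(a) >= fitness(b) ? a : b;
            };
            std::vector<int> child = tourney();
            const auto& mate = tourney();
            // One-point crossover mixes traits from both parents.
            int c = cut(rng);
            std::copy(mate.begin() + c, mate.end(), child.begin() + c);
            // A small mutation rate keeps diversity in the population.
            for (auto& b : child) if (u(rng) < 0.01) b ^= 1;
            next.push_back(std::move(child));
        }
        pop = std::move(next);
    }
    return *std::max_element(pop.begin(), pop.end(),
        [](const auto& a, const auto& b) { return fitness(a) < fitness(b); });
}
```

Swapping `fitness` for a real objective is all it takes to point the same loop at a different problem.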
Particle Swarm Optimization
For this project, we wrote a program to simulate particle swarm optimization, an optimization method that uses a “swarm” of points in a 2D or 3D search space. Each point represents a potential solution to a problem, where the problem is a fitness function that takes two or three inputs. The particles are given velocities, inertia, and slightly randomized tendencies, and are drawn toward the swarm’s most fit particle. The point on which the swarm eventually settles is the best solution the swarm has found for the fitness function.
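A 2D version of that update rule can be sketched as follows. The fitness surface here is a hypothetical one, f(x, y) = x² + y² minimized at the origin, and the inertia and pull coefficients are common textbook values rather than the project's actual parameters.

```cpp
// Particle swarm optimization sketch in 2D, minimizing f(x, y) = x^2 + y^2
// (an illustrative fitness surface whose optimum sits at the origin).
#include <cassert>
#include <random>
#include <vector>

struct Particle {
    double x, y;        // position: a candidate solution
    double vx, vy;      // velocity
    double bx, by, bf;  // personal best position and its fitness
};

double f(double x, double y) { return x * x + y * y; }

// Returns the swarm's best fitness after the given number of iterations.
double swarm(int n, int iters) {
    std::mt19937 rng(7);
    std::uniform_real_distribution<double> pos(-10.0, 10.0), u(0.0, 1.0);

    std::vector<Particle> ps(n);
    double gx = 0, gy = 0, gf = 1e18;  // swarm-wide (global) best
    for (auto& p : ps) {
        p.x = pos(rng); p.y = pos(rng);
        p.vx = p.vy = 0.0;
        p.bx = p.x; p.by = p.y; p.bf = f(p.x, p.y);
        if (p.bf < gf) { gf = p.bf; gx = p.bx; gy = p.by; }
    }

    const double w = 0.7, c1 = 1.5, c2 = 1.5;  // inertia and pull strengths
    for (int t = 0; t < iters; ++t) {
        for (auto& p : ps) {
            // Inertia plus slightly randomized pulls toward the particle's
            // own best and the swarm's best-known point.
            p.vx = w * p.vx + c1 * u(rng) * (p.bx - p.x) + c2 * u(rng) * (gx - p.x);
            p.vy = w * p.vy + c1 * u(rng) * (p.by - p.y) + c2 * u(rng) * (gy - p.y);
            p.x += p.vx; p.y += p.vy;
            double fit = f(p.x, p.y);
            if (fit < p.bf) { p.bf = fit; p.bx = p.x; p.by = p.y; }
            if (fit < gf)   { gf = fit;  gx = p.x;  gy = p.y; }
        }
    }
    return gf;
}
```

Because every particle is pulled toward both its own best and the swarm's best, the swarm contracts onto a single point as the best positions stop improving.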
Hopfield Neural Networks
For this project, we created a Hopfield Neural Network. This type of network consists of artificial neurons joined by weighted connections; memories are imprinted on the network by setting those weights. Data can then be presented to the network and “recognized” by the neurons, which settle toward the nearest imprinted pattern. This type of system can be used for image recognition.
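The imprint-and-recall cycle can be sketched like this. It is a minimal version, assuming bipolar (+1/−1) patterns and the standard Hebbian imprinting rule; the pattern size and update schedule are illustrative.

```cpp
// Hopfield network sketch: imprint a bipolar pattern with the Hebbian rule,
// then recover it from a corrupted copy by repeated neuron updates.
#include <cassert>
#include <vector>

using Pattern = std::vector<int>;  // entries are +1 or -1

struct Hopfield {
    int n;
    std::vector<std::vector<double>> w;  // symmetric weight matrix

    explicit Hopfield(int size) : n(size), w(size, std::vector<double>(size, 0.0)) {}

    // Hebbian imprinting: neurons that agree in the pattern get a
    // positive connection, neurons that disagree get a negative one.
    void imprint(const Pattern& p) {
        for (int i = 0; i < n; ++i)
            for (int j = 0; j < n; ++j)
                if (i != j) w[i][j] += double(p[i] * p[j]) / n;
    }

    // Set each neuron to the sign of its weighted input until nothing
    // changes; the fixed point reached is a stored memory.
    Pattern recall(Pattern s, int maxSweeps = 10) const {
        for (int sweep = 0; sweep < maxSweeps; ++sweep) {
            bool changed = false;
            for (int i = 0; i < n; ++i) {
                double h = 0;
                for (int j = 0; j < n; ++j) h += w[i][j] * s[j];
                int next = h >= 0 ? 1 : -1;
                if (next != s[i]) { s[i] = next; changed = true; }
            }
            if (!changed) break;
        }
        return s;
    }
};
```

Presenting a noisy copy of an imprinted pattern pulls the state back to the original, which is what makes the scheme usable for tasks like image recognition.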
Spatial Structure by Activator Inhibitor Cellular Automaton
In this project, we examined the creation of spatial structure by activator / inhibitor cellular automata in terms of spatial correlation and mutual information. To do this, we wrote a simulator in C++ that programmatically generated 30 x 30 cellular automata for simulation. These simulations were run until they stabilized and were then evaluated.
Backpropagation of Neural Networks
For this project, we first designed a simulator similar to that in the Hopfield Network, where artificial neurons were connected to one another between input and output layers. The connections were initialized with random weights. Training data was then passed through the network, the network’s output was evaluated, and the algorithm propagated the error backward, applying a calculated change to the weights feeding each neuron. This continued until the network was sufficiently trained, at which point testing data was pushed through the network for evaluation.
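A single forward-and-backward step can be sketched as follows. This is a minimal illustration rather than the project's network: it assumes a tiny fixed topology (2 inputs, one hidden layer of 2 sigmoid neurons, 1 output), and the training target (logical OR) and learning rate are arbitrary choices made for the example.

```cpp
// Backpropagation sketch: a tiny fully connected sigmoid network trained
// by gradient descent, one example at a time.
#include <cassert>
#include <cmath>
#include <random>

double sigmoid(double x) { return 1.0 / (1.0 + std::exp(-x)); }

struct Net {
    // wh[j][k]: weight from input k to hidden neuron j; bh: hidden biases;
    // wo, bo: weights and bias of the single output neuron.
    double wh[2][2], bh[2], wo[2], bo;

    explicit Net(unsigned seed) {
        std::mt19937 rng(seed);
        std::uniform_real_distribution<double> u(-0.5, 0.5);
        for (auto& row : wh) for (auto& w : row) w = u(rng);  // random init
        for (auto& b : bh) b = u(rng);
        for (auto& w : wo) w = u(rng);
        bo = u(rng);
    }

    // Forward pass: fills the hidden activations h and returns the output.
    double forward(const double in[2], double h[2]) const {
        for (int j = 0; j < 2; ++j)
            h[j] = sigmoid(wh[j][0] * in[0] + wh[j][1] * in[1] + bh[j]);
        return sigmoid(wo[0] * h[0] + wo[1] * h[1] + bo);
    }

    // One backpropagation step for one example; returns the squared error.
    double train(const double in[2], double target, double lr) {
        double h[2];
        double out = forward(in, h);
        // Output delta: error scaled by the sigmoid derivative out*(1-out).
        double dOut = (out - target) * out * (1.0 - out);
        // Propagate the delta back through each hidden neuron.
        double dH[2];
        for (int j = 0; j < 2; ++j)
            dH[j] = dOut * wo[j] * h[j] * (1.0 - h[j]);
        // Gradient descent: nudge every weight opposite its gradient.
        for (int j = 0; j < 2; ++j) {
            wo[j] -= lr * dOut * h[j];
            bh[j] -= lr * dH[j];
            for (int k = 0; k < 2; ++k) wh[j][k] -= lr * dH[j] * in[k];
        }
        bo -= lr * dOut;
        return (out - target) * (out - target);
    }
};
```

Repeating `train` over the whole data set drives the error down; once it is low enough, held-out testing data is run through `forward` alone to evaluate the network.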