One of the first things I wanted to try was image recognition.

Of course, the first thing I did was try to reinvent the wheel.

The retina of the eye contains photoreceptors known as “rods and cones” that serve as the inputs to our visual neurons.  Each of these photoreceptors connects to multiple neurons.  There are several types of neurons in the retina.  As a matter of fact, the neurological structure of the retina is very similar to the upper (outer) layers of the brain.  So some degree of image processing actually happens in the eye before the signal ever gets to the brain.

So, to replicate this model, I started thinking about how to connect inputs to multiple neurons for processing.  Some of my first experiments involved spatial layouts: I assigned each neuron a random location on a plane and connected it to every photoreceptor within a certain distance.  This produced a tangle of neurons that became more and more difficult to keep organized and efficient.
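For what it's worth, that early experiment can be sketched in a few lines.  The names and sizes here are my own illustration, not the original code:

```python
import math
import random

# Sketch of the distance-based wiring experiment: scatter neurons at
# random positions on a plane, then connect each photoreceptor to every
# neuron that lies within a fixed radius of it.
random.seed(0)

# A tiny 4x4 grid of photoreceptors (the "retina").
photoreceptors = [(x, y) for x in range(4) for y in range(4)]

# Twenty neurons scattered randomly over the same region.
neurons = [(random.uniform(0, 4), random.uniform(0, 4)) for _ in range(20)]

RADIUS = 1.0
connections = {
    p: [n for n in neurons if math.dist(p, n) <= RADIUS]
    for p in photoreceptors
}

# Each photoreceptor now feeds a different, arbitrary subset of neurons --
# which is exactly the tangle that became hard to keep organized.
print(sum(len(targets) for targets in connections.values()), "connections")
```

Even at this toy scale you can see the problem: the wiring depends on random geometry, so no two runs produce the same structure.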

At one point, while I was trying to simplify the design, I had a eureka moment:  convolutional networks actually provide exactly the same functionality.  It was all a matter of looking at the problem from another angle.

I don’t consider myself so smart that I could have thought of this on my own.  It was because I was already familiar with convolutional networks that I recognized the connection.  Of course, if I had taken the time to read the theory behind image recognition, I would have seen that the inspiration for convolutional networks in image processing was in fact the visual cortex.  Even a Wikipedia search could have told me that much…

But… most research papers are dry reads… They make little sense unless you are also a researcher in the same field.  They do, however, make excellent references if you are trying to do something similar and want to know exactly how another person (or team) cleared a specific hurdle with their system.

Convolutional networks make use of a structure called a feature map.  A feature map can be imagined as a small set of artificial neurons that connects to a portion of the input (image) and processes it.  Most convolutional networks use several feature maps per layer.  Each feature map is then applied to the entire image by sliding it across the input.
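To make that concrete, here is a minimal sketch of one feature map sliding across an image — a hand-rolled convolution in NumPy.  Real networks add bias terms, nonlinearities, and many maps per layer; the kernel values here are just an illustrative edge detector:

```python
import numpy as np

def slide_feature_map(image, kernel):
    """Slide one small 'feature map' (kernel) across a 2-D image.

    Each output value is the weighted sum of the patch the kernel
    currently covers -- the same few weights reused at every position.
    """
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            patch = image[i:i + kh, j:j + kw]
            out[i, j] = np.sum(patch * kernel)
    return out

# An 8x8 "image": dark left half, bright right half.
image = np.zeros((8, 8))
image[:, 4:] = 1.0

# A simple 3x3 vertical-edge detector.
kernel = np.array([[-1.0, 0.0, 1.0],
                   [-1.0, 0.0, 1.0],
                   [-1.0, 0.0, 1.0]])

response = slide_feature_map(image, kernel)
print(response.shape)  # (6, 6)
```

The response is strong only where the kernel straddles the dark/bright boundary, and zero over the flat regions — nine shared weights detecting one feature everywhere in the image.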

This method allows us to use a limited number of artificial neurons to process the entire image.  An actual brain (including the neurons in the visual cortex) needs many more neurons to accomplish the same process.  That said, an actual brain can parallel process much faster than any computer.
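The savings from reusing neurons this way is easy to put numbers on.  The sizes below are hypothetical, chosen only to illustrate the comparison between a fully connected layer and a convolutional one:

```python
# Hypothetical example: a 28x28 grayscale image.
image_pixels = 28 * 28  # 784 inputs

# Fully connected layer: every one of 100 hidden neurons has its own
# weight for every pixel.
dense_weights = image_pixels * 100  # 78,400 weights

# Convolutional layer: 100 feature maps, each a 3x3 kernel whose nine
# weights are shared across every position in the image.
conv_weights = 100 * (3 * 3)  # 900 weights

print(dense_weights, conv_weights)  # 78400 900
```

Same image, same number of feature detectors, roughly 87 times fewer weights — that is the "limited number of artificial neurons" doing the work of many.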

It is often said that the human brain is much more powerful than any computer on the planet due to its ability to massively parallel process.  However, when you consider that methods like the one above allow for a more efficient use of certain neurons for specific tasks, and remember that artificial neurons can activate millions of times faster than their natural counterparts, we can see that technology is rapidly narrowing the gap.

Whether that leads to true Artificial Intelligence or just smarter machines, and whether they love their human progenitors or seek to destroy the world, are questions only time can answer.  Regardless of the endgame, it is a fascinating subject.

Categories: Machine Learning
