Musings on Value vs. Pulsed Neurons and Locality of Reference

Neural systems in the style of Convolutional Neural Networks and their relatives use matrix math and tensors to simulate the overall activity level of neurons. The reason for modeling "overall activity level" rather than individual pulses is that the overall activity is simply the integration over time of what would be pulses in real neurons. Overall activity is also amenable to calculus, providing the derivatives needed for backpropagation.
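As a minimal sketch (my own illustration, not code from this post), a value-style neuron is just a differentiable function of a weighted sum, which is exactly what makes backpropagation possible:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Rate-coded "value" neuron: the output is a smooth activity level,
// not a discrete pulse, so it has a derivative everywhere.
double rate_neuron(const std::vector<double>& inputs,
                   const std::vector<double>& weights) {
    double sum = 0.0;
    for (std::size_t i = 0; i < inputs.size(); ++i)
        sum += inputs[i] * weights[i];    // weighted sum of input activities
    return 1.0 / (1.0 + std::exp(-sum));  // sigmoid: smooth and differentiable
}

// Sigmoid derivative in terms of its output y, as used by backpropagation.
double rate_neuron_grad(double y) { return y * (1.0 - y); }
```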
The reason for matrix math is locality of memory reference, plus the simplicity of the written equations describing what the system is doing. Microprocessors have a local CPU memory cache and main memory, and fetching data from main memory into the cache is very slow relative to processor instruction times.
Matrix math tends to make memory accesses happen in address-sequential order, so it is cache efficient. GPU designs also favor sequential and matrix operations and execute them extremely fast.
All of the above makes tensor-style processing for AI efficient.
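To make the locality point concrete, here is a hypothetical dense layer written as a row-major matrix-vector product (names are illustrative, not from any particular library). The inner loop walks memory in address order, so cache lines and the hardware prefetcher are fully used:

```cpp
#include <cstddef>
#include <vector>

// Dense layer as a matrix-vector product. W holds rows*cols weights in
// row-major order, so each row is one contiguous block of memory.
std::vector<double> dense_layer(const std::vector<double>& W,
                                const std::vector<double>& x,
                                std::size_t rows, std::size_t cols) {
    std::vector<double> y(rows, 0.0);
    for (std::size_t r = 0; r < rows; ++r) {
        const double* row = &W[r * cols];  // one contiguous row of weights
        double sum = 0.0;
        for (std::size_t c = 0; c < cols; ++c)
            sum += row[c] * x[c];          // address-sequential reads
        y[r] = sum;
    }
    return y;
}
```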

Pulsed neurons are more object-oriented, in that each neuron is naturally an object. The neural connections can run from any neuron to any other neuron, and each connection may carry pulses or an overall activity level. For simulations of neural systems such as robot control and subsumptive architectures, pulsed neurons have several advantages and disadvantages.
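A sketch of that object style, with names of my own invention: each neuron is an object, and a connection can target any neuron anywhere in the system, which is precisely where locality of reference is lost:

```cpp
#include <vector>

struct PulsedNeuron;  // forward declaration

// A connection may point at any neuron in the system: no locality guarantee.
struct Synapse {
    PulsedNeuron* target;
    double weight;
};

// Each neuron is an object holding its own state and fan-out.
struct PulsedNeuron {
    double charge = 0.0;           // input accumulated since the last pulse
    double threshold = 1.0;
    bool fired = false;
    std::vector<Synapse> outputs;  // may scatter across all of memory

    void integrate(double in) { charge += in; }

    // One time tick: pulse if charge crossed threshold, then reset.
    void tick() {
        fired = (charge >= threshold);
        if (fired) charge = 0.0;
    }
};
```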
The major disadvantage of pulsed neural architectures is the almost complete non-locality of reference. As a partial efficiency gain, one can use an N-dimensional map of neurons to represent a cluster of neurons and/or some concept in the system, recovering locality of reference to some degree. That locality makes a linear pass over the cluster, updating neuron charge, fire-state flags, and activation levels, efficient. One still has no locality of reference for connections to other areas of the system.
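One way to realize that cluster idea (again a sketch under my own naming, not the post's code) is to keep a cluster's per-neuron state in flat, parallel arrays so the local update pass is a purely sequential sweep:

```cpp
#include <cstddef>
#include <vector>

// One cluster's state in parallel arrays: slot i across all arrays is
// one neuron. The arrays are contiguous, so the update pass is sequential.
struct Cluster {
    std::vector<double> charge;
    std::vector<double> threshold;
    std::vector<unsigned char> fired;  // fire-state flags

    // Linear pass over the cluster: address-sequential, so it stays in cache.
    void tick() {
        for (std::size_t i = 0; i < charge.size(); ++i) {
            fired[i] = (charge[i] >= threshold[i]);
            if (fired[i]) charge[i] = 0.0;
        }
    }
};
```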
To further complicate things, a cognitive system that learns via self-adaptive connections will constantly rewire the neural connections, further reducing locality of reference.
One huge gain in subsumptive pulsed neural simulations is that the vast majority of neurons are inactive most of the time. If a neuron does not pulse in a given time tick, the downstream target neurons never even have to be accessed, so there is no locality-of-reference cost for them.
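Sketching that gain, with the same hypothetical flat-array layout as above: the per-tick sweep reads the fire flags sequentially, and only the rare firing neurons pay for their scattered downstream accesses:

```cpp
#include <cstddef>
#include <vector>

// Only neurons that pulsed this tick touch their (scattered) targets; the
// silent majority cost just one sequential flag read each.
void propagate(const std::vector<unsigned char>& fired,
               const std::vector<std::vector<std::size_t>>& targets,
               const std::vector<double>& weights,  // one weight per source neuron
               std::vector<double>& charge)         // downstream charge slots
{
    for (std::size_t i = 0; i < fired.size(); ++i) {
        if (!fired[i]) continue;          // no pulse: no downstream access at all
        for (std::size_t t : targets[i])  // rare, non-local writes
            charge[t] += weights[i];
    }
}
```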

