Cortical Neuron Model - SRS

In a Subsumptive Regular System (SRS), a regular array of neurons (the cortex) has the goal of predicting the future state of the subsumptive (cerebrum) system.
If the prediction matches the present state of the subsumptive system strongly, then the stimulus driving the subsumptive system into the predicted state is strong.  In the event of sudden changes in the actual external environment one wants to ignore the predicted state and react to the immediate environment.
This has the evolutionary advantage of pre-reacting to the environment and getting a jump on surviving the future state.
This document calls out the various axes of selection for algorithms.  Research has yet to show where the optimal decision points lie.
There are several decision points at which an algorithm must be selected:

Connectivity from the Subsumptive to the Cortex

The subsumptive part of the SRS is hard-wired to take inputs, compute concept mappings (abstractions) of the external environment, and generate outputs or reflexes.  This system does not have to be entirely hard-wired and can evolve slowly over time, but for the purposes of short-term operational algorithms, assuming it is unchanging is useful.
The cortex takes inputs from the subsumptive system and from other cortex neurons via weighted connections.  For each cortical neuron the input neurons are chosen at random such that N neurons get selected.

Semi-Random by 'distance'

The selection can be semi-random: one idea is to give each cortical neuron an affinity to some location in the subsumptive system, where connection is far more likely nearby and the connection count tapers off over distance.  In the actual physical brain distance could be literal.  In a digital system there is no inherent locality in two or three dimensions, so we can simulate distance; a good choice would appear to be conceptual distance.  For example the Concept Map for 'vertical line detection' would be very close to 'horizontal line detection', further from 'center of view curvature', even further from 'central attention point hue', and very far from 'left hand gripper contact pressure'.

If we choose a 3D concept space we should be able to separate concepts pretty well.  One could choose higher or lower dimensions for concept space, but 3D intuitively appears a good choice.  Cortical space could be mapped as a 2D overlay on top of the 3D 'cube' of the subsumptive system.  This would roughly apportion cortical neurons evenly over the subsumptive system, and make cortical neurons that are 'near each other' tend to predict, as a group, a sub-region of the subsumptive system.  Such groups would represent high-order concepts such as 'a chair'.
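
As an illustration, here is a minimal sketch of distance-weighted input selection in Python, assuming a unit 3D concept cube, an exponential taper, and an illustrative length scale; none of these specifics are fixed by the model.

    import numpy as np

    rng = np.random.default_rng(0)

    def sample_inputs(neuron_pos, candidate_pos, n_inputs, length_scale=0.2):
        # Connection probability tapers exponentially with conceptual
        # distance; the exponential form and length_scale are assumptions.
        d = np.linalg.norm(candidate_pos - neuron_pos, axis=1)
        p = np.exp(-d / length_scale)
        p /= p.sum()
        return rng.choice(len(candidate_pos), size=n_inputs, replace=False, p=p)

    # Example: a cortical neuron at the center of the concept cube drawing
    # 8 inputs from 1000 subsumptive neurons scattered in the cube.
    subsumptive_pos = rng.random((1000, 3))
    chosen = sample_inputs(np.array([0.5, 0.5, 0.5]), subsumptive_pos, 8)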

Full Random

One choice is fully random connectivity to any neuron in the entire system, cortical or subsumptive.  This would let the system correlate apparently unrelated environmental and conceptual factors.

Random cortical vs. subsumptive

One choice to be made is what fraction of the random connections go to cortical neurons vs. subsumptive ones. This balances how much we pattern-match on the direct concepts of the subsumptive system vs. the higher-dimensional concepts of the cortex. A sketch combining full-random selection with this fraction follows.
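
A minimal sketch, assuming the split is a single tunable probability (cortical_fraction is a name invented here for illustration):

    import random

    rng = random.Random(0)

    def sample_mixed_inputs(n_inputs, cortical_fraction, n_cortical, n_subsumptive):
        # Each input is drawn uniformly at random ('full random' connectivity);
        # cortical_fraction sets the cortex vs. subsumptive balance.
        sources = []
        for _ in range(n_inputs):
            if rng.random() < cortical_fraction:
                sources.append(("cortex", rng.randrange(n_cortical)))
            else:
                sources.append(("subsumptive", rng.randrange(n_subsumptive)))
        return sources

    # Example: 10 inputs, with a quarter expected to come from the cortex.
    print(sample_mixed_inputs(10, 0.25, n_cortical=5000, n_subsumptive=20000))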

Connection Reaping (Small-scale axon terminal arbor pruning)

The cortex neurons are always learning. If an input connection to a cortical neuron tends to have zero weight, that indicates it is not a factor in the prediction of the cortical neuron's target neuron(s).
Such irrelevant input connections should be deleted after a while, and new random connections should be generated such that the overall N count of input connections remains the same. A sketch of this reaping follows.
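
A minimal sketch, assuming a weight threshold and a dwell time before pruning (both constants are invented for illustration):

    from dataclasses import dataclass
    import random

    rng = random.Random(0)

    @dataclass
    class Connection:
        source: int
        weight: float = 0.0
        near_zero_steps: int = 0

    PRUNE_THRESHOLD = 0.01  # assumed cutoff for 'effectively zero' weight
    PRUNE_AGE = 1000        # assumed steps a weight must stay near zero

    def reap(connections, candidate_pool):
        # Replace long-dead connections with fresh random ones so the
        # total input count N is preserved.
        for i, c in enumerate(connections):
            c.near_zero_steps = c.near_zero_steps + 1 if abs(c.weight) < PRUNE_THRESHOLD else 0
            if c.near_zero_steps >= PRUNE_AGE:
                connections[i] = Connection(source=rng.choice(candidate_pool))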

Connections from the Cortex to the Subsumptive

Each cortex neuron is trying to predict the state of one or more target neurons in the subsumptive system. The simplest model is where one cortical neuron has one target. Physical brains appear to give a single cortical neuron many targets, so it is in effect predicting the general state of some cluster of neurons.
The target(s) are thus semi-randomly chosen for each cortical neuron.  The decision of single vs. many target neurons will require some experimental research.
The issue of weights for connections to targets is as yet undeveloped. It may be that no weight (in effect 1 or -1) is desirable.  If there is an adjustable weight, what is the adjustment criterion?
The system would work just as well if only trying to predict some fraction of the subsumptive system neurons.  Again, a topic for experimentation. A sketch of semi-random target assignment follows.
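
A minimal sketch, reusing the conceptual-distance bias from the input-selection example; n_targets and length_scale are assumed parameters:

    import numpy as np

    rng = np.random.default_rng(0)

    def assign_targets(cortical_pos, subsumptive_pos, n_targets=1, length_scale=0.2):
        # Bias target choice by conceptual distance, as with inputs;
        # n_targets=1 gives the simplest one-neuron-one-target model.
        targets = []
        for pos in cortical_pos:
            d = np.linalg.norm(subsumptive_pos - pos, axis=1)
            p = np.exp(-d / length_scale)
            p /= p.sum()
            targets.append(rng.choice(len(subsumptive_pos), size=n_targets,
                                      replace=False, p=p))
        return targets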

Weight Adjustment of Cortical Inputs

The cortex is always learning.  One must decide how to adjust the weights of the inputs to cortical neurons. In a standard neural model, learning is derived from knowing the correct output for a given input, thus training the system. Weight adjustment is often proportional to the derivative of the output error with respect to the incoming weights, through some transfer function. In our case every moment in time is a novel situation where the 'correct answer' is the next state of the subsumptive system, and the 'input' is its current state. A sketch of one possible update follows.
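
A minimal sketch of a delta-rule-style update, assuming a tanh transfer function and treating the target's state on the next tick as the 'correct answer'; the rate and transfer function are assumptions:

    import numpy as np

    LEARNING_RATE = 0.01  # assumed value

    def update_weights(weights, inputs_now, target_next):
        # Nudge the weights so the prediction made from the current
        # inputs moves toward the target's observed next state.
        prediction = np.tanh(weights @ inputs_now)
        error = target_next - prediction
        return weights + LEARNING_RATE * error * inputs_now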

Activity Selection

The entire Regular System (cortex) is almost always mis-predicting the state of the subsumptive system, because any given cortical neuron is a predictor for some completely different subsumptive state.  Only one state of the subsumptive system is the correct state being predicted, and only by a small subset of the cortical neurons. One must decide which cortical neurons are the 'current cortical neurons' out of the entire cortex.
Taking a hint from biology, we must assume neurons are not smart in the individual sense and react only to local stimulus when deciding to strengthen weights. By 'local' in this case we mean the other neurons the cortical neuron is connected to. The best overall indication for learning-rate adjustment is endorphin release signaling 'this is good'. Unfortunately, endorphin release is global to the system, so it does not help adjust individual neural weights. One possible local selection scheme is sketched below.
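
A minimal sketch, assuming each cortical neuron can compute a local match score against the present subsumptive state and that a fixed top fraction counts as 'current'; both the score and the fraction are assumptions:

    import numpy as np

    ACTIVE_FRACTION = 0.05  # assumed fraction of the cortex treated as 'current'

    def select_current(match_scores):
        # Keep only the cortical neurons whose predictions best match
        # the present subsumptive state; a top-k cut is one local rule.
        k = max(1, int(len(match_scores) * ACTIVE_FRACTION))
        return np.argsort(match_scores)[-k:]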

Strengthening Weights

The evident condition for strengthening weights is when a cortical neuron observes that its target subsumptive neuron is active (either + or -) and then adjusts the weights of all inputs toward a stronger match. If the target is not active, then no adjustment happens.
This is somewhat different from standard learning models in that if the target subsumptive neuron is not active (near-zero activation), then no adjustment happens.  If the target activation is negative (we allow this in our neural model), then adjustment does happen. A sketch of this gated update follows.
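
A minimal sketch, assuming an activity threshold and a simple Hebbian-style nudge; both constants are invented for illustration:

    import numpy as np

    ACTIVITY_THRESHOLD = 0.1  # assumed cutoff for 'active' in either direction
    LEARNING_RATE = 0.01      # assumed

    def strengthen(weights, inputs_now, target_activation):
        # No adjustment while the target sits near zero; otherwise nudge
        # the weights toward reproducing the target's signed activation.
        if abs(target_activation) < ACTIVITY_THRESHOLD:
            return weights
        return weights + LEARNING_RATE * target_activation * inputs_now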

Decreasing Weights

The competing force on weights would be a very low rate of weight decay, with weight increases also becoming limited as the connection gets older.  The decay gets slower as the connection ages, thus fixing the connection weight over time.
Also, an input connection that does not correlate with the target will tend toward zero weight.
One problem with such a system is that one could end up with all cortical neurons predicting the future state of a very small subset of subsumptive neurons.  Thus we end up wanting each target subsumptive neuron to allow only a single cortical neuron to have it as a prediction target. If we decide on multiple targets, then some general limit per target should be enforced.
As stated above, if a connection weight drops to near zero for a long time, the connection can be replaced with a new random connection. A sketch of age-dependent decay follows.
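
A minimal sketch, assuming an exponential slowdown of decay with connection age; the form and constants are assumptions:

    import math

    BASE_DECAY = 1e-4    # assumed per-step decay rate for a new connection
    AGE_SCALE = 10000.0  # assumed constant; decay slows as the connection ages

    def decay_weight(weight, age):
        # Young connections decay noticeably, old ones hardly at all,
        # so long-lived weights become effectively fixed.
        rate = BASE_DECAY * math.exp(-age / AGE_SCALE)
        return weight * (1.0 - rate)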

Interestingly, if each cortical neuron has one and only one target subsumptive neuron, then all the variables and code for cortical neurons can simply be rolled into the subsumptive neuron class implementation.  This would greatly increase locality of reference in memory.
We are not limited by the evolutionary development sequence that constrains biological brains, so the cortex does not have to be an add-on to the subsumptive system. A sketch of the merged layout follows.
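
A minimal sketch of the merged record, assuming a fixed input count of 8; all field names are invented for illustration:

    from dataclasses import dataclass, field
    import numpy as np

    @dataclass
    class Neuron:
        # One record per subsumptive neuron, carrying its single
        # predicting 'cortical' machinery inline for memory locality.
        activation: float = 0.0          # subsumptive state
        input_sources: np.ndarray = field(
            default_factory=lambda: np.zeros(8, dtype=int))
        input_weights: np.ndarray = field(
            default_factory=lambda: np.zeros(8))
        predicted_next: float = 0.0      # this neuron's cortical prediction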


