Self Cycling Asynchronous Neural Systems

Current neural network systems follow a general paradigm of being given some input and then producing a resulting output. A true general AI should act like a biological system and be self-sequencing: the neural system should just free-run, responding to inputs, creating outputs, and learning.

One method I am experimenting with in Cognate and NuTank is having a hard-wired set of concept maps (an N-dimensional set of neurons used to embody a concept, e.g. foveolar edge angles) that take inputs from the external world and produce hard-wired reactions as outputs to control a real or simulated robot. Such a system also uses subsumption to make the design much easier. This is the Concept Map System (CMS) for the purposes of this post.
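As a rough illustration of what a CMS might look like in code, here is a minimal sketch: named N-dimensional neuron grids plus an ordered list of hard-wired condition/reaction rules, where earlier rules override later ones in a crude subsumption style. All class names, map names, and rules here are hypothetical, not taken from the actual Cognate or NuTank code.

```python
import numpy as np

class ConceptMap:
    """A named N-dimensional grid of neuron activations embodying one
    concept (e.g. foveolar edge angles). Purely illustrative."""
    def __init__(self, name, shape):
        self.name = name
        self.state = np.zeros(shape)

class ConceptMapSystem:
    """Holds a set of hard-wired concept maps and their reaction rules."""
    def __init__(self):
        self.maps = {}
        self.reactions = []  # (condition_fn, action_fn) pairs, checked in order

    def add_map(self, name, shape):
        self.maps[name] = ConceptMap(name, shape)

    def add_reaction(self, condition, action):
        self.reactions.append((condition, action))

    def tick(self):
        """One time step: fire the first matching hard-wired reaction.
        Rule order gives a crude subsumption: earlier rules win."""
        for condition, action in self.reactions:
            if condition(self.maps):
                return action(self.maps)
        return None

cms = ConceptMapSystem()
cms.add_map("edge_angles", (8, 8))
cms.maps["edge_angles"].state[3, 3] = 1.0  # a strong edge detection
cms.add_reaction(
    lambda m: m["edge_angles"].state.max() > 0.5,
    lambda m: "turn_toward_edge",
)
print(cms.tick())  # -> turn_toward_edge
```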

For now, let's ignore the idea that the hard-wired concept map system might also be adapting and evolving over time, simulating neuroplasticity.

One then has a large 3D array of neurons acting as a pattern matcher on the entire CMS. What this 3D array (cortex) does is observe the CMS's current state and try to predict the state on the next or later time ticks. The 3D array then stimulates the CMS into the predicted state with a strength depending on the strength of the match.
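The predict-then-stimulate step could be sketched roughly as follows, with the CMS flattened into a single state vector for simplicity. The sizes, the tanh prediction, and the cosine-similarity match strength are all my own stand-in assumptions; the point is only the shape of the loop: observe, predict, then push the CMS toward the prediction in proportion to how well it matched.

```python
import numpy as np

rng = np.random.default_rng(0)

n_cms = 16  # toy size; the real CMS would be far larger
W = rng.normal(scale=0.1, size=(n_cms, n_cms))  # cortex prediction weights

def predict(cms_state):
    """Cortex observes the flattened CMS state and predicts the next state."""
    return np.tanh(W @ cms_state)

def stimulate(cms_state, prediction):
    """Feed the prediction back into the CMS, scaled by match strength:
    a confident match pushes harder toward the predicted state."""
    match = np.dot(cms_state, prediction) / (
        np.linalg.norm(cms_state) * np.linalg.norm(prediction) + 1e-9)
    gain = max(match, 0.0)  # no push when the prediction disagrees entirely
    return cms_state + gain * (prediction - cms_state)

state = rng.random(n_cms)
for _ in range(5):  # match-stim-match-stim...
    state = stimulate(state, predict(state))
```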

Such a system will go into asynchronous oscillation in a chain of match-stim-match-stim...
This oscillation should be asynchronous in that various parts of the pattern matcher will be getting stronger or weaker matches. This then affects the CMS, which feeds back to the 3D array.

Making every neuron in the 3D array have every neuron in the CMS as an input would overwhelm memory, and in a biological brain it would require a nearly infinite number of connections. Instead, the 3D array would have an affinity for areas of the CMS: the 3D array would have virtual coordinates, and each concept map in the CMS would have virtual coordinates. A given region of the 3D array would more strongly target neurons in the concept maps that are 'near' in coordinates. The predictive neurons in the 3D array would also take nearby neurons in the 3D array as inputs.
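One plausible way to realize this coordinate affinity is to give every neuron a virtual position and sample each cortex neuron's inputs with a probability that falls off with distance, so that wiring stays sparse but locally biased. The Gaussian falloff and all parameter values below are my own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)

# Virtual coordinates for cortex (3D array) neurons and CMS neurons.
cortex_xyz = rng.random((200, 3))
cms_xyz = rng.random((500, 3))

def sample_inputs(i, fan_in=32, sigma=0.15):
    """Pick fan_in CMS inputs for cortex neuron i, biased toward CMS
    neurons whose virtual coordinates are near the neuron's own."""
    d = np.linalg.norm(cms_xyz - cortex_xyz[i], axis=1)
    p = np.exp(-(d / sigma) ** 2)  # nearby neurons get most of the weight
    p /= p.sum()
    return rng.choice(len(cms_xyz), size=fan_in, replace=False, p=p)

inputs = sample_inputs(0)
```

The same sampling trick would also cover the intra-cortex connections, drawing from `cortex_xyz` instead of `cms_xyz`.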

In a continuous fashion, weights into the 3D array that tend toward zero would be destroyed, and a new semi-random connection would be made in their place.
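This continuous prune-and-rewire step might look something like the sketch below. Here only the weight values are reset; in the full system the target neuron would presumably be re-chosen semi-randomly as well. The threshold and scales are guesses:

```python
import numpy as np

rng = np.random.default_rng(2)
weights = rng.normal(scale=0.5, size=100)

def prune_and_rewire(w, threshold=0.02):
    """Connections whose weights have decayed toward zero are destroyed
    and replaced with fresh semi-random connections (modeled here as new
    small random weights)."""
    dead = np.abs(w) < threshold
    w = w.copy()
    w[dead] = rng.normal(scale=0.1, size=dead.sum())
    return w, int(dead.sum())

new_weights, n_rewired = prune_and_rewire(weights)
```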

Training will be continuous, and the rule would be that if something 'good' happens, weights are increased in the direction that improves the pattern match. 'Bad' would reduce weights ever so slightly. This simulates the rule that learning happens quickly when endorphins are present.
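The asymmetry of that rule (learn fast on 'good', back off only slightly on 'bad') can be written as a tiny reward-modulated update. The learning rates and the signed endorphin signal are illustrative assumptions:

```python
def update_weight(w, match_gradient, endorphin, good_rate=0.1, bad_rate=0.005):
    """Asymmetric reward-modulated update: a positive simulated endorphin
    level moves the weight strongly in the direction that improves the
    pattern match; a negative one nudges it back only slightly."""
    rate = good_rate if endorphin > 0 else bad_rate
    return w + rate * endorphin * match_gradient
```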

The system will be driven overall by whatever produces the simulated endorphin level. It will be interesting to investigate various concepts of 'good' for the system.
For example, 'good' might be defined as finding food, killing an enemy (a big goal in the NuTank 'game'), or simply a high degree of correlation across the system. This high correlation might represent a simulated 'ah ha!' moment of a sweeping correlation.
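The correlation-based notion of 'good' is easy to prototype: measure the mean absolute pairwise correlation across activity traces and treat a high value as the 'ah ha!' signal. This is only one speculative way to formalize it:

```python
import numpy as np

def correlation_reward(activity):
    """One candidate 'good' signal: mean absolute pairwise correlation
    across activity traces (rows = units, columns = time steps). A
    sweeping correlation across the system scores near 1.0."""
    c = np.corrcoef(activity)
    n = c.shape[0]
    off_diagonal = c[~np.eye(n, dtype=bool)]
    return float(np.abs(off_diagonal).mean())
```

Discrete events like finding food or a kill in NuTank could simply be added to this as fixed endorphin bonuses.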

More later.
