Subsumption and Neural Hiding - The 'Object Oriented Paradigm' of Artificial Intelligence

In computer programming, Object Oriented Programming (OOP) gave programmers a neat way to compartmentalize programs rather than having a mess of disorganized code and data.  One can make an object, treat it as a black box, and reduce the cognitive load on the programmer.  Nifty.

In reactive neural networks, where one takes world inputs and generates response outputs, subsumption vastly simplifies the work of designing the neural processing.

From dictionary.com:

verb (used with object), sub·sumed, sub·sum·ing.

  1. to consider or include (an idea, term, proposition, etc.) as part of a more comprehensive one.
  2. to bring (a case, instance, etc.) under a rule.
  3. to take up into a more inclusive classification.

Subsumption in the context of neural networks is where some reflex is designed as a response to inputs or other neurons and overrides 'lower level' reflexes.  Then, when some other reflex is desired, it can be designed in turn.  The problem with linear neuron implementations is that every time a reflex is added it can affect every other reflex in the system.  By linear I mean non-spiking neurons whose real-valued outputs are summed to some output.  The system ends up trying to act on every reflex at the same time.
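A minimal sketch of the linear problem, with hypothetical names and weights: two reflex drives are summed into one motor output, so both always contribute and the tank acts on a blend of every reflex at once.

```python
# Hypothetical linear setup: two reflexes driving one motor output.
# With linear (non-spiking) neurons, every reflex's value is summed
# forward, so the system acts on all of them at the same time.

def linear_motor(seek_food_drive, avoid_wall_drive):
    # Each drive is a real value; the motor output is just their weighted sum.
    return 1.0 * seek_food_drive + 1.0 * avoid_wall_drive

# Both reflexes contribute simultaneously: the result is a compromise
# motion in which neither reflex cleanly wins.
print(linear_motor(0.75, -0.5))  # 0.25
```

Adding a third reflex to this sum would shift the output of every situation the tank encounters, which is exactly why each new reflex can disturb all the others.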

With spiking neurons, the various reflexes that are inactive appear not to exist in the system and are thus hidden.  I call this neural hiding.  When the inputs to some reflex indicate that the reflex should be active, it can activate and override other, less important reflexes.
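Neural hiding can be sketched like this (all names and thresholds are hypothetical): a reflex is gated by a threshold condition, so below threshold it emits nothing at all rather than a small value that still sums in, and an active higher-priority reflex simply overrides the rest.

```python
# Hypothetical sketch of neural hiding: a reflex gated by a threshold
# "condition" neuron.  Below threshold the reflex emits nothing -- it is
# invisible to the rest of the system -- instead of leaking a small
# value into a running sum.

def gated_reflex(stimulus, drive, threshold=0.5):
    if stimulus < threshold:
        return None   # condition neuron does not fire: reflex is hidden
    return drive      # condition neuron fires: reflex output passes through

def motor(reflex_outputs):
    # Subsumption: the first (highest-priority) *active* reflex wins outright.
    for output in reflex_outputs:     # ordered highest priority first
        if output is not None:
            return output
    return 0.0                        # nothing active: default behavior

# Wall avoidance (high priority) stays hidden until its stimulus crosses
# threshold; until then the lower-priority reflex drives the motor.
print(motor([gated_reflex(0.2, -1.0), gated_reflex(0.9, 0.6)]))  # 0.6
print(motor([gated_reflex(0.8, -1.0), gated_reflex(0.9, 0.6)]))  # -1.0
```

The key design choice is that an inactive reflex contributes `None`, not zero: zero would still be a vote in a linear sum, while `None` removes the reflex from the system entirely.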

When one is designing a tank in NuTank this becomes very important, because design can be incremental.  One simply designs the given reflex and then, usually, a single neuron that determines the conditions for its activation.  Experience has shown that this works in practice.
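The incremental workflow can be sketched as follows (this is an illustrative structure, not NuTank's actual API): each reflex is a condition/response pair, and adding a new behavior means appending one pair plus its single activation condition, without touching the reflexes already in place.

```python
# Hypothetical sketch of incremental reflex design (not NuTank's API):
# each reflex is an (activation condition, response) pair, checked in
# priority order.

reflexes = [
    # highest priority first
    (lambda s: s["wall_dist"] < 0.3, lambda s: "turn_away"),
    (lambda s: s["enemy_seen"],      lambda s: "fire"),
]

# Incremental step: a new "seek food" reflex plus its single
# activation-condition neuron, appended without editing anything above.
reflexes.append((lambda s: s["food_scent"] > 0.5, lambda s: "drive_to_food"))

def act(sensors):
    for condition, response in reflexes:
        if condition(sensors):        # the gating neuron fires...
            return response(sensors)  # ...and this reflex subsumes the rest
    return "wander"                   # default when no condition fires

print(act({"wall_dist": 0.9, "enemy_seen": False, "food_scent": 0.8}))
# drive_to_food
```

Because each reflex is hidden behind its own condition, appending a new one cannot disturb the tank's existing behaviors in situations where its condition never fires.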

From an evolutionary perspective, neural hiding and subsumption mean genetic mutations are not always detrimental.  A small addition of somewhat random neurons might not corrupt the whole system and leave a disabled creature, and might even result in beneficial behavior.

Correction: The neurons can be spiking or linear.  It is how the stimulus is summed forward that matters.

Comments

  1. Cool dad, well summed. Next, help me hack my ADHD, so I can study Summation Notation? :)
