Until 2017, time and resources were wasted training neural networks with a fixed number of units, backpropagating through the same network multiple times. Then the first concept of dynamically growing neural networks by Munteanu initiated a generation of extensions such as Transformers… and papers like “Attention Is All You Need”…
How can a dynamic nonlinear neural network be interpreted and understood more easily?
The concept is much simpler than the hype from deep learning enthusiasts suggests.
- A neuron or task is seen as a main module connected to a list of its more primitive parts from lower levels; any new complex neuron/task is simply a composition of two or more primitive tasks.
- The conclusion mechanism, or so-called inference, is a function that detects when one or more subtasks have been completed. I call this state a conclusion because it resembles the conclusion a person draws after seeing a series of events. A conclusion is the sum of oscillations from the lower-level subtasks/neurons; put simply, it measures how many subtasks have been completed.
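The conclusion mechanism above can be sketched in a few lines. This is a minimal illustration, not the author's implementation; all names (`Neuron`, `observe`, `conclusion_level`) are assumptions chosen for the example.

```python
class Neuron:
    """A higher-level neuron composed of more primitive subtasks."""

    def __init__(self, name, subtasks):
        self.name = name
        self.subtasks = subtasks   # names of lower-level neurons/subtasks
        self.completed = set()     # subtasks observed as completed so far

    def observe(self, subtask):
        """Register that a lower-level subtask has fired/completed."""
        if subtask in self.subtasks:
            self.completed.add(subtask)

    def conclusion_level(self):
        # The "sum of oscillations" is modelled here as a simple count
        # of how many subtasks have been completed.
        return len(self.completed)

    def concluded(self):
        """The conclusion fires once every subtask has been completed."""
        return self.completed == set(self.subtasks)


make_tea = Neuron("make_tea", ["boil_water", "add_tea", "pour"])
for step in ["boil_water", "add_tea"]:
    make_tea.observe(step)
print(make_tea.conclusion_level())  # 2 of 3 subtasks done
print(make_tea.concluded())         # False until "pour" is also observed
```

Counting completed subtasks is the crudest possible stand-in for summing oscillations, but it captures the idea that a conclusion is a function of lower-level completions.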
- The reflex. I call a reflex the reaction momentum of an agent toward its environment given an active conclusion, or so-called context. Reflexes are also called diffusive or generative stimulations: they move stimulation in the opposite direction of inference, from higher levels down to lower ones, as demonstrated by DALL·E and GPT extensions, where for a given condition/context text the network returns a set of pixels forming an image.
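As a toy sketch of the reflex, the top-down direction can be modelled as sampling one of the reactions associated with an active context. The context names and reaction lists below are invented for illustration only.

```python
import random

# Hypothetical table mapping an active context (a concluded higher-level
# neuron) to the lower-level reactions it can diffuse into.
REFLEXES = {
    "obstacle_ahead": ["brake", "steer_left", "steer_right"],
    "green_light":    ["accelerate"],
}

# Seeded generator so the sampling is reproducible in this sketch.
_rng = random.Random(0)

def reflex(context):
    """Sample one reaction associated with the active context, if any."""
    options = REFLEXES.get(context, [])
    return _rng.choice(options) if options else None

print(reflex("green_light"))  # "accelerate" is the only option
```

In a real generative model the "table" is of course a learned mapping, but the direction of flow is the same: from a high-level context down to concrete low-level outputs.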
- The attention. Attention is a conclusion/context that is kept in mind until a reflex or reaction is sampled. You can also think of attention as an active/valid context that helps to make a decision and react to the environment. When a neuron is concluded, its more primitive parts, the lower-level neurons, are reset and become inactive, ready to detect or reflect new actions.
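The hold-then-release behaviour of attention, and the reset of the lower levels, can be sketched as follows. This is a hedged illustration under assumed names (`Module`, `detect`, `react`), not a definitive implementation.

```python
class Module:
    """A higher-level module whose conclusion is held as attention."""

    def __init__(self, name, parts):
        self.name = name
        self.active = {p: False for p in parts}  # lower-level neuron states
        self.attention = None                    # context kept in mind

    def detect(self, part):
        """Activate a primitive part; conclude when all parts are active."""
        self.active[part] = True
        if all(self.active.values()):
            self.attention = self.name   # keep only the high-level context
            for p in self.active:        # reset primitive parts for reuse
                self.active[p] = False

    def react(self):
        """Sampling a reaction releases the held attention."""
        context, self.attention = self.attention, None
        return context


m = Module("avoid_obstacle", ["see_obstacle", "estimate_distance"])
m.detect("see_obstacle")
m.detect("estimate_distance")
print(m.attention)             # "avoid_obstacle": context held in mind
print(any(m.active.values()))  # False: lower levels reset for new detections
```

Note how the lower-level states are cleared the moment the conclusion fires, while the high-level context survives until `react()` consumes it.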
- The past, present, and future have been used as components of problem and solution modelling. 1. The past. I call past the events that have previously happened and that can be used as the condition/context for a reflex or a reaction of an agent to its environment. 2. The present. I call present the state of reaction/diffusion from the upper levels toward the lower ones given an active context that encourages a reaction of the agent toward the environment; for example, a car detects an obstacle and reacts with avoidance. 3. The future. I call future all possible reflexes associated with the present context that can help to decide and react to the environment but have not yet been sampled.
- The inactive neurons. When a problem is solved or the current task is completed, only the higher-level representation, the leader/interface of the problem, is kept in mind, while the lower-level elements are reset for the next use. This helps the learning process, since to compose new experience you only need references to the high-level representation/interface of a problem, which makes information modelling efficient.
- The learning mechanism. Learning is done by accumulating the higher-level modules/interfaces. The list of interfaces is then put together and associated with a new higher-level module in the neural hierarchy. In this way, by composing old experiences, you form new and more complex ones.
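The learning mechanism above, composing accumulated interfaces into a new higher-level module, can be sketched as growing a dictionary-based hierarchy. The task names and the `learn` helper are illustrative assumptions, not part of any described implementation.

```python
# The hierarchy maps each module to the list of lower-level interfaces
# it is composed of; primitive tasks have no parts.
hierarchy = {
    "boil_water": [],
    "add_tea": [],
    "pour": [],
}

def learn(hierarchy, name, interfaces):
    """Compose existing high-level interfaces into a new, higher-level module."""
    # New experience only references existing interfaces, never raw parts.
    assert all(i in hierarchy for i in interfaces), "unknown interface"
    hierarchy[name] = list(interfaces)
    return hierarchy

learn(hierarchy, "make_tea", ["boil_water", "add_tea", "pour"])
learn(hierarchy, "serve_guest", ["make_tea"])  # reuse an old experience
print(hierarchy["serve_guest"])                # ['make_tea']
```

The point of the sketch is the growth step: the network is extended with a new node rather than retrained, and each new module only needs references to existing interfaces, mirroring the inactive-neurons point above.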