RE: CHANNEL. More thinking about the future of the space. ... [ Word Count: 1.250 ~ 5 PAGES | Revised: 2018.2.9 ]

Preview. Next post. Continued.

We popularly address some notions of doing logic in a way such that the generated logs allow simple machine inference and can be used to reveal where a logical set of maps fails to achieve a goal, in General Problem Solver-style problem solving.

 

— 〈  2  〉—

NEURONAL GROWTH

 

If more than one load is present, a clone of the class-like actor-agent is created: an instance with only one load in it, which inherits the operationsset of the actor-agent it is cloned from, while that load is removed from the loadsset of the class-like actor-agent. The clone has only temporary existence. The number of actor-agents that comprise the system therefore increases. But in whatever order it happens, the class-like actor-agent works to become progressively less loaded, until it is unloaded, unless new loads are passed to it as messages faster than it unloads itself.
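A minimal sketch of this unloading-by-cloning, assuming a toy `ActorAgent` class of my own invention (the names `operationsset` and `loadsset` come from the text; everything else here is illustrative, not a definitive implementation):

```python
# Toy sketch: a class-like actor-agent holding several loads spawns
# single-load clones that inherit its operationsset, so the master's
# loadsset shrinks one load at a time until it is unloaded.

class ActorAgent:
    def __init__(self, operationsset, loadsset):
        self.operationsset = list(operationsset)  # operations this agent can run
        self.loadsset = list(loadsset)            # pending loads (message arguments)

    def unload_step(self):
        """While more than one load is present, split off a temporary
        single-load clone that inherits this agent's operationsset."""
        if len(self.loadsset) > 1:
            load = self.loadsset.pop()                      # remove one load from the master
            return ActorAgent(self.operationsset, [load])   # temporary clone, one load
        return None

master = ActorAgent(operationsset=["ExcludeN"], loadsset=["M1", "M2", "M3"])
clones = []
while (clone := master.unload_step()) is not None:
    clones.append(clone)

print(len(clones), master.loadsset)  # 2 ['M1'] — two clones made, master keeps one load
```

Note that the count of actor-agents grows with every step, matching the point above: unloading trades a single multi-load agent for many single-load ones.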

This is an old approach, dating back to Hewitt and earlier. What we really want to discuss is why this, often more roundabout, method of doing things is useful. Refer for the moment to [BOD06.1,2] for a review of the literature.

It's a controlled-experiment model of computation: even when many arguments are needed for an operation to complete, only one thing at a time is done, changed, and logged, in the style of Newell, Shaw, and Simon, and the log can be used to learn. As was suggested back then. Each failure pragmatically tags and identifies what an unclassified object is, and the operation that identifies it, which variable specifically makes it differ from other objects, is known. Or can be learned after some randomness and repetition.
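One way to picture the learnable log, as a toy sketch with assumed names (`apply_op` and the record fields are mine, not from any specific system): each single-argument, single-object step appends a structured record, so a failure record by itself tags which object an operation singled out as different.

```python
# Toy sketch of controlled-experiment logging: one operation, one object,
# one log record per step. A failure record pragmatically identifies the
# unclassified object and the operation that distinguished it.

log = []

def apply_op(op_name, op, obj):
    """Run a single-argument operation on a single object and log the outcome."""
    try:
        result = op(obj)
        log.append({"op": op_name, "arg": obj, "ok": True})
        return result
    except Exception as exc:
        log.append({"op": op_name, "arg": obj, "ok": False, "error": str(exc)})
        return None

apply_op("Halve", lambda x: x // 2, 10)      # succeeds, logged as ok
apply_op("Halve", lambda x: x // 2, "ten")   # fails: tags "ten" as differing

failures = [rec for rec in log if not rec["ok"]]
print(failures[0]["op"], failures[0]["arg"])  # Halve ten
```

Because exactly one thing changed per step, no extra bookkeeping is needed to attribute the failure: the record already names the operation and the object.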

There is always confusion regarding how this occurs, even though it is contained in the axioms, and mentioned more or less explicitly in some papers; therefore let us discuss this point.

For example, to exclude a subset N from a list of phrases M, an operation requires two arguments before it can work: N and M.

But here the class-like actor-agent only selects an argument when more than one argument is present in the loadsset, separates out a single argument, and clones the operationsset; only then, when only a single argument is present, does any operation take and transform any argument.

There are Petri net methods, negotiation methods, and memory in operation instances that can treat multiargument transformations, multiarrows in a category, but ideally, for learning, there is simply operation construction and updating of the operationsset of actor-agents as needed.

One actor-agent gets an argument, such as N, and has a BuildExcludeOperation operation that builds ExcludeN. ExcludeN is then passed to the actor-agent which will receive M, or already has M, and occurs there as a load. That actor-agent has a Hewittian UpdateIt operation in it. The temporary actor-agent can only run UpdateIt on an operation-type load, in this case ExcludeN. There are many ways to go from here, but, for example, being a clone of the same actor-agent, there can be private encapsulated state that gets cloned and can be passed as a token that unlocks the class-like actor-agent, the master whose instance the temporary actor-agent is, allowing the clone to modify it in some specific manner, such as updating its operationsset as if it had originally been created with this other operationsset.
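The message flow above can be sketched as follows. The names `BuildExcludeOperation`, `ExcludeN`, and `UpdateIt` come from the text; realizing ExcludeN as a closure and UpdateIt as a plain dictionary update is one assumed realization among the "many ways to go from here", not the only one (and the unlock-token mechanics are omitted):

```python
# Hedged sketch of the flow: one agent builds ExcludeN from N, passes it
# as an operation-type load to the agent holding M, where UpdateIt installs
# it into the operationsset as if it had always been there.

def build_exclude_operation(n):
    """BuildExcludeOperation: given the subset N, construct the
    one-argument operation ExcludeN, later applicable to a phrase list M."""
    n_set = set(n)
    def exclude_n(m):
        return [phrase for phrase in m if phrase not in n_set]
    return exclude_n

class ActorAgent:
    def __init__(self, operationsset):
        self.operationsset = dict(operationsset)

    def update_it(self, name, op_load):
        """UpdateIt: install an operation-type load into the operationsset,
        as if the agent had originally been created with it."""
        self.operationsset[name] = op_load

# Agent A receives N and builds ExcludeN...
exclude_n = build_exclude_operation(["foo", "bar"])

# ...which arrives as a load at the agent that has (or will receive) M.
agent_m = ActorAgent(operationsset={})
agent_m.update_it("ExcludeN", exclude_n)

M = ["foo", "baz", "bar", "qux"]
print(agent_m.operationsset["ExcludeN"](M))  # ['baz', 'qux']
```

Each step along the way still takes a single argument at a time, which is the point: the two-argument exclusion is decomposed into single-load messages that keep the log simple.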

Then the class-like actor-agent receives M and happens to run ExcludeN on it. But the CEMOC (controlled-experiment model of computation) format is preserved and aids machine learning.

Disclaimer: This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License. This text is a popular, speculative discussion of basic science literature for the sake of discussion and regarding it no warranties of any kind exist. Treat it as conjecture about the past and the future. As subject to change. Like the open future itself is subject to change. No promise to do anything or that anything is done is involved.
