One objection to heavily heuristic-based methods is that they separate the operation from its packaging (the actor agent), and further separate the tag token from the operations and actor agents it synchronizes. For the same reason, they also allow the "wrong" operation to be applied to the "wrong" input, whether that input comes from the end user or from another kind of actor agent.
Regarding that, consider it another way. While such methods are definitely less useful (or at least less straightforward) for implementing a particular procedure that always gives the correct answer and halts, they are actually quite useful in situations where the system must be adaptive and must give some kind of answer. Where the answer that is returned is, sufficiently often, sufficiently close to correct.
Heuristics that automatically construct heuristics that automatically construct heuristics, and so on, yet that will definitely terminate "quickly" enough to give a "useful" result on inputs and combinations from the end user that we cannot foresee. (The other problem addressed by the same means is that we need to automate the construction of heuristics itself. Specialized-to-the-case heuristics are needed, yet the number of cases, because they arise as possible combinations of large numbers of events, greatly exceeds the ability of even large teams of developers to explicitly write heuristics for each special case.)
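To make the termination guarantee concrete, here is a minimal sketch of a heuristic that constructs heuristics under a hard evaluation budget. Everything here is hypothetical illustration (the weight-vector representation, the sample format, the budget of 200): the point is only that the search is bounded, so it always terminates and always returns the best heuristic found so far, "sufficiently often, sufficiently close" to correct.

```python
import random

def make_heuristic(weights):
    """Build a scoring heuristic from a weight vector (hypothetical representation)."""
    def h(features):
        return sum(w * f for w, f in zip(weights, features))
    return h

def construct_heuristic(samples, budget=200, seed=0):
    """Search for a heuristic that classifies labeled samples correctly.

    The hard bound `budget` guarantees termination; the result is simply
    the best candidate seen so far, not a provably correct procedure.
    """
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    n_features = len(samples[0][0])
    for _ in range(budget):  # hard bound: always terminates "quickly"
        weights = [rng.uniform(-1, 1) for _ in range(n_features)]
        h = make_heuristic(weights)
        # Score a candidate by how often its sign agrees with the label.
        score = sum((h(x) > 0) == label for x, label in samples)
        if score > best_score:
            best, best_score = h, score
    return best, best_score / len(samples)
```

The same loop can itself be treated as a candidate generated by a higher-level constructor, which is where the "heuristics that construct heuristics that construct ..." tower comes from.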
Because consider the case, which is ubiquitous, where combinations occur that cannot be foreseen. And where we have no advance intuition for handling most of those combinations.
We need to build tools to be able to write such heuristics.
The correct or incorrect result is less meaningful than "an interesting" or "a useful" result. Key word being "a", one of many.
Such algorithms can do a lot of things that cannot be done otherwise.
(And equally importantly, the specific form we are using is chosen to a large degree so that certain mathematical tools can be applied to prove correctness of, or reason about, the possible algorithms we may want to build before we implement any of them. It addresses formal tractability.)
The ability to do very noisy heuristic reasoning, automating the automation of search where it is unclear in advance how to process an input, is needed in order to later implement cognitive-science-style models of learning. And so that the systems we build return something to the end user with a desired probability within a desired time frame.
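"Return something with a desired probability within a desired time frame" is the classic shape of an anytime algorithm. The following is a minimal sketch under assumed interfaces (`candidates_of` and `score` are hypothetical callables standing in for whatever noisy search the system runs): keep improving until the deadline, then hand back the best answer so far rather than nothing.

```python
import random
import time

def anytime_search(candidates_of, score, deadline_s=0.05, seed=0):
    """Anytime search sketch: the caller always gets *some* answer within
    the time budget; the quality improves with the time allowed."""
    rng = random.Random(seed)
    stop = time.monotonic() + deadline_s
    best, best_score = None, float("-inf")
    while time.monotonic() < stop:
        candidate = candidates_of(rng)   # draw the next candidate answer
        s = score(candidate)
        if s > best_score:
            best, best_score = candidate, s
    return best, best_score
```

The monotonic clock matters here: a wall-clock deadline taken from `time.time()` can jump under clock adjustments, whereas `time.monotonic()` cannot.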
And it also generates data for such machine learning to improve performance.
We'd also like to reduce the dependence of a problem-solving system's success on exact boundary conditions. Boundary conditions that must be "right" for a system to give good results are more often than not simply absent when actual problem solving occurs.
The fact that certain operations fail, and often produce long paths of backtracking, is actually needed to teach the system pragmatic reasoning. This idea was present in quite a few systems going back decades, starting with Marvin Minsky and Herbert Simon.
Logging, and a few other things we'll talk about later (because we need the capabilities already discussed to implement them nicely), capture failure and end up tagging inputs that are not well understood. This is used to improve the probability of quickly understanding later inputs.
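A minimal sketch of that failure-driven tagging, under an assumed design (the class name, the threshold, and the "poorly-understood" tag are all hypothetical): every attempted operation on an input is logged; operations that succeed become tags, and inputs that accumulate enough failures get flagged for different handling later.

```python
from collections import defaultdict

class FailureLog:
    """Log (operation, outcome) pairs per input and derive tags from them."""

    def __init__(self, threshold=3):
        self.attempts = defaultdict(list)   # input_id -> [(op, ok), ...]
        self.threshold = threshold          # failures before flagging

    def record(self, input_id, op, ok):
        self.attempts[input_id].append((op, ok))

    def tags(self, input_id):
        trials = self.attempts[input_id]
        failures = sum(1 for _, ok in trials if not ok)
        tags = {op for op, ok in trials if ok}   # successes become tags
        if failures >= self.threshold:
            tags.add("poorly-understood")        # route for extra attention
        return tags
```

The log serves double duty: it tags the individual input now, and it is exactly the data a later machine-learning pass would train on to improve performance, as noted above.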
For intuition, consider a file. We can't look inside it. We don't understand it just by looking at it. No amount of analyzing the file in fewer than T steps will reveal what it is. Hmmm.
But a thing "is" how it can be "used", what it can do and what can be validly done to it.
We have a paint program on a desktop and ... exactly ... we spawn a virtual machine with the program and the file, and safely try opening the file with the paint program. Suppose the program always tries to open the file, interpreting it as if it were an image in the correct format. Then if it opens, and our "visual inspection", which is a cue-based program, sees it is some kind of image, even if it doesn't know what the image represents, this tags the file, and the tag can be used to select other things to do with it. But if it shows garbled noise, or nothing, or can't be opened even with a generous interpretation, that also reveals something about what the file is. Then, for example, you might take the file and try opening something we know for sure is an image of a cat, with the file as the program. Etc. Etc. Etc. Meanwhile we log successes and failures. This is part of logical AI with a long history. Two major issues even just a decade ago were that computing systems and memory were an order of magnitude or more too limited, and that the market for such capabilities was absent.
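The probing loop above can be sketched in a few lines. This is an illustrative toy, not the described VM sandbox: exception handling stands in for the virtual machine, and magic-number checks stand in for actually opening the file in a paint program. The interpreter names and signatures are assumptions for the example.

```python
def probe(data, interpreters):
    """A thing 'is' what can validly be done to it: the set of
    interpreters that accept the blob becomes its tag set."""
    tags = set()
    for name, try_open in interpreters.items():
        try:
            if try_open(data):
                tags.add(name)
        except Exception:
            pass        # failure is information too; a real system logs it
    return tags

# Hypothetical minimal "interpreters": real magic-byte prefixes stand in
# for spawning a VM and opening the file with an actual program.
interpreters = {
    "png": lambda b: b.startswith(b"\x89PNG\r\n\x1a\n"),
    "gif": lambda b: b[:6] in (b"GIF87a", b"GIF89a"),
    "utf8-text": lambda b: bool(b.decode("utf-8")),  # raises on invalid UTF-8
}
```

For example, a blob starting with the PNG signature gets tagged `png` and nothing else (its first byte is invalid UTF-8, so the text interpreter fails, which is itself informative), while plain ASCII bytes get tagged only `utf8-text`.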
(For example, in Prolog-type systems and automatic theorem provers, X "means" whatever list of things X can be used to construct a proof of. Or, more precisely, also the list of things it failed to prove. A small part of that list is a first step to tagging X as such-and-such and not something else. Distinguishing it. "Not"-X is simply failure to be used in that way, given reasonable search and backtracking.)
It should just be noted: it is very important that exactly the discussed syntax and process logic occurs. Otherwise certain kinds of mathematical reasoning we'll want to use later, for developing the actual end-user system, will not be valid. The fact that temporary actors must be created, or that something that could be done directly is broken into complex objects or partial computations passed around as messages, etc., is not arbitrary ...
The big problem with things like object Petri nets, coalitional networks, etc., is that a human finds them intractable in the details for all practical purposes, even if a couple dozen pages of description is enough to document them vaguely, and that they lack abstractly valid methods for designing a system that generically and quickly delivers a specific desired capability when the desire arises. Commutativity assumptions and transitivity assumptions, just to start the list, are typically not met.
I'm a scientist who writes science fiction under various names.
◕ ‿‿ ◕ つ
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License Text and images: ©tibra. Disclaimer: This text is a popular, speculative discussion of basic science literature for the sake of discussion and regarding it no warranties of any kind exist. No promise to do anything or that anything is done is involved. Treat it all as conjecture about the past and the future. As subject to change. Like the open future itself is subject to change. Except if this text happens to be explicitly fantasy or science fiction, in which case it is just that. Then exists another Disclaimer: This is a work of fiction: events, names, places, characters are either imagined or used fictitiously. Any resemblance to real events or persons or places is coincidental.