Still busy at the moment.
:_( ... OR ... : )
(Which depends on how you look at being busy. And busy with what?)
Will be back to actively writing here nearer to the end of February or early March, I think.
Will post some more science fiction; most of what I've written has been for print. Also some speculation and science.
Will also write more about development. Something that I would like to write more about is the perspective I suggest for judging code. In one sentence: how easily the code can be decomposed, while everything can still be composed in any order if needed. I had a paper about that recently.
(For example, can we add new features to an existing web app by writing another operation as an .erl file, just putting it on the server with the others, and making only trivial changes to the existing definitions? This also makes the app much easier to understand and develop quickly when there are many, many more than a few small, simple operations.)
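In Erlang terms this would be dropping in another operation module. A minimal sketch of the same decomposition in Python (all names here are hypothetical, just for illustration):

```python
# A registry of operations: adding a feature means registering one more
# operation; the dispatch code below never changes.
OPERATIONS = {}

def operation(name):
    """Decorator that registers a handler under a name."""
    def register(fn):
        OPERATIONS[name] = fn
        return fn
    return register

@operation("greet")
def greet(payload):
    return "hello, " + payload

@operation("shout")
def shout(payload):
    return payload.upper()

def handle(request_name, payload):
    """Dispatch to whichever operation happens to be in the registry."""
    fn = OPERATIONS.get(request_name)
    if fn is None:
        raise KeyError("no such operation: " + request_name)
    return fn(payload)
```

A new feature is then just one more decorated function; nothing in `handle` is touched.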
Will write about how this relates to AI.
Speaking of which: developing with Erlang instead of Python. (If you recall some of my earlier posts.)
Consider the following intuition. You have a desktop with some programs installed. You get a file. No idea of its type; the type of the file is opaque. You just have the data of the file, which you don't fully understand. Yet. In some cases you may know, when getting such a file, where it came from. Not always. Sometimes. Sometimes it is just what the file says. You don't know what the file "is about".
Imagine you know what your programs do, the ones you have installed. You installed them. Selected them.
Randomly try opening the file with Draw. It doesn't open. Backtrack. Randomly try opening it with Document. It opens.
So you know it's not an image. It's a text document. Otherwise it would have been Draw that opened it and gave a valid output (displayed the image), not Document.
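A toy version of this guess-and-backtrack tagging, with made-up "programs" standing in for Draw and Document:

```python
# Toy guess-and-backtrack tagging: try each installed "program" on the
# opaque bytes; a failure rules a type out, the first success tags the file.

def open_draw(data):
    # Pretend images must start with a magic prefix.
    if not data.startswith(b"\x89IMG"):
        raise ValueError("not an image")
    return "displayed image"

def open_document(data):
    # Pretend documents are just decodable text.
    return data.decode("utf-8")

PROGRAMS = [("image", open_draw), ("text", open_document)]

def tag_file(data):
    ruled_out = []
    for tag, program in PROGRAMS:
        try:
            program(data)          # valid output => this program "opens" it
            return tag, ruled_out  # tagged, plus what we learned it is NOT
        except (ValueError, UnicodeDecodeError):
            ruled_out.append(tag)  # backtrack, try the next program
    return None, ruled_out
```

Note that even on failure we learn something: the growing `ruled_out` list is exactly the "what it is not" knowledge discussed below.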
We define a thing by a list of things it is not, by differences, even if such a list is strictly infinite or else very large when taken as an amount of information compared to the thing itself [HUT94, WAT69, POP74, WAT85]. A bag of rice is not a penguin, and also not a pencil, and also not a house. It is not many, many things. So when data is labeled we may be more interested in what it is [COE17].
If, however, we are learning, then feasibly we learn more and more "about" a thing, what it "is", by learning a growing subset of what it is not. That subset may change nonmonotonically but all the same grow, for learning that it is not V, W, ... may suggest that it may actually be Z, in another form Z*, even if we had already thought (because of how it seemed) that it is also not Z. We would like machines to be able to infer about the reality that generates data as information: the fibers which got projected, when we only see their point projections [MCC97, MCC99, MCC07].
A Prolog-like search down a tree of a few options, with backtracking, can do the tagging.
If new methods can be added to the tree at a reasonable rate, and the backtracking is more complex, it is still feasible. (See for example the papers of Hewitt and Minsky.)
This allows growing functionality rapidly, and it plays a role in any AI because operations have different types. When a connection between neurons gets a lower weight, say the probability with which it signals on that link in the incidence matrix, this is just another number. Same type. But when the success rate of operation X on some data is greater than that of operation Y on that data, we also get a probability, a number; but we have also tacitly tagged that data. Tagged it in pragmatic meaning, insofar as X and Y are not of the same type. We infer something about the type of the data: which operation yields a meaningful output on it is not arbitrary, and not independent of inferences regarding what that data "is about". The record of success rates is now tagging data in unsupervised learning. It can be passed around from neuron to neuron as data, besides being a bug report. This is related to searching for a proof of a proposition P such that we know the type or meaning of P because we know the meaning or type of X, and X is useful for proving P while Y is not.
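A sketch of that last point, with hypothetical operation names and data: keep a per-datum record of each operation's success rate, and the record itself becomes the unsupervised tag.

```python
from collections import defaultdict

# Per-(operation, datum) record of success rates; the record doubles as a tag.
success = defaultdict(lambda: {"ok": 0, "tried": 0})

def try_op(op_name, op, datum):
    """Run op on datum; record whether it produced a valid output."""
    record = success[(op_name, datum)]
    record["tried"] += 1
    try:
        op(datum)
        record["ok"] += 1
        return True
    except Exception:
        return False  # the failure is also a bug report of sorts

def rate(op_name, datum):
    r = success[(op_name, datum)]
    return r["ok"] / r["tried"] if r["tried"] else 0.0

def tag(datum, op_names):
    # The tacit tag: which operation works best on this datum.
    return max(op_names, key=lambda n: rate(n, datum))
```

The `success` table is plain data, so it can be passed around between agents (or "neurons") just like anything else.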
It seems there is not a whole lot of popular discussion about this, and I need to write popular summaries of this literature anyway for another reason.
There is a vast, fun, and old literature, and not a whole lot of discussion outside of conferences. It involves many agents loaded with such functions. Not much of it has been implemented either. It should be implemented, I think. It can do real unsupervised inference and learning. Less superficial learning.
Maybe we'll see some interesting apps in the future. The real bottleneck in the 70's and 80's and 90's and 00's was ... RAM! Not anymore. So perhaps AI technology may have more play room on blockchains in the future. Besides more RAM in the go-to machine of a representative consumer, a blockchain makes backtracking-based learning with nonmonotonic reasoning much, much easier. Would like to see that! Hmmm.
I'm a scientist who writes science fiction under various names.
◕ ‿‿ ◕ つ
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License. Text and images: ©tibra. Disclaimer: This text is a popular, speculative discussion of basic science literature for the sake of discussion, and regarding it no warranties of any kind exist. Treat it as conjecture about the past and the future. As subject to change. Like the open future itself is subject to change. Except if this text happens to be explicitly fantasy or science fiction, in which case it is just that. Then another Disclaimer exists: This is a work of fiction: events, names, places, characters are either imagined or used fictitiously. Any resemblance to real events or persons or places is coincidental.