PRACTICAL THINKING. — TECHNOLOGY USER. — COMMENT. — Games in PyLogTalk?... [ Word count: 1500 ~ 6 PAGES | Revised: 2018.10.10 ]

 


 

Digression regarding games that can be built inside simple intelligent systems.

Continuing the discussion in terms of the PyLogTalk model. [Best to read that first.]

 


 

 

— 〈  1  〉—

BUILDING CAPACITY

 

Correct, once we have the proper actor framework, we can start working on reusable functions and just add them in, combining them as we like. And all that without extra labor.

I mean: the extra labor required for assembling and making coherent the parts of a complex system whose parts all, or almost all, interact.

Summarizing.

(1) We can work on each function separately.

One or two developers can work on method A. Meanwhile one or two others work on method B. And nobody needs to know, at all, ever, what anybody else is doing. So far as adding features goes.

Which reduces complexity.

So things get done much faster. — The time something now takes to get built depends only on what it is in isolation. — Not so much what else is in the system. Also therefore not on the history or order in which we add features to the system. Nor do developers need to do any further coding to make A work with B in the same system.

Meaning they can drop in or remove functionality ad hoc. The code itself for each feature should end up being standardized and as simple as possible. By default. Nice.
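
A minimal sketch of what such drop-in functionality could look like. This is not the PyLogTalk implementation itself; the names (Actor, register, send, step) and the dict-shaped messages are assumptions chosen for illustration.

```python
# Minimal sketch, not PyLogTalk itself. Names are illustrative placeholders.

class Actor:
    """An actor holding independently developed methods and a mailbox."""

    def __init__(self, name):
        self.name = name
        self.methods = {}   # feature name -> callable, added or removed ad hoc
        self.mailbox = []   # queued messages

    def register(self, feature_name, func):
        # Developer A and developer B can each call this without
        # knowing what the other registered.
        self.methods[feature_name] = func

    def unregister(self, feature_name):
        self.methods.pop(feature_name, None)

    def send(self, message):
        self.mailbox.append(message)

    def step(self):
        # Process one message: dispatch on a tag, ignore unknown tags.
        if not self.mailbox:
            return None
        message = self.mailbox.pop(0)
        handler = self.methods.get(message.get("tag"))
        return handler(message) if handler else None


# Feature A and feature B are written separately and just dropped in.
actor = Actor("post_helper")
actor.register("summarize", lambda m: m["text"][:40])
actor.register("shout", lambda m: m["text"].upper())

actor.send({"tag": "summarize", "text": "A long post about actors..."})
print(actor.step())   # -> 'A long post about actors...'
```

Removing a feature is just `actor.unregister("shout")`; nothing else in the system has to change.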

(2) We can, if needed, set many actors up as complex neurons, to get machine learning. But each neuron is much smarter. [We've allowed neurons to run arbitrary programs and process complex input into complex output.]
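
To make "smarter neuron" concrete, here is a tiny sketch: a neuron that is just an actor whose behavior is an arbitrary program, not a fixed weighted sum. The class and the two example programs are made up for illustration.

```python
# Sketch only: a "smart neuron" runs any program, not just a weighted sum.

class SmartNeuron:
    def __init__(self, program):
        self.program = program   # any callable: complex input -> complex output

    def fire(self, message):
        return self.program(message)


# One neuron runs an ordinary weighted sum, another runs symbolic logic.
linear = SmartNeuron(lambda xs: sum(w * x for w, x in zip([0.2, 0.8], xs)))
symbolic = SmartNeuron(lambda post: {"length": len(post), "flag": "?" in post})

print(linear.fire([1.0, 0.5]))               # ~0.6
print(symbolic.fire("Games in PyLogTalk?"))  # {'length': 19, 'flag': True}
```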

(3) We get a large amount of standardized data about what the system is doing, because we have statistics of messages passed. Things like: X many messages with tag Y crossed from this part of the system to that part of the system, in "short" time T. Which catches our attention.

Machine learning by the same system can be used to catch bugs and generally unanticipated behavior. A lot of data is being generated simply because everything is an actor doing something that leaves a message trail. The data is per se labeled and standardized, even before we add special tags.
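
A sketch of the kind of check the message trail makes nearly free. The log format, the window, and the threshold are assumptions for illustration, not a real system's output.

```python
# Sketch: counting tagged messages between parts of the system in a window.
from collections import Counter

# Each record: (time, source_part, dest_part, tag) -- produced for free,
# simply because every actor leaves a message trail.
log = [
    (0.1, "parser", "store", "save"),
    (0.2, "parser", "store", "save"),
    (0.3, "store", "index", "update"),
    (0.4, "parser", "store", "save"),
]

def tag_counts(log, t_start, t_end):
    """Count messages per (source, dest, tag) inside a short window T."""
    window = [r for r in log if t_start <= r[0] < t_end]
    return Counter((src, dst, tag) for _, src, dst, tag in window)

counts = tag_counts(log, 0.0, 0.5)
THRESHOLD = 2   # "X many messages with tag Y" that catches our attention
for key, n in counts.items():
    if n > THRESHOLD:
        print("unusual traffic:", key, n)
```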

(4) And messages are smarter. [We can encapsulate exceptions and context-specific options into messages really easily. Exceptions like that are really powerful.]
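
One way a "smarter" message could look, assuming a message is just a structured value. The field names and the retry rule are illustrative, not a fixed PyLogTalk format.

```python
# Sketch: a message carrying an encapsulated exception and context options.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Message:
    tag: str
    payload: object
    options: dict = field(default_factory=dict)      # context-specific options
    error: Optional[Exception] = None                # an encapsulated exception

def handle(msg: Message):
    # The receiver reacts to a carried exception without a try/except spanning
    # the whole system: the failure travels with the message itself.
    if msg.error is not None:
        attempts = msg.options.get("attempts", 0) + 1
        return Message("retry", msg.payload, {"attempts": attempts})
    return Message("done", msg.payload)

failed = Message("format", "some post", {"attempts": 1}, error=ValueError("bad markup"))
print(handle(failed).tag)   # retry
```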

(5) We can prove theorems. Including correctness, safety. Just overall set a goal and, prior to coding too much, know if it's going to work. And have some precise idea about what our system is probably going to do, regardless of how complex it gets. Which means we can decide how to allocate our labor among features.

We get the following form: N --f[a,b,c,...]--> M, where there are many behaviors, N is a set, and M is a set. (They can be the same, if recursive.) And f is a set of functions. [If we remain in the statistical sense we can always satisfy the functional requirement of same input, same output, so we're really working with functions.]

They are functions of all possible behaviors in N. Messages quickly give us the probabilities with which each behavior occurs and results in a message, on which some behavior from the next set operates. We also don't always know which one or several those will be. However, we get another list of probabilities.

We get vectors of probabilities. Even if, at that point, we stick to machine learning with this as data, and phrase theorems in terms of that, we can prove theorems about behavior after a long chain of such processes. And automate a "combines nicely or not" test and, based on the result, have the system disable some message passing in certain contexts.
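
A worked sketch of the "vectors of probabilities" idea, with two links of a chain. The numbers are made up; nothing here comes from a real system.

```python
# p[i][j] = probability that behavior i in one set leads, via a message,
# to behavior j in the next set, estimated directly from the message trail.
N_to_M = [
    [0.7, 0.3],   # behavior a
    [0.1, 0.9],   # behavior b
]
M_to_K = [
    [0.5, 0.5],   # behavior x
    [0.2, 0.8],   # behavior y
]

def propagate(dist, transitions):
    """Push a probability vector through one layer of the chain."""
    size = len(transitions[0])
    return [sum(dist[i] * transitions[i][j] for i in range(len(dist)))
            for j in range(size)]

start = [1.0, 0.0]                  # we begin in behavior a with certainty
after_M = propagate(start, N_to_M)  # [0.7, 0.3]
after_K = propagate(after_M, M_to_K)
print(after_K)                      # roughly [0.41, 0.59]
```

Statements about behavior after a long chain are then statements about repeated applications of `propagate`, which is exactly the kind of thing one can prove theorems about.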

Say by giving a cost of "marks" to a message to be processed in some cases, and not marking. (Petri net style.) Or by just rejecting a message. (Smalltalk style.) Except we shouldn't have to script that explicitly. Unlike in Smalltalk. And depend on that in theorems to assume any combinations that happen are, more often than not, safe and valid and useful. Else the combination yields no output.

Meaning development just consists of dropping in features built more or less independently. And the design side can make progress on theorems of the form: A, B, C need to be added to E, F, G to get X to happen in case Y with Z probability or more often. So we know what we need to make to get what we want to happen. Or to limit what we don't want to happen. In systems without the above format, that's harder to predict accurately, or ultimately much less knowable.
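
A sketch of the two ways of disabling a combination, marks versus rejection. The Gate class and its policy fields are assumptions; the automated testing step that would set the policy is not shown.

```python
# Sketch: Petri-net-style marks vs. Smalltalk-style rejection of a message.

class Gate:
    def __init__(self, marks=0, reject=False):
        self.marks = marks        # marks available in this context
        self.reject = reject      # outright rejection of this combination

    def admit(self, message, cost=1):
        if self.reject:
            return None           # Smalltalk style: just refuse the message
        if self.marks < cost:
            return None           # Petri net style: not enough marks, no output
        self.marks -= cost
        return message            # combination allowed to proceed

# Automated testing (not shown) decided this pairing combines badly:
gate = Gate(reject=True)
print(gate.admit({"tag": "format"}))   # None: the combination yields no output
```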

... Regarding games: each actor has many methods. Developers just slot them in as files in a folder. Then build up the actor by passing it those names, as the methods it should know. The methods operate contextually. So we can easily gamify. And it doesn't affect other functionality. But it's the same object. Meaning value can be added to objects as such things are done. Unlike complexity: 1 object that does 10 things is easier to get higher value for than the sum of 10 objects doing 1 thing each, because 1 thing is probably not valuable enough to exceed transaction and other hassle costs, overcome friction, and have value other than nil.
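
A sketch of "slot methods in as files in a folder". The folder layout, the module names, and the convention that each file defines a run() function are assumptions for illustration only.

```python
# Assumed layout (illustrative):
#   methods/
#     format_report.py    defines run(post)
#     fight_dragon.py     defines run(post)

import importlib

class PluggableActor:
    def __init__(self, method_names, package="methods"):
        # Build up the actor by passing it the names it should know.
        self.methods = {
            name: importlib.import_module(f"{package}.{name}").run
            for name in method_names
        }

    def handle(self, name, *args):
        return self.methods[name](*args)

# Usage, assuming the files above exist:
# actor = PluggableActor(["format_report", "fight_dragon"])
# actor.handle("format_report", "a long post...")
```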

With value comes incentives, and these can be adjusted. Instead of arbitrarily changing "price-like" numbers, as most token-based systems do (but which are not actually prices, because they come from a centralized top), we can balance incentives by reallocating utility: by moving features around. Which means the economics should actually work.

And with all users having intuitive access to AI, there will quickly appear some equilibrium based on everyone having some satisfactory response to automation and AI. [We can prove theorems about it, to predict this sort of game, thanks to the standard framework and the assumptions it allows us to make.]

This would certainly improve trust in a social network. By definition. Considering that trust is primarily a coarse-grained picture of a bundle of predictions of what others will do in a context.

 

— 〈  2  〉—

LEARNING CAPACITY

 

Learning capacity: we can create actors/agents that will be sent mail or which function as forwarders, and construct a database to function as training data; this gives us some options.

If the individual actors have machine learning methods, they can each learn, and contribute to weights databases built in the same way. [Before they terminate, they send a message to an aggregating actor/agent.]

Other actors have symbolic methods only, and don't get new weights, but can still get parameters to change and learn that way.

Half will probably have both.
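
A sketch of the aggregation step mentioned above, assuming weights are plain lists of floats and that each learning actor sends its weights to an aggregating actor just before it terminates. The averaging rule is the simplest possible choice, picked only to make the idea concrete.

```python
# Sketch: learning actors contribute their weights to a shared database.

class Aggregator:
    def __init__(self):
        self.contributions = []

    def receive(self, weights):
        # Each learning actor sends this message just before it terminates.
        self.contributions.append(weights)

    def combined(self):
        # Average the contributed weights into the shared weights database.
        n = len(self.contributions)
        return [sum(ws) / n for ws in zip(*self.contributions)]

agg = Aggregator()
agg.receive([0.2, 0.8, 0.1])   # from an actor with machine learning methods
agg.receive([0.4, 0.6, 0.3])   # from another
print(agg.combined())          # [0.3, 0.7, 0.2] (up to rounding)
```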

The network as a whole, because it is message passing, can also learn as a whole. Because it would also be a system of connected, smart neurons. For example: a message ultimately intended for another actor passes through an actor that does something, or nothing, and forwards it. If whether that message receives a mark can change, based on the results of the automated testing the system does, then the network can learn as a whole.

And if the frequencies with which N sends any message of type T to M can be affected directly by evaluation of the timing or results of some chains of operations, then the whole system could be, at one level, exactly an ordinary neural network.
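
A sketch of that last point: if the forwarding frequency is a number the system adjusts from evaluations, it plays the role of a weight. The Link class and its update rule are deliberately tiny stand-ins, not a real learning rule.

```python
# Sketch: a forwarding frequency adjusted by feedback behaves like a weight.
import random

class Link:
    def __init__(self, frequency=0.5):
        self.frequency = frequency   # probability that N forwards type T to M

    def maybe_forward(self, message):
        return message if random.random() < self.frequency else None

    def feedback(self, score, rate=0.1):
        # Raise the forwarding frequency after good outcomes, lower it after
        # bad ones; this is what makes the whole network weight-like.
        self.frequency = min(1.0, max(0.0, self.frequency + rate * score))

link = Link()
link.feedback(+1)   # a chain of operations ended well
link.feedback(-1)   # another ended badly
print(round(link.frequency, 2))   # back near 0.5
```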

 

— 〈  3  〉—

GAMES

 

An interesting possibility. Not only can one monetize instances of agents/actors, each of which learns and has its own little set of databases. [Possibly with some service fee or transaction fee, if they are running on systems of others.]

Making each unique and capable of being traded.

But there is some privacy, and therefore users can have the option, and may want, to earn rewards by contributing what their actors privately learned to larger databases.

This creates an exciting, game-like economy. A larger, proper game can be built around it. Especially because one actor can have many methods.

In one use case, it formats a post into a report style. In another, it breaks the post apart, based on content, into smaller chunks, sized by the latest statistics regarding what length of post users most read. In yet another it also fights dragons and, as a reward, gets more requests to a large database, allowing it to perform its primary operations better. And each such heterogeneous piece of software can be exchanged for others and for other value.
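
A small sketch of one such actor whose behavior depends on the use case it is invoked in. The contexts, the chunk size, and the database quota are made-up stand-ins for the examples in the text.

```python
# Sketch: one actor, many heterogeneous, context-dependent methods.

class GameActor:
    def __init__(self):
        self.db_quota = 10   # requests it may make to the large database

    def handle(self, context, post=None):
        if context == "report":
            return f"REPORT:\n{post}"
        if context == "chunk":
            size = 80   # would come from reading statistics in practice
            return [post[i:i + size] for i in range(0, len(post), size)]
        if context == "dragon":
            self.db_quota += 5   # the reward: more database requests
            return "dragon defeated"
        return None

actor = GameActor()
print(actor.handle("dragon"), actor.db_quota)   # dragon defeated 15
```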

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License. Text and images: ©tibra.
