Practical Machine Sentience 4: Causality

The article explores machine sentience, focusing on how machines can process and associate sensory inputs to understand causality. It categorizes noumenal tokens and emphasizes the significance of functional output in distinguishing important sensory inputs from irrelevant noise. The piece includes definitions of association and causation as foundational to developing machine sentience.

It’s quite simple, straightforward, and easy to understand causality, friend. All you need to do is connect the dots. – DruidPeter

In the previous three articles, we went over the basic definition of Machine Sentience, discussed the foundational architecture needed to implement that definition, and covered what could be considered layer 1 processing of integrated sensory field devices on that architecture. As a result, we currently have a machine capable of associating sensory inputs with noumenal tokens, which were also defined in the previous articles. For those who want a quick refresher on what noumenal tokens are, simply remove the word noumenal and take the word token at face value: a thing that stands for something else.

Having traveled through these prerequisites, we now discuss the last prerequisite needed for the development of a machine understanding of causality.

System Functional Output as Group Operation

The key thing to take away here is that the machine treats its collection of noumenal tokens as members of a family of mathematical groups. The definition and full understanding of a mathematical group are well beyond the scope of this article. Nonetheless, in layman’s terms, one can think of a mathematical group as a collection of objects that have certain properties, together with some action/operation that affects the properties of the collection.
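
To make the group idea concrete, here is a minimal Python sketch of one of the simplest mathematical groups: the integers under addition modulo n. The class and all names in it are our own illustration, not part of the machine’s architecture; the point is only to show a collection of objects plus an operation acting upon them.

```python
# A minimal illustration of a mathematical group: the integers
# 0..n-1 under addition modulo n. Names here are illustrative only.

class ModularAdditionGroup:
    def __init__(self, n: int):
        self.n = n
        self.elements = set(range(n))   # the collection of objects

    def operate(self, a: int, b: int) -> int:
        """The group operation: combine two elements into a third."""
        return (a + b) % self.n

    def identity(self) -> int:
        """The identity element: operating with it changes nothing."""
        return 0

    def inverse(self, a: int) -> int:
        """Every element has an inverse that undoes it."""
        return (-a) % self.n

g = ModularAdditionGroup(12)
assert g.operate(7, 8) == 3                        # closure: result stays in the set
assert g.operate(5, g.identity()) == 5             # identity law
assert g.operate(5, g.inverse(5)) == g.identity()  # inverse law
```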

To put it another way, our machine does not simply recognize and record sense impressions into sense tokens; rather, it organizes, and changes how it organizes and represents, these tokens over time. The exact specification for how our machine does this is not of absolute importance for our purposes of creating a sentient machine, and in fact, according to how we defined machine sentience in the first article, can vary quite considerably.

Hence, let us for now construct a simple baseline classification system for our immediate purposes.

Baseline Classification of Noumenal Tokens

Let us first divide the collection of Noumenal Tokens into three groups:

Current Active Tokens

Current Active Tokens are simply tokens that correspond to sensory input the system currently recognizes and is actively sensing.

Non-current Active Tokens

These are tokens which are recognizable to the system, but which are not currently active within the sensory input system.

Current Associative Tokens

These are non-current tokens that are nonetheless associated with current active tokens. For example, if we see a bird from behind, then we may not see its beak. Nonetheless, we often associate beaks with birds, and as such, the token for a beak might be in this group.


Now, regarding current associative tokens, the exact mechanism for deciding which tokens count as associated with current active tokens does not matter at this point. Indeed, at any given time, the collection of current associative tokens may be determined by algorithm and/or circumstance. Overall, this is a very simple system, but it will serve our purposes nicely, as the sketch below illustrates.
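
As a minimal sketch of this baseline classification, consider the following. The class, names, and the pluggable association rule are our own illustrative assumptions, not a prescribed design:

```python
# A toy three-way classification of noumenal tokens. Which tokens
# land in the associative set is left to a pluggable rule, since,
# as noted above, the exact mechanism is unimportant here.

class TokenStore:
    def __init__(self, all_tokens, association_rule):
        self.all_tokens = set(all_tokens)          # every token known to the system
        self.association_rule = association_rule   # maps a token -> associated tokens

    def classify(self, sensed_tokens):
        active = self.all_tokens & set(sensed_tokens)   # currently sensed
        non_active = self.all_tokens - active           # known but not sensed
        # non-active tokens that the rule links to some active token
        associative = {t for a in active
                         for t in self.association_rule(a)} & non_active
        return active, non_active, associative

# Example: seeing a bird from behind still "pulls in" the beak token.
rule = lambda tok: {"bird": {"beak", "wings"}}.get(tok, set())
store = TokenStore({"bird", "beak", "wings", "tree"}, rule)
active, non_active, associative = store.classify({"bird"})
assert "beak" in associative and "beak" not in active
```

The association rule is deliberately left as a parameter, reflecting the point above that it may be determined by algorithm and/or circumstance.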

The Role of Functional Output

In mathematical groups, one or more operations are often defined which, when applied, affect either the properties of the group or the properties of its elements. An integrated sensory field device, on its own, provides no natural way to recognize such an operation, short of artificial means. This is because there is no natural dividing point between one sense impression and another. Between any two sense impressions, there exists only a continuum of other potential sense impressions.

We argue, then, that functional output of the system creates the natural boundary that is needed. Recall that functional output is simply some output from the system that, as a side effect of its operation, is capable of changing the sense impressions impressed upon the integrated sensory field device. Again, a very simple example would be that of muscular movement in the case of human beings. If you move your head, light will enter your retinas in a different way and produce different visual impressions upon you.
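
The following toy sense-act loop sketches this idea; every name in it is hypothetical, and the point is only that the agent’s functional output (turning its head) changes, as a side effect, what the sensory field device reports next.

```python
# A toy sense-act loop: functional output perturbs future input,
# creating the natural boundary between one impression and the next.

class World:
    def __init__(self):
        self.scene = {0: "tree", 90: "bear", 180: "rock", 270: "bush"}

    def light_entering(self, heading: int) -> str:
        return self.scene[heading % 360]

class Agent:
    def __init__(self, world: World):
        self.world = world
        self.heading = 0

    def sense(self) -> str:
        return self.world.light_entering(self.heading)

    def turn_head(self, degrees: int) -> None:
        """Functional output: as a side effect, it alters future input."""
        self.heading = (self.heading + degrees) % 360

agent = Agent(World())
before = agent.sense()     # "tree"
agent.turn_head(90)        # the functional output...
after = agent.sense()      # "bear" -- ...changed the impression
assert before != after
```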

Why Functional Output is so Important

We choose functional output because of the role it plays in determining which subsets of sensory input are functionally important to the system, and which are irrelevant and considered noise. Functional output, by definition of the term “function”, is a function of input. And that input comes from none other than the integrated sensory field device.

Consider, then, for example, the sensory input of a threat, say a bear, alongside additional sensory input of something nonthreatening, say a tree. Upon first impression, the machine might associate both the bear and the tree together as a single noumenal token serving as a functional input, associated with the interpretation of “threat”.

But let the machine now encounter the same threat, a bear, next to something else decidedly un-tree-like. After repeated exposure, the machine learns precisely which subset of the integrated sensory field (e.g., the bear) is actually to be associated with which functional output (e.g., running away).

We will for the time being ignore the process by which functional outputs are, over time, consistently chosen from functional inputs, as that is a complex subject in itself. All we need to know is that over time, the machine learns to extract significance from functional input, and map such significance to a consistent behavioral (i.e., functional) output. A crude sketch of one possible mechanism follows.
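
As one crude, hypothetical sketch of such a mechanism, a simple co-occurrence tally will do; the real process could look very different:

```python
# A crude co-occurrence tally: across repeated encounters, tokens
# that reliably accompany a given functional output accumulate
# weight; incidental bystanders (the tree) wash out. This is one
# possible mechanism among many, not a prescribed one.

from collections import Counter

def learn_significance(episodes):
    """episodes: list of (sensed_tokens, chosen_output) pairs."""
    tally = {}
    for tokens, output in episodes:
        tally.setdefault(output, Counter()).update(tokens)
    return tally

episodes = [
    ({"bear", "tree"}, "run_away"),   # first encounter: bear beside a tree
    ({"bear", "rock"}, "run_away"),   # same threat, decidedly un-tree-like scenery
    ({"bear", "river"}, "run_away"),
]
tally = learn_significance(episodes)
# "bear" co-occurs with running away 3 times; "tree" only once.
assert tally["run_away"]["bear"] > tally["run_away"]["tree"]
```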

Association, and then Finally, Causation

Under the system as we have described it, we have a machine capable of recognizing subsets of sense impressions, and consistently reacting in some manner upon encountering them. We now define a very important concept, Association, as follows:

Association := Some subset of the integrated sensory field input, operating as a functional input, which may be consistently encountered as a result of some consistent functional input or output of the machine.

For example, consider that we see a bird from behind. By consistent behavioral output, we may also encounter the bird’s beak, either because we move around to get a better look, or perhaps because the bird chose to move around on its own.

This is what association is in a nutshell: where we find sense impression ‘a’, we will also find, somewhere not too far removed, sense impression ‘b’.
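
A toy detector in the spirit of this definition might look as follows; the threshold and all names are our own assumptions, shown only to make the definition concrete:

```python
# Impression 'b' counts as associated with impression 'a' when,
# across trials, acting on 'a' consistently brings 'b' into the
# sensory field. The 0.8 consistency threshold is arbitrary.

from collections import Counter

def find_associations(trials, threshold=0.8):
    """trials: list of (trigger_impression, impressions_encountered_after)."""
    seen_with = {}
    totals = Counter()
    for trigger, encountered in trials:
        totals[trigger] += 1
        seen_with.setdefault(trigger, Counter()).update(encountered)
    return {
        (a, b)
        for a, counts in seen_with.items()
        for b, n in counts.items()
        if n / totals[a] >= threshold
    }

# Circling a bird seen from behind consistently brings its beak into view.
trials = [
    ("bird_from_behind", {"beak", "branch"}),
    ("bird_from_behind", {"beak", "sky"}),
    ("bird_from_behind", {"beak"}),
]
assert ("bird_from_behind", "beak") in find_associations(trials)
```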

This leads us almost immediately to our practical definition of machine causality:

Causality := An association of associations; a group of subsets of the integrated sensory field input, recognized as an association, which may be consistently encountered as a result of some consistent set of functional inputs (an association), or some consistent set of functional outputs (the resulting functional inputs of which become an association), or some combination of the two.
  • If the hand lets go of the ball, the ball will fall.
  • First associative group: Hand, Ball, letting go.
  • Second associative group: Ball, falling through space.
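
To make this concrete under our running example, here is a deliberately naive sketch in which the first associative group is linked, as a whole, to the second. The dict-of-frozensets representation is purely illustrative, not a real design:

```python
# Causality as an association of associations: the first associative
# group (hand, ball, letting go) is linked, as a whole, to a second
# group (ball, falling through space).

causal_links = {
    frozenset({"hand", "ball", "letting_go"}):
        frozenset({"ball", "falling_through_space"}),
}

def predict(current_associative_group):
    """If the current group matches a known cause, expect its effect."""
    return causal_links.get(frozenset(current_associative_group))

expected = predict({"hand", "ball", "letting_go"})
assert "falling_through_space" in expected
```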

What the machine interprets as a causally connected event is in reality a very large number of nested sense impressions, any number or combination of which may be utilized as a functional input for any number of the machine’s systems and their resultant functional outputs.

Think of the totality of the sense impression of letting go of the ball as a very deeply nested JSON data structure, encoding all manner of aspectual data regarding the situation, from the texture of the ball to its weight to the color of the sky on the day of its release, and everything in between. Likewise, the effect of the cause is another JSON data structure that encodes all aspectual properties of the effect.

Machine understanding of causality, then, is simply an association of a specific path within the first JSON data structure with a well-defined path within a possible future JSON data structure, in this analogy. And if one association may lead to another, we now have a machine capable of forming sequences of causal chains, albeit still unable to form branching causal logical reasoning. But such capabilities are currently outside the scope of our discussion, as we still have to formally construct the special causal sequence that leads to the development of machine sentience.
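
Making the JSON analogy literal, here is a small hypothetical sketch in which a causal link is an association between a specific path in the cause structure and a path in a possible future effect structure; all keys and paths are invented for illustration:

```python
# A causal link as a (cause-path -> effect-path) association between
# two nested structures. Every key below is illustrative only.

cause = {
    "event": "letting_go",
    "ball": {"texture": "smooth", "weight_kg": 0.4, "released": True},
    "sky":  {"color": "blue"},
}
effect = {
    "event": "after",
    "ball": {"motion": "falling_through_space"},
}

def get_path(doc, path):
    """Walk a nested dict along a tuple of keys."""
    for key in path:
        doc = doc[key]
    return doc

# The learned association: this specific cause path maps to this
# specific path within a possible future structure.
link = {("ball", "released"): ("ball", "motion")}

for cause_path, effect_path in link.items():
    if get_path(cause, cause_path):           # the ball is released...
        print(get_path(effect, effect_path))  # -> "falling_through_space"
```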

But we are close. In the next article, we will build upon everything we have constructed so far, and will finally illustrate the mechanism that leads to our machine becoming, by prior definition, sentient.