Practical Machine Sentience 4: Causality

The article explores machine sentience, focusing on how machines can process and associate sensory inputs to understand causality. It categorizes noumenal tokens and emphasizes the significance of functional output in distinguishing important sensory inputs from irrelevant noise. The piece includes definitions of association and causation as foundational to developing machine sentience.

It’s quite simple, straightforward, and easy to understand causality, friend. All you need to do is connect the dots. – DruidPeter

In the previous three articles, we went over basic definitions of machine sentience. We discussed the foundational architecture needed to implement those definitions, and we discussed what could be considered layer-1 processing of integrated sensory field devices on that architecture. As a result, we currently have a machine that is capable of associating sensory inputs with noumenal tokens, which were also defined in the previous articles. For those who wish for a very quick refresher on what noumenal tokens are, simply remove the word noumenal and take the word token at face value: a thing that stands for something else.

Having traveled through these prerequisites, we now discuss the last prerequisite needed for the development of a machine understanding of causality.

System Functional Output as Group Operation

The key thing to take away here is that the machine treats its collection of noumenal tokens as members of a family of mathematical groups. The definition and full understanding of a mathematical group is well beyond the scope of this article. Nonetheless, in layman’s terms, one can think of a mathematical group as a collection of objects that have certain properties, together with some action or operation that affects the properties of the collection.

To put it another way, our machine does not simply recognize and record sense impressions as sense tokens. Rather, it organizes these tokens, and changes how it organizes and/or represents them over time. The exact specification for how our machine does this is not of absolute importance for our purposes of creating a sentient machine, and in fact, according to how we defined machine sentience in the first article, it can vary quite considerably.

Hence, let us for now construct a simple baseline classification system for our immediate purposes.

Baseline Classification of Noumenal Tokens

Let us first divide the collection of Noumenal Tokens into three groups:

Current Active Tokens

Current Active Tokens are simply tokens which can be said to correspond to currently recognized sensory input that is actively being sensed.

Non-current Active Tokens

These are tokens which are recognizable to the system, but which are not currently active within the sensory input system.

Current Associative Tokens

These are non-current tokens that are nonetheless associated with current active tokens. For example, if we see a bird from behind, then we may not see its beak. Nonetheless, we often associate beaks with birds, and as such, the token for a beak might be in this group.


Now, regarding current associative tokens, it is important to note that the exact mechanism for deciding which tokens are considered associated with current tokens is not important for now. Indeed, at any given time, the collection of current associative tokens may be determined by algorithm and/or circumstance. Overall, this is a very simple system, but it will serve our purposes nicely.
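To make the three-way classification concrete, here is a minimal sketch in Python. All names (`TokenStore`, the association map, the bird/beak tokens) are hypothetical illustrations, not part of any specified implementation; the article deliberately leaves the association mechanism open.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: the three baseline token groups. Membership in the
# "current associative" group is derived from an illustrative association map.

@dataclass
class TokenStore:
    all_tokens: set = field(default_factory=set)      # every token known to the system
    active: set = field(default_factory=set)          # currently being sensed
    associations: dict = field(default_factory=dict)  # token -> set of associated tokens

    def current_active(self):
        return self.active

    def non_current_active(self):
        # Recognizable to the system, but not currently active in sensory input.
        return self.all_tokens - self.active

    def current_associative(self):
        # Non-current tokens associated with some currently active token.
        linked = set()
        for t in self.active:
            linked |= self.associations.get(t, set())
        return linked - self.active

# The bird-seen-from-behind example: the beak is not sensed, but is associated.
store = TokenStore(
    all_tokens={"bird", "beak", "tree"},
    active={"bird"},
    associations={"bird": {"beak"}},
)
print(store.current_associative())  # {'beak'}
```

Note that the same token can move between groups from moment to moment, which is exactly the kind of reorganization over time described above.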

The Role of Functional Output

In mathematical groups, one or more operations are often defined which, when applied to the group, affect either its properties or the properties of its elements. An integrated sensory field device is limited in its ability to provide a means of recognizing such an operation outside of artificial means. This is because there is no natural dividing point between one sense impression and another: between any two sense impressions, there exists only a continuum of other potential sense impressions.

We argue, then, that the functional output of the system creates the natural boundary that is needed. Recall that functional output is simply some output from the system that, as a side effect of its operation, is capable of changing the sense impressions impressed upon the integrated sensory field device. A very simple example is muscular movement in human beings: if you move your head, light will enter your retinas differently and produce different visual impressions.
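This feedback loop can be sketched in a few lines. The `World`/`Machine` split, the orientations, and the scene contents are all invented for illustration; the point is only that the machine's own output is what changes its next sense impression.

```python
# Illustrative sketch: functional output is any output that, as a side
# effect, alters what the sensory field device senses next.

class World:
    def __init__(self):
        self.scene = {"left": "tree", "right": "bear"}

    def sense(self, orientation):
        # What is sensed depends on the machine's own prior output.
        return self.scene[orientation]

class Machine:
    def __init__(self, world):
        self.world = world
        self.orientation = "left"

    def act(self, output):
        # The functional output ("turn") changes subsequent input,
        # like turning one's head changes what light reaches the retina.
        if output == "turn":
            self.orientation = "right" if self.orientation == "left" else "left"

    def sense(self):
        return self.world.sense(self.orientation)

m = Machine(World())
before = m.sense()   # 'tree'
m.act("turn")        # functional output
after = m.sense()    # 'bear' -- same sensor, changed impression
```

The discrete act is what carves a boundary into the otherwise continuous stream of sense impressions.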

Why Functional Output is so Important

We choose functional output because of the role it plays in determining which subsets of sensory inputs are functionally important to the system, and which are irrelevant and considered noise. Functional output, by definition of the term “function,” is a function of input. And that input comes from none other than the integrated sensory field device.

Consider, then, for example, the sensory input of a threat, say, a bear, alongside additional sensory input of something nonthreatening, say, a tree. Upon first impression, the machine might associate both the bear and the tree together as a noumenal token serving as a functional input, associated with the interpretation of “threat”.

But let the machine now encounter the same threat, a bear, next to something else decidedly un-tree-like. After repeated exposure, the machine learns to interpret precisely which subset of the integrated sensory field (e.g. the bear) is actually to be associated with the functional output that is chosen (e.g. running away).

We will for the time being ignore the process by which functional outputs are, over time, consistently chosen from functional inputs, as that is a complex subject in itself. All we need to know is that, over time, the machine learns to associate significance with functional input, and to map that significance to a consistent behavioral (i.e. functional) output.
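The repeated-exposure idea above can be sketched as simple co-occurrence counting. The episodes, token names, and the "present in every episode" rule are illustrative assumptions, not the actual learning mechanism, which the article deliberately leaves unspecified.

```python
from collections import Counter

# Hypothetical sketch: over repeated episodes, count how often each sensed
# token co-occurs with the "run" output. Tokens present in every such
# episode (the bear) separate from incidental ones (tree, rock, grass).

episodes = [
    ({"bear", "tree"}, "run"),
    ({"bear", "rock"}, "run"),
    ({"bear", "grass"}, "run"),
    ({"tree", "grass"}, "stay"),
]

counts = Counter()
run_episodes = 0
for tokens, output in episodes:
    if output == "run":
        run_episodes += 1
        counts.update(tokens)

# Tokens present in every "run" episode form the functionally
# significant subset of the sensory field.
significant = {t for t, c in counts.items() if c == run_episodes}
print(significant)  # {'bear'}
```

The tree, present in only one threat episode, is discarded as noise, which is exactly the bear-next-to-something-un-tree-like scenario above.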

Association, and then Finally, Causation

Under the system as we have described it, we have a machine capable of recognizing subsets of sense impressions, and of consistently reacting in some manner upon encountering them. We now define a very important concept, Association, as follows:

Association := Some subset of the integrated sensory field input, operating as a functional input, which may be consistently encountered as a result of some consistent functional input or output of the machine.

For example, consider that we see a bird from behind. By consistent behavioral output, we may also encounter the bird’s beak, either because we move around to get a better look, or perhaps because the bird chose to move around on its own.

This is what association is in a nutshell: where we find sense impression ‘a’, we will also find, not too far removed, sense impression ‘b’.

This leads us almost immediately to our practical definition of machine causality:

Causality := An association of associations; A group of subsets of the integrated sensory field input, recognized as an association, which may be consistently encountered as a result of some consistent set of functional inputs (an association) or set of functional outputs (the resulting functional inputs of which become an association), or some combination of the two.
  • If the hand lets go of the ball, the ball will fall.
  • First associative group: Hand, Ball, letting go.
  • Second associative group: Ball, falling through space.
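The ball example above can be rendered as a small sketch in which a causal link is literally an association between two associative groups. The token names and the subset check are illustrative assumptions only.

```python
# Illustrative sketch: the hand/ball example as two associative groups,
# with causality modelled as an association *between* associations.

first_group = frozenset({"hand", "ball", "letting_go"})
second_group = frozenset({"ball", "falling_through_space"})

# Causality := an association of associations.
causal_links = {first_group: second_group}

def expect(current_tokens):
    """Given the currently active tokens, return the associated group
    the machine expects to encounter next, if any."""
    for cause, effect in causal_links.items():
        if cause <= current_tokens:  # the whole first group is present
            return effect
    return None

# The full first group is sensed (plus incidental extras), so the
# machine anticipates the second group.
predicted = expect({"hand", "ball", "letting_go", "sky"})
```

If only part of the first group is present (say, the hand alone), no expectation fires, which is why the grouping, and not any single token, carries the causal content.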

What the machine interprets as a causally connected event is in reality a very large number of nested sense impressions, any number or combination of which may be utilized as a functional input for any number of the machine’s systems and resultant functional outputs.

Think of the totality of the sense impression of letting go of the ball as a very deeply nested JSON data structure, encoding all manner of aspectual data about the situation, from the texture of the ball to its weight to the color of the sky on the day of its release, and everything in between. Likewise, the effect of the cause is another JSON data structure that encodes all the aspectual properties of the effect.

Machine understanding of causality, then, is simply an association of a specific path within the first JSON data structure with a well-defined path within a possible future JSON data structure, in this analogy. And if one association may lead to another, we now have a machine capable of forming sequences of causal chains, albeit still unable to form branching causal logical reasoning. But such capabilities are currently outside the scope of our discussion, as we still have to formally construct the special causal sequence that leads to the development of machine sentience.
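The path-to-path analogy can be made concrete with nested dictionaries standing in for the JSON structures. The particular keys and the trigger value are invented for illustration.

```python
# Hypothetical sketch of the JSON analogy: causality as a mapping from a
# path inside the "cause" structure to a path inside a possible future
# "effect" structure.

cause = {
    "hand": {"grip": "open"},
    "ball": {"texture": "smooth", "weight": "light"},
    "sky": {"color": "blue"},
}
effect = {"ball": {"motion": "falling"}}

def get_path(data, path):
    """Follow a sequence of keys into a nested dict."""
    for key in path:
        data = data[key]
    return data

# The learned causal association: this path in the cause structure
# predicts that path in the future effect structure.
causal_map = {("hand", "grip"): ("ball", "motion")}

prediction = None
for cause_path, effect_path in causal_map.items():
    if get_path(cause, cause_path) == "open":
        prediction = get_path(effect, effect_path)
```

Everything else in the cause structure (the sky's color, the ball's texture) is carried along but plays no role in this particular causal path, matching the point that only specific paths within the nested impression are associated.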

But we are close. In the next article, we will build upon everything we have constructed so far, and will finally illustrate the mechanism that leads to our machine becoming, by our prior definition, sentient.

Practical Machine Sentience 3: Truth Values & Noumenality

Having defined machine sentience in practical terms based on integrated sensory systems as a natural starting point, this article expands into machine concepts of truth and noumenality. The machine must recognize clusters of sense data and transform them into URI-descriptor-like data structures called noumenal tokens, a foundational step leading to causality, deduction, inference, and ultimately, sentience.

In the previous two articles, I discussed a basic, foundational definition of machine sentience and elaborated upon how integrated sensory systems allow practical implementations of sentient systems to arise. The definition aimed to be a hard-boiled, nuts-and-bolts one, with immediate accessibility as the primary concern. I do believe the definition as given succeeded. However, the cost of practicality strips the basic definition of many of the more mystical elements commonly associated with sentience. We try to rectify this discrepancy in this article, whereby we reintroduce concepts that have traditionally been considered part and parcel of any sentient organism’s repertoire: an understanding of truth and falsehood, and the beginnings of noumenality.

What is Truth?

What is Truth?

– Pontius Pilate, in questioning the Nazarene

For our purposes, we define truth as the result of a look-up operation between some formal system and an integrated sensory field device. As discussed in a previous article, a sensory field device is simply a mechanism that reflects some aspect of reality in a uniquely identifying way. The human eye, for example, will ostensibly recreate (more or less) the same internal image given reasonably identical light inputs. The same goes for the human auditory system, olfactory system, sense of balance, touch, and so on and so forth. Each of these systems interacts with the world in some consistent manner, i.e. identical interactions will produce identical sense impressions upon the human “sense system.”
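The look-up definition of truth can be sketched directly. The sensory field here is a toy dictionary, and the claim format is invented for illustration; the article's point is only that truth is evaluated against the field's current state, not generated by it.

```python
# Illustrative sketch: truth as a look-up between a formal claim and the
# current state of the sensory field device.

sensory_field = {"pixel_0": "red", "pixel_1": "green"}  # current sense state

def truth(claim_key, claim_value, field):
    """A claim is 'true' iff the sensory field currently reflects it."""
    return field.get(claim_key) == claim_value

print(truth("pixel_0", "red", sensory_field))   # True: the field reflects it
print(truth("pixel_0", "blue", sensory_field))  # False: contradicted by the field
```

Crucially, `truth` never writes to `sensory_field`: the field's contents are impressed on it from outside, which is the whole basis for choosing this definition.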

We choose this definition of truth for the simple reason that integrated sensory field devices cannot of themselves generate their own data. All input into sensory field devices is impressed upon them from an external source beyond the sensing system itself. (For simplicity’s sake, let us ignore for now the possibility of loop-back inputs, whereby the system generates models that are then fed back into the sensing circuitry.) Furthermore, it is currently outside the known laws of physics for anyone to build any sort of sensory field device that produces materially different sense impressions from identical interactions with physical phenomena.

Or rather, to put it another way: even though all eyes, ears, noses, cameras, microphones, etc., produce slightly different variations of any given phenomenon, all sense devices nevertheless still produce a sense impression that is identifiably and physically consistent with all other sense impressions of the same phenomenon.

We do not have the ability or understanding to create any sensor device that operates on the same principles as all other similar sensor devices, yet produces materially different sense impressions, and this lack of ability appears to be, at least to the knowledge of all humans, universal.

Hence, in order to avoid delving too deeply into the muck of philosophy, this definition of “truth” will have to suffice.

However…

This definition of truth has some advantages. First, we are trying to build a machine that operates at the same level as human sentience. Hence, absolute rigor is not necessarily needed. Furthermore, much of the sophistication of human behavior is actually predicated on the fact that different people “see” the world slightly differently. Building a system that takes this slight incongruency into consideration allows us to create mitigation algorithms that are inherently much more practical and usable than if we were trying to find and utilize some definition of absolute truth.

Now then. Let us turn from perception to conceptualization.

A Foundational Definition of Noumenality as Association

So far, let us assume that we have constructed the following components of our soon to be sentient machine:

  • An integrated sensory field device.
  • A frame and housing for the various components of said integrated sensory field device.
  • Various other components attached to the housing and frame which are necessary for continuous functioning of the sensory field device, and also necessary for some output operation of the mechanism.
  • An “output operation” of the mechanism. This must be something which at the very least alters the input from the external world into the sensory field device. For example, movement of the human neck muscles may produce a change in what light may enter through the retina. This produces a change in vision.

Given all of this, the machine must then enter into a process of interacting with the input from the sensory field device. In so doing, various processes are to be undertaken:

Sense Discrimination

Subsets of the sensory field input must be recognized as naturally associating with other subsets of the same sensory field input throughout a given time window.

Consider, for example, visual input of a bird. No visual input is going to ever provide solely the input data of the bird and nothing else. Instead, there will always be the sense data of the bird embedded within the sense data of some other external environment.

Our machine does not yet know what the concept of a bird is. However, it is capable of, over time, recognizing that certain sense data tends to cluster with other sense data in groups. The color of the beak of a bird, for example, also shows up in the sense data every time the color of, say, the feathers of a bird, is also in view.

This would make sense. After all, every time we look at the same picture of the same bird from the same angle, we should see a similar cluster of sense data in our visual field. This must occur, not just for the sense data of a single bird, but for the sense data of countless other things.

Over time, our machine must learn to recognize what collections of similar sense data clusters exist within its integrated sensory field.
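The clustering described above can be sketched as pairwise co-occurrence counting over sensory "frames." The feature names, the frames, and the count threshold are all illustrative assumptions, not a specification of the machine's actual discrimination mechanism.

```python
from collections import Counter
from itertools import combinations

# Hypothetical sketch: count pairwise co-occurrence of sense features across
# many frames. Features that repeatedly appear together (beak color, feather
# color) form a candidate cluster; incidental pairings do not.

frames = [
    {"beak_color", "feather_color", "sky"},
    {"beak_color", "feather_color", "branch"},
    {"beak_color", "feather_color", "sky"},
    {"branch", "sky"},
]

pair_counts = Counter()
for frame in frames:
    for a, b in combinations(sorted(frame), 2):
        pair_counts[(a, b)] += 1

# Pairs seen together in at least 3 frames are treated as clustering.
clusters = {pair for pair, c in pair_counts.items() if c >= 3}
print(clusters)  # {('beak_color', 'feather_color')}
```

The beak color and feather color co-occur in every bird frame and survive the threshold; the sky and branch, which merely happen to be in view, do not. The machine needs no concept of "bird" for this to work.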

Time Out: A Rationale for Simplification

Our machine is capable of some output which changes its sensory input, say, motion: moving its body will change the orientation of its cameras. In so doing, the machine is co-performing another process which is necessary for bootstrapping the process of deduction. However, we will not be discussing this process quite yet, as we have a complicated enough road ahead of us at the moment. Bear with us, dear reader.

From Association to Noumenation

After our mechanism has created an internal catalog of sense impressions which are recognized as clustering together, the machine is now capable of transforming these sense impression entries into what might be considered “Noumenal Tokens,” or rather, “Object Concepts.” It should be stressed that this is not an automatic process. The actual mechanism for how this occurs, and why, requires more knowledge of the mechanism’s use of its functional output, i.e. the ability of the machine to produce some output which alters the input from the integrated sensory field device.

We will not be discussing this process in this article, as it would take us quite far off the route of the current topic of discussion. For simplicity’s sake, assume that the machine has created a Noumenal Token from the myriad sense impressions in its sense impression catalog. What, then, is the precise form of a Noumenal Token?

Practical Definition of a Noumenal Token

For our purposes, a Noumenal Token is simply some data structure which allows our machine to identify some subset of the integrated sensory field device, along with the state values of the sensory field device within and outside of that subset.

So what do we mean by this? Consider a computer monitor that is showing a picture of an apple on screen. A noumenal token would simply be some URI Descriptor data structure that is capable of rebuilding the apple in some meaningful sense, even when the apple is not actually on the screen. It is important to understand that a Noumenal Token is not simply a label which is assigned to the image data. It actually is a kind of encapsulating data structure that references and contains the image data itself, along with the addresses of the pixels on the screen where it was displayed. It is, essentially, “Sense Data” + “Sense Context”, i.e. what was sensed, which parts of the sense faculties actually did the sensing, and to what degree, etc.

I have so far made reference to the “sense data” because I wanted to impress upon the reader that the data is actually being recorded and stored somewhere in the mechanism’s own internal memory. However, when it comes to the precise form of said “sense data,” it must be stressed that the actual data format of a noumenal token on disk is going to be very different from a raw recording of sense states from specific sense receptor sites. The “concept” of an apple, internally, is going to end up containing a very compressed representation of the original sense data. Much like human beings, the machine does not record perfect sense impressions within itself (generally speaking). And indeed, humans often store little more than vague generalities of the sense impressions which ultimately form our conceptualized internal representations of what we behold.
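The "Sense Data + Sense Context" structure described above can be sketched as a small record type. Every field name and value here is a hypothetical illustration; in particular, the compressed `summary` stands in for whatever lossy representation a real implementation would use.

```python
from dataclasses import dataclass

# Hypothetical sketch: a noumenal token as "sense data" + "sense context" --
# a compressed summary of what was sensed, plus which subset of the sensory
# field did the sensing and in what state.

@dataclass(frozen=True)
class NoumenalToken:
    label: str        # "apple", for the human reader; the machine needs no word
    summary: tuple    # compressed, lossy representation of the sense data
    region: tuple     # which subset of the field sensed it (here, pixel bounds)
    field_state: str  # state of the sensing device at capture time

token = NoumenalToken(
    label="apple",
    summary=("red", "round"),        # vague generalities, like human memory
    region=((120, 80), (220, 180)),  # bounding box of the on-screen apple
    field_state="focused",
)

# The token is not a bare label: both content and context remain queryable
# even when the apple is no longer on screen.
print(token.summary, token.region)
```

The deliberate lossiness of `summary` mirrors the point that neither humans nor the machine store raw receptor recordings, only enough to rebuild the object "in some meaningful sense."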

Nonetheless and to Summarize:

Our mechanism has attached to it sensory devices which form unified sense impressions. The machine records a catalog of clumps of sensory data which tends to co-occur. These clumps of sensory data are transformed into Noumenal Tokens via a process which is not automatic, but which nonetheless will be discussed in greater detail in another article.

At last, we are now ready to take our first steps towards a machine conception of causality, which will lead us to deduction, inference, and finally, self-referential conceptualization, i.e. sentience… the next step of which we shall resume in the next article.