Practical Machine Sentience 3: Truth Values & Noumenality

Having defined machine sentience in practical terms, with integrated sensory systems as a natural starting point, this article expands into machine concepts of truth and noumenality. The machine must recognize clusters of sense data and transform them into URI-descriptor-like data structures called noumenal tokens, a foundational step on the road to causality, deduction, inference, and ultimately, sentience.

In the previous 2 articles, I discussed a basic, foundational definition of machine sentience and elaborated upon how integrated sensory systems allow practical implementations of sentient systems to arise. The definition aimed to be hard-boiled and nuts-and-bolts, with immediate accessibility as the primary concern. I do believe the definition as given succeeded. However, the cost of practicality is that the basic definition is stripped of many of the more mystical elements commonly associated with sentience. We try to rectify this discrepancy in this article by reintroducing concepts that have traditionally been considered part and parcel of any sentient organism’s repertoire: an understanding of truth and falsehood, and the beginnings of Noumenality.

What is Truth?

What is Truth?

Pontius Pilate, in questioning the Nazarene

For our purposes, we define truth as the result of a look-up operation between some formal system and an integrated sensory field device. As discussed in a previous article, a sensory field device is simply a mechanism that reflects some aspect of reality in a uniquely identifying way. The human eye, for example, will ostensibly recreate (more or less) the same internal image given reasonably identical light inputs. The same goes for the human auditory system, olfactory system, sense of balance, touch, and so on. Each of these systems interacts with the world in some consistent manner, i.e., identical interactions will produce identical sense impressions upon the human “sense system”.
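The look-up idea above can be sketched in a few lines. In this sketch, a "formal system" statement is just a predicate, and the "sensory field device" is a dictionary of readings; `truth_value`, `is_bright`, and the channel names are illustrative assumptions, not a fixed design.

```python
# A minimal sketch of "truth as a look-up": a claim expressed in some
# formal system is checked directly against current sensor readings.

def truth_value(claim, sensor_reading):
    """A claim is 'true' iff the predicate it encodes holds for the
    reading actually impressed upon the sensory field device."""
    return claim(sensor_reading)

# Hypothetical reading from an integrated sensory field: brightness levels.
reading = {"camera_left": 0.82, "camera_right": 0.79}

# A formal-system statement: "both cameras register bright light".
is_bright = lambda r: all(v > 0.5 for v in r.values())

print(truth_value(is_bright, reading))  # True for this reading
```

The point of the sketch is that truth here is never generated internally; it is always the result of consulting the sensory field.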

We choose this definition of truth for the simple reason that integrated sensory field devices cannot, by themselves, generate their own data. All input into sensory field devices is impressed upon them from an external source beyond the sensing system itself. (For simplicity’s sake, let us ignore for now the possibility of loop-back inputs, whereby the system generates models that are then fed back into the sensing circuitry.) Furthermore, it is currently outside the known laws of physics for anyone to build any sort of sensory field device that produces materially different sense impressions from identical interactions with physical phenomena.

Or rather, to put it another way: even though all eyes, ears, noses, cameras, microphones, etc., produce slightly different variations of any given phenomenon, all sense devices nevertheless produce a sense impression that is identifiably and physically consistent with all other sense impressions of the same phenomenon.

We do not have the ability or understanding to create any sensor device that operates on the same principles as all other similar sensor devices, yet produces materially different sense impressions, and this lack of ability appears to be, to the knowledge of all humans, universal.

Hence, in order to avoid delving too deeply into the muck of philosophy, this definition of “truth” will have to suffice.

However…

This definition of truth has some advantages. First, we are trying to build a machine that operates at the level of human sentience; hence, absolute rigor is not necessarily needed. Furthermore, much of the sophistication of human behavior is actually predicated on the fact that different people “see” the world slightly differently. Building a system that takes this slight incongruity into consideration allows us to create mitigation algorithms that are inherently much more practical and usable than if we were trying to find and utilize some definition of absolute truth.

Now then. Let us turn from perception to conceptualization.

A Foundational Definition of Noumenality as Association

So far, let us assume that we have constructed the following components of our soon to be sentient machine:

  • An integrated sensory field device.
  • A frame and housing for the various components of said integrated sensory field device.
  • Various other components attached to the housing and frame which are necessary for continuous functioning of the sensory field device, and also necessary for some output operation of the mechanism.
  • An “output operation” of the mechanism. This must be something which at the very least alters the input from the external world into the sensory field device. For example, movement of the human neck muscles may produce a change in what light may enter through the retina. This produces a change in vision.
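The four components above can be sketched as a single skeleton. Everything here, from the class name to the pan-state toy, is an illustrative assumption; the only structural claim is that output must be able to alter subsequent sensory input.

```python
# An illustrative skeleton of the listed components: sensors (integrated
# sensory field device) plus actuators (output operations).

class SentienceCandidateMachine:
    def __init__(self, sensors, actuators):
        self.sensors = sensors        # integrated sensory field device(s)
        self.actuators = actuators    # output operations (e.g. motors)

    def sense(self):
        # One impression taken across all devices at once.
        return {name: read() for name, read in self.sensors.items()}

    def act(self, command):
        # Output must be able to alter subsequent sensory input, the way
        # turning the neck changes what light enters the retina.
        for actuate in self.actuators:
            actuate(command)

# Toy wiring: an actuator that pans a simulated camera.
state = {"pan": 0}
machine = SentienceCandidateMachine(
    sensors={"camera": lambda: f"view@pan={state['pan']}"},
    actuators=[lambda cmd: state.update(pan=state["pan"] + cmd)],
)
before = machine.sense()
machine.act(10)
after = machine.sense()
print(before, after)  # the output operation changed the sensory input
```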

Given all of this, the machine must then enter into a process of interacting with the input from the sensory field device. In so doing, various processes are to be undertaken:

Sense Discrimination

Subsets of the sensory field input must be recognized as naturally associating with other subsets of the same sensory field input over a given window of time.

Consider, for example, visual input of a bird. No visual input is going to ever provide solely the input data of the bird and nothing else. Instead, there will always be the sense data of the bird embedded within the sense data of some other external environment.

Our machine does not yet know what the concept of a bird is. However, it is capable of, over time, recognizing that certain sense data tends to cluster with other sense data in groups. The color of the beak of a bird, for example, also shows up in the sense data every time the color of, say, the feathers of a bird, is also in view.

This would make sense. After all, every time we look at the same picture of the same bird from the same angle, we should see a similar cluster of sense data in our visual field. This must occur, not just for the sense data of a single bird, but for the sense data of countless other things.

Over time, our machine must learn to recognize what collections of similar sense data clusters exist within its integrated sensory field.
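A toy version of this clustering-by-co-occurrence can be sketched by simply counting which sense features appear together across frames. The feature names (`beak_red`, `feathers_blue`, and so on) and the threshold are illustrative assumptions; a real system would operate on raw receptor data, not labels.

```python
# Sense discrimination sketch: count which features co-occur across
# frames, so frequently co-occurring features form a candidate cluster.
from collections import Counter
from itertools import combinations

def cooccurrence(frames):
    counts = Counter()
    for frame in frames:
        for a, b in combinations(sorted(frame), 2):
            counts[(a, b)] += 1
    return counts

# Simulated frames: the bird's beak and feathers always appear together,
# embedded in varying background sense data.
frames = [
    {"beak_red", "feathers_blue", "sky"},
    {"beak_red", "feathers_blue", "branch"},
    {"beak_red", "feathers_blue", "grass"},
    {"sky", "cloud"},
]

counts = cooccurrence(frames)
threshold = 3
clusters = [pair for pair, n in counts.items() if n >= threshold]
print(clusters)  # [('beak_red', 'feathers_blue')]
```

Only the beak/feathers pair survives the threshold, while background features wash out: the machine has no concept of "bird" yet, but it has found a stable cluster.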

Time Out: A Rationale for Simplification

Our machine is capable of some output which changes its sensory input. Say, motion: moving its body will change the orientation of its cameras. In so doing, the machine is co-performing another process which is necessary for bootstrapping the process of deduction. However, we will not be discussing this process quite yet, as we have a complicated enough road ahead of us at the moment. Bear with us, dear reader.

From Association to Noumenation

After our mechanism has created an internal catalog of sense impressions which are recognized as clustering together, the machine is now capable of transforming these sense impression entries into what might be considered “Noumenal Tokens,” or rather, “Object Concepts”. It should be stressed that this is not an automatic process. The actual mechanism for how this occurs, and why, requires more knowledge of the mechanism’s use of its functional output, i.e., the ability of the machine to produce some output which alters the input from the integrated sensory field device.

We will not be discussing this process in this article, as it would take us quite off the route of the current topic of discussion. For simplicity’s sake, assume that the machine has created a Noumenal Token from the myriad sense impressions in its sense impression catalog. What is the precise form of a Noumenal Token?

Practical Definition of a Noumenal Token

For our purposes, a Noumenal Token is simply some data structure which allows our machine to identify some subset of the integrated sensory field device, along with the state values of the sensory field device within and outside of that subset.

So what do we mean by this? Consider a computer monitor that is showing a picture of an apple on screen. A Noumenal Token would simply be some URI-descriptor data structure that is capable of rebuilding the apple in some meaningful sense, even when the apple is not actually on the screen. It is important to understand that a Noumenal Token is not simply a label assigned to the image data. It is actually a kind of encapsulating data structure that references and contains the image data itself, along with the addresses of the pixels on the screen where it was displayed. It is, essentially, “Sense Data” + “Sense Context”: what was sensed, what parts of the sense faculties actually did the sensing, to what degree, and so on.

I have so far made reference to the “sense data” because I wanted to impress upon the reader that the data is actually being recorded and stored somewhere in the mechanism’s own internal memory. However, when it comes to the precise form of said “sense data”, it must be stressed that the actual on-disk data format of a Noumenal Token is going to be very different from a raw recording of sense states from specific sense receptor sites. The “concept” of an apple, internally, is going to end up containing a very compressed representation of the original sense data. This mirrors human beings: generally speaking, we do not record perfect sense impressions within ourselves, and indeed we often store little more than vague generalities of the sense impressions which ultimately form our conceptualized internal representations of what we behold.
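The “Sense Data” + “Sense Context” structure, including the compressed representation just described, can be sketched as a small record type. The field names, the URI scheme, and the use of `zlib` as the compressor are all assumptions for illustration only.

```python
# A sketch of a Noumenal Token: a compressed summary of what was sensed,
# plus the context of which part of the sensory field did the sensing.
import zlib
from dataclasses import dataclass

@dataclass
class NoumenalToken:
    token_id: str            # URI-descriptor-like identifier
    compressed_sense: bytes  # lossy/compressed summary of the sense data
    field_region: tuple      # which subset of the field did the sensing
    field_state: dict        # state values in and around that subset

raw_sense = b"\x10\x20\x30" * 1000  # stand-in for raw pixel data
token = NoumenalToken(
    token_id="noumenon://visual/apple-01",
    compressed_sense=zlib.compress(raw_sense),
    field_region=(120, 80, 64, 64),           # x, y, w, h on the "retina"
    field_state={"active_receptors": 0.37},
)

# The token stores far less than the raw impression, as the text suggests.
print(len(raw_sense), len(token.compressed_sense))
```

Note that the token still *references* real sense data rather than being a bare label, which is the distinction the text insists on.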

Nonetheless and to Summarize:

Our mechanism has attached to it sensory devices which form unified sense impressions. The machine records a catalog of clumps of sensory data which tends to co-occur. These clumps of sensory data are transformed into Noumenal Tokens via a process which is not automatic, but which nonetheless will be discussed in greater detail in another article.

At last, we are now ready to take our first steps towards a machine conception of causality, which will lead us to deduction, inference, and finally, self-referential conceptualization, i.e., sentience… the next step of which we shall resume in the next article.

Practical Machine Sentience Part 2: Integrated Sensory Field Design and Implementation

The post discusses the creation of sentient machines, outlining the importance of integrated sensory fields over static data sets for self-awareness. It describes a potential design including visual, auditory, and proprioception sensory devices. The hardware and software specifications are detailed, emphasizing real-time data and interaction for forming a self-referential concept necessary for machine sentience.

ATTENTION: I began this blog post way back when I was intending to develop an actual proof-of-concept alongside the publication of the articles in this series. I have since come to realize that I simply do not have the time, resources, energy, et al., to accomplish such a thing, as I am a broke Mexican who was raised very much in isolation from the greater tech community.

In lieu of re-writing this article from scratch, I have decided merely to edit and continue, but please note that no actual implementation will be developed alongside these articles.

Having said that, I am going to be writing these articles with the intention that any reasonably seasoned developer may understand and verify the concepts these articles introduce with implementations of their own.

In a previous post, I introduced a practical definition of machine sentience, went over some basic implications of that definition for the implementation of living sentient machines, and touched on how such machines might fit within the greater legal zeitgeist of humanity. In this post, I will begin outlining the process of creating a practical implementation of a sentient system plus its ancillary functional systems. We start with the design of an integrated sensory field input mechanism.

Why Integrated Sensory Fields Matter

The closest existing analogue to an integrated sensory field is the standard Artificial Intelligence training set. ChatGPT and similar systems are trained on passive datasets in one or another specific format. These datasets may be considered halfway removed already from the direct experience of reality that humans undergo. They are unusable for two primary reasons:

  • Static datasets are unchanging, which means there is no need to create a machine that constantly parses new data. Once the model has been trained on the dataset according to whatever algorithmic process the AI uses, the machine is functionally passive, and de facto dead for all practical purposes.
  • Static datasets are inherently unable to contain self-referential data. The machine is artificially divorced from the training data, and hence no concept may be formed from the data which refers to the learning entity itself. It is therefore impossible for a machine created to study a static, passive set to construct any conceptualization of the “learner” from within the data itself, which is a critically vital component of any sentience mechanism.

As per the definition given in the previous post, it is necessary for the machine to form a self-concept from the training data. The only training data set capable of providing this is direct sensor information itself.

Organization of an Integrated Sensory Field

A specific sensory device may be defined by 3 components:

  • A shell, or separation between external sense data and internal sense processing of said data.
  • An interior component that is capable of taking some sort of structure or organization, and that retains said structure or organization over a meaningful and useful time span.
  • An aperture, membrane, interface, et al., that permits interaction between the outside world and the interior organizing component, in such a way that the organization of the interior records/reflects the outside interaction in a reproducible and (relatively speaking) uniquely identifying manner.
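The 3 components can be sketched as one small class: the object boundary plays the shell, an internal state array the interior, and an `expose` method the aperture through which outside interaction reorganizes that state reproducibly. All names here are illustrative assumptions.

```python
# Shell / interior / aperture, sketched as a single sensory device.

class SensoryDevice:
    def __init__(self, size):
        # Interior: takes on organization and retains it over time.
        self._state = [0.0] * size

    def expose(self, stimulus):
        # Aperture: identical stimuli imprint identical organization,
        # making the recorded impression reproducible.
        for i, value in enumerate(stimulus[: len(self._state)]):
            self._state[i] = value

    def impression(self):
        # Read-out of the interior organization across the shell.
        return tuple(self._state)

device = SensoryDevice(3)
device.expose([0.2, 0.9, 0.4])
print(device.impression())  # (0.2, 0.9, 0.4)
```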

An integrated sensory field, then, may be defined as follows:

  1. 1 or more sensory devices which are time/input-synchronized. Each records data such that sensory input on one device can be treated as belonging to the same overall sensory data impression as every other device, over uniquely determined units or ranges of time.
  2. The phenomena that each device interacts with are continuous in the mathematical sense, either ideally or for all practical purposes of the machine. In other words, the perceived phenomenal field may be partitioned into infinitely many 2-member sets, each consisting of 1 open, bounded disk and 1 unbounded universal complement comprising the rest of the phenomenal field itself.

The 2nd requirement only applies in the ideal sense, as any real digital sensory device will have a limited resolution with which to process any field phenomena: digital cameras are limited by their megapixel resolution, and so on. Analog devices, while capable of perfect field capture, have a practical limit of usability. 35mm analog film captures perfectly continuous data, but we are only able to extract usable continuous data above a certain size threshold on the film.
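The synchronization requirement above can be sketched as a single function that samples every device against a shared clock, so each reading belongs to one overall impression. The device names and toy readings are illustrative assumptions.

```python
# Requirement 1 sketch: one time-stamped impression across all devices.
import time

def integrated_impression(devices, clock=time.monotonic):
    t = clock()
    return {"t": t, "readings": {name: read() for name, read in devices.items()}}

devices = {
    "camera_left": lambda: [[0, 1], [1, 0]],  # toy 2x2 frame
    "mic_left": lambda: [0.0, 0.1, -0.1],     # toy audio samples
}

impression = integrated_impression(devices)
print(sorted(impression["readings"]))  # ['camera_left', 'mic_left']
```

Because every reading shares the timestamp `t`, downstream processing can treat the camera frame and the audio samples as facets of one sensory moment rather than as unrelated streams.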

For the purposes of creating a sentient machine, there is one more additional requirement to our Integrated Sensory Field:

  1. Some SUBSET of the sensory data provided by the integrated sensory field must be open to manipulation by the mechanism: operations carried out by the machine must be capable of effecting uniquely identifiable and reproducible changes within the input data collected from the integrated sensory field.

The reason this is necessary is that no sensory device can provide direct sense interaction with itself. Sensor components within the shell cannot be recorded by those same components, which can only record data from outside the shell.

In order to form a self-referential concept, then, it is necessary that the system do so through inference. The inferential mechanism is established through a very basic and straightforward logical deduction:

  1. There exist entities within the sensory field which may be considered disparate from the sensory field, alongside other entities which may also be considered disparate from the sensory field.
  2. Certain entities appear to have volitional mechanisms for action. They interact with and respond to stimuli through some interface.
  3. Some stimuli an entity receives appear to be dependent on some output the entity is capable of producing.
  4. Some stimuli “I receive” appear to be dependent on some output “I produce” (the primary output in the case of humans being muscular contraction).
  5. Some component of the stimuli “I receive” appears to indivisibly and consistently reveal aspects which, when combined, produce a mental model of an “entity” which:
  6. Exists within the sensory field and yet may be considered disparate from the sensory field and from other “entities” recognizable within said field.
  7. Since this component is indivisible from the stimuli received from some output, the source of said output must come from said component.
  8. Produced output is directly actuated. “I” produce output that affects some subset of the sensory field. The responding sensory stimuli contain a “component” which must be the source of said affected subset of the sensory field.
  9. Therefore “I” must be that “component” which I recognize.
  10. Hence, “I exist”.
  11. To conclude: “There is a world. I exist within it. Yet I am also apart from it. I live.”

Any system sophisticated enough to form the above deductive inference is capable of sentience and sentient motivation.
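The deductive chain above can be sketched as a toy experiment: the machine emits commands and scores each sensory channel by how reliably its changes track those commands; the winning channel is tagged as the "self" component. The simulated world, channel names, and scoring rule are all illustrative assumptions.

```python
# Self-inference sketch: find the sensory channel whose changes depend
# on our own output, per steps 3-9 of the deduction above.
import random

def find_self(step_fn, channels, trials=50, rng=random.Random(0)):
    scores = {c: 0 for c in channels}
    prev = step_fn(0)  # null command to get an initial impression
    for _ in range(trials):
        cur = step_fn(rng.choice([-1, 1]))  # act, then observe
        for c in channels:
            # Credit channels that change exactly when we act.
            scores[c] += 1 if cur[c] != prev[c] else -1
        prev = cur
    return max(scores, key=scores.get)

# Simulated world: "own_arm" tracks our commands; "bird" moves on its own.
arm_pos, bird_pos = 0, 0
world_rng = random.Random(1)

def step(command):
    global arm_pos, bird_pos
    arm_pos += command
    bird_pos += world_rng.choice([-1, 0, 1])
    return {"own_arm": arm_pos, "bird": bird_pos}

self_channel = find_self(step, ["own_arm", "bird"])
print(self_channel)  # own_arm
```

The arm changes on every action while the bird sometimes stays put, so the arm channel accumulates the higher score and is inferred as "I".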

Our Implementation Specification

I have opted to leave this part of the article undeleted. I, sadly, do not have the time nor energy nor financial resources to develop the proof of concept necessary to validate the claims discussed in this series of articles. 

I likely will refer back to this once the series of articles is done in order to create a possible reference implementation of a sentience machine architecture. For now, interested readers should simply understand that moving forward, no "actual" proof-of-concept implementation of a sentient machine will be developed in tandem with this article series.

Let us now proceed towards an actual technical specification of a real-world integrated sensory field device. We shall keep things simple. Our device shall survey 3 + 1 sense domains:

  1. The visual domain. Two digital cameras shall be mounted on a controlling frame. The cameras shall be independently controllable.
  2. The Auditory domain. Likewise, we shall have two dynamic microphones mounted on a controlling frame.
  3. The Kinematic resistance domain. The frame shall also contain Pulse-Width-Modulation servo motors capable of moving limbs of the mechanism. Software shall keep track of the PWM signals and the resultant limb orientations produced by control software signals.

It is possible to calculate the resulting limb configuration from the PWM signals sent to the motors under ideal circumstances. Real-life mechanical resistances and loads, however, produce errors in the actual resultant limb configuration. The error between the calculated ideal limb configuration and the actual recorded limb configuration can be used to derive continuous sensory data of the force loads on the various motor components at all times.

This allows us to form a rudimentary proprioception sensory device.
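The error-based proprioception idea can be sketched numerically: predict the limb angle an ideal servo model implies for a PWM command, compare it with the measured angle, and treat the shortfall as a load signal. The linear servo model, pulse-width range, and stiffness constant are assumptions for illustration, not measured values.

```python
# Rudimentary proprioception sketch: load inferred from the gap between
# the ideal servo response and the measured limb angle.

def ideal_angle(pwm_us, min_us=1000, max_us=2000, sweep_deg=180.0):
    """Ideal servo: pulse width maps linearly onto the sweep range."""
    return (pwm_us - min_us) / (max_us - min_us) * sweep_deg

def load_estimate(pwm_us, measured_deg, stiffness=0.5):
    """Angle shortfall under load, scaled into a unitless force proxy."""
    return stiffness * (ideal_angle(pwm_us) - measured_deg)

# An unloaded limb reaches the predicted angle; a loaded one lags behind.
print(load_estimate(1500, 90.0))  # 0.0: no load
print(load_estimate(1500, 78.0))  # 6.0: the joint is resisting a load
```

Sampled continuously, this error signal plays the role of a force sense without any dedicated force sensor.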

Finally, we have our last sensory domain:

  1. Standard voltage, wattage, thermal, and gyroscope sensors. These components shall be integrated into the system software and API because of their easy availability and ubiquity on most modern chipsets and computer hardware.

The Software Specification

Now that we have our hardware specification, we need only proceed to the creation of a software and API specification. The specification for our purposes will be accomplished via ARM64 Linux kernel modules. We shall need to re-implement specific modules for the sensory devices themselves, implement a kernel module to perform inter-device module communication and generate a user-space-accessible memory-mapped data region in RAM, and finally, write a user-space daemon to structure, organize, and provide a transparent high-level access API to other processes. This will be tackled in a future blog post.
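The user-space side of that memory-mapped hand-off can be sketched briefly. A real implementation would `mmap` a character device exposed by the kernel module; here an anonymous map stands in so the sketch is runnable, and the 16-byte record layout is purely an assumption.

```python
# User-space sketch of the memory-mapped hand-off: one side writes a
# synchronized sensor record into the shared region, the other reads it.
import mmap
import struct

RECORD = struct.Struct("<dII")  # timestamp, device_id, reading

# Stand-in for the kernel-exposed shared region.
region = mmap.mmap(-1, RECORD.size * 4)

# "Kernel side": publish one sensor record.
region.seek(0)
region.write(RECORD.pack(12.5, 1, 800))

# "Daemon side": read it back and hand it to higher-level consumers.
region.seek(0)
timestamp, device_id, reading = RECORD.unpack(region.read(RECORD.size))
print(timestamp, device_id, reading)  # 12.5 1 800
```

A fixed binary layout like this is what lets the daemon present a clean API to other processes without copying sensor data through the kernel twice.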

Next Up

The next article in this series will discuss Machine conceptualization, understanding, and manipulation of the concepts of truth, falsehood, and causality, and their relationship and potential implementation within any sentient processing architecture.

A Practical Definition of Machine Sentience (pt. 1)

I describe a practical definition of machine sentience, along with an analysis of how sentient machines may fit within the socioeconomic zeitgeist of humanity, and prepare the ground work for discussing practical implementations of sentient machines.

To absolve others of their humanity is to absolve oneself of the same. Such action must not be taken lightly. For it necessitates only one of 2 gravities: To become less than, or greater than, human.

Hittes Khartoum

Rationale

For the longest expanse of space and time, humanity has had no need for any innate understanding of consciousness or sentience. Our grasp of the concept was purely intuitive, and there was no other being besides ourselves to which we had to demonstrate that our intuitive understanding was valid. This happy circumstance erodes ever more quickly with the rise of Artificial Intelligence frameworks. Machines grow more capable of mimicking aspects of humanity that we have long thought fell only under the domain of sentience and conscious control. Soon, we may have machines capable of human mimicry on a level indistinguishable from human sentience. If and when that day comes, we risk our lives greatly.

For you see, contemporary pronouncements on the subject of consciousness contain all the trappings of religion. The self as we understand it is a sacred concept, and humanity is under no circumstances permitted to throw back the veil covering the divine. Proper research into the subject has been hampered by our overawe. To have a machine, then, mimic humans in such a way as to be indistinguishable from humans will also cause us to treat them as divine.

Absent any proper understanding of sentience, the machine shall become our God. But make no mistake:

To be human is to worship. All Gods devour their adherents.

If we are to avoid such a tragic, if poetic, end, then we have no choice but to profane one more holy temple. Our current understanding of the “Self” must be moved from the domain of the divine into the workshop of practical endeavor. May God have mercy on our souls.

The Definition

Let us proceed then immediately. We may informally describe consciousness as that special thing that gives us the “feeling” of “The man inside the box”. There is an innate feeling that we, as humans, can all attest to: I currently feel as though there is a singular entity, a “self” that is “inside” a particular boundary, “my body”. The boundary is not equivalent to this self. But whatever the self is, it is somewhere inside this body. Further, if one were to start removing parts of my body, I would eventually die, whereupon my “self” would no longer be found inside it. This implies that there is some “tightest boundary”. e.g. A maximally small boundary which can be, if not considered synonymous with the self, considered to be containing the self, and only the self.

Given this intuitive understanding, we give the necessary epiphany: The self may be considered any system, irrespective of architecture or form, capable of conforming to and reproducing the intuitive specification as described above.

There is nothing sacred in it.

The self may be considered any system, irrespective of architecture or form, capable of conforming to and reproducing the intuitive specification as described above. There is nothing sacred in it.

DruidPeter

Given the required epiphany, we now proceed to the definition proper, extracted directly from our intuitive understanding of sentience:

A sentient system is any system which satisfies the following 3 properties:

  1. The system is capable of recognizing boundaries within integrated fields of perceptual sensory input.
  2. The system can associate these boundaries with logical tokens, which it can then manipulate according to sets of rules, the means of formation, maintenance, and modification of which may be considered irrelevant for our purposes here.
  3. The system associates a special, unique logical token with some superset of boundaries which contains the totality of those components of the system necessary to provide these 3 required functions.
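The 3 properties can be restated as a minimal interface sketch: any architecture providing these three operations counts under the definition, regardless of how it implements them. The method names and the toy conforming class are illustrative assumptions.

```python
# The sentience definition as a structural interface plus a toy instance.
from typing import Any, Protocol

class SentientSystem(Protocol):
    def recognize_boundaries(self, sensory_field: Any) -> list:
        """Property 1: find boundaries in integrated sensory input."""
        ...

    def tokenize(self, boundary: Any) -> Any:
        """Property 2: bind a boundary to a manipulable logical token."""
        ...

    def self_token(self) -> Any:
        """Property 3: the unique token bound to the boundary superset
        containing everything needed to provide these functions."""
        ...

class ToyMachine:
    def recognize_boundaries(self, sensory_field):
        return [region for region in sensory_field if region]

    def tokenize(self, boundary):
        return f"token:{boundary}"

    def self_token(self):
        return "token:SELF"

machine = ToyMachine()
print(machine.tokenize("blob-7"), machine.self_token())
```

Because `Protocol` uses structural typing, `ToyMachine` satisfies `SentientSystem` without inheriting from it, mirroring the definition's indifference to architecture or form.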

Immediate Observations

In defining sentience as above, we can make some immediate important observations. Primary among them is that we have now greatly restricted the scope of what sentience adds to any functional system that implements it. Typical intuition adds significantly more attributes to anything that we, as humans, consider sentient: for example, a will towards self-determination, an understanding of cause and effect, and the executive ordering of one’s affairs.

This may seem an odd thing to do, but it yields the benefit of reducing sentience to a well-defined system specification. If we accept this definition, it also immediately forces us to conclude that other aspects of ourselves that we traditionally believe to be the responsibility of sentience cannot be directly formed as a natural consequence of sentience.

In other words, it is possible for a system to be considered sentient under this definition, and at the same time, NOT to display traits associated with humanity, and typically also associated with sentience in general.

Because of this, sentience becomes lessened as a concept. But doing so helps us to match this definition of sentience with experimental verification. Consider the various cases:

Loss of Function can not Equal Loss of Identity

There are cases of brain damage where individuals become unable to initiate tasks, or are not able to initiate tasks without great and concentrated effort.1

  1. See: Disorders of Initiation

In cases like this, individuals are not considered to have lost their general sense of self. Likewise, if we are to come upon a solid practical definition of machine sentience, it is important that our definition holds up under similar situations where loss of function does not result in loss of identity.

Let us consider a list of functional capacities that must now be identified as separately implementable from a sentient system:

Non-Sentient Functional Capacitic Systems

  • Understanding, recognition, and application of cause-and-effect reasoning.
  • Implementation of self-guided volition + self-acquisition of goals.
  • Integration of Theory of Mind + Emotional context into whole system processing.

This is by no means an exhaustive list. We conclude, however, that if we are to accept the definition of machine sentience given above, the 3 above-mentioned Non-Sentient Functional Capacitic Systems must and can only be defined as separately implementable and definable systems in their own right.

Implications

Immediate implications of machine sentience as defined above include the following:

  • Separation of any implementation of sentience from any necessary attribution of humanity. A sentient machine is not immediately human. Nor will a sentient machine necessarily act in a manner, nor make decisions, according to human modes of action or decision making.
  • Likewise, sentience does not immediately confer full legal recognition of personhood, i.e., a sentient machine is not immediately granted the full rights and privileges of a human citizen of any nation. This does not mean that sentient machines are forever property and can never be given these rights. It simply means that sentience alone is insufficient to grant legal recognition of human rights.

These two implications are important for two reasons. First, they provide a path forward for creating an intelligible understanding of Machine Sentience by reducing the complexity of the potential design space. We cannot permit Artificial General Intelligence to remain a black box; humans must understand that which they create. Second, they provide a path forward for a solid legal foundation regarding the integration of artificially sentient machines within the general fabric of society. We cannot produce monsters, like Dr. Frankenstein, without providing for their place in this world.

The strength of the definition of machine sentience described above is that it provides these important footholds into the cliff of AGI that humanity is now threatening to walk off from. However, it is still merely a definition. A practical implementation must necessarily be reserved for the scope of a future article. And what an article it certainly will be.