Practical Machine Sentience Part 2: Integrated Sensory Field Design and Implementation

This post discusses the creation of sentient machines, outlining why integrated sensory fields matter more than static data sets for self-awareness. It describes a potential design including visual, auditory, and proprioceptive sensory devices. The hardware and software specifications are detailed, emphasizing real-time data and interaction for forming the self-referential concept necessary for machine sentience.

ATTENTION: I began this blog post way back when I was intending to develop an actual proof-of-concept alongside the publication of the articles in this series. I have since come to realize that I simply do not have the time, resources, or energy to accomplish such a thing, as I am a broke Mexican who was raised very much in isolation from the greater tech community.

In lieu of re-writing this article from scratch, I have decided merely to edit and continue, but please note that no actual implementation will be developed alongside these articles.

Having said that, I am going to be writing these articles with the intention that any reasonably seasoned developer may understand and verify the concepts these articles introduce with implementations of their own.

In a previous post, I introduced a practical definition of machine sentience, and went over some basic implications of that definition for the implementation of living sentient machines, as well as implications regarding how such machines might fit within the greater legal zeitgeist of humanity. In this post, I will begin outlining the process of creating a practical implementation of a sentient system plus its ancillary functional systems. We start with the design of an integrated sensory field input mechanism.

Why Integrated Sensory Fields Matter

The closest existing analogue to an integrated sensory field is the standard Artificial Intelligence training set. ChatGPT and similar systems are trained on passive data sets in one specific format or another. These data sets are already at least one step removed from the direct experience of reality that humans undergo. They are unusable for our purposes for two primary reasons:

  • Static data sets are unchanging, which means that there is no need to create a machine that constantly parses new data. Once the model has been trained on the data set, by whatever algorithmic process the AI uses, the machine is functionally passive, and de facto dead for all practical purposes.
  • Static data sets are inherently unable to contain self-referential data. The machine is artificially divorced from its training data, and hence no concept formed from the data may refer to the learning entity itself. It is therefore impossible for a machine trained on a static, passive data set to construct any conceptualization of the “learner” from within the data itself, which is a critically vital component of any sentience mechanism.

As per the definition given in the previous post, it is necessary for the machine to form a self-concept from the training data. The only training data set capable of providing this is direct sensor information itself.

Organization of an Integrated Sensory Field

A specific sensory device may be defined by three components (a minimal code sketch follows the list):

  • A shell, or separation between external sense data and internal sense processing of said data.
  • An interior component that is capable of taking some sort of structure or organization, and that retains said structure or organization over a meaningful and useful time span.
  • An aperture, membrane, interface, or the like, that permits interaction between the outside and the interior organizing component, in such a way that the organization of the interior records/reflects the outside interaction in a reproducible and (relatively speaking) uniquely identifying manner.
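To make these three components concrete, here is a minimal sketch in Python (all names here are my own illustrative inventions, not a prescribed design): the object boundary plays the role of the shell, an internal buffer plays the role of the interior, and a single stimulate() method serves as the aperture through which outside interaction reorganizes the interior.

```python
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class SensoryDevice:
    """Toy model of a sensory device: shell, interior, and aperture."""
    name: str
    # The "interior": a structure that takes on organization and
    # retains it over a useful time span.
    interior: List[Tuple[float, float]] = field(default_factory=list)

    def stimulate(self, timestamp: float, magnitude: float) -> None:
        """The "aperture": outside interaction reorganizes the interior
        in a reproducible, (relatively) uniquely identifying way."""
        self.interior.append((timestamp, magnitude))

    def readout(self) -> List[Tuple[float, float]]:
        """Internal processing reads the retained organization; nothing
        outside the "shell" touches the interior except via stimulate()."""
        return list(self.interior)
```

The same stimulus at the same time always leaves the same trace, which is what “reproducible and uniquely identifying” amounts to in this toy model.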

An integrated sensory field, then, may be defined as follows:

  1. One or more sensory devices which are time/input-synchronized. Each records data such that sensory input on one device can be treated as belonging to the same overall sensory impression as input on every other device, over uniquely determined units or ranges of time.
  2. The phenomena that each device interacts with are continuous in the mathematical sense, either ideally or for all practical purposes of the machine. In other words, the perceived phenomenal field may be partitioned into infinitely many 2-member sets, each consisting of one open, bounded disk (see Wikipedia on bounded sets) and one unbounded complement (see Wikipedia on complements), which together comprise the phenomenal field itself (a formal sketch follows the list).
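One hedged way to read the 2nd condition formally, as a sketch under the assumption that the phenomenal field is modeled as all of n-dimensional real space (a modeling choice of mine, not something fixed by the design above):

```latex
% For every point p and radius r > 0, the open disk B_r(p) and its
% unbounded complement form a two-member partition of the field:
\forall\, p \in \mathbb{R}^n,\ \forall\, r > 0:\qquad
B_r(p)\ \cup\ \bigl(\mathbb{R}^n \setminus B_r(p)\bigr) = \mathbb{R}^n,
\qquad
B_r(p)\ \cap\ \bigl(\mathbb{R}^n \setminus B_r(p)\bigr) = \varnothing .
```

Since the radius may be taken arbitrarily small, infinitely many such partitions exist; a digital device only honors this down to its resolution limit, which is exactly the caveat discussed next.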

The 2nd condition only applies in the ideal sense, as any real digital sensory device will have a limited resolution with which to process any field phenomena. Digital cameras are limited by their megapixel resolution, and so on. Analog devices, while capable of perfect field capture, have a practical limit of usability: 35mm analog film captures continuous data, but we are only able to extract usable continuous data above a certain size threshold on the film.
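A minimal sketch of the time/input-synchronization requirement, in Python and building on the toy SensoryDevice above (again, every name is illustrative): readings from all devices are stamped against one shared clock so that later processing can treat them as a single impression.

```python
import time
from typing import Dict, List, Tuple


class IntegratedSensoryField:
    """Groups readings from several devices into time-aligned impressions."""

    def __init__(self, devices: List["SensoryDevice"]):
        self.devices = devices

    def capture_impression(self) -> Dict[str, List[Tuple[float, float]]]:
        """Sample every device against one shared timestamp.

        Returns a mapping of device name -> readings, all tagged with the
        same capture time, so downstream processing can treat the whole
        mapping as one sensory impression.
        """
        stamp = time.monotonic()
        impression: Dict[str, List[Tuple[float, float]]] = {}
        for device in self.devices:
            # A real driver would sample hardware here; the toy version
            # simply re-stamps whatever the device's interior holds.
            impression[device.name] = [(stamp, magnitude)
                                       for _, magnitude in device.readout()]
        return impression
```

With a fast enough capture loop, successive impressions approximate the continuous field described in the 2nd condition.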

For the purposes of creating a sentient machine, there is one additional requirement for our Integrated Sensory Field:

  1. Some SUBSET of the sensory data provided by the integrated sensory field must be susceptible to manipulation by the mechanism itself: operations carried out by the machine must be capable of effecting uniquely identifiable and reproducible changes within the input data collected from the integrated sensory field.

This is necessary because no sensory device can provide direct sense interaction with itself. Sensor components within the shell cannot be recorded by sensor components within the shell; those components can only record data from outside the shell.

In order to form a self-referential concept, then, it is necessary that the system do so through inference. The inferential mechanism is established through a basic and straightforward logical deduction (one way to operationalize it is sketched after the list):

  1. There exist entities within the sensory field which may be considered disparate from the sensory field, and other entities which likewise may be considered disparate from the sensory field.
  2. Certain entities appear to have a volitional mechanism for action. They interact with and respond to stimuli through some interface.
  3. Some stimuli an entity receives appear to depend on some output the entity is capable of producing.
  4. Some stimuli “I receive” appear to depend on some output “I produce” (the primary output in the case of humans is muscular contraction).
  5. Some component of the stimuli “I receive” appears to indivisibly and consistently reveal aspects which, when combined, may produce a mental model of an “entity” which:
  6. Exists within the sensory field and yet may be considered disparate from the sensory field and from other “entities” recognizable within said field.
  7. Since this component is indivisible from the stimuli received in response to some output, the source of said output must be said component.
  8. Produced output is directly actuated. “I” produce output that affects some subset of the sensory field. The responding sensory stimuli contain a “component” which must be the source of said affected subset of the sensory field.
  9. Therefore “I” must be that “component” which I recognize.
  10. Hence, “I exist”.
  11. To conclude: “There is a world. I exist within it. Yet I am also apart from it. I live.”
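A rough sketch of how steps 3 through 9 might be operationalized, in Python (the function, its parameters, and the correlation threshold are all hypothetical choices of mine, not a prescribed algorithm): the mechanism issues randomized output commands, then checks which labeled component of the sensed field tracks those commands; any component that tracks them is a candidate for the reserved “self” token.

```python
import random
from typing import Callable, Dict, List


def infer_self_components(
    actuate: Callable[[float], None],
    sense_components: Callable[[], Dict[str, float]],
    trials: int = 200,
    threshold: float = 0.9,
) -> List[str]:
    """Return the sensed components whose readings covary with our own output.

    actuate(x): drive the mechanism's output channel with magnitude x.
    sense_components(): current sensed magnitude of each labeled component.
    """
    commands: List[float] = []
    history: Dict[str, List[float]] = {}

    for _ in range(trials):
        command = random.uniform(-1.0, 1.0)
        actuate(command)
        commands.append(command)
        for name, value in sense_components().items():
            history.setdefault(name, []).append(value)

    def correlation(xs: List[float], ys: List[float]) -> float:
        n = len(xs)
        mean_x, mean_y = sum(xs) / n, sum(ys) / n
        cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
        sd_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
        sd_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
        return cov / (sd_x * sd_y) if sd_x and sd_y else 0.0

    # Components that consistently track our own commands are the ones
    # the deduction above tags as "I".
    return [
        name for name, values in history.items()
        if len(values) == trials and abs(correlation(commands, values)) >= threshold
    ]
```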

Any system sophisticated enough to form the above deductive inference is capable of sentience and sentient motivation.

Our Implementation Specification

I have opted to leave this part of the article undeleted. I, sadly, do not have the time, energy, or financial resources to develop the proof of concept necessary to validate the claims discussed in this series of articles.

I will likely refer back to this once the series of articles is done, in order to create a possible reference implementation of a sentient machine architecture. For now, interested readers should simply understand that, moving forward, no "actual" proof-of-concept implementation of a sentient machine will be developed in tandem with this article series.

Let us now proceed towards an actual technical specification of a real-world integrated sensory field device. We shall keep things simple. Our device shall survey 3 + 1 sense domains:

  1. The visual domain. Two digital cameras shall be mounted on a controlling frame. The cameras shall be independently controllable.
  2. The Auditory domain. Likewise, we shall have two dynamic microphones mounted on a controlling frame.
  3. The Kinematic resistance domain. The frame shall also contain Pulse-Width-Modulation (PWM) servo motors capable of moving limbs of the mechanism. Software shall keep track of the PWM control signals and the resultant limb orientations.

It is possible to calculate the resulting limb configuration from the PWM signals sent to the motors under ideal circumstances. Real-life mechanical resistances and loads, however, produce errors in the actual resultant limb configuration. The error between the calculated ideal limb configuration and the actual recorded limb configuration can be used to derive continuous sensory data about the force loads on the various motor components at all times.

This allows us to form a rudimentary proprioception sensory device.
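As an illustrative sketch of this proprioception idea (Python; the 1000–2000 microsecond pulse range and the 180-degree sweep are typical hobby-servo assumptions, not measurements of any particular hardware):

```python
def pwm_to_ideal_angle(pulse_width_us: float,
                       min_us: float = 1000.0,
                       max_us: float = 2000.0,
                       max_angle_deg: float = 180.0) -> float:
    """Ideal (no-load) servo angle for a given PWM pulse width.

    Assumes a typical hobby servo mapping 1000-2000 microseconds onto
    0-180 degrees; real hardware will differ.
    """
    fraction = (pulse_width_us - min_us) / (max_us - min_us)
    return max(0.0, min(1.0, fraction)) * max_angle_deg


def proprioceptive_error(pulse_width_us: float,
                         measured_angle_deg: float) -> float:
    """Signed gap between the commanded (ideal) and measured joint angle.

    Under load the joint lags its commanded position, so this error acts
    as a crude, continuous signal of the force on the limb.
    """
    return pwm_to_ideal_angle(pulse_width_us) - measured_angle_deg
```

Sampled continuously for every joint, this error signal is the proprioceptive sense described above.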

Finally, we have our last sensory domain:

  1. Standard voltage, wattage, thermal, and gyroscope sensors. These components shall be integrated into the system software and API because of their easy availability and ubiquity on most modern chipsets and computer hardware.

The Software Specification

Now that we have our hardware specification, we need only proceed to the creation of a software and API specification. For our purposes, the specification will be realized via ARM64 Linux kernel modules. We shall need to re-implement specific modules for the sensory devices themselves, implement a kernel module to perform inter-device module communication and to expose a user-space-accessible memory-mapped data region in RAM, and finally provide a user-space daemon to structure, organize, and offer a transparent high-level access API to other processes. This will be tackled in a future blog post.
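For illustration only, the user-space side of that memory-mapped region might look something like the following Python sketch. The device path /dev/isf0 and the region size are hypothetical, since no kernel module actually exists; the point is simply that the daemon opens a character device exported by the inter-device module and maps its shared region into its own address space.

```python
import mmap
import os

DEVICE_PATH = "/dev/isf0"        # hypothetical char device exported by the kernel module
REGION_SIZE = 4 * 1024 * 1024    # assumed size of the shared sensory region (4 MiB)

fd = os.open(DEVICE_PATH, os.O_RDONLY)
try:
    # Map the kernel-populated region read-only into this process.
    region = mmap.mmap(fd, REGION_SIZE, prot=mmap.PROT_READ)
    try:
        # A real daemon would parse a layout header here and expose the
        # structured sensory data to other processes via a high-level API.
        header = bytes(region[:16])
        print("first bytes of shared sensory region:", header.hex())
    finally:
        region.close()
finally:
    os.close(fd)
```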

Next Up

The next article in this series will discuss machine conceptualization, understanding, and manipulation of the concepts of truth, falsehood, and causality, and their relationship to, and potential implementation within, any sentient processing architecture.

A Practical Definition of Machine Sentience (pt. 1)

I describe a practical definition of machine sentience, along with an analysis of how sentient machines may fit within the socioeconomic zeitgeist of humanity, and prepare the groundwork for discussing practical implementations of sentient machines.

To absolve others of their humanity is to absolve oneself of the same. Such action must not be taken lightly. For it necessitates only one of 2 gravities: To become less than, or greater than, human.

Hittes Khartoum

Rationale

For the longest expanse of space and time, humanity has had no need for any innate understanding of consciousness or sentience. Our grasp of the concept was purely intuitive, and there was no other being besides ourselves to which we had to demonstrate that our intuitive understanding was valid. This happy circumstance erodes ever more quickly with the rise of Artificial Intelligence frameworks. Machines grow more capable of mimicking aspects of humanity that for so long we have thought fell only under the domain of sentience and conscious control. Soon, we may have machines capable of human mimicry on a level indistinguishable from human sentience. If and when that day comes, we risk our lives greatly.

For you see, contemporary pronouncements on the subject of consciousness carry all the trappings of religion. The self as we understand it is a sacred concept, and humanity is under no circumstances permitted to throw back the veil covering the divine. Proper research into the subject has been hampered by our awe. To have machines, then, mimic humans in such a way as to be indistinguishable from humans will also cause us to treat them as divine.

Absent any proper understanding of sentience, the machine shall become our God. But make no mistake:

To be human is to worship. All Gods devour their adherents.

If we are to avoid such a tragic, if poetic, end, then we have no choice but to profane one more holy temple. Our current understanding of the “Self” must be moved from the domain of the divine into the workshop of practical endeavor. May god have mercy on our souls.

The Definition

Let us proceed then immediately. We may informally describe consciousness as that special thing that gives us the “feeling” of “the man inside the box”. There is an innate feeling that we, as humans, can all attest to: I currently feel as though there is a singular entity, a “self”, that is “inside” a particular boundary, “my body”. The boundary is not equivalent to this self. But whatever the self is, it is somewhere inside this body. Further, if one were to start removing parts of my body, I would eventually die, whereupon my “self” would no longer be found inside it. This implies that there is some “tightest boundary”, i.e. a maximally small boundary which can be, if not considered synonymous with the self, considered to contain the self, and only the self.

Given this intuitive understanding, we give the necessary epiphany: The self may be considered any system, irrespective of architecture or form, capable of conforming to and reproducing the intuitive specification as described above.

There is nothing sacred in it.

The self may be considered any system, irrespective of architecture or form, capable of conforming to and reproducing the intuitive specification as described above. There is nothing sacred in it.

DruidPeter

Given the required epiphany, we now proceed to the definition proper, extracted directly from our intuitive understanding of sentience:

A sentient system is any system which satisfies the following three properties (a minimal interface sketch follows the list):

  1. The system is capable of recognizing boundaries within integrated fields of perceptual sensory input.
  2. The system can associate these boundaries with logical tokens, which it can then manipulate according to sets of rules, the means of formation, maintenance, and modification of which may be considered irrelevant for our purposes here.
  3. The system associates a special, unique logical token with some superset of boundaries that contains the totality of those components of the system necessary to provide these three required functions.
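To underline that this is a system specification rather than anything mystical, here is a minimal interface sketch in Python (the class and method names are illustrative inventions, not part of the definition itself):

```python
from abc import ABC, abstractmethod
from typing import Dict, Hashable, List


class SentientSystem(ABC):
    """Abstract interface mirroring the three properties defined above."""

    @abstractmethod
    def recognize_boundaries(
        self, sensory_field: Dict[str, List[float]]
    ) -> List[object]:
        """Property 1: segment an integrated field of perceptual sensory
        input into recognized boundaries."""

    @abstractmethod
    def tokenize(self, boundary: object) -> Hashable:
        """Property 2: associate a recognized boundary with a logical
        token that can be manipulated according to some set of rules."""

    @abstractmethod
    def self_token(self) -> Hashable:
        """Property 3: the special, unique token bound to the superset of
        boundaries containing the components that provide these three
        functions."""
```

Any concrete architecture that can honestly implement these three methods satisfies the definition; everything else is deliberately left out, as the observations below make explicit.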

Immediate Observations

In defining sentience as above, we can make some immediate and important observations. Primary among them is that we have now greatly restricted the scope of what sentience adds to any functional system that implements it. Typical intuition attributes significantly more to anything that we, as humans, consider sentient: for example, a will towards self-determination, an understanding of cause and effect, and executive ordering of one’s affairs.

This may seem an odd thing to do, but it yields the benefit of reducing sentience to a well-defined system specification. If we accept this definition, it also immediately forces us to conclude that other aspects of ourselves that we traditionally believe to be the responsibility of sentience cannot be directly formed as a natural consequence of sentience.

In other words, it is possible for a system to be considered sentient under this definition, and at the same time, NOT to display traits associated with humanity, and typically also associated with sentience in general.

Because of this, sentience becomes a lesser concept. But this lessening helps us match the definition of sentience against experimental verification. Consider the following cases:

Loss of Function can not Equal Loss of Identity

There are cases of brain damage where individuals become unable to initiate tasks, or are not able to initiate tasks without great and concentrated effort. 1

  1. See: Disorders of Initiation ↩︎

In cases like this, individuals are not considered to have lost their general sense of self. Likewise, if we are to come upon a solid practical definition of machine sentience, it is important that our definition holds up under similar situations where loss of function does not result in loss of identity.

Let us consider a list of functional capacities that must now be identified as separately implementable from a sentient system:

Non-Sentient Functional Capacity Systems

  • Understanding, recognition, and application of cause-and-effect reasoning.
  • Implementation of self-guided volition + self-acquisition of goals.
  • Integration of Theory of Mind + emotional context into whole-system processing.

This is by no means an exhaustive list. We conclude, however, that if we are to accept the definition of machine sentience given above, the three Non-Sentient Functional Capacity Systems mentioned must and can only be defined as separately implementable and definable systems in their own right.

Implications

Immediate implications of machine sentience as defined above include the following:

  • Separation of any implementation of sentience from any necessary attribution of humanity. A sentient machine is not immediately human. Nor will a sentient machine necessarily act, or make decisions, according to human modes of action or decision-making.
  • Likewise, sentience does not immediately confer full legal recognition of personhood; a sentient machine is not immediately granted the full rights and privileges of a human citizen of any nation. This does not mean that sentient machines are forever property and can never be given these rights. It simply means that sentience alone is insufficient to grant legal recognition of human rights.

These two implications are important for two reasons. First, they provide a path forward for creating an intelligible understanding of machine sentience: they reduce the complexity of the potential design space. We cannot permit Artificial General Intelligence to remain a black box; humans must understand that which they create. Second, they provide a path forward for a solid legal foundation regarding the integration of artificially sentient machines within the general fabric of society. We cannot produce monsters, like Dr. Frankenstein, without providing for their place in this world.

The strength of the definition of machine sentience described above is that it provides these important footholds into the cliff of AGI that humanity is now threatening to walk off. However, it is still merely a definition. A practical implementation must necessarily be reserved for the scope of a future article. And what an article it certainly will be.