Practical Machine Sentience Part 2: Integrated Sensory Field Design and Implementation

This post discusses the creation of sentient machines, outlining why integrated sensory fields matter more than static data sets for self-awareness. It describes a potential design comprising visual, auditory, and proprioceptive sensory devices, then details hardware and software specifications, emphasizing real-time data and interaction as the basis for the self-referential concept necessary for machine sentience.

ATTENTION: I began this blog post way back when I was intending to develop an actual proof-of-concept alongside the publication of the articles in this series. I have since come to realize that I simply do not have the time, resources, or energy to accomplish such a thing, as I am a broke Mexican who was raised very much in isolation from the greater tech community.

Rather than rewriting this article from scratch, I have decided simply to edit it and continue, but please note that no actual implementation will be developed alongside these articles.

Having said that, I am writing these articles with the intention that any reasonably seasoned developer can understand the concepts they introduce and verify them with an implementation of their own.

In a previous post, I introduced a practical definition of machine sentience and went over some basic implications of that definition for the implementation of living sentient machines, as well as for how such machines might fit within the greater legal zeitgeist of humanity. In this post, I will begin outlining the process of creating a practical implementation of a sentient system plus its ancillary functional systems. We start with the design of an integrated sensory field input mechanism.

Why Integrated Sensory Fields Matter

The closest analogue to an integrated sensory field in current practice is the standard artificial intelligence training set. ChatGPT and similar systems are trained on passive data sets in one specific format or another. Such data sets are already at least one step removed from the direct experience of reality that humans undergo. They are unusable for our purposes for two primary reasons:

  • Static data sets are unchanging, which means there is no need to create a machine that constantly parses new data. Once the model has been trained on the data set by whatever algorithm is used, the machine is functionally passive, and de facto dead for all practical purposes.
  • Static data sets are inherently unable to contain self-referential data. The machine is artificially divorced from the training data, and hence no concept formed from the data can refer to the learning entity itself. It is therefore impossible for a machine trained on a static, passive data set to construct any conceptualization of the “learner” from within the data itself, which is a vital component of any sentience mechanism.

As per the definition given in the previous post, it is necessary for the machine to form a self-concept from the training data. The only training data set capable of providing this is direct sensor information itself.

Organization of an Integrated Sensory Field

A specific sensory device may be defined by three components (a rough sketch in code follows this list):

  • A shell, or separation between external sense data and internal sense processing of said data.
  • An interior component that is capable of taking some sort of structure or organization, and that retains said structure or organization over a meaningful and useful time span.
  • An aperture, membrane, interface, etc., that permits the outside world to interact with the interior organizing component in such a way that the interior's organization records/reflects the outside interaction in a reproducible and (relatively speaking) uniquely identifying manner.
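
To make these three components a bit more concrete, here is a minimal C sketch of how a generic sensory device might be modeled in software. The names (sensory_device, sample_buffer, aperture_fn) and the ring-buffer interior are purely illustrative choices on my part, not part of any existing API.

```c
#include <stddef.h>
#include <stdint.h>

/* Interior component: something that takes on structure and retains it
 * over a useful time span. Here, a simple ring buffer of timestamped
 * raw samples. */
struct sample_buffer {
    uint64_t timestamps_ns[4096];  /* monotonic capture times     */
    int32_t  samples[4096];        /* raw recorded interior state */
    size_t   head;                 /* next write position         */
};

/* Aperture/interface: the membrane through which outside phenomena
 * imprint themselves on the interior; one capture per call. */
typedef int (*aperture_fn)(struct sample_buffer *interior, uint64_t now_ns);

/* Shell: the boundary between external sense data and internal
 * processing. Everything in this struct is "inside"; only the
 * aperture callback touches the outside world. */
struct sensory_device {
    const char          *name;     /* e.g. "left-camera"          */
    struct sample_buffer interior; /* organized, persistent state */
    aperture_fn          aperture; /* membrane to the outside     */
};
```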

An integrated sensory field, then, may be defined as follows:

  1. One or more sensory devices that are time- and input-synchronized. Each records data such that sensory input on one device can be treated as belonging to the same overall sensory impression as that of every other device over uniquely determined units or ranges of time (see the timestamping sketch further below).
  2. The phenomena that each device interacts with are continuous in the mathematical sense, either ideally or for all practical purposes of the machine. In other words, the perceived phenomena may be partitioned into infinitely many two-member sets consisting of one open, bounded disk (wikipedia link on bounded sets) and one unbounded universal complement (wikipedia link on universal complements) comprising the phenomenal field itself.

The second condition only applies in the ideal sense, as any real digital sensory device has a limited resolution with which to process field phenomena; digital cameras, for instance, are limited by their megapixel count. Analog devices, while capable of perfect field capture, have a practical limit of usability: 35mm analog film captures perfectly continuous data, but we can only extract usable continuous data above a certain size threshold on the film.
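
As a concrete illustration of the synchronization requirement in point 1, here is a minimal C sketch in which every device stamps its samples from a single monotonic clock, so that samples from different devices can be grouped into the same field-wide impression by frame index. The 10 ms frame width is an arbitrary illustrative choice.

```c
#include <stdint.h>
#include <stdio.h>
#include <time.h>

/* One field-wide impression: every device's samples for the same time
 * bucket share a single frame index. The bucket width is arbitrary. */
#define FRAME_NS 10000000ULL   /* 10 ms */

/* All devices stamp samples from the same monotonic clock, so a sample
 * can be assigned to a shared frame index regardless of which device
 * produced it. */
static uint64_t now_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000000ULL + (uint64_t)ts.tv_nsec;
}

static uint64_t frame_index(uint64_t t_ns)
{
    return t_ns / FRAME_NS;
}

int main(void)
{
    /* Samples taken "now" by different devices land in the same frame. */
    printf("current frame index: %llu\n",
           (unsigned long long)frame_index(now_ns()));
    return 0;
}
```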

For the purposes of creating a sentient machine, there is one additional requirement on our integrated sensory field:

  1. Some SUBSET of the sensory data provided by the integrated sensory field must be susceptible to manipulation by the mechanism itself: operations carried out by the machine must be capable of effecting uniquely identifiable and reproducible changes within the input data collected from the integrated sensory field.

This is necessary because no sensory device can provide direct sense interaction with itself. Sensor components inside the shell cannot record other components inside the shell; they can only record data from outside it.

In order to form a self-referential concept, then, the system must do so through inference. The inferential mechanism is established through a very basic and straightforward logical deduction (a toy sketch of the resulting loop follows this list):

  1. There exist entities within the sensory field which may be considered disparate from the sensory field and from other entities that may likewise be considered disparate from the sensory field.
  2. Certain entities appear to have a volitional mechanism for action. They interact with and respond to stimuli through some interface.
  3. Some stimuli an entity receives appear to depend on some output the entity is capable of producing.
  4. Some stimuli “I receive” appear to depend on some output “I produce” (the primary output in the case of humans is muscular contraction).
  5. Some component of the stimuli “I receive” appears to indivisibly and consistently reveal aspects which, when combined, produce a mental model of an “entity” which:
  6. Exists within the sensory field and yet may be considered disparate from the sensory field and from other “entities” recognizable within said field.
  7. Since this component is indivisible from the stimuli that follow some output, the source of said output must be said component.
  8. Produced output is directly actuated. “I” produce output that affects some subset of the sensory field. The response sensory stimuli contain a “component” which must be the source of said affected subset of the sensory field.
  9. Therefore “I” must be that “component” which I recognize.
  10. Hence, “I exist”.
  11. To conclude: “There is a world. I exist within it. Yet I am also apart from it. I live.”
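
Here is the toy sketch promised above: a deliberately simplified C loop that emits perturbations through an actuator and checks whether some component of the returning stimuli reproducibly tracks them. The simulated actuate() and sense_component() functions, and the 90% agreement threshold, are illustrative stand-ins rather than any real interface; in a real system, component 0 might be the observed position of the machine's own limb within its camera feed.

```c
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

/* Stand-in "world": one value that "my" output moves, plus one
 * unrelated value. In a real system these would be actual actuator
 * and sensor interfaces; the behavior here is purely illustrative. */
static double self_state = 0.0;

static void actuate(int channel, double command)
{
    if (channel == 0)
        self_state += 0.1 * command;   /* "my" output moves this value */
}

static double sense_component(int component)
{
    if (component == 0)
        return self_state;                  /* tracks my output  */
    return (double)rand() / RAND_MAX;       /* unrelated stimuli */
}

/* The deduction as a test: if a component of the incoming stimuli
 * reproducibly tracks the outputs "I" produce, that component is
 * inferred to be the self. */
static bool component_is_self(int channel, int component, int trials)
{
    int hits = 0;
    for (int i = 0; i < trials; i++) {
        double cmd    = (rand() % 2) ? 1.0 : -1.0;   /* random perturbation */
        double before = sense_component(component);
        actuate(channel, cmd);
        double after  = sense_component(component);
        if ((after - before) * cmd > 0.0)            /* moved as commanded? */
            hits++;
    }
    return hits > (trials * 9) / 10;   /* reproducible dependence -> "I" */
}

int main(void)
{
    printf("component 0 is self: %d\n", component_is_self(0, 0, 100));
    printf("component 1 is self: %d\n", component_is_self(0, 1, 100));
    return 0;
}
```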

Any system sophisticated enough to form the above deductive inference is capable of sentience and sentient motivation.

Our Implementation Specification

I have opted to leave this part of the article in place. I, sadly, do not have the time, energy, or financial resources to develop the proof of concept necessary to validate the claims discussed in this series of articles.

I will likely refer back to this once the series of articles is done in order to create a possible reference implementation of a sentient machine architecture. For now, interested readers should simply understand that, moving forward, no "actual" proof-of-concept implementation of a sentient machine will be developed in tandem with this article series.

Let us now proceed toward a specification for an actual, real-world integrated sensory field device. We shall keep things simple. Our device shall survey 3 + 1 sense domains:

  1. The visual domain. Two digital cameras shall be mounted on a controlling frame. The cameras shall be independently controllable.
  2. The auditory domain. Likewise, we shall have two dynamic microphones mounted on a controlling frame.
  3. The kinematic resistance domain. The frame shall also contain Pulse-Width-Modulation (PWM) servo motors capable of moving limbs of the mechanism. Software shall keep track of the PWM signals sent by the control software and the resultant limb orientations.

It is possible to calculate the resulting limb configuration from the PWM signals sent to the motors under ideal circumstances. Real-life mechanical resistances and loads, however, produce errors in the actual resultant limb configuration. The error between the calculated ideal limb configuration and the actual recorded limb configuration can be used to derive continuous sensory data about the force loads on the various motor components at all times.

This allows us to form a rudimentary proprioception sensory device.
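
A minimal sketch of that error signal in C, assuming the common hobby-servo convention that a 1000–2000 µs pulse maps linearly onto a 0–180 degree range (real servos vary and need calibration against the actual hardware):

```c
#include <stdio.h>

/* Assumed hobby-servo convention: 1000-2000 us pulse width maps
 * linearly onto 0-180 degrees. Calibrate for the real hardware. */
static double pulse_us_to_deg(double pulse_us)
{
    return (pulse_us - 1000.0) * (180.0 / 1000.0);
}

/* Rudimentary proprioception: the gap between the angle the control
 * software commanded and the angle actually reached (read back from
 * an encoder or potentiometer) is treated as a continuous signal of
 * the mechanical load on that joint. */
static double load_signal(double commanded_pulse_us, double measured_deg)
{
    double commanded_deg = pulse_us_to_deg(commanded_pulse_us);
    return commanded_deg - measured_deg;   /* sign gives load direction */
}

int main(void)
{
    /* Example: we commanded 1500 us (90 deg) but the joint settled at
     * 84 deg, so the joint is under a positive resisting load. */
    printf("load signal: %.1f deg\n", load_signal(1500.0, 84.0));
    return 0;
}
```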

Finally, we have our last sensory domain:

  1. Standard voltage, wattage, thermal, and gyroscope sensors. These components shall be integrated into the system software and API because of their ready availability and ubiquity on most modern chipsets and computer hardware.
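
On Linux, most of these ship as standard hwmon sensors exposed through sysfs, which reports temperatures in millidegrees Celsius. Here is a minimal C sketch that reads one temperature channel; the exact hwmon index and channel vary per machine, so the path below is only a placeholder.

```c
#include <stdio.h>

/* Reads one temperature from the Linux hwmon sysfs interface.
 * "hwmon0" and "temp1_input" are placeholders; the real index and
 * channel depend on the machine. Values are in millidegrees C. */
static int read_temp_millic(long *out_millic)
{
    FILE *f = fopen("/sys/class/hwmon/hwmon0/temp1_input", "r");
    if (!f)
        return -1;
    int ok = fscanf(f, "%ld", out_millic) == 1;
    fclose(f);
    return ok ? 0 : -1;
}

int main(void)
{
    long millic;
    if (read_temp_millic(&millic) == 0)
        printf("temperature: %.1f C\n", millic / 1000.0);
    else
        printf("no hwmon temperature sensor at the assumed path\n");
    return 0;
}
```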

The Software Specification

Now that we have our hardware specification, we need only proceed to the creation of a software and API specification. For our purposes, the specification will be implemented via ARM64 Linux kernel modules. We shall need to re-implement specific modules for the sensory devices themselves, implement a kernel module to perform inter-device module communication and to expose a user-space-accessible memory-mapped data region in RAM, and finally provide a user-space daemon to structure, organize, and provide a transparent high-level access API to other processes. This will be tackled in a future blog post.
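
To give a flavor of the user-space side, here is a minimal C sketch of a daemon mapping the shared sensory region exported by that (still hypothetical) integration kernel module. The device node name /dev/isf0, the region size, and its layout are assumptions for illustration only.

```c
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

/* Region size is arbitrary for this sketch; the real size would be
 * defined by the integration kernel module. */
#define ISF_REGION_SIZE (1 << 20)   /* 1 MiB */

int main(void)
{
    /* Open the (hypothetical) device node exported by the module. */
    int fd = open("/dev/isf0", O_RDONLY);
    if (fd < 0) {
        perror("open /dev/isf0");
        return 1;
    }

    /* Map the kernel-populated sensory region read-only into our
     * address space; the module keeps it current in real time. */
    const uint8_t *region = mmap(NULL, ISF_REGION_SIZE, PROT_READ,
                                 MAP_SHARED, fd, 0);
    if (region == MAP_FAILED) {
        perror("mmap");
        close(fd);
        return 1;
    }

    /* The daemon would parse, structure, and re-export this data to
     * other processes; here we only peek at the first byte. */
    printf("first byte of sensory region: 0x%02x\n", region[0]);

    munmap((void *)region, ISF_REGION_SIZE);
    close(fd);
    return 0;
}
```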

Next Up

The next article in this series will discuss machine conceptualization, understanding, and manipulation of the concepts of truth, falsehood, and causality, along with their relationship to, and potential implementation within, any sentient processing architecture.