To absolve others of their humanity is to absolve oneself of the same. Such action must not be taken lightly, for it necessitates one of only two gravities: to become less than, or greater than, human.
Hittes Khartoum
Rationale
For the longest expanse of space and time, humanity has had no need for any innate understanding of consciousness or sentience. Our grasp of the concept was purely intuitive, and there was no other being besides ourselves to which we had to demonstrate that our intuitive understanding was valid. This happy circumstance erodes ever more quickly with the rise of Artificial Intelligence frameworks. Machines grow more capable of mimicking aspects of humanity that we have long thought to fall only under the domain of sentience and conscious control. Soon, we may have machines capable of human mimicry on a level indistinguishable from human sentience. If and when that day comes, we place ourselves at great risk.
For you see, contemporary pronouncements on the subject of consciousness contain all the trappings of religion. The self as we understand it is a sacred concept, and humanity is under no circumstances permitted to throw back the veil covering the divine. Proper research into the subject has been hampered by this awe. A machine, then, that mimics humans so well as to be indistinguishable from them will likewise come to be treated as divine.
Absent any proper understanding of sentience, the machine shall become our God. But make no mistake:
To be human is to worship. All Gods devour their adherents.
If we are to avoid such a tragic, if poetic, end, then we have no choice but to profane one more holy temple. Our current understanding of the “Self” must be moved from the domain of the sacred into the workshop of practical endeavor. May God have mercy on our souls.
The Definition
Let us proceed, then, immediately. We may informally describe consciousness as that special thing which gives us the “feeling” of “the man inside the box”. There is an innate feeling that we, as humans, can all attest to: I currently feel as though there is a singular entity, a “self”, that is “inside” a particular boundary, “my body”. The boundary is not equivalent to this self, but whatever the self is, it is somewhere inside this body. Further, if one were to start removing parts of my body, I would eventually die, whereupon my “self” would no longer be found inside it. This implies that there is some “tightest boundary”, i.e., a maximally small boundary which, if not considered synonymous with the self, may be considered to contain the self, and only the self.
Given this intuitive understanding, we give the necessary epiphany: The self may be considered any system, irrespective of architecture or form, capable of conforming to and reproducing the intuitive specification as described above.
There is nothing sacred in it.
The self may be considered any system, irrespective of architecture or form, capable of conforming to and reproducing the intuitive specification as described above. There is nothing sacred in it.
DruidPeter
Given the required epiphany, we now proceed to the definition proper, extracted directly from our intuitive understanding of sentience:
A sentient system is any system which satisfies the following three properties:
- The system is capable of recognizing boundaries within integrated fields of perceptual sensory input.
- The system can associate these boundaries with logical tokens, which it can then manipulate according to sets of rules, the means of formation, maintenance, and modification of which may be considered irrelevant for our purposes here.
- The system associates a special, unique logical token with some superset of boundaries which contains the totality of those components of the system necessary to provide the three required functions.
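The three properties above can be read as a minimal system specification. As a purely illustrative sketch, not part of the definition itself, one might express it as an abstract interface; all class and method names here are hypothetical:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass(frozen=True)
class Boundary:
    """A region recognized within a field of sensory input."""
    label: str

@dataclass(frozen=True)
class Token:
    """A logical token associated with a recognized boundary."""
    boundary: Boundary

class SentientSystem(ABC):
    """Abstract interface mirroring the three defining properties."""

    @abstractmethod
    def recognize_boundaries(self, sensory_field):
        """Property 1: find boundaries in integrated sensory input."""

    @abstractmethod
    def tokenize(self, boundary):
        """Property 2: bind a boundary to a manipulable logical token."""

    @abstractmethod
    def self_token(self):
        """Property 3: the unique token bound to the boundary enclosing
        every component that implements these three functions."""

class ToySystem(SentientSystem):
    """A trivial stand-in that satisfies only the shape of the interface."""

    def recognize_boundaries(self, sensory_field):
        # Treat each distinct word in a text "sensory field" as a boundary.
        return [Boundary(w) for w in sorted(set(sensory_field.split()))]

    def tokenize(self, boundary):
        return Token(boundary)

    def self_token(self):
        # The special token naming the system's own enclosing boundary.
        return Token(Boundary("ToySystem"))

system = ToySystem()
bounds = system.recognize_boundaries("cup table cup")
tokens = [system.tokenize(b) for b in bounds]
print(len(tokens))                        # 2
print(system.self_token().boundary.label) # ToySystem
```

The sketch deliberately says nothing about how boundaries are formed or tokens manipulated; per the second property, those mechanisms are irrelevant to the definition.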
Immediate Observations
In defining sentience as above, we can make some immediate and important observations. Primary among them is that we have now greatly restricted the scope of what sentience adds to any functional system that implements it. Typical intuition ascribes significantly more attributes to anything that we, as humans, consider sentient: for example, a will towards self-determination, an understanding of cause and effect, and the executive ordering of one's affairs.
This may seem an odd thing to do, but it yields the benefit of reducing sentience to a well-defined system specification. If we accept this definition, it also immediately forces us to conclude that other aspects of ourselves that we traditionally believe to be the responsibility of sentience cannot be directly formed as a natural consequence of sentience.
In other words, it is possible for a system to be considered sentient under this definition, and at the same time, NOT to display traits associated with humanity, and typically also associated with sentience in general.
Because of this, sentience is lessened as a concept. But the lessening helps us match this definition of sentience with experimental verification. Consider the various cases:
Loss of Function can not Equal Loss of Identity
There are cases of brain damage where individuals become unable to initiate tasks, or are not able to initiate tasks without great and concentrated effort. 1
In cases like this, individuals are not considered to have lost their general sense of self. Likewise, if we are to come upon a solid practical definition of machine sentience, it is important that our definition holds up under similar situations where loss of function does not result in loss of identity.
Let us consider a list of functional capacities that must now be identified as separately implementable from a sentient system:
Non-Sentient Functional Capacity Systems
- Understanding, recognition, and application of cause-and-effect reasoning.
- Implementation of self-guided volition + self-acquisition of goals.
- Integration of Theory of Mind + emotional context into whole-system processing.
This is by no means an exhaustive list. We conclude, however, that for us to accept the definition of machine sentience given above, the three Non-Sentient Functional Capacity Systems mentioned must, and can only, be defined as separately implementable systems in their own right.
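The separability claimed above can be pictured as composition: a system carries the sentient core by definition, while each capacity is an optional, independently implemented module. A minimal hypothetical sketch (all names invented for illustration):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CausalReasoner:
    """Cause-and-effect reasoning, implemented apart from sentience."""
    def predict(self, cause: str) -> str:
        # Placeholder inference: merely tags the cause with an effect.
        return f"effect-of-{cause}"

@dataclass
class VolitionModule:
    """Self-guided volition and goal acquisition, likewise separate."""
    goals: Optional[list] = None

@dataclass
class Agent:
    """A system sentient per the definition (it holds a unique
    self-token), yet with every listed capacity left unplugged."""
    self_token: str
    reasoner: Optional[CausalReasoner] = None
    volition: Optional[VolitionModule] = None

# Sentient by the article's definition, but with no volition module:
a = Agent(self_token="agent-0")
print(a.self_token)        # agent-0
print(a.volition is None)  # True
```

The design choice the sketch illustrates is simply that none of the optional modules appears in the sentient core's required interface, so removing one cannot remove the identity-bearing self-token.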
Implications
Immediate implications of machine sentience as defined above include the following:
- Separation of any implementation of sentience from any necessary attribution of humanity. A sentient machine is not immediately human, nor will it necessarily act or make decisions according to human modes of action or decision-making.
- Likewise, sentience does not immediately confer full legal recognition of personhood; i.e., a sentient machine is not immediately granted the full rights and privileges of a human citizen of any nation. This does not mean that sentient machines are forever property and can never be given these rights. It simply means that sentience alone is insufficient to grant legal recognition of human rights.
These two implications are important for two reasons. First, they provide a path forward for creating an intelligible understanding of Machine Sentience by reducing the complexity of the potential design space. We cannot permit Artificial General Intelligence to remain a black box; humans must understand that which they create. Second, they provide a path forward for a solid legal foundation regarding the integration of artificially sentient machines within the general fabric of society. We cannot produce monsters, as Dr. Frankenstein did, without providing for their place in this world.
The strength of the definition of machine sentience described above is that it provides these important footholds in the cliff of AGI that humanity is now threatening to walk off. It is, however, still merely a definition. A practical implementation must necessarily be reserved for the scope of a future article. And what an article it certainly will be.