
The definition of “robot” has evolved over time. Traditionally, and up until the digital revolution, it was understood to be a mechanical device capable of carrying out certain functions or tasks in accordance with the instructions programmed by a human being.

That definition remains partly correct today, although it is incomplete. The main difference between the earlier and the current concept of the robot is the ability of the most modern and sophisticated ones to make decisions according to formulas and algorithms, depending on the circumstances they face. An example is an autonomous vehicle, which decides when to accelerate, brake, stop or dodge an obstacle, and distinguishes between running somebody over and causing material damage. These decisions are made constantly and automatically, without the direct intervention of a human being, based upon previously programmed instructions.

Going one step further, we may consider robots incorporating artificial intelligence, understood as a capacity similar to the human ways of perceiving, discerning and deciding. It follows that a robot equipped with artificial intelligence will also be able to “learn” from acquired experience and the examples it has been provided with, and thus generate patterns of behaviour depending on the given circumstances. This empirical and statistical system enables the robot to “discern” between different possibilities and choose one of them, a process in which the person who programmed the robot does not participate. Artificial intelligence also furnishes robots with a great capacity for “abstraction”, very similar to human emotion, but devoid (for the moment) of ethical and moral elements. In summary, artificial intelligence allows the robot to learn by itself and make decisions autonomously and even independently, a characteristic of independence that could be equated with human “free will”.

A legal or moral person is defined as an entity with rights and obligations which exists as a being created by one or more natural persons. Although an “intelligent” robot displays characteristics of a legal person, it does not exactly fit the definition thereof, essentially because of the independence between the machine and its owner or programmer. The intelligent robot

  • may enjoy energy autonomy, that is to say, it has means of subsistence and self-repair;
  • has the capacity to act;
  • is able to make decisions for itself.

It is probable that within a few years humanoid robots with artificial intelligence will be available, equipped with free will and a capacity for ethical discernment created by virtue of the programmer’s initial instructions. Obviously, such a robot may never acquire the status of a natural person, because its nature is different from that of a human being.

These particular characteristics of the humanoid or intelligent robot pose the legal problem of defining it: it is not a natural or legal person, nor can we identify it as an asset or a simple machine, inasmuch as it is capable of deciding or discerning largely in the same way as a human and may act independently of its owner or programmer. This being so, we are inclined to create a new concept of person, with its own legal status and with rights and obligations: a “homo machina” or “man machine”.

The humanoid robot is usually thought of as a contrivance with mobility and interaction skills like those of humans, placed at the service of humans. Ultimately, it is a modern sort of slave, just as natural persons were slaves at certain moments in history. And just as a human slave could rebel and act against his master if he had the discernment and free will to do so, the robot slave may also rebel or ignore the instructions of its owner if it is equipped with pseudo-human artificial intelligence. For now, the life and actions of the robot are determined by its owner; however, if the intelligent robot acquires the ability to operate, in the sense of making decisions beyond the framework prescribed by its initial programming (albeit as a result thereof) and independently of its owner, the acts it carries out in society in general, and in commercial dealings in particular, will necessarily have legal consequences.

Intelligent robots seem called upon to become part of everyday social, labour and commercial relations in the not-too-distant future, and this interaction between humans and machines will most probably generate rights and obligations among humans and among robots. When, within its general programming framework, an intelligent robot can conclude online contracts with other robots irrespective of the owner’s knowledge or specific instructions, we must ponder whether these independent actions fall solely within the sphere of the natural or legal person responsible for the robot. May these acts be assimilated to those of a commercial agent or a representative? It is certain that, for the moment, the capabilities of a robot with artificial intelligence do not match those of the average human being, but it is only a question of time before they do. The intelligent robot should have “homo machina” status, different from that of a natural person or legal entity; we must also consider whether we may grant robots rights and obligations when their behaviour responds to mere directives and basic programmed objectives, yet their actions are based on algorithmic calculations created by the robot itself, independent of its programmers or owners, thanks to the process of self-learning.

Eduardo Vilá

For further information, please contact:

va@vila.es

December 21st, 2018
