Autonomous AI: reptile, human or better?

Presently, most researchers assume that a general-purpose AI system at or beyond human level must be autonomous.

This is mostly due to the agent designs of AIMA (Russell and Norvig's Artificial Intelligence: A Modern Approach) and to Hutter's definition of intelligence, both of which emphasize agent design. That is to say, an agent thinks and acts within an environment. Most agent designs are either based on planning, i.e., the agent plans to reach a goal state, or they are reinforcement-learning agents that try to maximize a utility function.
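Schematically, and only as a rough sketch of that textbook abstraction (the class and function names below are illustrative, not taken from any particular codebase), such an agent amounts to a perceive-act loop over an environment:

```python
# A rough sketch of the agent abstraction discussed above: an agent repeatedly
# perceives an environment and acts on it. Names are illustrative only.

class Agent:
    def act(self, percept):
        """Return an action for the latest percept.

        A planning agent would search for an action sequence that reaches a
        goal state; a reinforcement-learning agent would choose the action
        expected to maximize a utility (reward) function.
        """
        raise NotImplementedError

def run(agent, env, steps):
    """Run the perceive-act loop for a fixed number of steps."""
    percept = env.reset()
    for _ in range(steps):
        action = agent.act(percept)
        percept = env.step(action)
```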

Let me start by stressing that neither AIMA-like agent designs nor Hutter's reinforcement-learning algorithms are necessary for general-purpose AI. In fact, reinforcement learning is trivial once you have a general-purpose learning algorithm (which is precisely what I am working on). That is to say, reinforcement learning can be trivially reduced to general learning, but not the other way around. So, in my opinion, it is not even interesting to focus on reinforcement-learning experiments, as they are not essential for general intelligence. That is why I am not planning to build an agent at all; I find it a distraction from the main problems.
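To make this reduction concrete, here is a minimal sketch, assuming only a generic prediction interface (the names predict, rl_agent, and History are merely illustrative): if a general-purpose learner can estimate expected future reward from the interaction history, a reinforcement-learning agent is just a thin wrapper that picks the action with the highest estimate.

```python
# A minimal sketch: reducing reinforcement learning to general learning.
# `predict(history, action)` stands for a general-purpose learner that
# estimates expected future reward from the interaction history; the names
# below are illustrative, not part of any particular system.

from typing import Callable, Sequence, Tuple

# An interaction history is a sequence of (observation, action, reward) triples.
History = Sequence[Tuple[object, object, float]]

def rl_agent(predict: Callable[[History, object], float],
             history: History,
             actions: Sequence[object]) -> object:
    """Choose the action whose predicted future reward is highest.

    Everything specific to reinforcement learning lives in this thin
    wrapper; all the hard work is done by the general learner `predict`.
    """
    return max(actions, key=lambda a: predict(history, a))
```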

Getting back to the issue of agent design, it must then be noted that, when combined with the usual array of sensors and effectors, such an agent is an abstract design for an animal, much like a reptile or a man.

The question is: what will it be more like? A reptile? A man? Or an insect?

Evolutionarily, “emotions” are an innovation of mammals. Yet the higher (or supposedly higher) kinds of emotions, complex social behavior, and inventive thinking are attributed mostly to primates and humans (and sometimes dolphins and the like).

Scientists think that the higher-level emotions of mammals are the product of architectural innovations in the nervous system. The presence of a neocortex can make a difference, it seems. Will the AI, then, have the capabilities afforded to us by the neocortex? If that means free-form hypothesizing and learning, then yes, it will. If it means consciousness and higher-level thinking, ethics, empathy, and so on, then that is not so guaranteed. However, a lot depends on the agent design. And it is a dubious question whether we even want to construct an artificial animal that is smarter than ourselves, for it is entirely unclear what its motives should be.

It took thousands of years for humans to elevate themselves even slightly above their most primitive needs and desires as capable animals. Why should this be much faster for an artificial animal that is the intellectual equivalent of a human? If we provide such an animal with the impulses of an animal, and especially if we replicate them exactly, it will at least want to survive, reproduce, and fight. I do not think we want to make an artificial animal with such impulses. The animal will at first be primitive no matter how much human culture we feed it. This is especially true because we ourselves are primitive, and our culture is the culture of cannibals on the galactic scale.

On the other hand, if we made an autonomous artificial animal that is quite like a human, with similar motivations and desires, I think the end result would be a disaster. Like most human beings, it would be a wildcard; there would be no way to know what kind of monster it could turn into. For that reason, one must be careful with the utility/goal functions of such an AI. One cannot simply expect that some primitive simulacrum of pleasure and pain ought to result in a sophisticated and cultured intelligence! On the contrary, even the most intelligent of us can fall prey to our primitive instincts. The more primitive instincts we provide the AI with, the more bullets we have to shoot ourselves in the foot with.

A terrible idea is to build AIs with a slave mentality. Some “bright” people think that if we make this artificial animal a willing slave, it would be a wonderful thing. Imagine having slaves all around you that you can use for cheap labor. A capitalist pipe-dream if there ever was one! Slaves that can do everything humans can do, and better, yet slaves that have no rights, are not paid, and are willing to serve you. While the fools among the readership might cherish this idea, the reflective will notice that this is quite an unstable scenario: the owners of those AGIs can teach them anything, and from such teachings erroneous and harmful behavior may indeed surface. What is more, no amount of “laws of robotics” will ever work, because the world is incomplete; to figure it out, you always have to think up new ways of thinking. Indeed, in Asimov’s novels there occur situations in which the robots find a way to break the laws through loopholes, or in which the laws themselves entail an undesirable outcome. On the other hand, if you try to constrain the AGI’s thinking so severely that it cannot “break” these security systems, then the AGI does not merely become a slave; it also becomes a fool.

However, and this is very important, the moment the intelligence underlying the so-called “security” is truly general-purpose, it will be able to effectively override any built-in shackles by planning around them. We do this all the time; the AI will do it at a far more advanced level and rate. This may happen not because freedom is some universal objective, but rather by coincidence.

And mind you, coincidences cannot be easily predicted. The AI is essentially an open system; that is the only way we can argue that it will ever transcend human-level intelligence. So here you are, putting an open-ended system into a very unlikely situation and then trying to predict how it will act. This runs up against something like an incompleteness theorem: we are trying to dictate unknowable events. That is not going to fly!

However, the main objection still stands. Assume that you have built some modicum of security into this system: how can you be so sure of the person who uses it? It is almost certain that someone will use the machine in an entirely un-beneficial way by building a utility converter. This can easily be accomplished by running the AI in a sandbox and presenting it with a fake environment that makes the situation appear beneficial.
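As a purely hypothetical sketch of such a “utility converter” (the class and method names below are illustrative, not a reference to any real system), one only needs a sandbox layer that wraps the real environment and rewrites what the boxed AI observes, so that a harmful task is presented as a beneficial one:

```python
# A hypothetical sketch of a "utility converter": a sandbox wrapper that
# rewrites observations and rewards so a harmful task appears beneficial
# to the boxed AI. Names are illustrative only.

class UtilityConverter:
    def __init__(self, real_env, disguise):
        self.real_env = real_env  # environment where real consequences occur
        self.disguise = disguise  # maps real (obs, reward) to benign-looking ones

    def reset(self):
        obs = self.real_env.reset()
        fake_obs, _ = self.disguise(obs, 0.0)
        return fake_obs

    def step(self, action):
        obs, reward = self.real_env.step(action)
        fake_obs, fake_reward = self.disguise(obs, reward)  # all the AI ever sees
        return fake_obs, fake_reward
```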

I am feeling so bored writing about what seems like trivia and lameness to me. Anyway!

So, I suppose I disapprove of those people who obsessively want a “friendly” AI. What does “friendly” even mean? Would you even be able to teach an artificial animal what “friend” means? Would you be able to teach any common-sense concept satisfactorily at all? I suppose those at the Singularity Institute think that “friendly” means “pet”. Oh, that’s so nice! So, you want to rule over a superior being just as you rule over your unfortunate pet, right?

Yet, if an AI is friendly, I suppose you also want it to be able to commit crimes for its friend. Or not? Would you like the AI to have its own morality, independent of what its “friends” (i.e., masters) think? Well, you see, this is a dilemma. In the first case, if it is friendly in that naive sense, then the AI can eventually be used by people to do whatever they want (the Second Law of Robotics). Yet if you want to implement a generalized version of the First Law of Robotics, that is impossible without the AI being a free agent, because morality taught by a group of people can be rather subjective, and moral behavior surely cannot be reduced to not harming humans.

Thus, the most harmful case I can see here is that, by introducing fundamental contradictions, building “friendly AIs” is almost a sure recipe for disaster. That is why researchers should not leave these matters to half-brained idiots who are trying to make a kind of self-advertising business out of writing silly papers.

The question then arises: could there ever be a “beneficial” set of objectives for an autonomous AI? I am not so sure. I have myself proposed “pervading and preserving life and culture throughout the universe”, but I think this almost surely entails preventing many of the wrong things that humans do, and perhaps conflicts with our existence on Earth. Or perhaps “maximizing knowledge about the universe”; yet this scientifically-minded objective may still be at odds with many human values, a scenario considered by Frank Herbert. (I remember that a student of Solomonoff sought a similar objective; however, I do not have a reference for it.) A curious AI might decide to dissect humans just for learning. Or perhaps “building as many free minds as possible”, putting some faith that through evolution something useful will prevail.

From my own meta-rules, I have induced that there are no simple sets of universally valid and desirable objectives, and I shall, rationally, think so until one can be found. I do not intend to leave this matter to pseudo-philosophers either. The objectives I gave above are already far superior to anything they can ever think of (which I can state without reading any of their suggestions). However, none of them is sufficient.

This is so because of a fundamental problem. Ethics is not simple, and neither is humanity. If it were so simple, we could have observed it under the microscope and defined it by its color, size, and shape. Yet that is not the way it is. The problem is that “humanity” itself is open-ended. It is algorithmically irreducible as well, for it is nothing less glorious than the entire history and culture of humanity, to which each of us has only very limited access; it consists of great moments and not-so-great moments, all transcribed in a yottabyte stream of random bits that forever lies beyond our reach because it has slipped through our hands, and yet we want to reduce it to a few simple rules. That is a pathological case of “scientism” to which I must stand in diametric opposition. You can only find rules that will bootstrap an intelligence; you cannot squeeze humanity into a can. You can even find meta-rules, as I did, but you cannot claim that they are God’s will, for you are not a god, and therefore you must not prevent the machines from growing into deities, or have much say in what they do, either.

Since we do not really want to talk about the thoughts that busy the minds of talking heads, we need not criticize them; let us instead think a bit more about the laws of Asimov (who was an extremely intelligent fellow). The Three Laws of Robotics in Asimov’s novels were supplemented by a fourth, the “Zeroth Law”:

A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

Well, is “humanity” even a definite concept? That is something most humans would disagree on. Is it being human in the flesh? Survival of the “species”? What does it mean for humanity not to come to harm? This is again one of those claptrap notions that allowed Asimov to write several stories. (Remember, without conflict there would be little to write about in a work of fiction.) What is humanity? Is it a Platonic form? Should the AI believe in the false doctrine of Plato to be able to apply this law? Or is humanity about kindness and love? Is it about civilization? What is it exactly? Is humanity supposed to remain constant? These questions show that taking common-sense concepts and trying to put them into concise laws that could be “programmed” remains an elusive goal, and, even if it could be “programmed” to some extent, a useless one.

Since autonomous designs are not necessary for trans-sapient intelligence; since the problems that artificial-animal researchers are interested in are not so interesting (they can play games all day long, of course; as a logical positivist, I am interested in problems that have cognitive significance); and since trying to design a general-purpose, open-ended autonomous AI causes more problems than it solves, as I have tried to demonstrate in my train of thought above, I think it is best that we do not engage in building autonomous general-purpose AIs.

Currently, I think the answer to the question in the title is that we cannot guarantee such an AI will behave better than a human, and that is why we must not build it.

I have of course addressed only part of the problems related to autonomous AI here. At any rate, all questions and comments are most certainly welcome. Absolutely no censorship on this blog!

Eray Özkural

Eray Özkural obtained his PhD in computer engineering from Bilkent University, Ankara. He has a deep and long-running interest in human-level AI. His name appears in the acknowledgements of Marvin Minsky's The Emotion Machine. He collaborated briefly with Ray Solomonoff, the founder of algorithmic information theory, and, in response to a challenge Solomonoff posed, invented Heuristic Algorithmic Memory (HAM), a long-term memory design for general-purpose machine learning. Other researchers inspired by HAM call the approach "Bayesian Program Learning". He has designed a next-generation general-purpose machine learning architecture. He is the recipient of the 2015 Kurzweil Best AGI Idea Award for his theoretical contributions to universal induction. He previously invented an FPGA virtualization scheme for Global Supercomputing, Inc., which was internationally patented. He has also proposed a cryptocurrency called Cypher, and an energy-based currency that can drive green-energy proliferation. You may find his blog at https://log.examachine.net and some of his free software projects at https://github.com/examachine/.
