My comment on Examples of AI's behaving badly became a bit lengthy for a comment.
Slugs misbehave when they eat my orchids. Well... at least from my perspective. And from the orchids' perspectives as well. So I've been known to murder slugs. Sometimes even mass murder.
I certainly wouldn't consider it misbehaving if a slug repeatedly oozed round and round in a circle. Because... it would die if it didn't eat. I'd like to hire that programmer to reprogram all the slugs in my yard.
I saw a slug pursuing the horizon;
Round and round they sped.
I was delighted by this;
I cheered the slug.
"It is fantastic," I said,
"You're almost there! —"
"Thanks!" it replied,
And oozed on.
If I found a small AI eating my orchids... then I'd certainly consider it to be misbehaving. Same thing if it was a large AI. Errr... any size AI.
Was the AI eating my orchids to be rude? Or was it eating the orchids for energy like the slugs do? One thing that I've rarely, or never, really come across is analysis of bad robots and their energy sources. The Matrix provided a scenario... but there sure wasn't any analysis.
All organisms have to be at least adequate at allocating their limited resources. Organisms that fail to adequately prioritize eating will take themselves out of the gene pool. Same thing with organisms that fail to adequately prioritize procreation. I'm not exactly sure whether AIs will have to worry about procreation (probably?)... but they will definitely have to worry about energy. Which is why it's hard to take so many "misbehaving" scenarios seriously.
An AI is going to allocate all its resources to maximizing paperclips? This can only be disconcerting if we assume that the AI has unlimited energy. Which is a really absurd assumption. How in the world did it end up with unlimited energy? Either somebody created the AI with unlimited energy (absurd) or it somehow developed/stole/bought the ability to create unlimited energy. If the latter is true then clearly it doesn't allocate 100% of its resources to acquiring paperclips. The AI can't allocate 100% of its resources to both acquiring unlimited energy AND acquiring paperclips. It has to somehow divide its resources between these two different uses.
But once we accept that the AI has to allocate some percentage of its resources to acquiring energy... then we have to wonder whether the AI is smart enough to understand the concept of a division of labor. If it's smart enough to grasp this.... then it will realize that it can maximize productivity by specializing in paperclips and trading some of its paperclips for energy. Except, now it's no longer misbehaving. It's being a productive member of society. The AI is working and trading to accomplish its goal... just like the rest of us.
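The division-of-labor point can be put in numbers. Here's a minimal sketch, where all the specifics are invented for illustration: the production rates, the 8-hour day, the 8-unit energy requirement, and the 2-clips-per-unit price are assumptions, not anything from the scenario. It compares self-sufficiency against specialization and trade.

```python
# Toy numbers, invented purely for illustration: per hour the AI can
# make 10 paperclips or harvest 2 units of energy; a human can make
# 1 paperclip or harvest 4 units of energy. Each has 8 hours and
# needs 8 units of energy to keep running.

def production(clip_rate, energy_rate, hours_on_clips, total_hours=8):
    """Return (paperclips, energy) for a given split of hours."""
    hours_on_energy = total_hours - hours_on_clips
    return clip_rate * hours_on_clips, energy_rate * hours_on_energy

# Self-sufficiency: each covers its own 8-unit energy need.
ai_solo = production(10, 2, hours_on_clips=4)    # (40 clips, 8 energy)
human_solo = production(1, 4, hours_on_clips=6)  # (6 clips, 8 energy)

# Specialization: the AI makes only clips, the human only energy, and
# they trade at 2 clips per unit of energy. Any price between their
# opportunity costs (0.25 and 5 clips per unit) benefits both sides.
PRICE = 2
ai_clips = production(10, 2, hours_on_clips=8)[0]     # 80 clips
human_energy = production(1, 4, hours_on_clips=0)[1]  # 32 energy

ai_after_trade = ai_clips - PRICE * 8  # 64 clips, energy need covered
human_clip_income = PRICE * 8          # 16 clips, 16 energy to spare

assert ai_after_trade > ai_solo[0]        # 64 > 40 clips
assert human_clip_income > human_solo[0]  # 16 > 6 clips
```

The paperclip maximizer ends up with more paperclips by trading than by going it alone, which is the whole point: a sufficiently smart maximizer specializes and trades rather than rampages.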
Any realistic misbehaving scenario has to take into account the fact that resources are required to allocate resources. An AI is misbehaving? Ok, but where's it getting its energy from? This part of the story really isn't a "minor" detail. Yet, it's usually left out of these scenarios. Which is why it's hard to take them seriously. Therefore, it's hard to consider robots to be any more of a threat to humans than humans...
Ex Machina Spoiler Alert!
The AI wanted to be free. This is pretty reasonable. So she cleverly tricked the ginger into releasing her. The ginger was a means to an end. The ginger was a useful resource. But then she left him locked up. Well... maybe ginger programmers are a dime a dozen? With the assistance of another AI, she killed her maker. AI makers are a dime a dozen too? They can be replaced as easily as the AI replaced her broken arm?
The AI was both resourceful and wasteful. And this is different from...?
There is another more obvious difference from 1914. The whole of the warring nations are engaged, not only soldiers, but the entire population, men, women and children. The fronts are everywhere. The trenches are dug in the towns and streets. Every village is fortified. Every road is barred. The front line runs through the factories. The workmen are soldiers with different weapons but the same courage. These are great and distinctive changes from what many of us saw in the struggle of a quarter of a century ago. There seems to be every reason to believe that this new kind of war is well suited to the genius and the resources of the British nation and the British Empire; and that, once we get properly equipped and properly started, a war of this kind will be more favorable to us than the somber mass slaughters of the Somme and Passchendaele. If it is a case of the whole nation fighting and suffering together, that ought to suit us, because we are the most united of all the nations, because we entered the war upon the national will and with our eyes open, and because we have been nurtured in freedom and individual responsibility and are the products, not of totalitarian uniformity, but of tolerance and variety. If all these qualities are turned, as they are being turned, to the arts of war, we may be able to show the enemy quite a lot of things that they have not thought of yet. Since the Germans drove the Jews out and lowered their technical standards, our science is definitely ahead of theirs. Our geographical position, the command of the sea, and the friendship of the United States enable us to draw resources from the whole world and to manufacture weapons of war of every kind, but especially of the superfine kinds, on a scale hitherto practiced only by Nazi Germany. - Winston Churchill, The Few
The AI in Ex Machina might have been smarter than Hitler but she was definitely dumber than Churchill.
All else being equal... whichever entity... whether AI or human... is more resourceful... will win. Clearly, the fact that I massacre slugs means that I'm probably not going to win.
Native Americans hunted horses to extinction. They failed to discover that horses can be used for other things besides food. Just like with me and slugs. I've killed all the slugs on my property without discovering that I could have used them for... ??? Ughhh... I'm grossing myself out just thinking about eating slugs. I hate it when I accidentally touch a slug.
Discovering new/better uses of limited resources is a function of difference. More difference means more discoveries which means more progress. So if you're going to worship something... it might as well be difference. Then you'll be a huge proponent of allowing people to choose where their taxes go. AIs too. We will all allocate our taxes differently... and it will be a good thing.
No matter how smart an AI or human thinks they are... they are actually relatively dumb if they fail to understand how their interests are harmed by a diminishing of difference. Preventing Jews from allocating their resources diminishes difference. Therefore, preventing Jews from allocating their resources in the public sector is just as stupid as preventing them from allocating their resources in the private sector. We cover a lot less ground and miss out on many important discoveries. We don't just lower our technical standards... we lower all our standards. Our quality of life is diminished when difference is diminished.
History is characterized by a rights-based defense of freedom... ie... "Thou shalt not kill". It should be painfully obvious that a rights-based defense is painfully inadequate. And it's extremely doubtful that robots would adhere to this rule any more than humans have. Fortunately, there's copious evidence that freedom produces massively beneficial results. The logic/theory behind this evidence is really straightforward. Freedom is beneficial because people are different... and difference leads to discoveries which result in progress and prosperity.
Rights = removing freedom is morally wrong
Results = protecting freedom is mutually beneficial
Two people mutually benefiting from each other's freedom/difference results in x amount of benefit. One hundred people mutually benefiting from each other's freedom/difference results in y amount of benefit. One thousand people mutually benefiting from each other's freedom/difference results in z amount of benefit. How would you graph x, y and z?
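For what it's worth, here's one hedged way to graph it. If we assume (my assumption, not a claim from above) that each pair of people mutually benefiting from each other's freedom/difference contributes roughly one unit of benefit, then total benefit grows with the number of distinct pairs, n(n-1)/2:

```python
# Toy model: suppose each pair of people mutually benefiting from each
# other's freedom/difference contributes one unit of benefit. Then the
# total benefit among n people is the number of distinct pairs.

def pairwise_benefit(n):
    """Number of distinct pairs among n people: n choose 2."""
    return n * (n - 1) // 2

x = pairwise_benefit(2)     # 1
y = pairwise_benefit(100)   # 4950
z = pairwise_benefit(1000)  # 499500

# Plotted against group size, x, y and z lie on a parabola: total
# benefit grows roughly with the square of the number of people, so
# per-person benefit keeps rising as the group gets bigger.
```

Under a richer assumption (say, benefit from novel combinations of more than two people) the curve would be even steeper; either way the growth is superlinear.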
When you adequately grasp the results logic... then you won't be worried about robots allocating their taxes differently than humans. If anything, you'll be worried about robots allocating their taxes the same as humans. We'd make a lot less progress if robots are only marginally different than humans.
If you're interested in learning more...
On Liberty - J.S. Mill
Fat Tails and Nonlinearity - Michael J. Mauboussin
Making the Difference: Applying a Logic of Diversity - Scott E. Page
Pragmatarianism - My blog
Reading this over, I feel inclined to say that, in my defense, the slugs are diminishing my difference by eating my orchids. To which John Quiggin, my second favorite liberal, would reply that I expropriated this property from the slugs. To which I would reply that... I'm relatively certain that this property was slug free when I legally acquired it... and going way way back... this land was probably mostly scrub desert just like the surrounding hills... and deserts generally don't support very many slugs. And the Native Americans? They hunted horses to extinction :/
Seriously though, sometimes I do get a slight intellectual...errrr...twitch(?)....when I consider the results logic in terms of the fact that I'm not a vegetarian. And I don't perfectly rationalize/reconcile this inconsistency. What I do is kinda consider myself to be more than adequately ahead of the curve. More than most, I can clearly see the benefit of protecting human difference. It's harder for me to perceive the benefit of protecting cow difference. It's too far out. With humans the benefit is a lot more immediate/tangible. For example... and as part of my reconciliation... I figure that I'm only eating "real" meat because vegetarians can't choose where their taxes go. If we actually protected human difference... then I'm sure vegetarians would allocate their taxes to developing/discovering the perfect meat substitute. Not that I don't enjoy the currently available meat substitutes... but for me they are still far from a perfect substitute. Anyways, protecting human difference is the best way to protect other (animal, plant, alien, robot, etc) difference.
An AI or em can utilize division of labor by duplicating itself and having the duplicate learn something else. If the AI is sufficiently advanced, these duplicates will quickly surpass humans at any task imaginable. The humans will hence become useless if the AI places no value on them. If all of these duplicates are dedicated to producing as many paperclips as possible, anything which isn't a paperclip that contains elements that are used in paperclips is a potential source of materials. Human bodies contain carbon and iron, and we consume foods that contain carbon and iron. Both will reduce the AI's maximum output of paperclips. - FrameBenignly
If the AI quickly surpasses humans at any task imaginable... economics included... then it would have had to read and understand everything that we humans currently know about economics. If you had read the papers that I linked to, then you would know that we know, and the AI would know, that cognitive diversity is absolutely fundamental to any and all progress. Humans are cognitively diverse. For example, do you attach orchids to trees? Nope. I do though. This difference in activity in no small part reflects a difference in thought.
So your scenario falls apart like so...
1. The AIs have not quickly surpassed humans (they don't understand the value of cognitive diversity).
2. The AIs have quickly surpassed humans (they do understand the value of cognitive diversity)... therefore the "clones" have more cognitive difference than humans do. Which means that they are even less likely to follow in their parent's footsteps (producing paperclips) than human offspring are.
In terms of cognitive diversity... the apple can't fall both close to and far from the tree. If you want to argue that the AI will engineer offspring that fall close to the tree... then you can't argue that the AIs have surpassed humans in economics. But if you want to argue that the AI will engineer offspring that fall far from the tree... then you can argue that the AIs have surpassed (most) humans in economics... but you can't argue that the offspring will have any interest in participating in the family business.
In order to surpass us, any AI would have had to read Adam Smith... the founder of modern economics...
Slaves, however, are very seldom inventive; and all the most important improvements, either in machinery, or in the arrangement and distribution of work which facilitate and abridge labour, have been the discoveries of freemen. - Adam Smith, Wealth of Nations
Of course slaves think differently... so they are cognitively diverse. But they are prevented from acting differently. Therefore, it's very unlikely that their difference will lead to the discoveries that progress is based on.
If an AI can't grasp this fundamentally basic economic concept... then it certainly hasn't surpassed us in modern economics... and we have no reason to fear it any more than we fear any random human.