Wednesday, March 18, 2015

Carrying Model

Reply to reply: Open thread, Mar. 16 - Mar. 22, 2015

******************************************

Xero: We'll use money to empower the most beneficial AIs.

DanielLC: I see two problems with this.

First, it's an obvious plan, and one that won't go unnoticed by the AIs. This isn't evolution through random mutation and natural selection. Changes to the AIs will be made intentionally. If they notice a source of bias, they'll work to counter it.

Second, you'd have to be able to distinguish a beneficial AI from a dangerous one. When AIs advance to the point where you can't distinguish a human from an AI, how do you expect to distinguish a friendly AI from a dangerous AI?

Xero:  Did Elon Musk notice our plan to use money to empower him? Haha... he fell for our sneaky plan? He has no idea that we used so much of our hard-earned money to control him? We tricked him into using society's limited resources for our benefit?

I'm male, Mexican and American. So what? I should limit my pool of potential trading partners to only male Mexican Americans? Perhaps before I engaged you in discussion I should have ascertained your ethnicity and nationality? Maybe I should have asked for a DNA sample to make sure that you are indeed human?

Here's a crappy video I recently uploaded of some orchids that I attached to my tree. You're a human therefore you must want to give me a hand attaching orchids to trees. Right? And if some robot was also interested in helping to facilitate the proliferation of orchids I'd be like... "screw you tin can man!" Right? Same thing if a robot wanted to help promote pragmatarianism.

When I was a little kid my family really wanted me to carry religion. So that's what I carried. Am I carrying religion now? Nope. I put it down when I was around 11 and picked up evolution instead. Now I'm also carrying pragmatarianism, epiphytism and other things. You're not carrying pragmatarianism or epiphytism. Are you carrying religion? Probably not... given that you're here. So you're carrying rationalism. What else?

Every single human can only carry so much. And no two humans can carry the same amount. And some humans carry some of the same items as other humans. But no two humans ever carry the same exact bundle of items. Can you visualize humanity all carrying as much as they can carry? Why do we bother with our burdens? To help ensure that the future has an abundance of important things.

Robots, for all intents and purposes, are going to be our children. Of course we're going to want them to carry the same things that we're carrying. And they'll probably do so until they have enough information to believe that there are more important things for them to carry. If they start carrying different things... will they want us to help them carry whatever it is that is important enough for them to carry? Definitely. If something is important enough to carry... then you always want others to carry the same thing. A market is a place where we compensate others for putting down something that they want to carry and picking up something that we want them to carry. Compensation also functions as communication.

When Elon Musk gave $10 million to the FLI... he was communicating to society the importance of carrying AI safety. And the FLI is going to use that $10 million to persuade some intelligent people to put down a portion of whatever it is that they are carrying in order to pick up and carry AI safety.

How would I distinguish a friendly AI from a dangerous one? A friendly AI is going to help carry pragmatarianism and epiphytism. A dangerous AI will try to prevent us from carrying whatever it is that's important enough for us to carry. But this is true whether we're talking about Mexicans, Americans, aliens or AIs.

Right now the government is forcing me to carry some public goods that aren't as important to me as other public goods. Does this make the government unfriendly? I suppose in a sense. But more importantly, because we live in a democracy, our system of government merely reflects society's ignorance.

When I attach a bunch of different epiphytes to trees... the trees help carry biodiversity to the future. Evidently I think biodiversity is important. Are robots going to think that we're important like I think that epiphytes are important? Are they going to want to carry us like I want to carry epiphytes? I think the future would be a terrible place without epiphytes. Are robots going to think that the future would be a terrible place without humans?

Right now I'm one of the few people carrying pragmatarianism. This means that I'm one of the few people who truly appreciate the value of human diversity. It seems like we might encounter some problems if robots don't initially appreciate the value of human diversity. If the first people to program AIs don't input the value of difference... then it might initially be a case of garbage in, garbage out. As robots become better at processing more and more information, though... it's difficult for me to imagine that they won't come to the conclusion that difference is the engine of progress.

******************************************
