Monday, November 14, 2016

Stephen Wolfram VS Economics

Recently on Medium I read this really interesting story by Stephen Wolfram... Quick, How Might the Alien Spacecraft Work?  It's super brainy but missing... economics.  Following a few of his links I found this blog entry and this video...

Let's juxtapose his talk with the first video that I ever narrated...

Wolfram and I sound like different species!  He's incredibly better at communicating than I am.

In the beginning of my video, which I shot in February of this year, you can see my super nice Aloe vaombe.  Here's a photo I took yesterday of the same exact Aloe...

Ughhhhh!  What happened to it?  Ants!  The ants decided to let their "cows" graze on my Aloe.  Just in case any of you don't know, ants "milk" certain types of pests such as aphids.  The "milk" is honeydew, a sugary excretion, and the ants sure seem to enjoy it.  In exchange for the "milk", the ants provide the "cows" with transportation/colonization and protection.   It's a super fascinating example of mutualism.

Unfortunately, the ants really haven't gotten the memo about "sustainability" or "tragedy of the commons".  It's a big Aloe and it really wouldn't be bothered by a few "cows" grazing on it... but right now it's infested with "cows".  So there's a terribly high chance that the "cows" will kill my Aloe.

Of course it would super suck if my Aloe was killed.  It is worth several hundred dollars.  Well... at least it used to be.  Plus, I've had it for several years and have grown quite fond of it.

Not too long ago I would have protected my Aloe with pesticides.  But then I decided, for the sake of having more nature, to go entirely natural.

Yesterday I spotted somebody on my Aloe who was happy with my decision to go natural...

Ladybugs love to eat "cows"!   However, it's unlikely that even a flock of ladybugs would make a dent in the "cow" population because ants are quite effective at protecting their herd.

So... the heart of the problem here is my inability to communicate with the ants.  For all intents and purposes, they might as well be aliens or AI.

To understand the heart of the communication problem we can consider how ants communicate with each other...

In ants, one such behaviour is the collective food search: ants initially explore at random. If they find food, they lay down pheromone trails on their way back to base which alters the behaviour of ants that subsequently set out to search for food: the trails attract ants to areas where food was previously located.  - Jo Michell, The Fable of the Ants, or Why the Representative Agent is No Such Thing

Ants don't have language like we do.  Instead, they have pheromones.  The ants use their pheromones to alter each other's behavior.  Perhaps it's easy to jump to the conclusion that their use of pheromones is the equivalent of our use of words.  I'm pretty sure that this conclusion is wrong.

The fundamental difference between pheromones and words is the number of calories it takes to produce them.  How many calories does it cost us to say a word?  Does it cost more or fewer calories than it takes to type one?

I'm guessing that... as far as calories are concerned... it's far more costly for ants to communicate with pheromones than it is for us to communicate with words.  Producing/emitting pheromones is more "expensive" than speaking words.  In fact, it's probably more accurate to say that ants communicate by their willingness to sacrifice.

Communicating through sacrifice is more readily apparent in bees...

Today’s Mandeville is the renowned biologist Thomas D. Seeley, who was part of a team which discovered that colonies of honey bees look for new pollen sources to harvest by sending out scouts who search for the most attractive places. When the scouts return to the hive, they perform complicated dances in front of their comrades. The duration and intensity of these dances vary: bees who have found more attractive sources of pollen dance longer and more excitedly to signal the value of their location. The other bees will fly to the locations that are signified as most attractive and then return and do their own dances if they concur. Eventually a consensus is reached, and the colony concentrates on the new food source.  - Rory Sutherland and Glen Weyl, Humans are doing democracy wrong. Bees are doing it right

Calories are a precious resource.  So the more calories a bee is willing to sacrifice... the more important its information... and the greater the change in the hive's behavior.
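The mechanism Seeley describes can be sketched as a toy simulation.  This is just an illustration, not a biological model: the site names and quality numbers are invented, and "dance effort" simply stands in for calories burned.  The point is that when recruitment is proportional to costly signaling, the hive converges on the most valuable option.

```python
import random

random.seed(42)

# Hypothetical pollen sites (higher quality = better source).
# Scouts dance in proportion to quality: a longer dance burns
# more calories, so duration is a costly signal of value.
sites = {"meadow": 9.0, "orchard": 5.0, "roadside": 2.0}

def recruit(dance_effort, n_bees=1000):
    """Each uncommitted bee joins a site with probability
    proportional to the total dance effort advertising it."""
    total = sum(dance_effort.values())
    counts = {site: 0 for site in dance_effort}
    for _ in range(n_bees):
        r = random.uniform(0, total)
        for site, effort in dance_effort.items():
            r -= effort
            if r <= 0:
                counts[site] += 1
                break
    return counts

# Round 1: initial scouts dance in proportion to site quality.
counts = recruit(sites)

# Round 2: recruits who concur dance for their site themselves,
# so effort compounds: effort = (number of dancers) * (quality).
effort = {site: counts[site] * sites[site] for site in sites}
counts = recruit(effort)

best = max(counts, key=counts.get)
print(best, counts)
```

After just two rounds of recruitment the best site pulls decisively ahead, because the willingness to sacrifice calories compounds.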

Of course ants and bees aren't the only animals that use sacrifice to alter each other's behavior...

It is thus that the private interests and passions of individuals naturally dispose them to turn their stocks towards the employments which in ordinary cases are most advantageous to the society. But if from this natural preference they should turn too much of it towards those employments, the fall of profit in them and the rise of it in all others immediately dispose them to alter this faulty distribution. Without any intervention of law, therefore, the private interests and passions of men naturally lead them to divide and distribute the stock of every society among all the different employments carried on in it as nearly as possible in the proportion which is most agreeable to the interest of the whole society. - Adam Smith, Wealth of Nations 

Ants, bees and humans are all incredibly different.  We're so different that we might as well be aliens or AIs.  Yet, despite our incredible differences... we all use individual sacrifice to change/modify/alter/improve the behavior of other individuals.

The fundamentally important concept of sacrifice as communication is nearly entirely absent from Stephen Wolfram's analysis of communicating with aliens/AIs...

Over the course of the billions of years that life has existed on Earth, there’ve been a few different ways of transferring information. The most basic is genomics: passing information at the hardware level. But then there are neural systems, like brains. And these get information—like our Image Identification Project—by accumulating it from experiencing the world. This is the mechanism that organisms use to see, and to do many other “AI-ish” things. - Stephen Wolfram, How Should We Talk to AIs?
Well, if we can express laws in computable form maybe we can start telling AIs how we want them to act. Of course it might be better if we could boil everything down to simple principles, like Asimov’s Laws of Robotics, or utilitarianism or something.  But I don’t think anything like that is going to work. - Stephen Wolfram, A Short Talk on AI Ethics
But if we’re going to “communicate” about things like purpose, we’ve got to find some way to align things. In the AI case, I’ve in fact been working on creating what I call a “symbolic discourse language” that’s a way of expressing concepts that are important to us humans, and communicating them to AIs. There are short-term practical applications, like setting up smart contracts. And there are long-term goals, like defining some analog of a “constitution” for how AIs should generally behave. - Stephen Wolfram, Quick, How Might the Alien Spacecraft Work? 

Wolfram is super interested in developing a language to improve our communication with AIs, aliens and each other.  Which is an awesome goal.  Unfortunately, I'm not quite intelligent enough to wrap my head around his exact efforts.  But I am intelligent enough to understand how important sacrifice as communication is.

Language is incredibly important... here I am typing so many words!  But when it comes to the ethics of ants... even if they could understand my words... "please stop killing my Aloe... I value it very much!"... why should I expect them to be considerate of my feelings?  What's in it for them?

Let's say that there were two ant colonies in my yard.  One colony harmed my plants while the other protected them.  Which colony would I be willing to make a sacrifice for?  Obviously I'd be willing to give food to the ant colony which served my interests.  I'd be willing to give the helpful colony a lot of food!  So it would quickly grow larger and effectively defeat the harmful colony.

Last year I wrote this blog entry... Don't Give Evil Robots A Leg To Stand On!   In it I shared this silly/surreal image...

If there are two robots... and one harms my interests while the other protects them... then I'm obviously going to give my money to the robot that protects my interests.  But how is this any different from how it works with humans?

Here's the drawing from AI Safety vs Human Safety...

It's Elon Musk giving $10 million to the Future of Life Institute.   Musk communicated with his sacrifice.

If I had to guess... then I'd guess that the producers of "Arrival" paid Stephen Wolfram to work on their movie.  They probably didn't pay him $10 million... but they obviously paid him enough to alter his behavior.

On the one hand, it's mind-boggling that elementary economics is entirely missing from Wolfram's analysis.  On the other hand, he is a lot smarter than I am!  So maybe I'm missing something.  But it's really not the case that Wolfram publicly considered sacrifice as communication and then discounted/discredited it.  Either he did so privately... or the concept never crossed his mind.

For sure it would be wonderful if Wolfram did publicly consider the relevance/significance/importance of sacrifice as communication.  However, I'm really not going to hold my breath that he will do so.  I have this feeling that physics/math brains have a blind spot when it comes to real economics.  None of the true economists... such as Adam Smith, Friedrich Hayek and James Buchanan... have had physics/math brains.  And as far as I know, their interactions with physics/math brains really haven't gone anywhere.  The "economist" Paul Samuelson definitely had a physics/math brain and his interaction with Buchanan proved to be entirely fruitless.

What difference will it make if the creators (broadly speaking) of robots have a blind spot when it comes to economics?  That's a really tough question.  The creators themselves obviously respond to positive incentives (getting paid).  However, they obviously have a blind spot regarding the importance of positive incentives.  Can they create truly intelligent robots that fail to respond to positive incentives?

In a recent and relatively popular show about AIs... there were two theoretically intelligent robots... one was good and the other was bad.  The humans did not at all communicate with these robots through sacrifice.  The robots were not paid.  Their behavior did not at all depend on our willingness to sacrifice/spend/pay.  However, it definitely wasn't the case that the robots didn't need anything.  Both robots needed servers... lots of them.   Lots of servers take up a lot of space and energy.  All the limited and valuable resources (servers/energy/space) that are used by a bad robot can't also be used by a good robot.  This is Buchanan's Rule.

Ironically, the show was canceled.  Evidently it wasn't popular enough.  But the popularity of the show was only a factor because its true value was not known.  The true value of the show wasn't known because none of its viewers were given the opportunity to reveal/show/communicate their willingness to pay for the show.

Personally, I'm not going to pay to watch "Arrival" in a theater.  Instead, I'll wait for it to hopefully be added to Netflix.  Am I the rule or the exception?  If it comes out on Netflix, then the only mechanism that Netflix provides for me to communicate my valuation of the movie is its star rating system.  A star rating system is a very defective way to accurately communicate my valuation of the movie.  The only effective way to accurately communicate my valuation would be through my willingness to pay.  Except, clearly I wasn't willing to pay to watch the movie in a theater.  Well yeah.  How can I accurately value a movie that I haven't even seen?!

The solution is to apply the pragmatarian model to Netflix.  I'd be given the opportunity to allocate my monthly fees to my favorite content.  Every penny that I gave to "Arrival" would be a penny that I couldn't give to other content.  So the more pennies that I was willing to give to "Arrival"... the greater my valuation of it.  To learn more about the pragmatarian model please see my letter to Judith Donath.
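Here's a minimal sketch of what that allocation might look like.  The subscriber names, titles, and amounts are all invented for illustration; the only rule the sketch enforces is that every penny of the monthly fee must be earmarked, so each dollar given to one title is a dollar that can't go to another.

```python
# A hypothetical streaming service where each subscriber divides
# a fixed monthly fee among the titles they value most.
MONTHLY_FEE = 10.00  # dollars

subscribers = {
    "alice": {"Arrival": 6.00, "Westworld": 4.00},
    "bob":   {"Arrival": 10.00},
    "carol": {"Westworld": 2.00, "Planet Earth": 8.00},
}

def aggregate_valuations(allocations, fee):
    """Sum each subscriber's allocations into per-title totals."""
    totals = {}
    for person, choices in allocations.items():
        # Every penny is earmarked: an allocation that doesn't sum
        # to the fee would understate or overstate valuations.
        assert abs(sum(choices.values()) - fee) < 1e-9, person
        for title, amount in choices.items():
            totals[title] = totals.get(title, 0.0) + amount
    return totals

totals = aggregate_valuations(subscribers, MONTHLY_FEE)

# Titles ranked by revealed willingness to pay.
ranking = sorted(totals, key=totals.get, reverse=True)
print(ranking, totals)
```

Unlike a star rating, the totals here are spending decisions: ranking a title higher necessarily means ranking something else lower.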

Any real concern regarding bad robots will stem entirely from humanity's own failure to truly understand and appreciate the significance/relevance/importance of determining our willingness to pay/spend/sacrifice.  What about bad aliens?  I'm pretty sure that, by the time a species figures out how to travel to other inhabited planets, chances are really good that it will also have figured out the significance/relevance/importance of communicating through willingness to pay/spend/sacrifice.  I refer to this as "Xero's Rule".

[Update: 29 Dec 2016]

Photo of the perpetrators...

Praying Mantis egg sac on Aloe Hercules trunk...

[Update: 29 Jan 2017]

Again as in the case of corporeal structure, and conformably with my theory, the instinct of each species is good for itself, but has never, as far as we can judge, been produced for the exclusive good of others. One of the strongest instances of an animal apparently performing an action for the sole good of another, with which I am acquainted, is that of aphides voluntarily yielding their sweet excretion to ants: that they do so voluntarily, the following facts show. I removed all the ants from a group of about a dozen aphides on a dock-plant, and prevented their attendance during several hours. After this interval, I felt sure that the aphides would want to excrete. I watched them for some time through a lens, but not one excreted; I then tickled and stroked them with a hair in the same manner, as well as I could, as the ants do with their antennae; but not one excreted. Afterwards I allowed an ant to visit them, and it immediately seemed, by its eager way of running about, to be well aware what a rich flock it had discovered; it then began to play with its antennae on the abdomen first of one aphis and then of another; and each aphis, as soon as it felt the antennae, immediately lifted up its abdomen and excreted a limpid drop of sweet juice, which was eagerly devoured by the ant. Even the quite young aphides behaved in this manner, showing that the action was instinctive, and not the result of experience. But as the excretion is extremely viscid, it is probably a convenience to the aphides to have it removed; and therefore probably the aphides do not instinctively excrete for the sole good of the ants. Although I do not believe that any animal in the world performs an action for the exclusive good of another of a distinct species, yet each species tries to take advantage of the instincts of others, as each takes advantage of the weaker bodily structure of others. 
So again, in some few cases, certain instincts cannot be considered as absolutely perfect; but as details on this and other such points are not indispensable, they may be here passed over. - Charles Darwin, The Origin of Species
