Sunday 20 May 2018

A Royal Wedding. May Asserts Control. Value Alignment in Artificial Intelligence. Asimov's Robots.


Finally, the UK media are getting over the royal wedding of Harry and Meghan: the anticipation, the pre-wedding coverage on the day, the wedding itself and the post-wedding analysis. Even we watched the wedding on TV, and I can recount interesting facts such as that the bride's veil was embroidered with floral motifs from all the Commonwealth countries, that the couple left in an electric Jaguar E-Type, and that the bridal bouquet would be laid on the grave of the Unknown Warrior in Westminster Abbey, a tradition begun by the Queen Mother in 1923. You could have stuck your head in a rabbit hole in the middle of the woods and a passing stranger would still somehow have conveyed a facet of the event to you. From my perspective, the most impressive part was the oratory and body language of the address by the Reverend Michael Curry.

Mind you, the wedding was a pleasant diversion from the general news. President Trump's potential trade war with China was apparently averted by a Chinese delegation travelling to the US and giving assurances that more purchases from the US would be made (to redress the trade imbalance).

A bit of late news to emerge from Prime Minister Theresa May's talks with her parliamentary MPs on Brexit solutions was her dressing-down of arch-Brexiteer Jacob Rees-Mogg. Apparently he asked her why she could not forget about any deal and simply keep the border open after Brexit, in effect taking the hard Brexit. According to the sources feeding back to the Guardian, the Prime Minister "spelled out in no uncertain terms the serious problems and costs that would result from having to resort to World Trade Organisation rules, while also stressing the potentially grave security dangers that would follow if and when the Republic of Ireland had to reimpose border controls on the orders of the EU in order to preserve the integrity of the single market."

On a warm, sunny Sunday we actually took to doing some gardening. I made a fruit flan in time for afternoon tea and also continued with preparations for Cambridge Open Studios. The previous week had been quite busy with interviews of Cambridge Open Studios organisers as research for the COS history book. I finally finished reading Cryptonomicon by Neal Stephenson, a complicated story that jumps between separate individuals and their descendants, the history of wartime cryptography, and a hoard of amassed gold waiting to be recovered.

Last Thursday's Cambridge Enterprise and Technology Meeting was on Artificial Intelligence. Rupert Thomas covered the complexity of getting processors to recognise emotions. Professor José Hernández-Orallo took a longer-term view of AI, and the message I took away was that as ever more intelligent AI is developed, it is essential to ensure value alignment between human objectives and those of the AI. The more fascinating insight that came to me during the talks, however, was that while we tend to see humans and machines developing separately, we humans are already adapting and evolving with our technology. Just think how the first reaction today is to reach for your smartphone to communicate with people or to access information and memories stored in your external memory on the web.

The topic of value alignment between humans and technology is now regularly featured in SciFi - just look at Westworld or Humans. It reminded me of two older works, the Cyteen trilogy by C. J. Cherryh and of course Isaac Asimov's Robot stories. The latter deal with the issue of giving robots a value system (the Three Laws of Robotics, later extended to four).

The original three were:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Asimov later added a preceding or overriding zeroth law, so that the laws are now:
  0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
I've dug out my copy of Asimov's 'The Complete Robot' to read and may go through the series that follows too.
