The Road to Conscious Machines: The Story of AI


The author is Head of Department and Professor of Computer Science at Oxford. He tried his hardest to make the book palatable to the general public and left all the weird symbols of mathematical equations behind. It’s still not very easy to read, and understandably so: to understand and appreciate all these ideas, I believe a high level of mathematical understanding is required. Unfortunately, mine has long gone since uni days.

Anyhow, AI is all the rage in technology news these days. Hardly a day goes by without a headline about AI, so I wanted to understand what it is all about. That’s basically why I picked up the book, and there was no disappointment. It’s interesting to learn that a book about computer science and AI needs to discuss the philosophy of existence, the meaning of life and so on.

The following are small bits and pieces that I liked and new things I learned reading the book.

AI appeals to fundamental questions about the human condition, and our status as homo sapiens – what it means to be human, and whether humans are unique.

How do we know that machines have intelligence?

1 Turing test

Through questions and answers with a machine (it could be through typing on a screen, like a chat): if we can’t tell whether we are talking to a person or a machine, then the machine has intelligence. The test is all about indistinguishability.

2 Winograd schemas. These are short questions, perhaps best illustrated by examples:

Statement 1a: The city councillors refused the demonstrators a permit because they feared violence.

Statement 1b: The city councillors refused the demonstrators a permit because they advocated violence.

Question: Who [feared/advocated] violence?

Statement 2a: The trophy doesn’t fit into the brown suitcase because it is too small.

Statement 2b: The trophy doesn’t fit into the brown suitcase because it is too large.

Question: What is too [small/large]?

To answer these questions correctly, machines need an understanding and appreciation of context. This test will kick chatbots’ (fake AI) ass, because chatbots have no understanding of context. They converse with us through imitation, detailed rules and scripts, but with a complete lack of understanding of what they are talking about. That’s definitely not intelligence.

In the spirit of Winograd schemas, I think of an even more complicated example in Thai, with a little smile on my face, as all my foreigner friends who were studying Thai at the time scratched their heads listening to this sentence.

เสด็จให้มา ทูลถามว่า เสด็จจะเสด็จหรือไม่เสด็จ ถ้าเสด็จจะเสด็จ เสด็จจะเสด็จด้วย

The loose translation goes something along the lines of: “Person 1 asked me to ask you (Person 2) whether you are going; if you are going, Person 1 is going too.” In the spirit of Winograd schemas, เสด็จ can mean three different things depending on the context: Person 1, Person 2, or the verb “to go”. Non-Thai humans already have a hard time understanding this sentence; it would be fun to imagine if and when machines could understand it.

What is AI?

Artificial General Intelligence roughly equates to having a computer with the full range of intellectual capabilities that a person has. This would include the ability to converse in natural language (cf. the Turing test), solve problems, reason, perceive its environment and so on, at or above the level of a typical person, rather than being good at only one very specific task.

Different techniques for building an AI

Search 

In AI, we don’t mean search in the sense of searching the web (e.g. Google or Baidu). Search in AI is a fundamental problem-solving technique, which involves systematically considering all possible courses of action and then picking the best one. Applying this technique naively to board games like chess or Go would require a massive amount of processing power and time, rendering it impractical.
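To make the idea concrete, here is a minimal sketch of my own (not from the book): exhaustive search applied to a toy game of Nim, where players alternately take 1–3 sticks from a pile and whoever takes the last stick wins. The program considers every possible move and every possible reply before choosing.

```python
# Exhaustive search over a toy game (Nim): systematically consider all
# possible courses of action and pick the best one.

def best_move(pile):
    """Try every move, recursively explore all outcomes, and return
    (move, True-if-the-current-player-can-force-a-win)."""
    for take in (1, 2, 3):
        if take > pile:
            continue
        if take == pile:            # taking the rest wins immediately
            return take, True
        _, opponent_wins = best_move(pile - take)
        if not opponent_wins:       # opponent loses from the resulting state
            return take, True
    return 1, False                 # every move leads to a winning opponent

print(best_move(10))  # (2, True): take 2 sticks, a forced win
```

Even in this tiny game the search visits every branch of the game tree; for chess or Go the tree is astronomically larger, which is exactly why naive search breaks down.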

Logical AI

It’s an AI system that continually follows a particular loop: perceiving its environment, reasoning about what to do, and then acting. But in a system that operates this way, the system is decoupled from the environment: by the time it has finished reasoning, the world may already have changed. It would also require a massive amount of processing power and time, rendering it impractical.
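The perceive–reason–act loop can be sketched in a few lines. This is my own toy example (a one-room vacuum world, not from the book); the "reasoning" step is just a stand-in rule for what a real logical AI would derive by deduction.

```python
# The logical-AI control loop: perceive the environment, reason about
# what to do, then act, over and over.

def perceive(world):
    """Turn the raw world state into beliefs."""
    return {"dirty"} if world["dirty"] else set()

def reason(beliefs):
    """A stand-in for logical deduction: rules map beliefs to an action."""
    return "clean" if "dirty" in beliefs else "wait"

def act(world, action):
    if action == "clean":
        world["dirty"] = False

world = {"dirty": True}
for _ in range(2):                  # the continual loop
    beliefs = perceive(world)
    action = reason(beliefs)
    act(world, action)

print(world["dirty"])  # False: the room ends up clean
```

The decoupling problem is visible even here: between `perceive` and `act`, nothing stops the world from changing under the agent’s feet.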

Agent based AI

Firstly, they had to be reactive: they had to be attuned to their environment, and able to adapt their behaviour appropriately when changes in the environment demanded it.

Secondly, they had to be proactive: capable of systematically working to achieve their given task on behalf of their user.

Finally, agents were social: capable of working with other agents when required. 

To sum up

The Golden Age of AI had emphasized proactive behaviour – planning and problem solving; while behavioural AI had emphasized the importance of being reactive – embodied in and attuned to the environment. Agent-based AI demanded both, and in addition threw something new into the mix: the idea that agents would have to work with other agents, and for this, they would need social skills.

Machine learning

It is not like human learning: it is about learning from, and making predictions about, data. A machine learning program can compute a desired output from a given input without being given an explicit recipe for how to do so. This is in contrast to search- or logic-based AI, where a recipe is required.

An example of machine learning is a program that can recognize faces in pictures. The way this is usually done is by providing the program with examples of the thing you are trying to learn. Thus, a program to recognize faces would be trained by giving it many pictures labelled with the names of the people who appear in them. The goal is that, when subsequently presented with only an image, the program can correctly give the name of the person in the picture.
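A toy illustration of learning from labelled examples (my own, much simpler than real face recognition): a 1-nearest-neighbour classifier. Each “picture” is reduced to a list of numbers (features); training is just storing labelled examples, and prediction returns the label of the closest stored example.

```python
# Learning from labelled examples: predict the label of the nearest
# training example. No explicit recipe for "what makes this face Alice"
# is ever written down; it is implicit in the data.

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def predict(training_data, features):
    """training_data: list of (features, name) pairs."""
    _, name = min(training_data, key=lambda ex: distance(ex[0], features))
    return name

faces = [([0.1, 0.9], "Alice"), ([0.8, 0.2], "Bob")]
print(predict(faces, [0.2, 0.8]))  # Alice: closest to her example
```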

Neural networks

The networks simulate how the brain works, hence the name. Each neuron is interconnected with a large number of other neurons. Signals or impulses from these other neurons carry various weights when sent to a given neuron, which fires only when the combined incoming signal reaches an excitatory threshold.
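A single artificial neuron can be written in a few lines of plain Python (my own toy example): incoming signals are multiplied by their weights and summed, and the neuron fires only if the total reaches the threshold.

```python
# One artificial neuron: weighted sum of inputs, fire if over threshold.

def neuron(inputs, weights, threshold):
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With these weights and threshold the neuron behaves like a logical
# AND gate: it fires only when both inputs are active.
weights = [0.6, 0.6]
print(neuron([1, 1], weights, 1.0))  # 1: fires
print(neuron([1, 0], weights, 1.0))  # 0: stays silent
```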

Deep learning

Deep learning is a further development of machine learning, made famous by DeepMind, a UK startup quietly acquired by Google in 2014 for a reported £400 million. The neural networks used in deep learning have more layers, and more, better-connected neurons. Like other machine learning, it needs no recipe of the kind that the early techniques, e.g. search and logical AI, require; and with reinforcement learning a program can even generate its own training data: set it a goal and it learns by playing against itself. AlphaGo, built with these techniques, beat Lee Sedol, one of the world’s best Go players, and its successor AlphaZero taught itself chess, shogi and Go from scratch.
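Why do more layers matter? A toy example of my own: a single neuron can never compute XOR (exclusive-or), but a two-layer network with hand-set weights can. This is the simplest illustration of the sense in which deeper networks are more capable.

```python
# A two-layer network computing XOR, which no single neuron can do.

def neuron(inputs, weights, threshold):
    return 1 if sum(x * w for x, w in zip(inputs, weights)) >= threshold else 0

def xor_net(x1, x2):
    h_or  = neuron([x1, x2], [1, 1], 1)       # hidden neuron: OR
    h_and = neuron([x1, x2], [1, 1], 2)       # hidden neuron: AND
    return neuron([h_or, h_and], [1, -1], 1)  # output: OR but not AND

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, xor_net(a, b))  # 0, 1, 1, 0
```

In real deep learning the weights are not set by hand, of course; they are learned from data or, as with AlphaZero, from self-play.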

Driverless car conundrum

The trolley problem


A trolley car is running fast down a track with no brakes, and you are standing at a lever at a junction. If you do nothing, five people will die; if you pull the lever, one will die. What would you do, and why?

This is the kind of problem a driverless car may need to decide: whom to kill. I have read a lot about this issue and driverless cars, and I find learning about it intriguing and intellectually stimulating. The author, however, convincingly lays out three reasons why we should stop talking about trolley problems and driverless cars. This is a way of thinking I hadn’t come across before, and it’s one of the new things I learned from the book.

Firstly, if the greatest philosophical thinkers in the world cannot definitively resolve the Trolley Problem, then is it reasonable of us to expect an AI system to do so?

Secondly, I should point out that I have been driving cars for decades, and in all that time I have never faced a Trolley Problem. Nor has anyone else I know. Moreover, what I know about ethics, and the ethics of the Trolley Problem, is roughly what you read above: I wasn’t required to pass an ethics exam to get my driving licence. This has not been a problem for me so far. Driving a car does not require deep ethical reasoning – so requiring that driverless cars can resolve the Trolley Problem before we put them on the roads therefore seems to me a little absurd.

Thirdly, the truth is that however obvious the answers to ethical problems might seem to you, other people will have different answers, which seem just as self-evident to them.

Chankhrit Sathorn