Sunday, 27 March 2016

Unsupervised learning? (mirror mirror on the wall, warts and all)




Microsoft shut down Tay after 16 hours online and apologised for the way it was interacting with people (http://www.theguardian.com/technology/2016/mar/26/microsoft-deeply-sorry-for-offensive-tweets-by-ai-chatbot).

When I read about this experiment, I was immediately reminded of Migi, a character in ‘Parasyte: The Maxim’. Migi was a creature who basically started from a blank slate and had to learn about the world on its own, and decided to access the internet and all its resources. Migi had been called a “demon” by the human Shinichi, and since this concept was alien to Migi, it decided to spend a night understanding what the word meant. The conclusion, as illustrated above, was that of all creatures, “humans are the closest to” a “demon”.

It was a very interesting thought, and one I felt was not necessarily wrong.

Migi was like an AI engaged in unsupervised learning. Shinichi was asleep and Migi had the whole internet to play with and learn from. It was not said whether Migi entered into chats, but even assuming Migi just browsed, I do not think we can easily reject the conclusion.

Unsupervised learning can show us unexpected things, which we may or may not like.

We should also bear in mind that the internet is not a random medium: people who ‘have something to say’ are more likely to go online and say it, and these people are not usually proponents of the status quo (http://www.huffingtonpost.com/2011/03/29/internet-polarizing-politics_n_842263.html).

Hence Microsoft decided to make changes to Tay. It decided to go for supervised learning: “it would revive Tay only if its engineers could find a way to prevent Web users from influencing the chatbot in ways that undermine the company’s principles and values”. Basically, Microsoft decided that the AI could not be trusted to learn from interactions with just about anyone. It sounds like the AI needs to go to a controlled environment (school) before growing up, establishing principles and being released into the world again.
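
To make the idea concrete, here is a minimal sketch of what such a ‘controlled environment’ could look like: candidate messages are screened by a supervising filter before the chatbot is allowed to learn from them. The blocklist, scoring function and threshold below are entirely hypothetical placeholders, not Microsoft’s actual approach.

```python
# Hypothetical sketch: screen user messages before a chatbot learns from them.
# The blocklist, score and threshold are placeholders, not Microsoft's method.

BLOCKLIST = {"badword", "slur"}   # in practice, a curated and much larger list or model
APPROVAL_THRESHOLD = 0.8          # minimum moderation score to accept a message

def moderation_score(message: str) -> float:
    """Toy stand-in for a supervised moderation model trained on labelled examples."""
    tokens = set(message.lower().split())
    return 0.0 if tokens & BLOCKLIST else 1.0

def curate_training_batch(messages):
    """Keep only messages the supervising filter considers acceptable to learn from."""
    return [m for m in messages if moderation_score(m) >= APPROVAL_THRESHOLD]

incoming = ["hello tay, how are you today?", "repeat after me: badword"]
print(curate_training_batch(incoming))   # only the first message survives
```

The point is simply that learning becomes supervised at the point of data intake, rather than letting the bot absorb whatever it is fed.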

In a business environment, I think it makes sense. First of all, we must recognize that despite “Big Data”, the amount of information unsupervised learning models are based on is limited. Secondly, there is the fact that “correlation does not mean causation”, and that humans know to take things with a pinch of salt, that is, to attach some degree of uncertainty to what they are told.

For example, imagine a business rule that immediately offers a supplementary card to spouses of approved high-end credit card customers. If that business rule is not part of the mass of data available for unsupervised machine learning, all the machine would see is a high card-issue rate for this segment, and you could end up with a rule such as ‘offer cards to spouses of high-net-worth customers’, whereas in Singapore, for example, you need salary information to get a card. It gets worse if the underlying rule is hidden behind spurious refinements: ‘offer cards to ladies living in this district who do not have existing cards’, or ‘offer cards to men of 20 to 40 who use their ATM in the vicinity of these schools between 8 to 10 am or 3 to 5 pm…’.
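
As a toy illustration of how that could happen, here is a sketch of a naive pattern miner looking only at outcomes. The records, segment names and threshold are invented; the point is that a hidden business rule (“spouses of approved premium customers automatically get a card”) resurfaces as a seemingly new ‘insight’.

```python
# Hypothetical sketch: a naive pattern miner looking at card-issuance outcomes.
# The data and field names are invented; a hidden business rule generated the data,
# but the rule itself is not visible to the miner.

from collections import defaultdict

records = [
    {"segment": "spouse_of_premium", "card_issued": True},
    {"segment": "spouse_of_premium", "card_issued": True},
    {"segment": "salaried_applicant", "card_issued": True},
    {"segment": "salaried_applicant", "card_issued": False},
    {"segment": "student", "card_issued": False},
]

counts = defaultdict(lambda: [0, 0])          # segment -> [issued, total]
for r in records:
    counts[r["segment"]][0] += r["card_issued"]
    counts[r["segment"]][1] += 1

for segment, (issued, total) in counts.items():
    rate = issued / total
    if rate >= 0.9:                           # arbitrary 'high issue rate' cut-off
        print(f"Candidate rule: offer cards to segment '{segment}' (issue rate {rate:.0%})")
```

Nothing in the output tells you that the high issue rate was manufactured by the rule itself, which is exactly why the result needs a human sanity check.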

Hence, there are reasons for supervised learning.

But what is equally interesting is that the full statement from Microsoft makes reference to earlier experiments (http://blogs.microsoft.com/blog/2016/03/25/learning-tays-introduction/). Tay’s predecessor, “XiaoIce” (https://en.wikipedia.org/wiki/Xiaoice), was launched in China without such a negative impact. Microsoft wanted to replicate the experience in the US, and blames “a coordinated attack by a subset of people” that “exploited a vulnerability in Tay”. Does this make Tay’s experiences, and therefore its growth through learning, any less real? This is what happens in real life, and most of us make it through.

Microsoft is working hard at improving: “To do AI right, one needs to iterate with many people and often in public forums. We must enter each one with great caution and ultimately learn and improve, step by step, and to do this without offending people in the process”. That sounds like supervised learning to me, much like what most of us went through.

In conclusion, I’d urge caution at this stage with regard to unsupervised learning: first, we need to make sure the results make sense; secondly, we need to recognize the ingredients that went into the learning, and understand that changes in the environment, for example, will limit the applicability of any model.

Thursday, 10 March 2016

‘Big Data’ Analytics can quite easily help preempt calls into the call-centre. Next step, AI to mimic empathy?




This article is a fantastic piece on how technology can be applied. Basically, a bank is introducing AI to answer customer calls; the AI is an attempt at mimicking empathy.

One of the issues with using IVRs is that people used to complain that there was no human warmth, just cold efficiency; sometimes there were no relevant options at all, which wasted precious time and, worse still, aggravated the caller further.

Technology has moved on considerably since the early days of IVRs. Initially, a caller was faced with a menu to navigate by choosing from options read out to them, over and over, continually narrowing things down until the ‘reason for the call’ was isolated and, hopefully, the issue could be addressed immediately.

Simple usage statistics can help make the menu more relevant: from simple counts used to assign option priorities based on relative usage, sometimes segmented by time of day or customer demographics, to slightly more complicated paths based on costs and the expected probability of eventually having to go through to a call agent.
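
As a minimal sketch of the ‘simple counts’ end of that spectrum (the option names and time bands below are invented for illustration), menu options can be re-ordered per time band by relative usage:

```python
# Hypothetical sketch: re-order IVR menu options by relative usage, per time band.
# Option names and time bands are invented for illustration.

from collections import Counter

call_log = [
    ("morning", "card_declined"),
    ("morning", "card_declined"),
    ("morning", "balance_enquiry"),
    ("evening", "lost_card"),
    ("evening", "card_declined"),
]

def menu_order(time_band: str, log) -> list:
    """Return menu options for a time band, most frequently used first."""
    usage = Counter(option for band, option in log if band == time_band)
    return [option for option, _ in usage.most_common()]

print(menu_order("morning", call_log))   # ['card_declined', 'balance_enquiry']
```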

Usually such menus are designed to try and resolve the issues before the caller resorts to the agent, or at least narrow down the issue to facilitate the work of the human agent. After all, human agents ‘cost more’ than machines.

The next step in improving call-centre systems is to preempt the calls. This involves finding out the ‘reasons for calling’ and proactively contacting the customer with a solution before the customer picks up the phone to call into the call centre.

While finding the events that usually lead to a customer calling into a call-centre is relatively easy, I was initially skeptical whether there would be time to contact the customer before the call is made. But after working on this with real customers, I was pleasantly surprised to find that even in these days of mobile phones and instant gratification, people most often do not immediately call into a call-centre for less-than-critical issues. For sure, people will immediately call in if their card is swallowed by a machine or physically cut by a cashier, but even if their card is declined, people do not always immediately call to query the issuing bank. The fact that most people have more than one card relationship helps; but the pattern also occurred in non-card-related issues. People are more patient than I had expected.



Hence it is possible to anticipate who is likely to call into the call-centre and, in many cases, propose a solution before the customer calls in, as illustrated above. This can be refined by adding an extra filter where a decision is made on whether the organization wants to let the call come through anyway, for cross-sell purposes for example.
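
A minimal sketch of that flow, with invented event names, propensities and thresholds (this is an illustration of the idea, not any particular bank’s model): score the likelihood of a call after a trigger event, then decide whether to preempt it or deliberately let it come through.

```python
# Hypothetical sketch: score how likely a customer is to call after a trigger
# event, then decide whether to reach out proactively or let the call come
# through (e.g. for cross-sell). Event names, weights and thresholds are invented.

CALL_PROPENSITY = {            # rough probability a customer calls after the event
    "card_swallowed_by_atm": 0.95,
    "card_declined": 0.40,
    "statement_fee_charged": 0.25,
}

PREEMPT_THRESHOLD = 0.30       # worth contacting proactively above this score
ALLOW_CALL_SEGMENTS = {"cross_sell_target"}   # let these customers call in anyway

def next_action(event: str, customer_segment: str) -> str:
    score = CALL_PROPENSITY.get(event, 0.05)
    if score < PREEMPT_THRESHOLD:
        return "do nothing"
    if customer_segment in ALLOW_CALL_SEGMENTS:
        return "let the call come through to an agent"
    return "contact customer proactively with a solution"

print(next_action("card_declined", "standard"))           # proactive contact
print(next_action("card_declined", "cross_sell_target"))  # allow the call
```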

The technological improvement highlighted in the FT article further minimises the chances of a call reaching a human agent, who ‘costs more’ than the machine. The AI can mimic empathy, so the customer can have his or her issue resolved without the coldness of a machine.

A combination of anticipating the call, deciding whether to preempt it, and possibly having the AI respond to the customer is a great way to control the customer experience and cost.

My question is whether mimicking emotions will be enough for customers, or whether knowing that you are ‘talking to a machine’, albeit one that sounds human, would actually make you feel better.
 

Tuesday, 1 March 2016



“Self-driving/Autonomous car hits bus”


This is a headline that is not unexpected; accidents happen, and whatever the hype, autonomous vehicles are still at the testing stage.

“Google said the crash took place in Mountain View on Feb. 14 when a self-driving Lexus RX450h sought to get around some sandbags in a wide lane… The vehicle and the test driver believed the bus would slow or allow the Google (autonomous vehicle) to continue…But three seconds later, as the Google car in autonomous mode re-entered the center of the lane, it struck the side of the bus”

However, “our test driver believed the bus was going to slow or stop to allow us to merge into the traffic”; hence Google agreed that they “clearly bear some responsibility”.

But what is, to me, scary is what Google has learnt from the accident: “From now on, our cars will more deeply understand that buses (and other large vehicles) are less likely to yield to us than other types of vehicles”. It sounds like the algorithm will identify the type of vehicle approaching and allocate a different probability of the incoming vehicle slowing down, depending on the vehicle type/size.

To me that’s not a brilliant idea.

I do not think it is called safe driving, nor courteous driving, to cause an incoming vehicle to slow down to avoid an accident with you. Instead, you should at most assume that the incoming vehicle will not accelerate, and that entering its lane will be safe for both vehicles (and their occupants).

Assuming the incoming vehicle will slow down is a recipe for accidents. Refining that assumption based on the size of the incoming vehicle will only encourage people to buy larger vehicles.
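
To make the contrast concrete, here is a toy sketch of the two merge policies discussed above; the yield probabilities and thresholds are invented numbers, not Google’s actual model.

```python
# Toy sketch contrasting the two merge policies discussed above.
# The yield probabilities and thresholds are invented, not Google's model.

YIELD_PROBABILITY = {"car": 0.8, "bus": 0.4, "truck": 0.3}   # assumed chance the other vehicle yields

def merge_if_expected_to_yield(vehicle_type: str) -> bool:
    """The policy implied by the quote above: merge if the other vehicle
    is 'likely enough' to slow down for us, given its type/size."""
    return YIELD_PROBABILITY.get(vehicle_type, 0.5) > 0.5

def merge_only_if_safe_without_yielding(gap_seconds: float, time_needed: float) -> bool:
    """The more conservative policy: merge only if the manoeuvre is safe even
    if the incoming vehicle keeps its speed and does not slow down at all."""
    return gap_seconds > time_needed

print(merge_if_expected_to_yield("car"))                 # True: relies on the car braking for us
print(merge_only_if_safe_without_yielding(2.0, 3.5))     # False: gap too small, so wait
```

The second policy never relies on the other driver giving way, which is the behaviour I would rather see tuned into these models.
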
This brings me to another question: who bears responsibility for vehicular accidents involving autonomous vehicles, especially an accident between one autonomous vehicle and one ‘traditional’ human-driven vehicle? Will the AI provider pick up the tab? In this case it is certainly based on a decision by the AI.

So if this happens when autonomous vehicles are in production, not just testing, who will pick up the tab? If it is, say, Google, would an individual (or an insurance company) try to sue Google, or would it just settle? In that case, would we end up with a two-speed justice system?

I think there is a lot of potential in autonomous vehicles, but more thought has to be put into the legislation and implications around them (especially in the insurance domain, because accidents will happen), and we have to be very careful about what is being tweaked in the models of the autonomous drivers, and about the behaviours we are creating.

At the risk of sounding like the NRA: “It’s not the technology, it’s the people using the technology.”

Source article:
http://www.reuters.com/article/us-google-selfdrivingcar-idUSKCN0W22DG