Thursday, 27 July 2017

Chatbots and Virtual Assistants - How can we improve?

In my final year at Nottingham Trent University I had a module on Natural Language Processing (NLP) where we studied chatbots and wrote a report recommending improvements to them. I did extensive reading at the time around the history of chatbots, including some of Joseph Weizenbaum's work.

I was struck by a review of one of his books by the Stanford professor Joshua Lederberg:
http://profiles.nlm.nih.gov/ps/access/BBBBLN.pdf
The review itself is from 1979 and there hasn't been much change in all that time. There have been some improvements to virtual assistants, but not so much to chatbots.

During my NLP assignment I identified three main points that could improve the overall quality of chatbots and virtual assistants:

  • Storing user information such as the user's name, location and age
  • Being able to interpret multiple ways of expressing the same information, e.g. "I'm called X", "My name is X", "I'm X", "I was christened X"
  • Being able to decipher when the user is purposely being random in order to confuse the chatbot
For my final year assignment I went on to create a small project: an expert chatbot with a knowledge of films. In this project I tried to address three other points that I found existing chatbots couldn't handle:
  • Giving 100% accurate responses
  • Having actual context
  • Being able to handle multiple contextual pieces in an interaction
Since I left university there have been huge strides in this field, with Watson and AWS services able to loosely retain some context and handle multiple interactions. Even the responses can be 100% accurate within their specific field, e.g. saying "That's not relevant, ask me a question about X".

Both are now able to get a user to "log in" and thus get their name and potentially their age from their profile, and they can use location services to get the area the user is in. They are able to "learn" from their previous conversations to decipher different ways of asking the same questions, and can be "taught" alternative forms of the same question.

More and more they are being created with personas and have the ability to inject a piece of humanity into the dialogue, either through wit or through personifying a brand. For example, imagine a Captain Morgan Spice assistant that punctuates sentences with an "argh", a "matey" or some form of seaman pun.

There are some key points that still need to improve in both the chatbot and virtual assistant spaces.

To clarify my position when I talk about chatbots and virtual assistants:
Chatbot: A program that is able to talk to any person about any subject at any time.
Virtual Assistant: A program that emulates a conversation with a sales representative or help desk in order to provide specific information about a specific subject, e.g. films, cinema, cars, trains.

Virtual Assistants follow a decision path asking questions to guide you to either purchase a product or get information within their knowledge base. They can lead a person along and will seem more intelligent because they are proactive in getting you from A to B.

Chatbots have to be able to handle almost anything that anyone can say at any given time, and act as though they are a human interacting with another human. This is difficult because people are unpredictable, people will try their best to break a system and it is hard to emulate the full workings of the human brain. 
--------------------------------------------------------------------------------------------------------------------------

Virtual Assistant Improvements:
  • User Information: It is no longer enough to just know a user's name and location when acting as a virtual assistant. VAs need to be able to remember your previous interactions, what things you like and what things you don't like. If I was your PA and I knew you didn't like getting up before 6am under any circumstances, why would I suggest you get the 6:10am train to London?
  • Yes / No: Whilst it is bad form for a VA to reply with yes or no answers, they should universally be able to decipher that a "yes" or "no" response is in direct correlation to the question they just asked. This is where context comes into it. Any chatbot or VA should be able to keep a conversation going by retaining the context of the last few sentences, otherwise you have a programmed script reader.
  • Conversation Stream Ranking: Virtual Assistants should be ranking their entire conversation flow and altering the projected end point three, four or five interactions downstream.
Chatbot Improvements: 
  • Intelligent Learning: Chatbots need to learn from human interaction, but to do so wholesale, or in unfiltered bites, is not the correct way to progress. Chatbots need a way of being able to decipher whether a user is speaking nonsense - either maliciously or not. This might mean integrating with services like Google.
  • Creating Experiences: With each conversation that they have, chatbots are having an "experience". They need to be able to draw on these experiences in future conversations. If most of its conversations are about Harry Potter, then it's probably a Harry Potter fan and can mould itself to have an opinion one way or another which it can use in future conversations, e.g. "Before we get too close, you need to know I'm a Hufflepuff and I'm proud of it!"
Joint Improvements:
  • Short Term Memory: In my final year project I dealt with short-term contextual memory by retaining the last name mentioned and using it to replace "him", "her", "she", "he" and "they", adding context to the statement. For example: "My brother is called Bill" / "That's nice, tell me about him" / "He likes football" (Bill likes football). This can be applied to objects, films and locations; a minimal sketch of the idea follows this list.
  • Exploring Understanding: Giving some form of response, even if the chatbot does not understand, to try and elicit understanding. Asking leading questions based on a potential conversation stream, e.g. "I hate them" / "What do you hate?" / "I hate trains" / "That's rubbish, you're always taking trains so that can't be fun!"
  • Relationships: When human beings talk they are constantly building relationships with the person they are talking to, which will be different for each person. One person might like talking about gardening, which wouldn't work with another person. Each person has a different sense of humour and is driven by different goals. It's important that both chatbots and virtual assistants are able to create these personal relationships.
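Here is a minimal sketch of the short-term memory idea from the first bullet. It isn't the code from my project, and the patterns are deliberately simplistic placeholders.

----------------------------------------------
// Minimal sketch of short-term contextual memory - not my original
// project code; the regular expressions are illustrative placeholders.
var lastName = null;

function addContext(sentence) {
  // Remember the most recently mentioned name, e.g. "My brother is called Bill".
  var nameMatch = sentence.match(/(?:called|named)\s+([A-Z][a-z]+)/);
  if (nameMatch) {
    lastName = nameMatch[1];
  }
  // Substitute pronouns with the remembered name so that later processing
  // sees "Bill likes football" rather than "He likes football".
  return lastName
    ? sentence.replace(/\b(he|she|him|her|they|them)\b/gi, lastName)
    : sentence;
}

addContext('My brother is called Bill'); // remembers "Bill"
addContext('He likes football');         // -> "Bill likes football"
----------------------------------------------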
--------------------------------------------------------------------------------------------------------------------------

So these are some of the things I think we should be heading towards. 
 - Do any existing products already do this?
 - Know how we can achieve these functions? 
 - Disagree with anything I've said?
 - Want to ask a question about my uni projects?

Feel free to leave a comment below, or email me here :)

Wednesday, 19 July 2017

Creating an Apache Solr Client on Node JS for IBM Watson Retrieve and Rank Service

I've been working on a project / playing with IBM Watson to try and understand a little more about the services on offer.

I have been utilising the IBM Watson Conversation service, and at times I would like to output as a Watson response a document, or the title of a document, that sits within the retrieve and rank collections.

Background: A Node JS runtime on IBM Bluemix has a web front end and application server. We get input from a user which we pass to the conversation service. The output goes to the backend server where we do some processing and output the response from the conversation as the "Watson Response".

So what will it do next?

Next: Whilst processing the conversation response, we want to search it for a string, e.g. "rarCollectionName". When we see that string in the response we want to begin a process where we invoke the retrieve and rank service and get records back.

But this was easier said than done. At first I looked for the documentation that would provide Node JS instructions: https://www.ibm.com/watson/developercloud/retrieve-and-rank/api/v1/?node#


This gave pretty clear instructions on how to implement the node side of the service:
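I can't reproduce the screenshot here, but the documented example looked roughly like this (a reconstruction, not IBM's exact code); the credentials are placeholders for your own service instance.

----------------------------------------------
// Paraphrased from the API reference - not the exact screenshot.
var watson = require('watson-developer-cloud');

var retrieve_and_rank = watson.retrieve_and_rank({
  username: '{username}',
  password: '{password}',
  version: 'v1'
});
----------------------------------------------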

However, "Line 3" will cause a failure as the watson-developer-cloud does not have a retrieve_and_rank service but instead has a retrieve-and-rank service.
If we look at the node module documentation here, we see that the service is exposed as retrieve-and-rank, which means we no longer need to declare "v1" when creating a new retrieve and rank instance:
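The working version (again a reconstruction, since the screenshots are gone) looks like this; the credentials are placeholders for your own service instance.

----------------------------------------------
// Reconstruction - swap in the credentials from your own
// Retrieve and Rank service instance.
var RetrieveAndRankV1 = require('watson-developer-cloud/retrieve-and-rank/v1');

var retrieve_and_rank = new RetrieveAndRankV1({
  username: '<service username>',
  password: '<service password>'
});
----------------------------------------------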


What confused me in the instructions was the part where it said -
// Get a Solr client for indexing and searching documents.
// See https://github.com/watson-developer-cloud/node-sdk/blob/master/services/retrieve_and_rank/v1.js
which implied (to my inexperienced eyes) that we had to install the solr-client npm package and create a Solr client in this way. In reality, this functionality is already in the retrieve and rank service, which bundles the solr-client npm package.

There are two ways to create your Solr Client using the retrieve and rank service:
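The screenshot of the two options is missing, but the route I took was the createSolrClient helper on the retrieve and rank instance created above. Roughly, it looks like this; the cluster id and collection name are placeholders for your own Solr cluster.

----------------------------------------------
// createSolrClient route - cluster_id and collection_name are
// placeholders for your own Solr cluster and collection.
var solrClient = retrieve_and_rank.createSolrClient({
  cluster_id: '<your solr cluster id>',
  collection_name: 'example_collection',
  wt: 'json'
});
----------------------------------------------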


I put all this together and my end result of code was something like this:
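I haven't kept the original screenshot, so the sketch below is a cut-down reconstruction of the idea rather than the exact code from the project; the collection names, the cluster id and the "body" field are placeholders from my setup.

----------------------------------------------
// Cut-down reconstruction, not the full project. Collection names,
// the cluster id and the "body" field are placeholders from my setup.
var RetrieveAndRankV1 = require('watson-developer-cloud/retrieve-and-rank/v1');

var retrieve_and_rank = new RetrieveAndRankV1({
  username: '<service username>',
  password: '<service password>'
});

// Called with the output text that came back from the Conversation service.
function handleWatsonResponse(responseText, callback) {
  // Only go to Retrieve and Rank when the dialog flags it for us.
  if (responseText.indexOf('rarCollectionName') === -1) {
    return callback(null, responseText);
  }

  // Pick the collection depending on what the conversation asked for,
  // and strip the marker out of the text we actually search on.
  var collection = responseText.indexOf('recipes') > -1 ? 'recipes' : 'journals';
  var searchText = responseText.replace('rarCollectionName', '').trim();

  var solrClient = retrieve_and_rank.createSolrClient({
    cluster_id: '<your solr cluster id>',
    collection_name: collection,
    wt: 'json'
  });

  // Search the collection and swap the conversation output for the body
  // of the top ranked document.
  var query = solrClient.createQuery().q(searchText);
  solrClient.search(query, function (err, searchResponse) {
    if (err) { return callback(err); }
    var topDoc = searchResponse.response.docs[0];
    callback(null, topDoc ? topDoc.body : responseText);
  });
}
----------------------------------------------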


I haven't posted the full project, but the key differences to the standard example are:

 - Depending on the response from the conversation service, I populate the solrClient parameters with the different collections I need to search, e.g. recipes / journals

 - I take the first document we get back from retrieve and rank, "searchResponse.response.docs[0]", and put it in a variable so I can replace the conversation tool output with the body of the returned retrieve and rank result. (Lines 43, 44 and 45)


I hope this helps. As ever feel free to leave a comment below if this becomes out of date or you have a question.

Monday, 17 July 2017

Using MQTT on Mac and iOS

About a year ago we wanted to see how we could move the robot car with a phone. One option was to create a Message Queue Telemetry Transport (MQTT) broker on the operating system of the robot (a Raspberry Pi) and then control it using simple commands like "Left", "Right", "Forward" and "Backwards".

To test this I connected my phone to my laptop, using the laptop as my robot car operating system. I forgot to do this write-up at the time, so I'm doing it now so that others can run a similar test and play with MQTT with just their phone and laptop.

So the devices I'm going to be using are a MacBook Pro (macOS Sierra v10.12.5) and an iPhone 6S (v10.3.2). Here are the steps you need to follow:

1. Installing an MQTT Broker
I first installed Eclipse Mosquitto, an open source MQTT broker which - at the time of writing - implements MQTT protocol version 3.1 with the Arduino and other small "internet of things" devices in mind.

MQTT works using a publish / subscribe model. Read here to learn more, or get it straight from the Mosquitto's mouth here!
To install on the mac, open a terminal session and run:
  > brew install mosquitto

This will take several minutes. The version I am using is "mosquitto-1.4.8_1".
Upon completion the broker will usually start by default on port 1883 on the local network (in my case, my wifi).
For a full guide you can see here, the author also goes into a bit more detail about installing "brew" and the options for starting the broker / editing the configuration.

I simply use:
 > brew services start mosquitto
or if the service is already running, I use:
 > brew services restart mosquitto
To change the default settings (e.g. the port, or to use TLS/SSL) you need to edit the config:
 > /usr/local/etc/mosquitto/mosquitto.conf

2. Find your network IP address
In my case I was using my wifi so I went directly to:
 > system preferences / network / wifi
underneath the "connected" status was my IP Address.

To find your IP on a smartphone or other device there's a useful guide here.

3. MQTT for your Phone
When I first tried this last year there was a very good utility called MQTTool for the iPhone but it has since disappeared!

MQTTool
Instead I tried MQTT Probe, MqttClient and StompClient. These simply didn't work on my phone: the connection to the mac kept dropping, or the buttons simply didn't work. I finally found Mqttt, which worked well and allows you to view publish and subscribe on the same screen!

Android has a much wider variety of options, as can be seen here, though HiveMQ suggests MyMQTT or MQTT Client.

Mqttt application
My demonstration will use Mqttt, though all the apps look and feel much the same, so it should be easy to follow these instructions with any of them.

4. Connecting Mqttt to your MQTT broker
To achieve this we simply need to enter the IP address we found earlier in the "Host" field and set the port to "1883", or whatever you have changed your default to. The "Client ID" will be filled in automatically. Selecting "Clean Session" will provide a new client session each time. Press "Connect" to connect to the broker.

Connection Page
5. Testing our mobile to mac publish
After we have connected we come to the publish / subscribe screen. The smaller boxes allow you to type the topic strings and the bigger boxes are used for the messages.

For our first test we will use the bottom group (Publish) and give a topic string. This could be
 > anything/You/Like
though I will be using
 > topic/Aiden
for my example. Refer to the Pub / Sub model links for guidance on topic syntax, do's and don'ts.

In the message box for our simple test we will type
 > Hello

BEFORE WE DO ANYTHING ELSE!
We need to go to our mac terminal and run the following command:
 > mosquitto_sub -h 192.168.1.66 -p 1883 -v -t 'topic/Aiden'

This command runs an MQTT client that connects to the IP address defined with "-h" and the port defined with "-p". "-v" says to print any published messages verbosely and "-t" says which topic to subscribe to. This process will continue to run, waiting for any published messages to arrive on the topic string.

In our iOS Application Mqttt we can now press "Publish". We should now see the "Hello" message on the mac terminal as can be seen below.
Published Output
Subscription Output
Voila.

5a. Publishing from the mac
To achieve the same test from the mac, we would need to simply open a new terminal (leaving the subscribe command running where it currently is) and run the following command:
> mosquitto_pub -t topic/Aiden -m "blah blah"

We would then expect to see the message "topic/Aiden blah blah" appear below the "topic/Aiden Hello" we had previously received in the subscription terminal. We can cancel the subscribe process in that terminal now.

6. Event Handling 
I then wanted to test pushing messages back to the Mqttt tool from the mac after it has received a message on the subscribed topic. The total flow will go: phone publishes -> mac interprets -> mac publishes to the response topic.

In the Mqttt tool we want to add a subscription in the Subscription area. We first assign a topic:
 > topic/Fred
and press "Subscribe"

This is now doing from the phone what the mac terminal had been doing previously, but on a different topic: waiting for published messages to come through. For this example we will use the phone to publish to "topic/Aiden"; the mac will then pick up that message in a shell script and forward it to "topic/Fred".

On the mac, in an open terminal and a dedicated folder, we want to run the following commands:
 > touch MQTT.sh
and:
 > touch mqtt.log

The MQTT shell script will have a constant loop which takes the top line of the log file (where we will send published messages) and then deletes that processed line. If the top line was empty (i.e. no published messages to process) the loop restarts. If the top line held data, we trim the topic string from the message and publish just the message to the "topic/Fred" topic.

----------------------------------------------
MQTT.sh
#!/bin/bash
# name of the log file that mosquitto_sub appends to
filename="mqtt.log"
echo "Check the shell script log file"
# loop forever
while true; do
        # get the top line of the log file
        line=$(head -n 1 ${filename})
        # delete the top line of the log file
        echo "$(tail -n +2 ${filename})" > ${filename}
        # if the top line was not blank
        if [[ ${line} != "" ]]; then
                # remove the topic string from the message
                newOutput=${line#* }
                # publish just the message to topic/Fred
                mosquitto_pub -t topic/Fred -m "\"$newOutput\""
        else
                # nothing to process; pause briefly so the loop doesn't peg the CPU
                sleep 1
        fi
done
----------------------------------------------

7. Test the Event Handling shell script
On the mac we open two terminals to the location of the MQTT.sh and log file.

Terminal 1:
 > chmod 755 MQTT.sh
 > ./MQTT.sh
Terminal 2:
 > mosquitto_sub -h 192.168.1.66 -p 1883 -v -t 'topic/Aiden' >> mqtt.log

On Mqttt we write a message in the publish box and publish it.
I will try the following:
 > hello
"Publish"
 > hello
"Publish"
 > hello my name is Aiden
"Publish"

Replace "Aiden" with your own name.

The subscription on one terminal will write to the log file to be processed by the running shell script.
If successful we will see the message appear in the "topic/Fred" subscription message box.

Example Response Output
There's lots we can make the laptop do other than reply, e.g. invoke an API call, move a robot, turn on the bedroom lights, run a function to unlock the door for a guest coming around to the house, or open a garage door.

The reply sent back could be "done", "incomplete" or "failed". This is also important when you want to audit the messages being sent and received, as we can publish them to a dedicated topic which stores the messages in some way.

8. Load Testing
Very briefly, we can load test MQTT using several existing tools. One such tool is MQTT Box, which can be found here.

I hope this was helpful. Enjoy using MQTT!

Any questions feel free to email me!