Thursday, 17 August 2017

Selenium Tips - Node JS

So I've recently started writing a few web user interface tests for things that can't be achieved with API calls. I was looking forward to practising my Node JS, so I thought I'd build them with that.

Here are a few little functions and commands, both to get you going and to help me remember what I've done:

Firstly, Selenium is browser automation software that can run in the browser, in an application such as Node, or as an IDE. Find out more HERE

Secondly, Node JS, my platform of choice for the day, is an open source platform built on a JavaScript runtime. There's a great YouTube introductory video HERE

So... Getting Started. I had installed Node and created a dedicated directory.
I navigated to my new directory and installed selenium-webdriver:
 npm install selenium-webdriver
I then created an application:
echo "" > myNewApp.js
In the application I referenced selenium-webdriver and then used the Selenium Builder to create a new browser driver, like so:
const selenium = require('selenium-webdriver'),
    By = selenium.By,
    until = selenium.until;

var driver = new selenium.Builder()
    .forBrowser('chrome')
    .build();

var driver = new selenium.Builder()
    .forBrowser('firefox')
    .build();
You'll either want different variable names for your different browsers or to only call one at a time!
From here on there are a whole bunch of commands we can run on the driver object; you can see a list of some of them HERE.

The next thing to do is a GET on the web page you want your automated test to start from (note the full URL, including the protocol):
driver.get('http://www.myUrl.com').then(function(){
   console.log('Navigating to my page');

});
From this page we can perform a whole bunch of commands:
Populate an object:
driver.findElement(By.name('red')).sendKeys('ABC');
Clear a populated object:
driver.findElement(By.name('red')).clear();
Make a hidden object visible:
driver.executeScript("document.getElementsByName('red')[0].setAttribute('type', 'text');");
Using the driver's .then function lets you wait for something to occur before performing the next action, such as waiting for 2 seconds:
.then(function(){
  driver.sleep(2000);
});
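As well as fixed sleeps, the until module we required at the top gives explicit waits. A minimal sketch, assuming an element with the id 'results' (a placeholder, swap in your own):
// Wait up to 5 seconds for the element to be located, then act on it.
driver.wait(until.elementLocated(By.id('results')), 5000).then(function(element){
    console.log('Element has appeared');
});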
You might want to click a button; all you need to know is the button's class name. One way of finding this is to "inspect" the web page from the browser.
Click:
driver.findElement(By.className('btn btn-lg')).click(); 
To quit the browser:
driver.quit(); 
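Putting it all together, a minimal script might look something like this (the URL, element name and button class are placeholders for your own page):
// myNewApp.js - a small end-to-end sketch.
const selenium = require('selenium-webdriver'),
    By = selenium.By,
    until = selenium.until;

var driver = new selenium.Builder()
    .forBrowser('chrome')
    .build();

// selenium-webdriver queues these calls so they run in order.
driver.get('http://www.myUrl.com');
driver.findElement(By.name('red')).sendKeys('ABC');
driver.sleep(2000);
driver.findElement(By.className('btn btn-lg')).click();
driver.quit();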
The one thing I wasn't able to do was make a web page stop loading, to let me process a page that never finished. Let me know if you've done this before! I'd love to hear how you did it.

Saturday, 5 August 2017

A Beginners Guide to the Allotment

Louise and I have had the allotment for just over a year now and I thought it would be a good idea to get some thoughts down on our "Lessons Learnt" and some general advice for starting your own allotment.

Getting Started:
  1. Wait times for allotments can be very long! Up to 6 months in some councils as allotments need to be vacated before they can be reallocated.
  2. Allotments are measured in rods, where 1 rod is around 5 metres
  3. Cost varies from council to council
  4. The first few tools you'll need are some pots, a spade, a fork and a watering can
  5. You will need a compost bin; pick an out-of-the-way location that is relatively flat
  6. Start small, no need to dig the whole thing over at once.
  7. Ask neighbours for advice, most will be happy to help
  8. You can buy old tools very cheaply at weekend markets, local allotment organisations often sell seeds and compost, and your local farmer will sell horse manure very cheaply too
The What's What:
Some things look weird and you may find yourself wondering what other people are doing and why:
  1. Raised Beds - These are wooden borders around a small patch of land; they make it harder for slugs to eat your seeds.
  2. Fire Pits - Not all allotments allow them, and some only allow them between certain times. But here you can burn your weeds (like bindweed). The ash also makes great fertiliser!
  3. Fruit Cages - These keep the birds off the berries and trees
  4. Nets -  There will be nets and tunnels over seedlings, again to stop the birds. Some nets are specially designed to keep off frost, snow and bugs but still allow rain and sunlight.
  5. Tarp - This is usually put over a particularly weedy patch of land; it blocks out sunlight and kills the weeds at the roots. It's normally left on over the year, ready for the new season.
Chemical Romance:
Here are some of the dos and don'ts around chemicals:
  1. Slug Killer - In small doses this keeps slugs at bay. However, the pellets stay inside the slugs and the poison can have adverse effects on hedgehogs, which can die from the poisoning as well. And if slug levels become too low there will be fewer millipedes, frogs and hedgehogs on your allotment, meaning future slug populations will thrive without natural predators! Alternatives include: slug traps, putting them in the compost bin, raised beds and salt walls around plants.
  2. Weed Killer - I'm not averse to weed killer itself, but in the wind it can blow onto other allotments, killing other people's vegetables. This is a big no-no for obvious reasons.
  3. Bug Killer - There are lots of home-made formulas for killing white fly and aphids, but some store-bought products work just as well.
Lessons Learnt:
We've all made mistakes in our lives, here are some of ours:
  1. Make sure you know what fruit/veg is what! We've pulled out strawberries we thought were weeds and grown raspberry bushes we thought were cauliflower. There are lots of apps for identifying plants, and books too!
  2. Plan what you will grow and when. Veg needs sowing pretty early in the year, and inside your house is a perfect place to start seeds off whilst you get the allotment ready. Plants grow better in the greenhouse, but the house can be good.
  3. Find a friend who agrees to water your allotment whenever you're away. Alternatively, fellow allotmenteers will sometimes offer (hence why we don't spray our neighbours' allotments with weed killer!). This is mainly important at the seed stage, less so when plants are out in the wild.
  4. If you can't find someone, then there are ways of watering your seeds without human intervention. I've mentioned these before here: and we personally settled for this.
  5. Be careful when digging grassy bits of land, creatures great and small live there. Mice, frogs, hedgehogs etc. Killing a frog by accident is the worst! :(
  6. Composting can be a bit of an art. There are green and brown types of waste that can go in. Greens are veg, leftover food and some weeds. Browns are dry leaves, cardboard from your house, tea bags and coffee grounds. The Eden Project do a good introduction here and a comprehensive video guide here.
That's all I have time for in this sitting! But I'll no doubt be back with a Part 2!

Do you have your own tips for new allotment holders? Are you having any particular problems on your own allotment? 

Leave a comment below to get in touch :)

Thursday, 27 July 2017

Chatbots and Virtual Assistants - How can we improve?

In my final year at Nottingham Trent University I had a module on Natural Language Processing (NLP) where we studied chatbots and wrote a report of recommendations on them. I did extensive reading at the time around the history of chatbots, including some of Joseph Weizenbaum's work.

I was struck by a review of one of his books by Stanford Professor Joshua Lederberg, who said:
http://profiles.nlm.nih.gov/ps/access/BBBBLN.pdf
The quote itself is from 1979 and there hasn't been much change in all that time. There have been some improvements in virtual assistants, but not so much in chatbots.

During my NLP assignment I saw that there were 3 main points that could improve the overall quality of chatbots and virtual assistants:

  • Storing user information to get the name, location and age of the user
  • Being able to interpret multiple ways of getting the same information, e.g. "I'm called X", "My name is X", "I'm X", "I was christened X"
  • Being able to decipher when the user is purposely being random in order to confuse the chatbot
For my final year assignment I went on to create a small project: an expert chatbot with a knowledge of films. In this project I tried to effect change on three other points that I found existing chatbots couldn't handle:
  • Giving 100% accurate responses
  • Having actual context
  • Being able to handle multiple contextual pieces in interactions
Since I left university there have been huge strides in this field, with Watson and AWS services able to loosely retain some context and handle multiple interactions. Even the responses can be 100% accurate for their specific field, e.g. saying "That's not relevant, ask me a question about X".

Both are now able to get a user to "log in" and thus get their name, and potentially age, from their profile, and they can use location services to get the area where the user is. They are able to "learn" from their previous conversations to decipher different ways of asking the same questions and can be "taught" alternative forms of the same question.

More and more they are being created with personas and have the ability to inject a piece of humanity into dialogue, either through wit or through personifying a brand. For example, imagine a Captain Morgan Spice assistant that punctuates sentences with an "argh", "matey" or some form of seafaring pun.

There are some key points that still need improvement in both the chatbot and virtual assistant spaces.

To clarify my position when I talk about chatbots and virtual assistants:
Chatbot: A program that is able to talk to any person about any subject at any time.
Virtual Assistant: A program that emulates a conversation with a sales representative or help desk in order to provide specific information about a specific subject, e.g. films, cinema, cars, trains.

Virtual Assistants follow a decision path asking questions to guide you to either purchase a product or get information within their knowledge base. They can lead a person along and will seem more intelligent because they are proactive in getting you from A to B.

Chatbots have to be able to handle almost anything that anyone can say at any given time, and act as though they are a human interacting with another human. This is difficult because people are unpredictable, people will try their best to break a system and it is hard to emulate the full workings of the human brain. 
--------------------------------------------------------------------------------------------------------------------------

Virtual Assistant Improvements:
  • User Information: It is no longer enough to just know a user's name and location when acting as a virtual assistant. VAs need to be able to remember your previous interactions, what things you like and what things you don't like. If I was your PA and I knew you didn't like getting up before 6am under any circumstances, why would I suggest you get the 6:10am train to London?
  • Yes / No: Whilst it is bad form for a VA to reply with yes or no answers, they should universally be able to decipher that a "yes" or "no" response is in direct correlation to a question they just asked. This is where context comes into it. Any chatbot or VA should be able to keep a conversation going by retaining context of the last few sentences, otherwise you have a programmed script reader.
  • Conversation Stream Ranking: Virtual Assistants should be ranking their entire conversation flow and altering the projected end point three, four or five interactions downstream.
Chatbot Improvements: 
  • Intelligent Learning: Chatbots need to learn from human interaction, but to do so wholesale, or in small bites, is not the correct way to progress. Chatbots need a way of deciphering whether a user is speaking nonsense, either maliciously or not. This might mean integrating with services like Google.
  • Creating Experiences: With each conversation that they have, chatbots are having an "experience". They need to be able to draw on these experiences in future conversations. If most of a chatbot's conversations are about Harry Potter, then it's probably a Harry Potter fan and can mould itself to have some opinion one way or another, which it can use in future conversations, e.g. "Before we get too close, you need to know I'm a Hufflepuff and I'm proud of it!"
Joint Improvements:
  • Short Term Memory: In my final year project I dealt with short term contextual memory by retaining the last mentioned name and replacing "him", "her", "she", "he" and "they" with that name to add context to the statement, e.g. "My brother is called Bill", "That's nice, tell me about him", "He likes football" (Bill likes football). This can be applied to objects, films and locations (see the sketch after this list).
  • Exploring Understanding: Giving some form of response, even when the chatbot does not understand, to try and elicit understanding; asking leading questions based on a potential conversation stream, e.g. "I hate them", "What do you hate?", "I hate trains", "That's rubbish, you're always taking trains so that can't be fun!"
  • Relationships: When human beings talk they are constantly building a relationship with the person they are talking to, and this will be different for each person. One person might like talking about gardening, which wouldn't work with another. Each person has a different sense of humour and is driven by different goals. It's important that both chatbots and virtual assistants are able to create these personal relationships.
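To make the Short Term Memory point concrete, here is a minimal sketch of the pronoun replacement idea. The regex and names are illustrative, not the actual code from my university project:
// Remember the last proper name mentioned and substitute it for pronouns.
let lastName = null;

function addContext(utterance) {
  // e.g. "My brother is called Bill" stores "Bill".
  const nameMatch = utterance.match(/(?:called|named|name is)\s+([A-Z][a-z]+)/);
  if (nameMatch) {
    lastName = nameMatch[1];
  }
  // e.g. "He likes football" becomes "Bill likes football".
  if (lastName) {
    return utterance.replace(/\b(he|him|she|her|they)\b/gi, lastName);
  }
  return utterance;
}

console.log(addContext('My brother is called Bill'));
console.log(addContext('He likes football')); // "Bill likes football"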
--------------------------------------------------------------------------------------------------------------------------

So these are some of the things I think we should be heading towards. 
 - Do any existing products already do this?
 - Know how we can achieve these functions? 
 - Disagree with anything I've said?
 - Want to ask a question about my uni projects?

Feel free to leave a comment below, or email me here :)

Wednesday, 19 July 2017

Creating an Apache Solr Client on Node JS for IBM Watson Retrieve and Rank Service

I've been working on a project / playing with IBM Watson to try and understand a little more about the services on offer.

I have been utilising the IBM Watson Conversation service, and at times I would like to output, as a Watson response, a document (or the title of a document) that is within the retrieve and rank collections.

Background: A Node JS runtime on IBM Bluemix has a web front end and application server. We get input from a user, which we pass into the conversation service. The output goes to the backend server where we do some processing and output the response from the conversation as the "Watson Response".

So what will it do next?

Next: Whilst processing the conversation response, we want to search for a string, e.g. "rarCollectionName". When we see that string in the response we want to invoke the retrieve and rank service and get records back.

But this was easier said than done. At first I looked for the documentation that would provide Node JS instructions: https://www.ibm.com/watson/developercloud/retrieve-and-rank/api/v1/?node#


This gave pretty clear instructions on how to implement the node side of the service:
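The screenshot of the documented snippet hasn't survived, but from memory it looked roughly like this (a reconstruction, not the exact docs):
var watson = require('watson-developer-cloud');

var retrieve_and_rank = watson.retrieve_and_rank({
  username: '{username}',
  password: '{password}',
  version: 'v1'
});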

However, "Line 3" will cause a failure as the watson-developer-cloud does not have a retrieve_and_rank service but instead has a retrieve-and-rank service.
If we look at the node module documentation here, we see this:

which means we no longer need to declare "v1" when creating a new retrieve and rank instance:
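That is, something along these lines (the credential placeholders are mine):
// Require the service from its hyphenated path and instantiate it
// directly, with no separate 'v1' declaration.
var RetrieveAndRankV1 = require('watson-developer-cloud/retrieve-and-rank/v1');

var retrieve_and_rank = new RetrieveAndRankV1({
  username: '{username}',
  password: '{password}'
});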


What confused me in the instructions was the part where it said -
// Get a Solr client for indexing and searching documents.
// See https://github.com/watson-developer-cloud/node-sdk/blob/master/services/retrieve_and_rank/v1.js
which implied (to my inexperienced eyes) that we had to install the solr-client npm and create a Solr client in this way. In reality, this functionality is already in the retrieve and rank service, which bundles the solr-client npm.

There are two ways to create your Solr Client using the retrieve and rank service:
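I no longer have the screenshot showing both, but the route I took was the createSolrClient helper on the service itself; roughly (the cluster and collection IDs are placeholders):
// The service hands back a ready-made solr-client instance,
// so there is no need to install solr-client separately.
var solrClient = retrieve_and_rank.createSolrClient({
  cluster_id: '{cluster_id}',
  collection_name: '{collection_name}',
  wt: 'json'
});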


I put all this together and my end result of code was something like this:
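The original listing is lost, but the shape of it was roughly this sketch (the function name, query building and callback handling are illustrative):
// Search a retrieve and rank collection and hand back the top document.
function searchCollection(collectionName, userInput, callback) {
  var solrClient = retrieve_and_rank.createSolrClient({
    cluster_id: '{cluster_id}',
    collection_name: collectionName,
    wt: 'json'
  });

  var query = solrClient.createQuery().q(userInput);

  solrClient.search(query, function(err, searchResponse) {
    if (err) {
      return callback(err);
    }
    // Take the first document so its body can replace the conversation output.
    callback(null, searchResponse.response.docs[0]);
  });
}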


I haven't posted the full project, but the key differences to the standard example are:

 - Depending on the response from the conversation service, I populate the solrClient parameters with the different collections I need to search, e.g. recipes / journals

 - I take the first document we get from retrieve and rank, "searchResponse.response.docs[0]", and put it in a variable so I can replace the conversation tool output with the body of the returned retrieve and rank result. (Lines 43, 44 and 45)


I hope this helps. As ever, feel free to leave a comment below if this becomes out of date or you have a question.

Monday, 17 July 2017

Using MQTT on Mac and iOS

About a year ago we wanted to see how we could move the robot car with the phone. One option was to create a Message Queue Telemetry Transport (MQTT) broker on the operating system of the robot (a Raspberry Pi) and then control it using simple commands like "Left", "Right", "Forward" and "Backwards".

To test this I connected my phone to my laptop, using the laptop as my robot car's operating system. At the time I forgot to do this write-up, so I'm doing it now so that others can run a similar test / play with MQTT with just their phone and laptop.

So the devices I'm going to be using are a MacBook Pro (macOS Sierra v10.12.5) and an iPhone 6S (v10.3.2). Here are the steps you need to follow:

1. Installing an MQTT Broker
I first installed Eclipse Mosquitto, an open source MQTT broker which - at the time of writing - implements MQTT protocol version 3.1, with Arduino and other small "internet of things" devices in mind.

MQTT works using a publish / subscribe model. Read here to learn more, or get it straight from the Mosquitto's mouth here!
To install on the mac, go to your terminal and open a session.
  > brew install mosquitto

This will take several minutes. The version I am using is "mosquitto-1.4.8_1"
Upon completion the broker will usually start by default using port 1883 on the local network (in my case, my wifi).
For a full guide you can see here; the author also goes into a bit more detail about installing "brew" and the options for starting the broker / editing the configuration.

I simply use:
 > brew services start mosquitto
or if the service is already running, I use:
 > brew services restart mosquitto
To change the default settings, e.g. the port, or to use TLS/SSL, you need to edit the config:
 > /usr/local/etc/mosquitto/mosquitto.conf

2. Find your network IP address
In my case I was using my wifi so I went directly to:
 > system preferences / network / wifi
Underneath the "Connected" status was my IP address.

To find your IP on smartphone or other device there's a useful guide here.

3. MQTT for your Phone
When I first tried this last year there was a very good utility called MQTTool for the iPhone but it has since disappeared!

MQTTool
Instead I tried MQTT Probe, MqttClient and StompClient. These simply didn't work on my phone: the connection to the mac kept dropping, or the buttons simply didn't work. I finally found Mqttt, which worked well and allows you to view publish and subscribe on the same screen!

Android has a much wider variety of options, as can be seen here, though Hive MQ suggests MyMQTT or MQTT Client.

Mqttt application
My demonstration uses Mqttt, though all the apps look and feel much the same, so it should be easy to follow these instructions with any of them.

4. Connecting Mqttt to your MQTT broker
To achieve this we simply need to enter the IP address we found earlier in the "Host" section and set the port to "1883", or whatever you have changed your default to be. The "Client ID" will automatically fill for you. Selecting "Clean Session" will provide a new client view each time. Press "Connect" to connect to the broker.

Connection Page
5. Testing our mobile to mac publish
After we have connected we come to the publish / subscribe screen. The smaller boxes allow you to type the topic strings and the bigger boxes are used for the messages.

For our first test we will use the bottom group (Publish) and give a topic string. This could be
 > anything/You/Like
though I will be using
 > topic/Aiden
for my example. Refer to the Pub / Sub model links for guidance on topic syntax, dos and don'ts.

In the message box for our simple test we will type
 > Hello

BEFORE WE DO ANYTHING ELSE!
We need to go to our mac terminal and run the following command:
 > mosquitto_sub -h 192.168.1.66 -p 1883 -v -t 'topic/Aiden'

This command runs an MQTT client, connecting to the broker at the IP address defined with "-h" and the port defined with "-p". "-v" says to print any published messages verbosely, and "-t" says which topic to subscribe to. This process will continue to run, waiting for any published messages to arrive on the topic string.

In our iOS Application Mqttt we can now press "Publish". We should now see the "Hello" message on the mac terminal as can be seen below.
Published Output
Subscription Output
Voila.

5a. Publishing from the mac
To achieve the same test from the mac, we simply need to open a new terminal (leaving the subscribe command running where it currently is) and run the following command:
> mosquitto_pub -t topic/Aiden -m "blah blah"

We would then expect to see the message "topic/Aiden blah blah" appear below the "topic/Aiden Hello" we had previously received in the subscription terminal. We can cancel the subscribe process in the terminal now.

6. Event Handling 
I then wanted to test pushing messages back to the Mqttt tool from the broker after it has received a message from the subscribed string. The total flow will go: Phone publish -> broker interpret -> broker publish to response string.

In the Mqttt tool we want to add a subscription in the Subscription area; we first assign a topic:
 > topic/Fred
and press "Subscribe"

This is now doing what the mac terminal subscriber had been doing previously, but on a different topic and from the phone instead, waiting for published messages to come through on the topic. For this example we will use the phone to publish to "topic/Aiden"; the mac will then pick up that message in a shell script and forward the message to "topic/Fred".

On the mac, in an open terminal and a dedicated folder, we want to run the following commands:
 > touch MQTT.sh
and:
 > touch mqtt.log

The MQTT shell script will have a constant loop which takes the top line of the log file (where we will send published messages) and then deletes it. If the top line was empty (i.e. no published messages to process) the loop restarts. If the top line held data, we trim the topic string from the message and publish just the message to the "topic/Fred" topic.

----------------------------------------------
MQTT.sh
#!/bin/bash
# Get the log file name
filename="mqtt.log"
echo "Check the shell script log file"
# Loop forever
while true; do
        # Get the top line of the log file
        line=$(head -n 1 ${filename})
        # Delete the top line of the log file
        echo "$(tail -n +2 ${filename})" > ${filename}
        # If the top line was not blank...
        if [[ ${line} != "" ]]; then
                # ...trim the topic string from the message
                newOutput=${line#* }
                # and publish the message alone to topic/Fred
                mosquitto_pub -t topic/Fred -m "\"$newOutput\""
        fi
done
----------------------------------------------
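As an aside, given the Node JS theme of this blog: the same forwarding could be done without the log file by using the mqtt npm package. This is a sketch I haven't run against this exact setup; the broker address matches the one used throughout this post:

----------------------------------------------
mqttForwarder.js
// npm install mqtt
const mqtt = require('mqtt');
const client = mqtt.connect('mqtt://192.168.1.66:1883');

client.on('connect', function () {
  // Listen for messages from the phone...
  client.subscribe('topic/Aiden');
});

client.on('message', function (topic, message) {
  // ...and forward each message straight to the response topic.
  client.publish('topic/Fred', message.toString());
});
----------------------------------------------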

7. Test the Event Handling shell script
On the mac we open two terminals at the location of the MQTT.sh and log files.

Terminal 1:
 > chmod 755 MQTT.sh
 > ./MQTT.sh
Terminal 2:
 > mosquitto_sub -h 192.168.1.66 -p 1883 -v -t 'topic/Aiden' >> mqtt.log

On Mqttt we write a message in the publish box and publish it.
I will try the following:
 > hello
"Publish"
 > hello
"Publish"
 > hello my name is Aiden
"Publish"

Replace "Aiden" with your own name.

The subscription on one terminal will write to the log file to be processed by the running shell script.
If successful we will see the message appear in the "topic/Fred" subscription message box.

Example Response Output
There's lots we can make the laptop do other than reply, e.g. invoke an API call, move a robot, turn on bedroom lights, run a function to unlock the door for a guest coming round the house, or open a garage door.

The reply sent back could be "done", "incomplete" or "failed". This is also useful for when you want to audit messages being sent and received, as we can publish to a dedicated topic which stores the messages in some way.

8. Load Testing
Very briefly, we can load test MQTT using several existing tools. One such tool is MQTT Box, which can be found here.

I hope this was helpful. Enjoy using MQTT!

Any questions feel free to email me!

Thursday, 15 June 2017

Rational Software Architect Designer

Having just installed RSAD version 9.6 on macOS, I tried to open the application from my home screen. However, I got an error message pointing me at:
    myHome/.eclipse/org.eclipse.platform_4.6.1_443275834_macosx_cocoa_x86_64/configuration/aNumber.log

In there I found the following error message:

!ENTRY org.eclipse.osgi 4 0 2017-06-15 10:18:34.787
!MESSAGE Bundle com.ibm.cds not found.

!ENTRY org.eclipse.equinox.app 0 0 2017-06-15 10:18:35.138
!MESSAGE Product com.ibm.rational.rsa.product.v95.ide could not be found.

!ENTRY org.eclipse.osgi 4 0 2017-06-15 10:18:35.335
!MESSAGE Application error
!STACK 1
java.lang.RuntimeException: No application id has been found.
at org.eclipse.equinox.internal.app.EclipseAppContainer.startDefaultApp(EclipseAppContainer.java:242)
at org.eclipse.equinox.internal.app.MainApplicationLauncher.run(MainApplicationLauncher.java:29)
at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.runApplication(EclipseAppLauncher.java:134)
at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.start(EclipseAppLauncher.java:104)
at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:388)
at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:243)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.eclipse.equinox.launcher.Main.invokeFramework(Main.java:673)
at org.eclipse.equinox.launcher.Main.basicRun(Main.java:610)

at org.eclipse.equinox.launcher.Main.run(Main.java:1519)

The workaround I would usually follow is to open the application from the command line:
   /Applications/Eclipse.app/Contents/MacOS/eclipse

However, I already have an Eclipse environment that I use for PureScript package creation. The above command opens my existing Eclipse and not the RSAD Eclipse workspace. I also don't want to have to open the application via the command line every time!

Instead the answer lies here:
https://www.ibm.com/developerworks/community/forums/html/topic?id=9ed43812-9c67-4fd5-921b-f7acb405e12b

Thanks to "ericlmk", who says to go to the file:
/Applications/IBM/SoftwareDeliveryPlatform/RSA.app/Contents/MacOS/ExecuteScript

Change $ECLIPSE_APP -product com.ibm.rational.rsa.product.v95.ide &
to $ECLIPSE_APP -product com.ibm.rational.rsa.product.v96.ide &

A version mix-up in the execute script. Oops.

Monday, 12 June 2017

REVIEW: The Rise of the Robots - Martin Ford

It's just after the 2017 UK General Election, a heated contest which ended with a hung parliament but with more votes for the two main parties than has been seen in my lifetime. This might have been because the two sides held widely different views on how to deal with Brexit, economic inequality, unemployment, housing, staff shortages and immigration.

What does this have to do with a book about Robots?

Martin Ford discusses in his book "The Rise of the Robots" how job creation and retention will look in the future. He discusses what the automated world looks set to become and the possible effects it will have, and is already having, on the economy, employment levels and wealth distribution.

The book is insightful and builds upon principles introduced in his previous book, "The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future". He seeks to answer and reflect on these principles based on reader interaction since 2009, when his first book was released.

The book itself looks deeply at how we are already seeing examples of automation taking over roles in the capitalist structures we live in today, viewed from a recession-and-cuts standpoint where the jobs cut aren't adequately replaced.

The solutions to increased automation and loss of jobs are coincidentally highlighted in real life through the 2017 Labour manifesto and the Conservative manifesto. There is a real symmetry between policies being discussed and tactics that Mr Ford highlights. 

For example, in the Conservative manifesto there is resistance to raising the minimum wage, a continuation of stagnating salary increases and an abundance of zero-hour contracts, which together mean that humans are staying competitive against their robot automation competitors. However, a lack of wealth redistribution means that social inequality is growing and the wealthiest are merely getting richer.

By contrast, the Labour party have announced a raising of the minimum wage, which could see more job losses but will mean an increase in consumerism. They also plan to keep the young in education longer with little to no debt: an intelligent population is able to utilise free time for the greater good, or to create new jobs through innovation.

Part of the book describes a potential solution through the introduction of a Basic Income for all, aptly termed a "Citizen's Dividend", which he describes in depth, very convincingly and with an eye on both sides of the argument. This in itself is a Green Party policy!

Why should I read it?

I recommend this book if you are hot on politics and want to understand the pros and cons of what is being offered by the different parties. It's a great insight into how the world of tomorrow might look and is both a source of despair (in terms of distribution of wealth and the economic calamity of the loss of jobs) and also hope (for the future and how our lives could be vastly improved).

If you're not the politics sort but like technology this is also a great read as inspiration for the future! You might even find yourself coming up with ideas for a little start up!

My thoughts / Points for discussion... 
  • I think the book fails to link into how robotics and automation will deal with nurse and teacher numbers. Ford discusses how information-intensive hospital jobs and doctoring could be aided by supercomputers such as IBM Watson, but not so much how robotics would affect day-to-day nurse and doctor roles, or how we might see a robot teacher teaching a class of 30/40 13-year-old students. In my opinion it would be difficult to automate away these close and personal positions. Whilst a lack of foresight from governments might leave these frontline services short on numbers, it's hard (for me at least) to imagine jobs such as police, prison officers, social carers, nurses and doctors filled by robots. However, the gap between open positions and filled positions is growing, and fears over immigration have caused a deep divide in the UK. Additionally, filling all these positions would require an increase in government income, either through wealth taxes (on corporations and private individuals) or through cuts elsewhere.

  • Retrospectively, I believe the book miscalculates the additional jobs that might be created through our advancement in other fields; in my own field, Integration, for example, jobs are likely to increase. It also miscalculates the numerous pitfalls encountered on the road to corporate improvement. In my opinion new jobs are likely to emerge that we currently couldn't conceive of. For example, it is my belief that should we ever get to a machine capable of self-improvement, or the "singularity", it would not be inconceivable that we would have approached space exploration, e.g. a facility on the moon or potentially feet on Mars.

  • From my own point of view it would have been nice to have looked towards our space exploration techniques for the future and the impact these might have on jobs, job creation and possibly colonisation. (Am I being naive?)

  • Ford discusses at one point how in the future cognitive machines might be "conscious", so to speak, or more intelligent than we are ourselves, with Moore's Law perpetuating this! I have two thoughts on this:
    • Is it possible that a cognitive machine, aware that history is written by the victors, could slowly phase out our own history with alternative facts and emotive stories that lead us to follow a path the machine has dreamt up, either to bring overall peace to humankind or even our demise? (Several Doctor Who series have followed this thought trend in recent times: "Smile".)
    • Alternatively, do we take the singularity, which we have contained and imprisoned for our personal gain either at a government level or for entertainment, and ask it the big questions of our day, such as "What is 0/0?" or "Is there a god?" A machine that surpasses human intelligence would surely make short work of such simple questions.

  • What also isn't discussed by Ford (possibly rightly, as it is somewhat off topic) is the ethics of deep learning. He mentions the eventual ability of companies and machines to know more about us than we know ourselves, e.g. the example of the lady whose pregnancy the advertisers knew about before her family did. There is then a question about what responsibility companies, programmers and governments have to ensure citizens keep their human rights to privacy and self-determination. This then leads to the philosophical question of whether humans are as random and unpredictable as we would like to believe.

  • My final thought is on the discussion around robotics being used to aid existing workers instead of replacing them. It is my opinion that this is a moot point, though one that is correctly covered. Ford does not seem to endorse this viewpoint, which I believe would in any case be contradicted by his opening quote of Milton Friedman: "So then, why not give the workers spoons instead of shovels?"
Have you read the book? If not, find it here:

If you have... 

Any thoughts of your own? Feel free to share them in the comments! :)