How will Artificial Intelligence affect the future of work? That was the theme of a combined Tuttle Club and Heroes of Mobile event that I attended on Tuesday in one of the towers of Canary Wharf. It was the second in a series hosted and sponsored by Truphone, a mobile phone service provider that produces a SIM card that operates in many countries – goodbye to roaming charges, they say! I met their founder and now CTO James Tagg at the start of the event and we started talking physics – my own Applied Physics degree is a distant memory – but we soon shifted to the Artificial Intelligence topic, and James said something really useful and a little profound. He said that Artificial Intelligence is to Human Intelligence as an artificial (actually he said plastic) flower is to a real flower. Viewed in a certain way it can be as beautiful and look very similar, but it’s actually different. It might perform the same core purpose, an acceptable alternative to the real thing, but it’s still different. And that difference might be very useful – it lasts for years not days and doesn’t need water, for example. That put the whole Artificial Intelligence and Machine Learning topic into a new light for me. I was there at the event to learn more about an emergent technology which has the potential to be massively disruptive. Will it take away jobs? Will the robots take over the world? Is there a HAL 9000 or a Cyberdyne in our real future?
The session was introduced by James, with Lloyd Davis of Tuttle as master of ceremonies and moderator. The main speaker was our good friend Benjamin Ellis. He admitted he is an engineer at heart, but soon got on to a key date in history: 25th January 1979, and the name Robert Williams. What is the significance? That was the date Robert Williams became the first person killed by a robot, at a Ford plant. From that point on, industrial robots were deployed in cages. Benjamin talked about how we relate to artificial intelligence and robotics, and how it changes our behaviour. He mentioned how Google open sourced their AI engine this week. That’s interesting, but he believes the smart stuff is how you apply and contextualise the technology (not the AI engine itself, which will just end up as commodity technology). He went on to highlight a basic paradox: more people are being employed alongside AI solutions, not fewer. Going further, we have less leisure time as a generation, even though we are using more technology at work to help get the job done – survey after survey around this topic has found that productivity hasn’t gone up with newly deployed IT.
Benjamin used a great 1950s picture of an IBM 305 RAMAC – the first ever hard drive, all of 5 MB – being loaded on to a plane with a crane, highlighting how far we’ve come. Kryder’s Law suggests we might see a 2.5-inch 40 TB drive by the end of the decade – that plus Moore’s Law is driving a hell of an increase in the potential processing power and storage available. Will that help make AI more of a reality?
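To get a feel for what that kind of growth implies, here’s a rough back-of-the-envelope sketch in Python. The 2015 starting capacity and the doubling periods are illustrative assumptions of mine, not figures from the talk.

```python
# Rough back-of-the-envelope: what a Kryder's-Law-style doubling rate implies
# for 2.5-inch drive capacity by 2020. The starting capacity and doubling
# periods below are illustrative assumptions, not figures from the talk.

def projected_capacity_tb(start_tb, start_year, end_year, doubling_months):
    """Capacity after compounding doublings from start_year to end_year."""
    months = (end_year - start_year) * 12
    return start_tb * 2 ** (months / doubling_months)

if __name__ == "__main__":
    start_tb = 2  # assume a ~2 TB 2.5-inch drive in 2015
    for doubling_months in (13, 18, 24):  # a range of plausible doubling periods
        cap = projected_capacity_tb(start_tb, 2015, 2020, doubling_months)
        print(f"doubling every {doubling_months} months -> ~{cap:.0f} TB by 2020")
```

On those illustrative numbers, the 40 TB figure only appears at the more aggressive doubling rates – a reminder that these “laws” are extrapolations, not guarantees.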
Then Benjamin shifted gears to talk about emulation and simulation and the distinction between the two. We know the brain is made up of neurones. We can emulate what the brain does with things like visual recognition, face recognition and the like. However, there is more to it than that. Benjamin got us to stand and strike a Superman pose, then got us to sit timidly, and we discussed how our physiology changes our decision making and our thinking based on that body language – we are very complex systems. He talked about how 1.73 billion nerve cells connected by 10.4 trillion synapses actually equates to less than 1% of the brain. He quoted that particular set of figures from a piece of research in which Japan’s K Computer – a massive array of over 80,000 nodes capable of 10 petaflops (about 10^16 operations per second) – was put to work to simulate that portion of the brain’s capacity. It took 40 minutes to complete the simulation of 1 second of brain operation. Neurones are phenomenally complicated, not just switches, and neural networks are more complicated than we think, so emulating them may simply be too hard.

So let’s do simulation instead. Well, that works really well for systems that we can describe precisely and that are well documented. Businesses are more complex than that – “barely repeatable processes” as my friend Sig calls them, informal processes that are a little different every time through. So much of business works that way, day in, day out. Then he quoted Gregory House from the TV programme: everybody lies. Lots of our behaviour is built around responding in a socially desirable way, to do with social cohesion – instincts that come from the reptilian part of the brain that controls fight or flight, the part that helps us avoid getting killed. We can put together a model of how we think the other person works, but social interactions are phenomenally complicated, and how do we factor those in? Try running a simulation of what’s happening in the other person: what do we think they think when they are saying that? There is a negotiation of meaning going on here – how long will it take until we can compute that kind of thing as well as the human brain?

And if you start to cost out computers versus people, you soon get to numbers where the annual cost of ownership of even a single well specified laptop is more than the salary of a third of the planet’s population. Compute power is surprisingly expensive, and we humans can be very cheap. Where does it make economic sense?
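Those figures put the scale of the problem into perspective. Here’s a quick back-of-the-envelope sketch using the numbers Benjamin quoted; the whole-brain extrapolation assumes, very naively, that cost scales linearly with the fraction of the brain simulated, which almost certainly understates the real cost.

```python
# Back-of-the-envelope scaling of the K Computer figures quoted in the talk:
# ~1% of the brain (1.73 billion neurones, 10.4 trillion synapses) took
# 40 minutes of machine time to simulate 1 second of brain activity.
# The whole-brain extrapolation naively assumes linear scaling.

SIMULATED_FRACTION = 0.01      # roughly 1% of the brain
WALL_CLOCK_SECONDS = 40 * 60   # 40 minutes of K Computer time...
BRAIN_SECONDS = 1              # ...to simulate 1 second of brain activity

slowdown = WALL_CLOCK_SECONDS / BRAIN_SECONDS
print(f"Slowdown for ~1% of the brain: {slowdown:,.0f}x real time")

# Naive linear extrapolation to 100% of the brain
whole_brain_slowdown = slowdown / SIMULATED_FRACTION
print(f"Naive whole-brain estimate:   {whole_brain_slowdown:,.0f}x real time")
print(f"i.e. about {whole_brain_slowdown / 86400:,.1f} machine-days per second of brain activity")
```

Even on those generous assumptions, a 10-petaflop machine would need getting on for three days of compute per second of whole-brain activity – which rather underlines Benjamin’s point about the economics of compute versus people.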
Benjamin talked about the Hedonometer for measuring happiness, and how we can track the sentiment of tweets. Maybe computers can do the raw pre-processing, be used for predictive analytics, or run algorithms to analyse data in the medical space. Yes, there are certain things that the compute power available today can do really well, but is AI really going to take all our jobs?
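For a sense of how simple the mechanics of that kind of sentiment tracking can be, here’s a minimal sketch loosely in the Hedonometer spirit: average a happiness score over the words of a tweet. The tiny word list and scores are made up for illustration; the real Hedonometer uses a large, human-rated lexicon.

```python
# Minimal word-averaging sentiment sketch, loosely in the spirit of the
# Hedonometer: score a tweet by averaging per-word happiness ratings.
# The lexicon below is invented for illustration only.

import re

HAPPINESS = {  # hypothetical scores on a 1 (sad) to 9 (happy) scale
    "love": 8.4, "great": 7.9, "happy": 8.2, "work": 5.2,
    "robot": 5.0, "fear": 2.8, "lose": 3.0, "jobs": 5.1,
}

def happiness_score(text):
    """Average the happiness of known words in the text; None if no matches."""
    words = re.findall(r"[a-z']+", text.lower())
    scores = [HAPPINESS[w] for w in words if w in HAPPINESS]
    return sum(scores) / len(scores) if scores else None

if __name__ == "__main__":
    print(happiness_score("Love my work, this robot is great"))    # leans happy
    print(happiness_score("Fear we will lose our jobs"))           # leans sad
```

The raw pre-processing really is that mechanical; the hard part, as Benjamin kept stressing, is knowing what the numbers mean in context.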
Well, we discussed the productivity paradox – some types of jobs are ripe for automation with AI, but there are others where we’re nowhere near, and humans are still very necessary. But Benjamin was asking how we work with this AI. How do we get inside the cage with it (bringing it back to the robot, Robert Williams and 1979)? Being alongside robots and AI will change our behaviour in business, and he cited the cobra effect. In the days of Empire in India there was a cobra problem, and the government’s solution was to put a bounty on cobras to eradicate them. But entrepreneurs arrived and started cobra farms to make money from the bounty! If you set an objective, people will find a way of gaming it. Where do you delineate? Who makes the decisions? At what level do you maintain control? How does the use of AI and robotics change our behaviour?
All great food for thought. We then adopted an Open Space Technology approach – people suggested a collection of issues to be discussed, and we split into groups for some very thought-provoking discussion. The whole evening was summarised by each of the 30 or 40 attendees speaking a sentence or two about the key things they’d learned, or a highlight of the evening, into a digital recorder that was passed around.
The hashtag for the event had been #FOWAI, but we’d all spent so much time listening and talking that nobody in the group had tweeted. There was just one tweet in the stream, which I shared with the group at the end, to their amusement. A “bot” of some kind had generated a tweet that said:
Attending Future of Work: Arti #FoWAI event? Here’s the best hotel to book: https://t.co/dDkcdcbvWn
— Magic Manila (@MagicEventDeals) November 9, 2015
You have to laugh at the irony of it!
Some very interesting thinking that has set me on the road to explore this topic some more in follow-on posts.