My science fiction books – Far from the Spaceports and Timing, plus two more titles in preparation – are heavily built around exploring relationships between people and artificial intelligences, which I call personas. So as well as a bit of news about one of our present-day AIs – Alexa – I thought I’d talk today about how I see the trajectory leading from where we are today, to personas such as Slate.
Before that, though, some news about a couple of new Alexa skills I have published recently. The first is Martian Weather, which provides a summary of recent weather at Elysium Planitia, Mars, courtesy of a public NASA data feed from the Mars InSight lander. You can listen to about a week’s worth of temperature, wind, and air pressure readings. At the moment the temperature varies through a Martian day between about -95 and -15° Celsius, so it’s not very hospitable. Martian Weather is free to enable on your Alexa device from several Alexa skill stores, including the UK, US, CA, AU, and IN. The second is Peak District Weather, a companion to my earlier Cumbria Weather skill but – rather obviously – focusing on mountain weather conditions in England’s Peak District rather than the Lake District. Find out about the weather conditions that matter to walkers, climbers and cyclists. This one is (so far) only available in the UK store, but other international markets will be added over the next few days.
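For the technically curious, skills like Martian Weather mostly boil down to turning a JSON feed into something speakable. Here is a minimal sketch of that step. The field names (sol_keys, AT, HWS, PRE, with av/mn/mx sub-fields) follow NASA’s public InSight weather API, but the sample payload and the summarise_sol function are my own invention for illustration, not the actual skill code.

```python
# Sketch: turn one sol of InSight-style weather data into a spoken summary.
# Field names follow NASA's public InSight weather API; the sample payload
# below is invented for illustration.

def summarise_sol(sol, data):
    # AT = atmospheric temperature (C), HWS = wind speed (m/s),
    # PRE = pressure (Pa); each has av/mn/mx values for the sol.
    at, hws, pre = data["AT"], data["HWS"], data["PRE"]
    return (
        f"On sol {sol}, temperatures ranged from "
        f"{at['mn']:.0f} to {at['mx']:.0f} degrees Celsius, "
        f"with an average wind speed of {hws['av']:.1f} metres per second "
        f"and an average pressure of {pre['av']:.0f} pascals."
    )

# Invented sample in the shape the feed returns.
sample = {
    "sol_keys": ["259"],
    "259": {
        "AT": {"av": -60.1, "mn": -95.4, "mx": -15.2},
        "HWS": {"av": 4.5, "mn": 0.2, "mx": 17.3},
        "PRE": {"av": 721.0, "mn": 700.3, "mx": 742.2},
    },
}

for sol in sample["sol_keys"]:
    print(summarise_sol(sol, sample[sol]))
```

The real skill does essentially this for each of the last several sols, wrapped in the Alexa response format.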
Current AI research tends to go in one of several directions. There are single-purpose devices which aim to do one thing really well, but have no pretensions beyond that. They are basically algorithms rather than intelligences per se – they might be good or bad at their allotted task, but they aren’t going to do well at anything else. We have loads of these around these days – predictive text and autocorrect plugins, autopilots, weather forecasts, and so on. From a coding point of view, it is now comparatively easy to include some intelligence in your application using modular components: all you have to do is select some suitable training data to set the system up (actually, that little phrase “suitable training data” conceals a multitude of difficulties, but let’s not go into that today).
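To show just how much the behaviour of such a component comes from its training data, here is a deliberately tiny sketch – plain Python, no machine learning library – of the pattern every one of them follows: gather labelled examples, “train”, predict. The heating scenario and all the numbers are invented for illustration.

```python
# A toy nearest-neighbour classifier: everything it "knows" comes from
# the training data you feed it, which is exactly why "suitable training
# data" matters so much. The data below is invented for illustration.
import math

def train(examples):
    # "Training" for nearest-neighbour is simply remembering the examples.
    return list(examples)

def predict(model, point):
    # Label the new point the same as its closest training example.
    nearest = min(model, key=lambda ex: math.dist(ex[0], point))
    return nearest[1]

# Invented training data: (hour of day, temperature C) -> heating on/off.
training_data = [
    ((7, 5.0), "on"),
    ((13, 18.0), "off"),
    ((22, 8.0), "on"),
    ((15, 21.0), "off"),
]

model = train(training_data)
print(predict(model, (8, 6.0)))  # close to the (7, 5.0) example
```

Swap in different training data and you get a completely different device – the algorithm itself never changes, which is both the convenience and the hidden trap.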
Then you get a whole bunch of robots intended to master particular physical tasks, such as car assembly or the investigation of burning buildings. Some of these are pretty cute looking, some are seriously impressive in their capabilities, and some have been fashioned to look reasonably humanoid. These – especially the last group – probably best fit people’s idea of what advanced AI ought to look like. They are also the ones closest to mankind’s long historical enthusiasm for mechanical assistants, dating back at least to Hephaestus, who had a number of automata helping him in his workshop. A contemporary equivalent is Boston Dynamics (originally a spin-off from MIT, acquired by Google and subsequently by SoftBank), which has designed and built a number of very impressive robots in this category, and has attracted interest from the US military while also pursuing civilian programmes.
Then there’s another area entirely, which aims to provide two things: a generalised intelligence rather than one targeted at a specific task, and one which is not tied to any particular physical body. This is the arena of the current crop of digital assistants such as Alexa, Siri, Cortana and so on. It’s also the area that I am both interested in and coding for, and it provides a direct ancestry for my fictional personas. Slate and the others are, basically, the offspring – several generations removed – of these digital assistants, but with far more autonomy and general cleverness. Right now, digital assistants rely on cloud-based services for speech recognition and for much of their information. They give the semblance of being self-contained, but actually are not. So as things stand you couldn’t take an Alexa device out to the asteroid belt and hope to have a decent conversation – there would be a minimum of about half an hour between each line of chat, while communication signals made their way back to Earth, were processed, and then returned to Ceres. So quite apart from Alexa needing a much better understanding of human emotions and the subtleties of language, we need a whole raft of technical innovations in memory and processing.
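That half-hour figure is straightforward light-travel arithmetic. Taking an assumed Earth–Ceres distance of about 1.6 AU – roughly their closest approach, used here purely for illustration – the round trip works out like this:

```python
# Round-trip light delay between Earth and Ceres, at an assumed
# closest-approach distance of about 1.6 AU (illustrative figure;
# at other points in the two orbits it is considerably longer).
AU = 1.495978707e11      # astronomical unit in metres
C = 299_792_458          # speed of light in metres per second

distance_au = 1.6                      # assumed Earth-Ceres distance
one_way_s = distance_au * AU / C       # one-way signal time in seconds
round_trip_min = 2 * one_way_s / 60    # round trip in minutes

print(f"Round trip: about {round_trip_min:.0f} minutes")
```

And that is the best case: with Earth and Ceres on opposite sides of the Sun the delay roughly doubles, before adding any processing time at the Earth end.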
As ever, though, I am optimistic about these things. I’ve assumed that we will have personas or their equivalent within about 70 or 80 years from now – far enough away that I probably won’t get to chat with them, but my children might, and my grandchildren will. I don’t subscribe to the theory that advanced AIs will be inimical to humankind (in the way popularised by Skynet in the Terminator films, and picked up more recently in the current series of Star Trek Discovery). But that’s a whole big subject, and one to be tackled another day.
Meanwhile, you can enjoy my latest couple of Alexa skills and find out about the weather on Mars or England’s Peak District, while I finish some more skills that are in progress, and also continue to write about their future.