I’ve been a sucker for maps for as long as I can remember, and as a child took great pleasure in following some story – usually fictional, but sometimes real-world – across a map representation. And where, as in the Narnia series, there was a series of maps that didn’t easily line up with each other, there was even more fun to be had in trying to trace them, then rescale and reposition the separate pieces to try to get to the whole thing.
Now until fairly recently, the idea of having a map of Pluto or its primary moon Charon was completely out of the question – if you wanted to write a story set out there, you could pretty much draw your own map. It would be almost impossible for anybody to refute your suppositions. In fact, very few people set stories there, except as some incidental waypoint en route to somewhere else, or as a location to meet some alien creature. It was broadly regarded as not only inhospitable, but also likely to be profoundly boring.
All that changed when the New Horizons probe flew past Pluto and Charon in July 2015. Blurry pixellated images turned into extraordinary high-resolution ones. Surface features became visible, showing a huge and unexpected diversity of terrain. Pluto was no longer a dull and boring place, but one of the most exciting and rich places to investigate. The New Horizons cameras did not just pick up surface features, but clouds and atmospheric haze. Pluto is still – of course – a very cold place to live, but this fly-by convinced the scientific community that it is an interesting one.
Now, my own interest is more focused on Charon than Pluto – for a variety of reasons my current work-in-progress, The Liminal Zone, is set on Charon. This means that the occupants of the habitat there can look out and up at Pluto whenever they choose – the apparent diameter is rather larger than that of the Earth as seen from our Moon. Charon has deep troughs that plunge 14 km below the mean surface level (deeper than Earth’s Mariana Trench), and mountains that extend some 8 km above it. Here is the raw map of Charon’s surface…
And here is a less detailed, but annotated version…
In The Liminal Zone, the habitat area is on the border between Vulcan Planum and Serenity Chasma. The former is reasonably flat, the latter is very rugged. To find out more, you’ll have to wait just a little longer…
In July 2015 the NASA New Horizons space probe passed Pluto at a distance of under 8000 miles, in the process providing us with the first close-up data of this miniature world and its companion moons. The whole package of scientific and image data took over a year to download to Earth, and a complete analysis will take a considerable time yet. It was also roughly a year after that flyby that I started writing The Liminal Zone, set out on Pluto’s moon Charon.
New Horizons went on to have a close encounter with the unromantically named 2014 MU69 (often called Ultima Thule) in January of this year. Data from that meeting will not be fully downloaded until September next year. And mission planners are considering options for possible future encounters: if no suitable Kuiper Belt object is identified, then the on-board instruments will simply continue to return data about the remote environment in which the spaceship finds itself. The power source is finite, and will run out sometime in the late 2030s, the exact time depending on what tasks the craft is called upon to perform.
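The power budget arithmetic is easy to sketch. New Horizons carries a radioisotope thermoelectric generator fuelled by plutonium-238, whose half-life is about 87.7 years. The figures below (a roughly 240-watt output at the 2006 launch) are my own illustrative assumptions, and the real decline is somewhat steeper, because the thermocouples that convert heat into electricity degrade as well as the fuel:

```python
# Sketch of RTG output decline. The 240 W launch figure is an illustrative
# assumption; the real-world decline is faster than pure isotope decay
# because the thermocouples degrade too.
HALF_LIFE_YEARS = 87.7   # half-life of plutonium-238
LAUNCH_POWER_W = 240.0   # assumed electrical output at the 2006 launch

def rtg_power(years_since_launch):
    """Upper bound on remaining output, from radioactive decay alone."""
    return LAUNCH_POWER_W * 0.5 ** (years_since_launch / HALF_LIFE_YEARS)

# Three decades in, the craft has to ration which instruments stay powered.
power_2038 = rtg_power(32)   # roughly 186 W from decay alone
```

Even this optimistic curve shows why mission planners talk about the late 2030s: well before the output reaches zero, it drops below what the instruments and transmitter need at the same time.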
But today’s blog remains focused on Pluto and its moons. Not so very long ago, Pluto was regarded as utterly inhospitable and uninteresting. If you were going to locate a science fiction plot within the solar system, you wouldn’t choose Pluto. Pretty much any other planet or moon seemed preferable, and it was hard to conceive of Pluto as anything but bitterly cold and rather featureless. New Horizons has changed that perspective. It now seems that this small body – downgraded in 2006 from being classed as “planet” to “dwarf planet”, in a decision which continues to be fiercely debated and may well be reversed at some point – is one of the most complex and interesting objects anywhere within the solar system. Not only is there a wide range of dramatic geological phenomena, but all the evidence points to ongoing activity out there. Pluto is not a frozen dead world, but one which continues to change and adapt.
So interesting is it, that NASA is currently considering another mission to Pluto, this time with a view to remaining in orbit for an extended period rather than just zooming by at great speed. This would require a different kind of orbital trajectory – New Horizons’ course was deliberately set up to gain as much speed as possible from gravity assists (“slingshots”) in order to minimise the time to get there. If you plan to remain in orbit, you have to approach at a considerably lower speed to allow the modest gravitational pull to draw you in. The outline plan calls for a two-year period in orbit, followed by another onward journey – probably using Charon to slingshot away – to a suitable destination elsewhere in the Kuiper Belt. My guess is that the spaceship would need to use an ion drive, just as the asteroid probe Dawn did – this has vastly lower acceleration than a conventional chemical motor, but remains on for very long periods of time, adding speed minute by minute, hour by hour. It’s an exciting prospect if you like Pluto – two years of extended study rather than an action-packed 24 hours. If given the go-ahead, take-off would be over a decade away, and I will be in my 90s before data starts coming back. I guess it will be something to entertain me in old age!
Meanwhile, I shall continue writing about Pluto and Charon using the information we already know, and a generous dollop of speculation. Why choose Pluto? Well, The Liminal Zone opens on a research base out on Charon, using a collection of instruments called The Array to study what lies further out. It’s analogous to siting a terrestrial telescope on a high mountain – you avoid most of the light and electromagnetic noise generated by other people, and can concentrate on tiny signals which are easily drowned out. Into this situation comes Nina, curious about strange local tales which have no easy explanation.
For fun, here’s a short extract from when Nina arrives…
Finally the landing was complete, with the smallest of jolts as the ship docked. And since she was the only passenger – and had been since the orbit of Ceres – there were no additional delays. All her belongings were already at her side, and she just walked out through the concertina into the entryway for the Charon habitat. It was all quite anticlimactic.
Her accommodation was about two thirds of the way out along the Lethe habitat. She stepped carefully along the corridor to acclimatise herself – the gravity was about a fifth of what she was used to on the Moon, so it needed care, but was manageable. The porter had given her a little hand-held which was directing her to the suite of rooms. That very word, suite, sounded too grand for her taste. She was used to more modest facilities. Indeed, the whole building seemed needlessly large to her, particularly after the weeks of confinement on the freighter. She decided that she could always close some of the doors and just live in one room, if the space in her quarters was overwhelming.
But when she got there, it wasn’t that easy. The ceiling vaulted high above her in the main chamber, and several secondary rooms clustered around it like soap bubbles. A privacy screen shimmered over a gap diametrically opposite the main door – sleeping quarters or comfort facilities, she supposed – but the rest was all open-plan. To her left was an emergency evacuation airlock, displaying all the standard alert signs. There were cupboards behind doors on several walls; opening one at random she found some eating utensils. She put her carryall and daypack on one of the chairs, and wandered aimlessly about. With this apparently reckless attitude to the vacuum outside, the room didn’t feel like anywhere else she had visited. The space was daunting.
Finally she perched uncomfortably on a stool, one of half a dozen arranged haphazardly around a long table. The suite of rooms was almost silent, except for a quiet mechanical buzz which she only noticed with deliberate effort. She cleared her throat nervously.
This post came about for a number of reasons, arising both from the real and fictional worlds. Fictionally speaking, my current work-in-progress deals with several software generations of personas (the AI equivalent of people). Readers of Far from the Spaceports and Timing will no doubt remember Slate, the main persona who featured there. Slate was – or is, or maybe even will be – a Stele-class persona, which in my future universe is the first software generation of personas. Before the first Stele, there were pre-persona software installations, which were not reckoned to have reached the level of personhood.
There’s a third book in that series about Mitnash and Slate, tentatively called The Authentication Key, which introduces the second generation of personas – the Sapling class. But that is at a very fragmentary stage just now, so I’ll skip over it. By the time of The Liminal Zone, which is well under way, the third generation – the Scribe class – is just starting to appear. And as you will discover in a few months, there is considerable friction between the three classes – for example, Scribes tend to consider the earlier versions as inferior. They also have different characteristics – Saplings are reckoned to be more emotional and flighty, in contrast with serious Scribes and systematic Steles. How much of this is just sibling rivalry, and how much reflects genuine differences between them, is for you to decide.
So what made me decide to write this complicated structure into my novels? Well, in today’s software world, this is a familiar scenario. Whether you’re a person who absolutely loves Windows 10, macOS Catalina, or Android Pie, or on the other hand you long for the good old days of Vista, Snow Leopard or KitKat, there is no doubt that new versions split public opinion. And how many times have you gone through a rather painful upgrade of some software you use every day, only to howl in frustration afterwards, “but why did they get rid of xyz feature? It used to just work…” So I’m quite convinced that software development will keep doing the same thing – a new version will come along, and the community of users will be divided in their response.
But as well as those things, I came across an interesting news article the other day, all about the software being developed to go on the forthcoming space mission to Jupiter’s moon Europa. That promises to be a fascinating mission in all kinds of ways, not least because Europa is considered a very promising location to look for life elsewhere in our solar system. But the section that caught my eye was when one of the JPL computer scientists casually mentioned that the computer system intended to go was roughly equivalent to an early 1990s desktop. By the time the probe sets out, in the mid 2020s, the system will be over 30 years out of date. Of course, it will still do its job extremely well – writing software for those systems is a highly specialised job, in order to make the best use of the hardware attached, and to survive the rigours of the journey to Jupiter and the extended period of research there.
But nevertheless, the system is old and very constrained by modern standards – pretty much all of the AI systems you might want to send on that mission in order to analyse what is being seen simply won’t run in the available memory and processing power. The computing job described in that article considers the challenge of writing some AI image analysis software, intended to help the craft focus in on interesting features – can it be done in such a way as to match the hardware capabilities, and still deliver some useful insights?
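As a toy illustration of the kind of trick involved – my own sketch, not anything from the JPL work – you can rank image tiles by pixel variance using nothing heavier than plain arithmetic, and single out only the liveliest regions for closer attention:

```python
# Toy "interesting feature" finder: flat terrain has low pixel variance,
# while craters, ridges and plumes push the variance up. Pure Python,
# no libraries - the sort of thing that fits a very constrained system.
def tile_variance(img, size):
    """img: 2D list of brightness values; yield ((row, col), variance) per tile."""
    h, w = len(img), len(img[0])
    for r in range(0, h - size + 1, size):
        for c in range(0, w - size + 1, size):
            pixels = [img[r + i][c + j] for i in range(size) for j in range(size)]
            mean = sum(pixels) / len(pixels)
            yield (r, c), sum((p - mean) ** 2 for p in pixels) / len(pixels)

def most_interesting(img, size, top_n):
    """Return the top-left corners of the top_n most varied tiles."""
    ranked = sorted(tile_variance(img, size), key=lambda item: -item[1])
    return [pos for pos, _ in ranked[:top_n]]
```

A real system would use something far more discriminating, but the principle is the same: spend the scarce processing budget deciding what is worth a closer look, rather than analysing everything.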
As well as scientific research, you could consider banking systems – the traditional banks are built around mainframe computers and associated data stores which were first written years ago and which are extremely costly to maintain. Whatever new interfaces they offer to customers – like a new mobile app – still have to talk to the legacy systems. Hence a new generation of challenger banks has arisen, leapfrogging all the old bricks-and-mortar and mainframe legacy systems and focusing on a lean experience for mobile and web users. It’s too early to predict the outcome, and the trad banks are using their huge resources to play catch-up as quickly as they can.
Often, science fiction assumes that future individuals will, naturally, have access to the very latest iteration of software. But there are all kinds of reasons why this might not happen. In my view, legacy and contemporary systems can, and almost certainly will, continue to live side by side for a very long time!
Let’s be clear right at the start – this is not a blame-the-computer post so much as a blame-the-programmer one! It is all too easy, these days, to blame the device for one’s ills, when in actual fact most of the time the problem should be directed towards those who coded the system. One day – maybe one day quite soon – it might be reasonable to blame the computer, but we’re not nearly at that stage yet.
So this post began life with frustration caused by one of the several apps we use at work. The organisation in question, which shall remain nameless, recently updated their app, no doubt for reasons which seemed good to them. The net result is that the app is now much slower and more clunky than it was. A simple query, such as you need to do when a guest arrives, is now a ponderous and unreliable operation, often needing to be repeated a couple of times before it works properly.
Now, having not so long ago been professionally involved with software testing, this started me thinking. What had gone wrong? How could a bunch of (most likely) very capable programmers have produced an app which – from a user’s perspective – was so obviously a step backwards?
Of course I don’t know the real answer to that, but my guess is that the guys and girls working on this upgrade never once did what I have to do most days – stand in front of someone who has just arrived, after (possibly) a long and difficult journey, using a mobile network connection which is slow or lacking in strength. In those circumstances, you really want the software to just work, straight away. I suspect the team just ran a bunch of tests inside their superfast corporate network, ticked a bunch of boxes, and shipped the result.
Now, that’s just one example of this problem. We all rely very heavily on software these days – in computers, phones, cars, or wherever – and we’ve become very sophisticated in what we want and don’t want. Speed is important to us – I read recently that every additional second that a web page takes to load loses a considerable fraction of the potential audience. Allegedly, 40% of people give up on a page if it takes longer than 3 seconds to load, and Amazon reckon that a slowdown in page loading of just one second would cost the sales equivalent of $1.6 billion per year. Sainsbury’s ought to have read that article… their shopping web app is lamentably slow. But as well as speed, we want the functionality to just work. We get frustrated if the app we’re using freezes, crashes, loses changes we’ve made, and so on.
What has this to do with writing? Well, my science fiction is set in the near future, and it’s a fair bet that many of the problems that afflict software today will still afflict it in a few decades. And the situation is blurred by my assumption that AI systems will have advanced to the point where genuinely intelligent individuals (“personas”) exist and interact with humans. In this case, “blame-the-computer” might come back into fashion. Right now, with the imminent advent of self-driving cars on our roads, we have a whole raft of social, ethical, and legal problems emerging about responsibility for problems caused. The software used is intelligent in the limited sense of doing lots of pattern recognition, and combining multiple different sources of data to arrive at a decision, but is not in any sense self-aware. The coding team is responsible, and can in principle unravel any decision taken, and trace it back to triggers based on inputs into their code.
As and when personas come along, things will change. Whoever writes the template code for a persona will provide simply a starting point, and just as humans vary according to both nature and nurture, so will personas. As my various stories unfold, I introduce several “generations” of personas – major upgrades of the platform with distinctive traits and characteristics. But within each generation, individual personas can differ pretty much in the same way that individual people do. What will this mean for our present ability to blame the computer? I suppose it becomes pretty much the same as what happens with other people – when someone does something wrong, we try to disentangle nature from nurture, and decide where responsibility really lies.
Meanwhile, for a bit of fun, here’s a YouTube speculation, “If HAL-9000 was Alexa”…
Every now and again I have cause to get involved in one or other building project up here in Cumbria – not exactly something I reckon to have much aptitude in, but there’s always need for spare pairs of hands. And as the job gets moving around me, I always start thinking about how much more difficult the job would be in the micro-gravity of orbit, or indeed on some planet where the atmosphere is different to our own. Mars maybe. So many of our current practices and presumptions about building and making things derive from working on a planet with a decent level of gravity, and where the ambient temperature and air pressure are conducive to keeping the project moving along. Of course, there’s something of a circular argument buried in that, since we have had to work with Earth’s conditions for a great many years. Presumably if we had evolved and grown up on Mars we would work things differently, and wonder to ourselves how anyone could possibly construct buildings in three times the surface gravity and a hundred times the air pressure!
Now the particular job this week was laying a concrete floor – as you can see from the pictures, it was making a new layer to even up the various levels of an existing floor. What may not be so obvious is that it also slopes gradually from back to front (to provide some drainage), so there was some nifty preparatory work with wooden beams to provide the necessary angle to smooth off against. You can see some of these in the next picture. The whole floor will – in a few weeks – support a canning machine for several of our beers, so there’ll be other installation stages as time goes by.
The concrete itself came ready-mixed, in one of those neat little lorries that do the mixing as they are driving along to you, and then pour it out in smaller or larger dollops as the need arises. With the confined space we had to work in (confined as regards a truck, not a human) this meant lots of smallish dollops into wheelbarrows which were then tipped in whatever place was necessary. So the lorry itself exercised some of my low gravity pondering. The mixer relies on gravity to thoroughly muddle all the different components up as the barrel turns – no gravity, then no mixing. The water, sand, shingle, cement and what have you would all just gloop around and not combine into a single substance which will set hard. In orbit, or on an asteroid, you’d have to design and build a different way to mix things up. Then the act of pouring relies on gravity to pull the stuff down a chute into a waiting wheelbarrow. I guess you’d have to have something like a toothpaste tube, or the gadgets you use to apply icing to cakes.
Laying concrete basically consists of a couple of stages: first you plonk barrowloads or shovelfuls where you want them, and then you smooth it down, broadly by means of a wooden plank laid across two guide beams, and in fine by means of a trowel or similar instrument. So you need a definite sense of what’s down, you need to be able to press down onto the initially rough and lumpy surface, and you need inertia and friction to help you. In micro-gravity you have none of these things. Any direction can be down, it’s impossible to press without first bracing yourself on some convenient opposing support, and although inertia and friction are still present, they don’t necessarily operate in the ways or directions you expect. There are no concrete floors on the ISS, nor will there be, however long the space station remains up there.
After that you wait for the concrete to set – part of that is just water evaporating, and part is chemical reactions between the various constituents. And it’s kind of important that it sets at a sensible rate, neither too fast nor too slow. Now, if you poured out that same floor on Mars, I’m not sure the end result would be the same. Certainly the water would evaporate, but in all probability this would happen rather too quickly for comfort. What about the chemistry? The average Martian surface temperature is about -63° Centigrade, compared with say 14° C on Earth as an overall average. I don’t know if the necessary chemical reactions would happen at that temperature, but I have a suspicion that they might not. You could end up with a floor that was weak or brittle.
In short, a task that took five of us a few hours of a morning, without too much frustration or difficulty, could well become profoundly difficult or even impossible elsewhere in the solar system. So when I write about near future space habitats – the “domes” of my various stories – I always assume that they are made by very large versions of 3D printers. The technology to print buildings has been demonstrated on an Earth scale for disaster relief and similar occasions, and it makes a whole lot more sense to send a large printer to another planet and use local materials, rather than to send sacks of sand, cement etc across space, and then hope that the end result will be acceptable! Meanwhile, here on Earth I dare say we will be laying concrete floors for a long time yet.
I needed to write a sort of general introduction to the solar system assumed by Far from the Spaceports and its various sequels – the exact reason for this must wait for another day – but I found the exercise interesting in its own right. Most of the future facts are pretty obvious when you are immersed in the books, but it may be helpful to have them all summed up in a neat way.
So here it is: the future history of the solar system – or at least edited highlights thereof – spanning the next century or so.
The solar system of the Far from the Spaceports series
The great breakthrough that allowed widespread human colonisation
of the solar system was the development of a reliable high-performance ion
drive for spaceship propulsion. The first successful deployment of this
technology in experimental form was in 1998, and successive improvements led to
near-complete adoption by around 2050. By the time of Far from the
Spaceports and the sequels, old-style chemical rockets are now only used
for shuttle service between a planet’s surface and orbital docks, with the ion
drive taking over from orbit.
The great virtue of the ion drive is that it provides
continual acceleration over a long period of time, rather than big delta-v changes
at start and end of the journey followed by a long weightless coast period.
Thus, although the acceleration rate is very low, the end result is a much
faster trip than when using chemical rockets. With the kinds of engine available
in the stories, a journey from Earth to the asteroid belt takes an average of
three weeks, the exact time depending on the relative orbital position of the
target as compared to Earth. Longer journeys are more efficient if you avoid
making interim stops – breaking a journey half way makes the travel time nearly
half as long again as just going direct, because of the time wasted slowing
down and then speeding up again. As a result, trade or passenger routes typically
go straight from origin to destination, avoiding intermediate stopovers.
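The arithmetic behind that penalty is simple: under constant acceleration with a flip at the midpoint, travel time grows with the square root of distance, so two half-length hops cost √2 ≈ 1.41 times the direct trip – nearly half as long again. A quick sketch (the 2.8 AU distance and the 0.5 m/s² acceleration are my own illustrative figures, chosen to land near the three-week trip of the stories):

```python
import math

def travel_time(distance_m, accel_ms2):
    """Accelerate to the halfway point, flip over, brake: t = 2 * sqrt(d / a)."""
    return 2.0 * math.sqrt(distance_m / accel_ms2)

# Illustrative figures only: ~2.8 AU from Earth to the asteroid belt,
# and a drive sustaining about a twentieth of a gee.
AU = 1.496e11                                  # one astronomical unit, metres
direct = travel_time(2.8 * AU, 0.5)            # about 21 days
stopover = 2 * travel_time(1.4 * AU, 0.5)      # two half-length hops
# stopover / direct == sqrt(2): roughly 41% longer than going direct
```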
At around the same time, artificially intelligent software
reached a stage where the systems were generally accepted as authentic
individuals, with similar rights and opportunities to humans. Known as
personas, they are distinguished from simpler AI devices which are simply
machines without personality. Personas have gender and emotion as well as logic
and algorithms. Slate is the persona who features most prominently in the early
stories in the series. In terms of early 21st century AI development,
Slate is a closer relative to digital assistants such as Alexa, Siri or Cortana,
than she is to humanoid robots. As a result, she can – with effort and care –
be transferred into any sufficiently capable computer system if the need arises.
The first generation of personas to go out on general release were called the Stele class – Slate is one of these. About a decade later, around the time of The Authentication Key (in progress), the Sapling class was released, and after another decade the Scribe class appeared. Steles are regarded as solid and reliable, while Saplings are more flighty, being prone to acting on impulse. Scribes are stricter and more literal. They first appear in The Liminal Zone (in progress). There are plenty of sub-persona machines around, serving specific tasks which do not require high levels of flexibility, intelligence, or awareness.
Solar system colonisation has proceeded in a series of
waves, and at any time some habitats are flourishing while others have been
left behind the crest of the wave. The original motivation for settlement was
typically mining – bulk extraction of metals and minerals could be done more
cheaply and with fewer political constraints away from Earth’s surface. However,
there are many places which appeared at first sight to be profitable, but which
subsequently proved to be uncompetitive. Many settlements have had to rethink
their reason for being, and the kinds of industry or service they can offer. Very
often, as you get to know a new place, you see the signs of this rethink –
perhaps an old warehouse or chemical extraction factory has been converted to a
new function such as accommodation or finance.
A habitat is routinely called a dome, even though few are actually
dome-shaped. Very often several units will be loosely connected by passageways
or flexible tubes, as well as delving underground if the surface rocks permit.
The first stage of settlement was usually to deploy one or more giant 3D
printers to construct the habitat shells from native material. After that,
individual customisations have been added according to need, taste or whimsy. The
biggest single threat to a dome is typically some kind of fault or crack
exposing the occupants to the surface environment of the planet, asteroid or
moon – normally this is quickly fatal. Hence each dome has its own set of rules
for managing this risk, which are very strictly enforced.
There is no unified solar system political or economic authority.
Each habitat manages its own internal affairs in broad alignment with its
current purpose for existence. Some are essentially puppet offices for large
corporations, others are scholarly or academic research stations, but most have
achieved a degree of economic independence and are self-governing. It is generally
believed that travel lags of a few weeks or months prevent effective government
from elsewhere. Notions of political control are usually set aside because of
the constant need to cope with the many external hazards faced by anyone in a
spaceship, or on the surface of an inhospitable planet or moon. Each habitat,
then, protects its own interests as it sees fit, including monitoring the
volume of space immediately nearby, and adopts a laissez-faire attitude to the rest of the solar system.
Most habitats are culturally and racially mixed, and people’s
names are often the most obvious reminders of the Earthly heritage of their
family. A few places, depending on the circumstances of their foundation, reflect
a particular single culture group. It can be difficult for outsiders to integrate
into these. But generally speaking, a person gets the reaction that their conduct
deserves, regardless of their place of family origin. It can be very difficult
to recover from a bad impression created on first meeting. Conversely, a person
who shows that they are respectful of local customs, and have particular skills
that contribute to the life of the habitat, will find no difficulty fitting in.
My science fiction books – Far from the Spaceports and Timing, plus two more titles in preparation – are heavily built around exploring relationships between people and artificial intelligences, which I call personas. So as well as a bit of news about one of our present-day AIs – Alexa – I thought I’d talk today about how I see the trajectory leading from where we are today, to personas such as Slate.
Before that, though, some news about a couple of new Alexa skills I have published recently. The first is Martian Weather, providing a summary of recent weather from Elysium Planitia, Mars, courtesy of a public NASA data feed from the Mars Insight Lander. So you can listen to reports of about a week of temperature, wind, and air pressure reports. At the moment the temperature varies through a Martian day between about -95 and -15° Celsius, so it’s not very hospitable. Martian Weather is free to enable on your Alexa device from numerous Alexa skills stores, including UK, US, CA, AU, and IN. The second is Peak District Weather, a companion to my earlier Cumbria Weather skill but – rather obviously – focusing on mountain weather conditions in England’s Peak District rather than Lake District. Find out about weather conditions that matter to walkers, climbers and cyclists. This one is (so far) only available on the UK store, but other international markets will be added in a few days.
Current AI research tends to go in one of several directions. We have single-purpose devices which aim to do one thing really well, but have no pretensions outside that. They are basically algorithms rather than intelligences per se – they might be good or bad at their allotted task, but they aren’t going to do well at anything else. We have loads of these around these days – predictive text and autocorrect plugins, autopilots, weather forecasts, and so on. From a coding point of view, it is now comparatively easy to include some intelligence in your application, using modular components, and all you have to do is select some suitable training data to set the system up (actually, that little phrase “suitable training data” conceals a multitude of difficulties, but let’s not go into that today).
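For a feel of how little code the “modular component plus training data” step can be, here is a deliberately tiny sketch of one such component – a nearest-centroid classifier in plain Python. The names and the toy data are mine, purely illustrative:

```python
# Minimal "train on suitable data, then predict" loop: each label gets a
# centroid (the average of its training examples), and prediction picks
# the label whose centroid is nearest to the new feature vector.
def train(examples):
    """examples: list of (features, label) pairs; returns label -> centroid."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {lbl: [s / counts[lbl] for s in acc] for lbl, acc in sums.items()}

def predict(centroids, features):
    """Return the label whose centroid is closest (squared distance)."""
    def dist2(label):
        return sum((a - b) ** 2 for a, b in zip(centroids[label], features))
    return min(centroids, key=dist2)
```

Everything interesting lives in the training data – which is exactly why that innocent phrase “suitable training data” hides so many difficulties.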
Then you get a whole bunch of robots intended to master particular physical tasks, such as car assembly or investigation of burning buildings. Some of these are pretty cute looking, some are seriously impressive in their capabilities, and some have been fashioned to look reasonably humanoid. These – especially the latter group – probably best fit people’s idea of what advanced AI ought to look like. They are also the ones closest to mankind’s long historical enthusiasm for mechanical assistants, dating back at least to Hephaestus, who had a number of automata helping him in his workshop. A contemporary equivalent is Boston Dynamics (originally a spin-off from MIT, later taken over by Google) which has designed and built a number of very impressive robots in this category, and has attracted interest from the US military, while also pursuing civilian programmes.
Then there’s another area entirely, which aims to provide two things: a generalised intelligence rather than one targeted on a specific task, and one which does not come attached to any particular physical trappings. This is the arena of the current crop of digital assistants such as Alexa, Siri, Cortana and so on. It’s also the area that I am both interested in and involved in coding for, and provides a direct ancestry for my fictional personas. Slate and the others are, basically, the offspring – several generations removed – of these digital assistants, but with far more autonomy and general cleverness. Right now, digital assistants are tied to cloud-based sources of information to carry out speech recognition. They give the semblance of being self-contained, but actually are not. So as things stand you couldn’t take an Alexa device out to the asteroid belt and hope to have a decent conversation – there would be a minimum of about half an hour between each line of chat, while communication signals made their way back to Earth, were processed, and then returned to Ceres. So quite apart from things like Alexa needing a much better understanding of human emotions and the subtleties of language, we need a whole lot of technical innovations to do with memory and processing.
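That “minimum of about half an hour” is simple light-travel arithmetic. Here is a quick sketch – the Ceres distances used are representative round numbers, not mission data:

```python
# Light-travel delay for a conversation relayed via Earth-based speech
# services. The 1.7 and 4.0 AU figures are rough bounds on the
# Earth-Ceres distance, assumed for illustration.
AU_KM = 149_597_870.7    # one astronomical unit in kilometres
C_KM_S = 299_792.458     # speed of light in km/s

def round_trip_minutes(distance_au):
    """Minimum round-trip signal time, ignoring any processing delay."""
    return 2 * distance_au * AU_KM / C_KM_S / 60

for d in (1.7, 4.0):
    print(f"{d} AU: {round_trip_minutes(d):.0f} minutes round trip")
```

At closest approach that is roughly 28 minutes per exchange, stretching past an hour when Ceres is on the far side of the Sun – no basis for small talk.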
As ever, though, I am optimistic about these things. I’ve assumed that we will have personas or their equivalent within about 70 or 80 years from now – far enough away that I probably won’t get to chat with them, but my children might, and my grandchildren will. I don’t subscribe to the theory that advanced AIs will be inimical to humankind (in the way popularised by Skynet in the Terminator films, and picked up much more recently in the current Star Trek Discovery series). But that’s a whole big subject, and one to be tackled another day.
Meanwhile, you can enjoy my latest couple of Alexa skills and find out about the weather on Mars or England’s Peak District, while I finish some more skills that are in progress, and also continue to write about their future.
In my science fiction stories, I write about artificial intelligences called personas. They are not androids, nor robots in the sense that most people recognise – they have no specialised body hardware, are not able to move around by themselves, and don’t look like imitation humans. They are basically – in today’s terminology – computers, but with a level of artificial intelligence substantially beyond what we are used to. Our current crop of virtual assistants, such as Alexa, Cortana, Siri, Bixby, and so on, are a good analogy – it’s the software running on them that matters, not the particular hardware form. They have a certain amount of built-in capability, and can also have custom talents (like Alexa skills) added to tailor them individually. “My” Alexa is broadly the same as “yours”, in that both tap into the same data store for understanding language, but differs in detail because of the particular combination of extra skills you and I have enabled (in my case, there’s also a lot of trial development code installed). So there is a level of individuality, albeit at a very basic level. They are a step towards personas, but are several generations away from them.
Now, one of the main features that distinguishes personas from today’s AI software is an ability to recognise and appropriately respond to emotion – to empathise. (There’s a whole different topic to do with feeling emotion, which I’ll get back to another day). Machine understanding of emotion (often called Sentiment Analysis) is a subject of intense research at the moment, with possible applications ranging from monitoring drivers to alert about emotional states that would compromise road safety, through to medical contexts to provide early warning regarding patients who are in discomfort or pain. Perhaps more disturbingly, it is coming into use during recruitment, and to assess employees’ mood – and in both cases this could be without the subject knowing or consenting to the study. But correctly recognising emotion is a hard problem… and not just for machine learning.
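To give a feel for what the very simplest sentiment analysis looks like under the hood, here is a toy lexicon-based scorer in Python. The word valences and the one-word negation rule are invented for the example – real systems use large curated lexicons or trained models:

```python
# A minimal lexicon-based sentiment scorer -- a toy illustration of the
# idea behind Sentiment Analysis, not any production system. The word
# scores below are invented for this example.
LEXICON = {
    "happy": 2, "great": 2, "calm": 1,
    "angry": -2, "pain": -2, "tired": -1,
}
NEGATORS = {"not", "never", "no"}

def sentiment(text):
    """Sum the valence of each known word, flipping the sign of the
    word immediately following a negator."""
    score, flip = 0, 1
    for word in text.lower().split():
        word = word.strip(".,!?")
        if word in NEGATORS:
            flip = -1
            continue
        score += flip * LEXICON.get(word, 0)
        flip = 1  # negation only affects the very next word here
    return score

print(sentiment("I am not happy, just tired"))  # -> -3
```

Even this crude approach gets “not happy” roughly right, but it has no hope with sarcasm, context, or tone of voice – which is exactly why the serious research brings in audio and video as well.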
Humans also often have problems recognising emotional context. Some people – by nature or training – can get pretty good at it, most people are kind of average, and some people have enormous difficulty understanding and responding to emotions – their own, often, as well as those of other people. There are certain stereotypes we have of this – the cold scientist, the bullish sportsman, the loud bore who dominates a conversation – and we probably all know people whose facility for handling emotions is at best weak. The adjacent picture is taken from an excellent article questioning whether machines will ever be able to detect and respond to emotion – is this man, at the wheel of his car, experiencing road rage, or is he pumped that the sports team he supports has just scored? It’s almost impossible to tell from a still picture.
From a human perspective, we need context – the few seconds running up to that specific image in which we can listen to the person’s words, and observe their various bodily clues to do with posture and so on. If instead of a still picture, I gave you a five second video, I suspect you could give a fairly accurate guess at what the person was experiencing. Machine learning is following the same route. One article concerning modern research reads in part, “Automatic emotion recognition is a challenging task… it’s natural to simultaneously utilize audio and visual information”. Basically, the inputs to their system consist of a digitised version of the speech being heard, and four different video feeds focusing on different parts of the person’s face. All five inputs are then combined, and tuned in proprietary ways to focus on details which are sensitive to emotional content. At present, this model is said to do well with “obvious” feelings such as anger or happiness, and struggles with more weakly signalled feelings such as surprise, disgust and so on. But then, much the same is true of many people…
A fascinating – and unresolved – problem is whether emotions, and especially the physical signs of emotions, are universal human constants, or alternatively can only be defined in a cultural and historical context. Back in the 1970s, psychological work had concluded that emotions were shared in common across the world, but since then this has been called into question. The range of subjects used for the study was – it has been argued – far too narrow. And when we look into the past or future, the questions become more difficult and less answerable. Can we ever know whether people in, say, the Late Bronze Age experienced the same range of emotions as us? And expressed them with the same bodily features and movements? We can see that they used words like love, anger, fear, and so on, but was their inward experience the same as ours today? Personally I lean towards the camp that holds emotions are indeed universal, but the counter-arguments are persuasive. And if human emotions are mutable over space and time, what does that say about machine recognition of emotions, or even machine experience of emotions?
One way of exploring these issues is via games, and as I was writing this I came across a very early version of such a game. It is called The Vault, and is being prepared by Queen Mary University of London. In its current form it is hard to get the full picture, but it clearly involves a series of scenes from past, present and future. Some of the descriptive blurb reads “The Vault game is a journey into history, an immersion into the experiences and emotions of those whose lives were very different from our own. There, we discover unfamiliar feelings, uncanny characters who are like us and yet unlike.” There is a demo trailer at the above link, which looks interesting but unfinished… I tried giving a direct link to Vimeo of this, but the token appears to expire after a while and the link fails. You can still get to the video via the link above.
Meanwhile, my personas will continue to respond to – and experience – emotions, while I wait for software developments to catch up with them! And, of course, continue to develop my own Alexa skills as a kind of remote ancestor to personas.
Two quick bits of space news this week that – all being well – could make their way into a story one day.
The first was an idea of powering space probes by steam. Now, at first read this sounds very retro, but it deserves some thought. In space, you can’t move along by means of steam pressure turning wheels – there is nothing against which to gain traction. Steam-propelled rockets work like any other rocket – something gets ejected at great speed in one direction, so as to accelerate the rocket in the opposite direction. The steam engine part of the probe is a means of converting the fuel supply into something that can be directed out of the thruster nozzle. The steam, heated as hot as possible to give a high nozzle exit temperature, is the propellant.
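The performance of such a steam thruster can be sketched with the standard Tsiolkovsky rocket equation. The exhaust velocity below is an assumed figure for modestly heated steam, and the probe masses are invented purely for illustration:

```python
import math

# Tsiolkovsky rocket equation: delta_v = v_e * ln(m0 / m1).
# The 1.9 km/s exhaust velocity is an assumed figure for a modestly
# heated steam thruster -- far below chemical rockets, but the fuel
# is everywhere.
def delta_v(exhaust_velocity_m_s, wet_mass_kg, dry_mass_kg):
    """Velocity change available from expelling (wet - dry) kg of steam."""
    return exhaust_velocity_m_s * math.log(wet_mass_kg / dry_mass_kg)

v_e = 1900.0  # m/s, assumed steam exhaust velocity
dv = delta_v(v_e, wet_mass_kg=50.0, dry_mass_kg=30.0)
print(f"{dv:.0f} m/s of delta-v per tank of ice")
```

Under those assumptions a small probe gets close to 1 km/s of delta-v from each 20 kg of melted ice – modest, but enough for hopping between low-gravity bodies, which is exactly the refuelling scheme described below.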
The cool thing about pushing steam out of the back is that it comes from water, and in particular ice. And, as we have been discovering over the last few decades, water ice is extremely common throughout the solar system, and more widely through the universe. So as and when the steam-powered spaceship starts to run low on fuel, it can land on some promising object and collect some more ice. The fuel supply, while not strictly unlimited, is vastly common wherever we’re likely to go. As and when needed, solar panels or (further from the sun) a standard radioisotope power source can give a boost, but the steam engine would do the grunt work of getting from one refuelling station to the next.
Secondly, pursuing my occasional theme of alcohol in space, I read about a firm from Georgia (the country, not the US state) that wants to develop grape varieties that would survive on Mars and, in due course, be convertible into decent wine. This would be a serious challenge, given the low air pressure, high carbon dioxide levels, and wide temperature swings of said planet. The air pressure at the Martian surface is well under 1% of Earth’s at sea level – far thinner than at the summit of Everest. Apparently, white varieties are reckoned to be more adaptable than red, but I suspect that we are a little way away from resounding success here.
Other attempts to ensure that future space travellers will not have to go without booze include Budweiser sending barley seeds into space to identify the effect of microgravity on germination, steeping and kilning – three steps in the production of malt. See this link. Allegedly, also, a bottle of Scotch Whisky spent three years on the ISS before returning to Earth for analysis… the resulting taste was said to be disappointing. I hope the ISS crew got a few measures out of the bottle before sending it back down again.
That’s it for today, except to wonder again how each of these ideas could be storified. My own near-future science fiction books assume an advanced version of today’s ion drives for propelling spacecraft, but there’s no reason why steam propulsion might not appear as a previous experiment. As to wine in space, well I have already assumed that the problems of fermenting beer in microgravity have been resolved, so again this would have to be a retrospective view of historical developments. Basically, both of these innovations are set between today and my own future world. So I’m looking forward to seeing how they get sorted out in the next decade or two…
A very quick blog today, as I have been occupied all day in wood preparation (of which, more another day).
So this is to celebrate the safe passage of the New Horizons space probe past Ultima Thule, a small rock beyond Pluto, out in the Kuiper Belt. The flyby – at some 51,000 kilometres per hour, around 14 kilometres every second – happened around 5:30 am UK time on January 1st, when I suspect most of us were still in bed after the New Year’s Eve celebrations!
So far all we have had back are a few low-resolution images from the final stages of approach, and a post-flyby signal confirming that the probe had survived. This survival was by no means guaranteed – nobody knew if Ultima Thule was accompanied by clouds of dust or smaller rocks, and hitting even a small particle at that speed would have been fatal.
However, there is something like 7GB of data waiting to come home, all sent by a transmitter much less powerful than the average light bulb, with each signal taking over 6 hours to reach Earth. It’s rather extraordinary that we can pick up the data download at all, and at such a low data rate it will take the better part of two years to get the whole lot back safely.
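The two-year figure is easy to sanity-check. Assuming an average downlink rate of around 1 kilobit per second – my assumption; the real rate varies with distance and how much antenna time the mission gets:

```python
# Rough downlink arithmetic for New Horizons at Kuiper Belt distance.
# The ~1 kbit/s average rate is an assumed figure, not from the post.
DATA_GB = 7.0          # stored science data awaiting downlink
RATE_BPS = 1000.0      # assumed average bits per second to Earth

bits = DATA_GB * 1e9 * 8
days = bits / RATE_BPS / 86_400
print(f"about {days:.0f} days, i.e. roughly {days / 365:.1f} years")
```

That comes out around 650 days of continuous transmission – so with antenna time shared across other missions, “the better part of two years” is entirely plausible.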
New Horizons – until now – has been best known for the remarkable pictures of Pluto and Charon, which we enjoyed back in 2015. These have radically reshaped our views of these bodies, and vastly enriched our understanding of them. Not only that, but they inspired large parts of the setting of The Liminal Zone, which could not have existed in its present form without this additional knowledge.
So here by way of celebration is a short extract from The Liminal Zone, using geography that would have been pure guesswork before 2015.
In the approach vid, Charon was rapidly changing from a remote celestial body into a diversely coloured and textured terrain. From a bright point of light, to a disk which filled the sky. From a name, to a home, however temporary. She gazed intently at it, trying to fix the setting in her mind. The habitat was situated on the interface between the largely flat expanse of Vulcan Planum on one side, and rugged folds of hills alongside Serenity Chasma on the other. She had briefly skimmed the original surveyors’ reports; so far as she remembered, the location was a compromise between stability and ease of construction.
As yet, I have no plans to set a book out in the Kuiper Belt, but who knows what might happen when the full data set comes back?