Category Archives: Science

“The eye prefers repetition, the ear prefers variety”

I was at the annual Amazon technical summit here in London last week, and today’s blog post is based on something I heard one of the presenters say. On the whole it was a day of consolidating things already developed, rather than a day of grand new breakthroughs, and I enjoyed myself hearing about enhancements to voice and natural language services, together with an offbeat session on building virtual 3D worlds.

Grid design based on thirds (Interaction Design Foundation)

But I want to focus on one specific idea, contrasting how we build human-computer interfaces quite differently for the eye and the ear. In short, “the eye prefers repetition, the ear prefers variety”. Look at the appearance of your typical app on computer or phone. We have largely standardised where the key elements go – menu, options, title and so on. They are so standardised that we can tell at a glance if something is “in the wrong place”. The text stays the same every time you open it. The icons stay the same, unless they have a little overlay telling you to do something with them. And so on.

Now in the middle of a technical session I just let that statement drift by, but it stuck with me afterwards, and I kept turning it over. Hence this post. At face value it seemed a bit odd – our eyes are constantly bombarded with hugely diverse information from the world around us. But then I started thinking some more. It’s not just to do with the light falling into our eyes, or the biology of how our visual receptors handle that – our image of the world is the end result of a very complex series of processing steps inside our nervous system.

House By Beach – quick sketch

A child’s picture of a face, or a person, is instantly recognisable as such, even though reduced to a few schematic shapes. A sketch artist will make a few straight lines and a curve, and we know we are looking at a house beside a beach, even though there are no colours or textures to help us. The animal kingdom shows us the same thing. Show a toad a horizontal line moving sideways, and it reacts as though it was a worm. Turn the line vertical and move it in the same way, and the toad ignores it (see this Wikipedia article or this video for details). Arrange a dark circle over a mouse and increase its size, and it reacts with fear and aggression, as though something was looming over it (see this article, in the section headed Visual threat cues).

Toad: Mystery Science Theatre 3000

It’s not difficult to see why – if you think you might be somebody’s prey, you react to the first sign of the predator. If you’re wrong, all you’ve lost is some time and adrenalin. If you ignore the first signs and you’re wrong, it’s game over!

So it makes sense that our visual sense, including nervous system as well as eyes, reduces the world to a few key features. We skim over fine detail at first glance, and only really notice it when we need to – when we deliberately turn our attention to it.

Also, there’s something to be learned from how light and sound work differently for us. At a very fundamental level, light adds up to give a single composite result. We mix red and yellow paint to give orange, or red and green light on a computer screen to give yellow. The colour tints, or the light waves, add up to make a single average colour. Not so with sound. Play the note middle C on a keyboard, then start playing the G above it. You end up with a chord – you don’t end up with a single note which is a blend of the two. So adding visual signals, and adding audible ones, give completely different effects.
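For the programmers among you, here’s a little numpy sketch of that difference (illustrative numbers only): average two colours and you get one new colour, but superpose two notes and the spectrum still shows two distinct peaks.

```python
# A minimal sketch: mixing light averages to one colour; mixing sound keeps both notes.
import numpy as np

# Light: adding red and green gives one new colour - yellow - not "red and green".
red, green = np.array([255, 0, 0]), np.array([0, 255, 0])
print(red + green)                              # [255 255 0], i.e. yellow

# Sound: adding middle C (~262 Hz) and the G above (~392 Hz) keeps both notes.
rate = 8000                                     # samples per second
t = np.arange(rate) / rate                      # one second of audio
chord = np.sin(2 * np.pi * 262 * t) + np.sin(2 * np.pi * 392 * t)
spectrum = np.abs(np.fft.rfft(chord))
freqs = np.fft.rfftfreq(rate, d=1 / rate)
peaks = freqs[np.argsort(spectrum)[-2:]]        # the two strongest components
print(sorted(peaks))                            # [262.0, 392.0]: still two notes
```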

Finally, the range of what we can perceive is entirely different. The most extreme violet light that we can see has about twice the frequency of the most extreme red. Doubling frequency gives us an octave change, so that means we can see just one octave of visible light out of the entire spectrum. But a keen listener under ideal circumstances can hear around ten octaves of sound, from roughly 20 Hz to 20 kHz – log2 of that thousand-fold ratio gives a little under ten doublings. Some creatures do a bit better than us in both light and sound detection, but the basic message is the same – we hear a much more varied spectrum than we see.

Amazon Dot – Active

Now, the technical message behind that speaker’s statement related to Alexa skills. To retain a user’s interest, the skill has to not sound the same every time. The eye prefers repetition, so our phone apps look the same each time we start them. But the ear prefers variety, so our voice skills have to mirror that, and say something a little bit different each time.

I wonder how that applies to writing?

Where would be a good place to live?

Cover – Perelandra (Goodreads)

It’s a question which besets many science fiction writers! Back in the 20th century, when not nearly so much was known about other star systems, writers were free and easy with their destinations. C.S. Lewis, who in any case had motivations in his writing other than strict scientific accuracy, cheerfully placed parts of his science fiction trilogy on Mars and Venus. E.E. (Doc) Smith had alien habitations all over the solar system, with a wild array of biological adaptations to high gravity, strange atmospheres, or whatever. And when writers got their characters out of the solar system into the galaxy at large, the diversity just kept on growing (except for those authors like Asimov, who for various reasons carefully avoided alien life altogether).

But these days we have a vast amount of data to steer our fiction. In some cases this means that environments get excluded – it would be a brave author indeed who would place a novel like Perelandra on the surface of Venus these days (unless they have a back-story of extensive terraforming). On the other hand, new opportunities for life in previously unconsidered places have emerged – like high up in the Venusian atmosphere, or in liquid oceans underneath the ice coatings of various outer system moons. These are not likely to be, as they say, life as we know it…

Schematic of habitable zone sizes (Penn State University)

On a wider scale, we have a good idea what to look for as regards planets that might support life. Most thinking on the subject supposes that liquid water would be necessary – it’s just too useful a chemical in all kinds of ways to see how it wouldn’t participate in life’s chemistry. So we can plot the Goldilocks Zone for any given star (too close in, and water boils and evaporates… too far out, and it freezes)… but we know from our own solar system that this does not cover all the bases. Close-in planets are probably tidally locked to their sun, and so have a cooler side. Far-out planets may well have orbiting moons with sub-surface water, kept from freezing by a variety of factors.

Back in the day, people used to look for stars relatively similar to our own sun, on the grounds that we kind of knew what we were looking for. But these days, following the extraordinary success of planet-hunting space missions like Kepler (soon to be followed by TESS), we know that many planets circle dim red dwarf stars. For sure, the heat output is much less, but that just means that the Goldilocks Zone huddles close in. And red dwarf stars are immensely long-lived, which gives life time to develop. On the other hand, many red dwarfs also go through erratic flare cycles, potentially blasting their associated planets with X-rays. But for my money, the first place we may find life elsewhere is likely to be circling a red dwarf.
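To put rough numbers on that huddling: received warmth falls off with the square of distance, so the habitable zone scales as the square root of the star’s luminosity. A quick sketch – the zone boundaries here are rough assumed values, just for illustration:

```python
# Rough Goldilocks Zone scaling: distance goes as sqrt(luminosity).
import math

def goldilocks_range_au(luminosity_suns, inner_au=0.95, outer_au=1.4):
    """Scale assumed solar habitable-zone limits by sqrt(L / L_sun)."""
    factor = math.sqrt(luminosity_suns)
    return inner_au * factor, outer_au * factor

for name, lum in [("sun-like star", 1.0), ("red dwarf", 0.001)]:
    inner, outer = goldilocks_range_au(lum)
    print(f"{name}: roughly {inner:.3f} to {outer:.3f} AU")
# red dwarf: roughly 0.030 to 0.044 AU - far closer in than Mercury's 0.39 AU
```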

So from the writer’s point of view, it’s a great time to be postulating life elsewhere, but also a rapidly-changing one. New data is pouring in, and new ways of analysing and comprehending that data. It all adds up to a wealth of new ideas and imaginative leads…

Artist’s impression, planets discovered by TRAPPIST orbiting a red dwarf star about 40 light years from Earth (NASA/JPL)

There’s a good story here, I think…

The last few days have been vastly busy for me with outside jobs, and I am way behind on blog matters! But I did come across some recent research about the movements of stars which fascinated me, and which has prompted this post. It also has the seeds of what could be a fine prehistoric story, which one day might get written.

If you do a quick search for “what is the closest star to our sun” then you will get the reply “Alpha Centauri” (or perhaps, more precisely, “Proxima Centauri” – if you ask Alexa she will give you quite a detailed response). This multiple star system is situated just over four light years from us – for comparison, Pluto is under 5 light hours from the sun. But Alpha Centauri is very like our sun in terms of size, energy, and so on, and is easily visible from the right locations, so has appeared several times in stories.

The nearest stars to us (Wikipedia)

But what if you then consider the movements of stars over time? All stars near us are involved in a vast circling movement around the galaxy’s centre, but this movement is not regular and orderly in the way that the planets’ movement is around our sun. Stars approach each other and move away, potentially having huge effects on the clusters of planets, comets, etc that accompany them. So what happens if we look forward or backward in time?

As the chart below shows, Proxima Centauri will get steadily closer to us for the next 30,000 years or so, then lose its role to Ross 248. But none of these stars gets closer to us than about 3 light years, which is comfortably far away and is unlikely to cause any serious issues.

Nearest stars to us changing over time (Wikipedia)

Perhaps you are wondering where the story is in this? We will get there…

Now, these stars are mostly fairly bright, and many of them have been known since antiquity. But in recent years, powerful space-based telescopes like Hubble have discovered that by far the most numerous stars in our galaxy are not bright ones like our sun, or super-bright ones like Sirius, but small, dim ones called red or brown dwarfs. These burn extremely slowly, conserving their fuel in a miserly way that means they will hugely outlive our sun. They are invisible to the naked eye even at quite close range (astronomically speaking)… but many of them have planets of their own, and if these planets huddle close enough in, then they could quite easily be habitable. To date, much of our quest for life elsewhere in the universe has looked at stars broadly similar to our own, but maybe we should be looking by preference at these dwarfs?

So… what if we roll back that chart in time to a scale of 70,000 years rather than 20,000, and include the paths of dwarf stars in it (a feat which has only become possible in very recent years)? For context, 70,000 years ago anatomically modern humans had already experienced their first large-scale migration out of Africa to other parts of the world, and would soon be doing so a second time. They were sharing the world with Neanderthals and other hominids, and would be for another 30-40,000 years, including various times of interbreeding. They were using stone tools and showing signs of “behavioural modernity” (religious and artistic sensitivity and such like). Slightly earlier, there may have been a global crisis involving the Toba supervolcano eruption – some argue that this caused massive population loss, others are not convinced.

70,000 year old tools (http://www.sciencemag.org)

Whatever the effects of Toba, around 70,000 years ago a binary star system came very close to us – about 3/4 of a light year in fact. It consists of one red dwarf and one brown dwarf, both under 100 times the mass of Jupiter. It is called Scholz’s Star, or WISE J072003.20-084651.2 if you are feeling thoroughly pedantic. Now, 3/4 of a light year is still way outside Pluto’s orbit, but it is inside the region called the Oort Cloud, a loose collection of icy rocks and potential comets that accompany our sun and from time to time journey down into the inner solar system to become visible for a brief time.

Today, Scholz’s Star can only be viewed in the southern hemisphere, in the constellation Monoceros. It’s about 20 light years away and receding from us. Back then you’d have needed to look in the constellation Gemini (though the shapes would be a bit changed because of stellar movement).

So, would Scholz’s Star have been visible to our remote ancestors? Well, probably not in its normal state. Even at 3/4 of a light year, it would almost certainly be too dim to be seen with the naked eye. But many red dwarfs are what are called flare stars – their brightness flares up to many times the usual intensity on an irregular basis. And if a flare event happened while it was near to us, then it would have been vivid to our ancestors. Back then, the best time for viewing would have been in the autumn of the northern hemisphere, from the tropics northwards. So my remote European forebears might have stood and wondered at this – although the Europe of 70,000 years ago looked rather different to today’s map!
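For the curious, the standard distance-modulus formula, m = M + 5·log10(d in parsecs) − 5, lets us sanity-check that claim. The absolute magnitude below is my own rough figure for a dwarf of this kind, so treat the output as illustrative:

```python
# How bright would a dim red dwarf look at 3/4 of a light year?
import math

LY_PER_PARSEC = 3.2616
M_QUIESCENT = 19.4        # assumed absolute magnitude in its quiet state
NAKED_EYE_LIMIT = 6.0     # rough dark-sky naked-eye limit

def apparent_magnitude(absolute_mag, distance_ly):
    d_parsecs = distance_ly / LY_PER_PARSEC
    return absolute_mag + 5 * math.log10(d_parsecs) - 5

m = apparent_magnitude(M_QUIESCENT, 0.75)
print(f"quiescent: magnitude {m:.1f}")    # ~11.2: several magnitudes too faint
print(f"a flare must add ~{m - NAKED_EYE_LIMIT:.1f} magnitudes to be seen")
```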

And here of course is the story – what would these people have made of such a star? Suppose that it had entered our neighbourhood while in quiescent mode – invisible to their naked eyes just as much as ours – and then flared up while close. A new star would have appeared to them, and I wonder what they would have made of it. I don’t expect they had a great deal of time for abstract philosophy back then, but I’m willing to bet they told stories and sang songs – what part would Scholz’s Star have played in them?

Artist’s impression of Scholz’s Star (Astronomy.com)

An interlude – some space news

I thought that this week I would have a quick break from the Inklings, King Arthur, and such like, and report some space news which I came across a few days ago.

Polly Reads Alexa Skill Icon

But first, an update on my latest Alexa skill – Polly Reads. This showcases the ability of Alexa’s “big sister”, Polly, to read text in multiple voices and accents. So this skill is a bit like a podcast, letting you step through a series of readings from my novels. Half Sick of Shadows is there, of course, plus some readings from Far from the Spaceports and Timing. So far the skill is available only on the UK Alexa Skills site, but it’s currently going through the approval process for other sites world-wide. Update, Wednesday morning: I have just heard that it has gone live world-wide! Here is the Amazon US link.

Now the space news, and specifically about the asteroid Ceres (or dwarf planet if you prefer). Quite apart from its general interest, this news affects how we write about the outer solar system, so is particularly relevant to my near-future series.

Artist’s Impression of Dawn in orbit (NASA/JPL)

Many readers will know that the NASA Dawn spacecraft has been orbiting Ceres for some time now – nearly three years. This has provided us with some fascinating insights into the asteroid, especially the mountains on its surface, and the bright salt deposits found here and there. But the sheer length of time accumulated to date – something like 1500 orbits, at different elevations – means that we can now follow changes as they happen on the surface.

Now the very fact of change is something of a surprise. Not all that long ago, it was assumed that such small objects, made of rock and ice, had long since ceased to evolve. Any internal energy would have leaked away millennia ago, and the only reason for anything to happen would be if there was a collision with some other external object like a meteorite. We knew that the gas giant planets were active, with turbulent storms and hugely powerful prevailing winds, but the swarms of small rocky moons, asteroids, and dwarf planets were considered static.

Ceres – Juling Crater (NASA/JPL)

But what Dawn has shown us is that this is wrong. Repeated views of the same parts of the surface show how areas of exposed ice are constantly growing and shrinking, even over just a few months. This could be because new water vapour is oozing out of surface cracks and then freezing, or alternatively because some layer of dust is slowly settling, and so exposing ice which was previously hidden. At this stage, we can’t tell for sure which of those (or some third explanation) is true.

Composite view of Ahuna Mons (NASA/JPL)

The evidence now suggests that Ceres once had a liquid water ocean – most of this has frozen into a thick crust of ice, with visible mineral deposits scattered here and there.

Certainly Ceres – and presumably many other asteroids – is more active than we had presumed. Such members of our solar system remain chemically and geologically active, rather than being just inert lumps drifting passively around our sun. As and when we get out there to take a look, we’re going to find a great many more surprises. Meanwhile, we can always read about them…

How close are personable AI assistants?

A couple of days ago, a friend sent me an article talking about the present state of the art of chatbots – artificially intelligent assistants, if you like. The article focused on those few bots which are particularly convincing in terms of relationship.

Amazon Dot – Active

Now, as regular readers will know, I quite often talk about the Alexa skills I develop. In fact I have also experimented with chatbots, using both Microsoft’s and Amazon’s frameworks. Both the coding style, and the flow of information and logic, are very similar between these two types of coding, so there’s a natural crossover. Alexa, of course, is predominantly a voice platform, whereas chatbots are more diverse. You can speak to, and listen to, bots, but they are more often encountered as part of a web page or mobile app.

Now, beyond the day job and my coding hobby, I also write fiction about artificially intelligent entities – the personas of Far from the Spaceports and related stories (Timing and the in-progress The Liminal Zone). Although I present these as occurring in the “near-future”, by which I mean vaguely some time in the next century or two, they are substantially more capable than what we have now. There’s a lot of marketing hype about AI, but also a lot of genuine excitement and undoubted advancement.

Far from the Spaceports cover

So, what are the main areas where tomorrow’s personas vastly exceed today’s chatbots?

First and foremost, a wide-ranging awareness of the context of a conversation and a relationship. Alexa skills and chatbots retain a modest amount of information during use, called session attributes, or context, depending on the platform you are using. So if the skill or bot doesn’t track through a series of questions, and remember your previous answers, that’s disappointing. The developer’s decision is not whether it is possible to remember, but rather how much to remember, and how to make appropriate use of it later on.

Equally, some things can be remembered from one session to the next. Previous interactions and choices can be carried over into the next time. Again, the questions are not how, but what should be preserved like this.
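In code, the two kinds of memory look something like this. It’s a sketch using the Python ASK SDK; the intent and slot names are made up, and the persistent side assumes you’ve configured a persistence adapter (DynamoDB, say) on the skill builder:

```python
# Sketch: session attributes last one conversation; persistent attributes survive
# between sessions (requires a persistence adapter configured on the SkillBuilder).
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.utils import is_intent_name

class FavouriteColourHandler(AbstractRequestHandler):
    """Stores a slot value in both session and persistent attributes."""

    def can_handle(self, handler_input):
        return is_intent_name("FavouriteColourIntent")(handler_input)

    def handle(self, handler_input):
        attrs = handler_input.attributes_manager
        slots = handler_input.request_envelope.request.intent.slots
        colour = slots["colour"].value

        # Session attributes: remembered for the rest of this conversation.
        attrs.session_attributes["colour"] = colour

        # Persistent attributes: still there the next time the skill opens.
        persistent = attrs.persistent_attributes
        persistent["colour"] = colour
        attrs.save_persistent_attributes()

        speech = f"Got it, your favourite colour is {colour}."
        return handler_input.response_builder.speak(speech).ask(
            "Anything else?").response
```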

But… the volume of data you can carry over is limited – it’s fine for everyday purposes, but not when you get to wanting an intelligent and sympathetic individual to converse with. If this other entity is going to be persuasive as a companion, it needs to retain knowledge of a lot more than just some past decisions.

A suitable cartoon (from xkcd.com)

Secondly, a real conversational partner does other things with their time outside of the chat specifically between the two of you. They might tell you about places, people, or things they had seen, or ideas that had occurred to them in the meantime. But currently, almost all skills and chatbots stay entirely dormant until you invoke them. In between times they do essentially nothing. I’m not counting cases where the same skill is activated by different people – “your” instance, meaning the one that holds any record of your personal interactions, simply waits for you to get involved again. The lack of any sense of independent life is a real drawback. Sure, Alexa can give you a “fact of the day” when you say hello, but we all know that this is just fished out of an internet list somewhere, and does not represent actual independent existence and experience.
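The mechanics of an “in-between” life wouldn’t actually be hard to fake – a scheduled background job could log “experiences” for the next conversation to draw on. The hard part is making them genuine rather than canned, as this entirely hypothetical sketch makes obvious:

```python
# Thought-experiment only: a background job (cron, scheduled Lambda, etc.)
# records things the persona "noticed"; the next session can refer back to them.
import json, random, time
from pathlib import Path

MEMORY = Path("persona_memory.json")

def background_tick():
    """Runs on a schedule, with no user present: store an 'experience'."""
    experiences = json.loads(MEMORY.read_text()) if MEMORY.exists() else []
    experiences.append({
        "when": time.time(),
        "note": random.choice([            # canned notes: the giveaway
            "read an article about Ceres",
            "noticed the server room was unusually warm",
            "finished indexing the photo archive",
        ]),
    })
    MEMORY.write_text(json.dumps(experiences))

def greeting():
    """On the next session, surface something from the in-between time."""
    experiences = json.loads(MEMORY.read_text()) if MEMORY.exists() else []
    if experiences:
        return f"While you were away, I {experiences[-1]['note']}."
    return "Hello again."

background_tick()
print(greeting())
```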

Finally (for today – there are lots of other things that might be said) today’s skills and bots have a narrow focus. They can typically assist with just one task, or a cluster of closely related tasks. Indeed, at the current state of the art this is almost essential. The algorithms that seek to understand speech can only cope with a limited and quite structured set of options. If you write some code that tries to offer too wide a spectrum of choice, the chances are that the number of misunderstandings gets unacceptably high. To give the impression of talking with a real individual, the success rate needs to be pretty high, and the entity needs to have some way of clarifying and homing in on what it was that you really wanted.
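Here’s a toy version of that clarify-and-home-in behaviour – difflib is just a stand-in for the trained language models a real system uses:

```python
# Toy intent matching: if no intent is a clear winner, ask rather than guess.
from difflib import SequenceMatcher

INTENTS = {
    "PlayMusicIntent": "play some music",
    "WeatherIntent": "what is the weather",
    "NewsIntent": "read me the news",
}

def best_intent(utterance, threshold=0.6):
    scores = {
        name: SequenceMatcher(None, utterance.lower(), sample).ratio()
        for name, sample in INTENTS.items()
    }
    name, score = max(scores.items(), key=lambda kv: kv[1])
    if score < threshold:
        return None, "Sorry, did you want music, weather, or news?"
    return name, None

print(best_intent("what's the weather like"))   # confident match
print(best_intent("do the thing"))              # too vague: clarify instead
```

The wider the spectrum of choices, the more utterances fall below any sensible threshold – which is exactly why today’s skills keep their focus narrow.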

Now, I’m quite optimistic about all this. The capabilities of AI systems have grown dramatically over the last few years, especially in the areas of voice comprehension and production. My own feeling is that some of the above problems are simply software ones, which will get solved with a bit more experience and effort. But others will probably need a creative rethink. I don’t imagine that I will be talking to a persona at Slate’s level in my lifetime, but I do think that I will be having much more interesting conversations with one before too long!

Future Possibilities 3

Today is the third and last post based loosely on upcoming techie stuff I learned about at the recent Microsoft Future Decoded conference here in London. It’s another speculative one this time, focusing on quantum computing, which according to estimates by speakers might be about five years away. But a lot has to happen if that five year figure is at all accurate.

Quantum device – schematic (Microsoft.com)

It’s a very technical area, both as regards the underlying maths and the physical implementation, and I don’t intend going far into that. Many groups around the world, both in industry and academia, are actively working on this, hoping to crack both theory and practice. So what’s the deal? Why all the effort?

Conventional computers, of the kind we are familiar with, operate essentially in a linear sequential way. Now, there are ways to fudge this and give a semblance of parallel working. Even on a domestic machine you can run lots of programs at the same time, but at the level of a single computing core you are still performing one thing at a time, and some clever scheduling shares resources between several in-progress tasks. A bigger computer will wire up multiple processors and have vastly more elaborate scheduling, to make the most efficient use of what it’s got. But at the end of the day, present-day logic circuits do one thing at a time.

This puts some tasks out of reach. For example, the security layer that protects your online banking transactions (and such like) relies on a mathematical problem – essentially, factoring enormous numbers – which takes an extremely long time to solve. In theory it could be done, but in practice it is impenetrable. Perhaps more interestingly, there are problems in all the sciences which are intractable not only with present-day systems, but also with any credible speed advances using present-day architecture. It actually doesn’t take much complexity to render the task impossible.
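You can feel that exponential wall even in a toy experiment – brute-force factoring in pure Python, with numbers that are tiny compared to the hundreds of digits in real keys:

```python
# Trial division: each extra digit in the primes multiplies the work.
import time

def smallest_factor(n):
    """Brute-force search for the smallest prime factor of odd n."""
    i, steps = 3, 0
    while i * i <= n:
        steps += 1
        if n % i == 0:
            return i, steps
        i += 2
    return n, steps

for p, q in [(10007, 10009), (1000003, 1000033), (10000019, 10000079)]:
    start = time.perf_counter()
    _, steps = smallest_factor(p * q)
    elapsed = time.perf_counter() - start
    print(f"{p * q}: {steps} divisions, {elapsed:.3f} s")
```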

Probability models for a water molecule with different energy levels – the atoms are not at fixed places but smeared out over a wider volume (Stoneybrook University)

Quantum computing offers a way to actually achieve parallel processing on a massive scale. It relies not on binary true/false logic, but on the probability models which are the foundation of the quantum world. It is as though many different variations of a problem all run simultaneously, each (as it were) in their own little world. It’s a perfect solution for all kinds of problems where you would like to find an optimal solution to a complex situation. So to break our online security systems, a quantum computer would simultaneously pursue many different cracking routes to break in. By doing that, the task becomes solvable. And yes, that is going to need a rethink of how we do internet security. But for today let’s look at a couple of more interesting problems.
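(A quick aside before those examples, for the programmers: here’s a tiny numpy sketch of where the parallelism comes from. The state of n qubits is a vector of 2^n amplitudes, and one Hadamard gate per qubit spreads the register across every classical value at once.)

```python
# n qubits = 2**n amplitudes; Hadamards create an equal superposition.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)    # the Hadamard gate

def uniform_superposition(n_qubits):
    gate = H
    for _ in range(n_qubits - 1):
        gate = np.kron(gate, H)                 # one Hadamard per qubit
    state = np.zeros(2 ** n_qubits)
    state[0] = 1.0                              # start in |00...0>
    return gate @ state

print(uniform_superposition(3))    # eight equal amplitudes of 1/sqrt(8)
# Ten qubits would give 1024 amplitudes, twenty over a million, and so on.
```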

Root nodules on a broad bean (Wikipedia)

First, there’s one from farming, or biochemistry if you prefer. To feed the world, we need lots of nitrogen to make fertiliser. The chemical process to do this commercially is energy-intensive, and nearly 2% of the world’s power goes on this one thing. But… there is a family of plants, the Leguminosae, which fix nitrogen from the air into the soil using nothing more than sunlight and the organic molecules in their roots. They are very varied, from peas and beans down to fodder crops like clover, and up to quite sizeable trees. We don’t yet know exactly how this nitrogen fixing works. We think we know the key biochemical involved, but it’s complicated… too complicated for our best supercomputers to analyse. A quantum computer might solve the problem in short order.

Climate science is another case. There are several computer programs which aim to model what is going on globally. They are fearfully complicated, aiming to include as wide a range as possible of contributing factors, together with their mutual interaction. Once again, the problem is too complicated to solve in a realistic time. So, naturally, each group working on this makes what they regard as appropriate simplifications and approximations. A quantum computer would certainly allow for more factors to be integrated, and would also allow more exploration of the consequences of one action rather than another. We could experiment with what-if models, and find effective ways to deploy limited resources.

Bonding measurement wires to a quantum device (Microsoft.com)

So that’s a little of what might be achieved with a quantum computer. To finish this blog post off, what impact might one have on science fiction, and my own writing in particular? Well, unlike the previous two weeks, my answer here would be “not very much, I think”. Most writers, including myself, simply assume that future computers will be more powerful, more capable, than those of today. The exact technical architecture is of less literary importance! Right now it looks as if a quantum computer will only work at extremely low temperatures, not far above absolute zero. So you are talking about sizeable, static installations. If we manage to find or make materials that would let them run at room temperature, that could change, but that’s way more than five years away.

Far from the Spaceports cover

So in my stories, Slate would not be a quantum computer, just a regular one running some very sophisticated software. Now, the main information hub down in London, Khufu, could possibly be such a thing – certainly he’s a better candidate, sitting statically in one place, processing and analysing vast quantities of data, making connections between facts that aren’t at all obvious on the surface. But as regards the story, it hardly matters whether he is one or the other.

So, interested as I am in the development of a quantum computer, I don’t think it will feature in an important way in the world of Far from the Spaceports!

That’s it for today, and indeed for this little series… until next year.

Future Possibilities 2

The second part of this quick review of the Future Decoded conference looks at things a little further ahead. This was also going to be the final part, but as there’s a lot of cool stuff to chat about, I’ve decided to add part 3…

Prediction of data demand vs supply (IDC.org)

So here’s a problem that is a minor one at the moment, but with the potential to grow into a major one. In short, the world has a memory shortage! Already we are generating more bits and bytes that we would like to store than we have capacity for. Right now it’s an inconvenience rather than a crisis, but year by year the gap between wish and actuality is growing. If growth in both these areas continues as at present, within a decade we will only be able to store about a third of what we want. A decade or so later that will drop to under one percent.

Think about it on the individual level. You take a short video clip while on holiday. It goes onto your phone. At some stage you back it up in Dropbox, or iCloud, or whatever your favourite provider is. Maybe you keep another copy on your local hard drive. Then you post it to Facebook and Google+. You send it to two different WhatsApp groups and email it to a friend. Maybe you’re really pleased with it and make a YouTube version. You now have ten copies of your 50 MB video… not to mention all the thumbnail images, cached and backup copies saved along the way by these various providers, which you’re almost certainly not aware of and have little control over. Your ten seconds of holiday fun has easily used 1 GB of the world’s supply of memory! For comparison, the entire Bible would fit in about 3 MB of plain uncompressed text, and taking a wild guess, you would use well under that 1 GB value to store every last word of the world’s sacred literature. And a lot of us are generating holiday videos these days! Then lots of cyclists wear helmet cameras, cars have dash cams… and so on. We are generating prodigious amounts of imagery.

So one solution is that collectively we get more fussy about cleaning things up. You find yourself deleting the phone version when you’ve transferred it to Dropbox. You decide that a lower resolution copy will do for WhatsApp. Your email provider tells you that attachments will be archived or disposed of according to some schedule. Your blog allows you to reference a YouTube video in a link, rather than uploading yet another copy. Some clever people somewhere work out a better compression algorithm. But… even all these workarounds together will still not be enough to make up for the shortfall, if the projections are right.

Amazon Dot – Active

Holiday snaps aside, a great deal of this vast growth in memory usage is because of emerging trends in computing. Face and voice recognition, image analysis, and other AI techniques which are now becoming mainstream use a great deal of stored information to train the models ready for use. Regular blog readers will know that I am particularly keen on voice assistants like Alexa. My own Alexa programming doesn’t use much memory, as the skills are quite modest and tolerably well written. But each and every time I make an Alexa request, that call goes off somewhere into the cloud, to convert what I said (the “utterance”) into what I meant (the “intent”). Alexa is pretty good at getting it right, which means that there is a huge amount of voice training data sitting out there being used to build the interpretive models. Exactly the same is true for Siri, Cortana, Google Home, and anyone else’s equivalent. Microsoft call this training area a “data lake”. What’s more, there’s not just one of them, but several, at different global locations to reduce signal lag.

Far from the Spaceports cover

Hopefully that’s given some idea of the problem. Before looking at the idea for a solution that was presented the other day, let’s think what that means for fiction writing.  My AI persona Slate happily flits off to the asteroid belt with her human investigative partner Mitnash in Far from the Spaceports. In Timing, they drop back to Mars, and in the forthcoming Authentication Key they will get out to Saturn, but for now let’s stick to the asteroids. That means they’re anywhere from 15 to 30 minutes away from Earth by signal. Now, Slate does from time to time request specific information from the main hub Khufu in Earth, but necessarily this can only be for some detail not locally available. Slate can’t send a request down to London every time Mit says something, just so she can understand it. Trying to chat with up to an hour lag between statements would be seriously frustrating. So she has to carry with her all of the necessary data and software models that she needs for voice comprehension, speech, and defence against hacking, not to mention analysis, reasoning, and the capacity to feel emotion. Presupposing she has the equivalent of a data lake, she has to carry it with her. And that is simply not feasible with today’s technology.
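Those lag figures are just light-travel arithmetic – one astronomical unit is about 8.3 light-minutes:

```python
# One-way signal lag between Earth and the asteroid belt.
MINUTES_PER_AU = 8.317    # light travel time for one astronomical unit

# The belt is roughly 2.1-3.3 AU from the Sun; with Earth at 1 AU, the
# separation swings between about 1.1 and 4.3 AU as the orbits progress.
for au in (1.1, 2.2, 3.3, 4.3):
    print(f"{au:.1f} AU -> about {au * MINUTES_PER_AU:.0f} minutes one way")
# A question and its answer take double that - hence Slate's onboard brain.
```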

DNA Schematic (Wikipedia)

So the research described the other day is exploring the idea of using DNA as the storage medium, rather than a piece of specially constructed silicon. DNA is very efficient at encoding data – after all, a sperm and egg together have all the necessary information to build a person. The problems are how to translate your original data source into the various chemical building blocks along a DNA helix, and conversely how to read it out again at some future time. There’s a publicly available technical paper describing all this. We were shown a short video which had been encoded, stored, and decoded using just this method. But it is fearfully expensive right now, so don’t expect to see a DNA external drive on your computer anytime soon!
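To get a feel for the principle – though emphatically not for the real scheme, which adds error correction and avoids long runs of the same base – here’s a toy two-bits-per-base codec:

```python
# Toy DNA storage: two bits per base, so any byte stream maps to A/C/G/T and back.
TO_BASE = {0b00: "A", 0b01: "C", 0b10: "G", 0b11: "T"}
FROM_BASE = {base: bits for bits, base in TO_BASE.items()}

def encode(data: bytes) -> str:
    bases = []
    for byte in data:
        for shift in (6, 4, 2, 0):                 # four 2-bit chunks per byte
            bases.append(TO_BASE[(byte >> shift) & 0b11])
    return "".join(bases)

def decode(strand: str) -> bytes:
    out = bytearray()
    for i in range(0, len(strand), 4):             # four bases per byte
        byte = 0
        for base in strand[i:i + 4]:
            byte = (byte << 2) | FROM_BASE[base]
        out.append(byte)
    return bytes(out)

strand = encode(b"Hi")
print(strand)                   # CAGACGGC: four bases per byte
assert decode(strand) == b"Hi"
```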

Microsoft data centre (ZDNet/Microsoft)

The benefits purely in terms of physical space are colossal. The largest British data centre covers the equivalent of about eight soccer grounds (or four cricket pitches), using today’s technology. The largest global one is getting on for ten times that size. With DNA encoding, that all shrinks down to about a matchbox. For storytelling purposes that’s fantastic – Slate really is off to the asteroids and beyond, along with her data lake in plenty of local storage, which now takes up less room and weight than a spare set of underwear for Mit. Current data centres also use about the same amount of power as a small town (though because of judicious choice of technology they are much more ecologically efficient), but we’ll cross the power bridge another time.

However, I suspect that many of us might see ethical issues here. The presenter took great care to tell us that the DNA used was not from anything living, but had been manufactured from scratch for the purpose. No creatures had been harmed in the making of this video. But inevitably you wonder if all researchers would take this stance. Might a future scenario play out that some people are forced to sell – or perhaps donate – their bodies for storage? Putting what might seem a more positive spin on things, wouldn’t it seem convenient to have all your personal data stored, quite literally, on your person, and never entrusted to an external device at all? Right now we are a very long way from either of these possibilities, but it might be good to think about the moral dimensions ahead of time.

Either way, the starting problem – shortage of memory – is a real one, and collectively we need to find some kind of solution…

And for the curious, this is the video which was stored on and retrieved from DNA – regardless of storage method, it’s a fun and clever piece of filming (https://youtu.be/qybUFnY7Y8w)…


Future Possibilities 1

This is the first of two posts in which I talk about some of the major things I took away from the recent Future Decoded conference here in London. Each year they try to pick out some tech trends which they reckon will be important in the next few years.

Disability statistics by age and gender (Eurostat)

This week’s theme is to do with stuff which is available now, or in the immediate future. And the first topic is assisting users. Approximately one person in six in the world is considered disabled in some way, whether from birth or through accident or illness (according to a recent WHO report). That’s about a billion people in total. Technology ought to be able to assist, but has often failed to do so. A variety of assistance technologies have been around for a while – the years-old alt text in images was a step in that direction – but Windows 10 has a whole raft of such support.

Now, I am well aware that lots of people don’t like Win 10 as an operating system, but this showed it at its best. When you get to see a person blind from birth able to use social media, and a lad with cerebral palsy pursuing a career as an author, it doesn’t need a lot of sales hype. Or a programmer who lost the use of all four limbs in an accident, writing lines of code live in the presentation using a mixture of Cortana’s voice control plus an on-screen keyboard triggered by eye movement. Not to mention that the face recognition login feature provided his first opportunity for privacy since the accident, as no one else had to know his password.

But the trend goes beyond disabilities of a permanent kind – most of us have what you might call situational limitations at various times. Maybe we’re temporarily bed-ridden through illness. Maybe we’re simply one-handed through carrying an infant around. Whatever the specific reason, all the big tech companies are looking for ways to make such situations more easily managed.

Another big trend was augmented reality using 3D headsets. I suppose most of us think of these as gaming gimmicks, providing another way to escape the demands of life. But going round the exhibition pitches – most by third-party developers rather than Microsoft themselves – stall after stall was showing off the use of headsets in a working context.

Medical training (Microsoft.com and Case Western Reserve University)

Training was one of the big areas, with trainers and students blending reality and virtual image in order to learn skills or be immersed in key situations. We’ve been familiar with the idea of pilots training on flight simulators for years – now that same principle is being applied to medical students and emergency response teams, all the way through to mechanical engineers and carpet-layers. Nobody doubts that a real experience has a visceral quality lacking from what you get from a headset, but it has to be an advantage that trainees have had some exposure to rare but important cases.

Assembly line with HoloLens (Microsoft.com)

This also applies to on-the-job work. A more experienced worker can “drop in” to supervise or enhance the work of a junior one without both of them being physically present. Or a human worker can direct a mechanical tool in hostile environments or disaster zones. Or possible solutions can be tried out without having to make up physical prototypes. You can imagine a kind of super-Skype meeting, with mixed real and virtual attendance. Or a better way to understand a set of data than just dumping it into a spreadsheet – why not treat it as a plot of land you can wander round and explore?

Cover, The Naked Sun (Goodreads)

Now most of these have been explored in fiction several times, with both their positive and negative connotations. And I’m sure that a few of these will turn out to be things of the moment which don’t make it into everyday use. And right now the dinky headsets which make it all happen are too expensive to find in every house, or on everyone’s desk at work – unless you have a little over £2500 lying around doing nothing. But a lot of organisations are betting that there’ll be good use for the technology, and I guess the next five years will show us whether they’re right or wrong. Will these things stay as science fiction, or become part of the science of life?

So that’s this week – developments that are near-term and don’t represent a huge change in what we have right now. Next time I’ll be looking at things further ahead, and more speculative…


Left behind by events, part 3

This is the third and final part of Left Behind by Events, in which I take a look at my own futuristic writing and try to guess which bits I will have got utterly wrong when somebody looks back at it from a future perspective! But it’s also the first of a few blogs in which I will talk a bit about some impressions of the technical near future that I picked up at the annual Microsoft Future Decoded conference the other day.

Amazon Dot – Active

So I am tolerably confident about the development of AI. We don’t yet have what I call “personas” with autonomy, emotion, and gender. I’m not counting the pseudo-gender produced by selecting a male or female voice, though actually even that simple choice persuades many people – how many people are pedantic enough to call Alexa “it” rather than “she”? But at the rate of advance of the relevant technologies, I’m confident that we will get there.

I’m equally confident, being an optimistic guy, that we’ll develop better, faster space travel, and have settlements of various sizes on asteroids and moons. The ion drive I posit is one definite possibility: the Dawn asteroid probe already uses this system, though at a far smaller rate of acceleration than what I’m looking for. The Hermes, which features in both the book and film The Martian, also employs this drive type. If some other technology becomes available, the stories would be unchanged – the crucial point is that intra-solar-system travel takes weeks rather than months.

The Sting (Pinterest)

I am totally convinced that financial crime will take place! One of the ways we try to tackle it on Earth is to share information faster, so that criminals cannot take advantage of lags in the system to insert falsehoods. But out in the solar system, there’s nothing we can do about time lags. Mars is between 4 and 24 minutes from Earth in terms of a radio or light signal, and there’s nothing we can do about that unless somebody invents a faster-than-light signal. And that’s not in range of my future vision. So the possibility of “information friction” will increase as we spread our occupancy wider. Anywhere that there are delays in the system, there is the possibility of fraud… as used to great effect in The Sting.

Something I have not factored in at all is biological advance. I don’t have cyborgs, or genetically enhanced people, or such things. But I suspect that the likelihood is that such developments will occur well within the time horizon of Far from the Spaceports. Biology isn’t my strong suit, so I haven’t written about this. There’s a background assumption that illness isn’t a serious problem in this future world, but I haven’t explored how that might happen, or what other kinds of medical change might go hand-in-hand with it. So this is almost certainly going to be a miss on my part.

Moving on to points of contact with the conference, there is the question of my personas’ autonomy. Right now, all of our current generation of intelligent assistants – Alexa, Siri, Cortana, Google Home and so on – rely utterly on a reliable internet connection and a whole raft of cloud-based software to function. No internet or no cloud connection = no Alexa.

This is clearly inadequate for a persona like Slate heading out to the asteroid belt! Mitnash is obviously not going to wait patiently for half an hour or so between utterances in a conversation. For this to work, the software infrastructure that imparts intelligence to a persona has to travel along with it. This need is already emerging – and being addressed – right now. I guess most of us are familiar with the idea of the Cloud. Your Gmail account, your Dropbox files, your iCloud pictures all exist somewhere out there… but you neither know nor care where exactly they live. All you care is that you can get to them when you want.

A male snow leopard (Wikipedia)

But with the emerging “internet of things” that is having to change. Let’s say that a wildlife programme puts a trail camera up in the mountains somewhere in order to get pictures of a snow leopard. They want to leave it there for maybe four months and then collect it again. It’s well out of wifi range. In those four months it will capture say 10,000 short videos, almost all of which will not be of snow leopards. There will be mountain goats, foxes, mice, leaves, moving splashes of sunshine, flurries of rain or snow… maybe the odd yeti. But the memory stick will only hold say 500 video clips. So what do you do? Throw away everything that arrives after it gets full? Overwrite the oldest clips when you need to make space? Arrange for a dangerous and disruptive resupply trip by your mountaineer crew?

Or… and this is the choice being pursued at the moment… put some intelligence in your camera to try to weed out non-snow-leopard pictures. Your camera is no longer a dumb picture-taking device, but has some intelligence. It also makes your life easier when you have recovered the camera and are trying to scan through the contents. Even going through my Grasmere badger-cam vids every couple of weeks involves a lot of deleting scenes of waving leaves!

So this idea is now being called the Cloud Edge. You put some processing power and cleverness out in your peripheral devices, and only move what you really need into the Cloud itself. Some of the time, your little remote widgets can make up their own minds what to do. You can, so I am told, buy a USB stick with a trainable neural network on it for sifting images (or other similar tasks) for well under £100. Now, this is a far cry from an independently autonomous persona able to zip off to the asteroid belt, but it shows that the necessary technologies are already being tackled.
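The triage logic itself is simple enough to sketch – all the cleverness lives in the model, for which a random score stands in here (hypothetical numbers throughout):

```python
# Edge triage sketch: score each clip on the device, only keep likely leopards.
import random

KEEP_THRESHOLD = 0.8      # confidence needed before a clip earns storage
MAX_CLIPS = 500           # the memory stick only holds so many clips

def classify(clip_id: int) -> float:
    """Stand-in for an on-device model returning P(snow leopard)."""
    return random.random()        # hypothetical scores, for the sketch only

kept: list[tuple[float, int]] = []
for clip_id in range(10_000):                 # four months of camera triggers
    score = classify(clip_id)
    if score < KEEP_THRESHOLD:
        continue                              # waving leaves: delete at once
    kept.append((score, clip_id))
    kept.sort()                               # weakest kept clip first
    if len(kept) > MAX_CLIPS:
        kept.pop(0)                           # storage full: weakest one goes

print(f"kept {len(kept)} of 10000 clips")     # the cloud only ever sees these
```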

Artist’s Impression of Dawn in orbit (NASA/JPL)

I’ve been deliberately vague about how far into the future Far from the Spaceports, Timing, and the sequels in preparation are set. If I had to pick a time I’d say somewhere around the one or two century mark. Although science fact notoriously catches up with science fiction faster than authors imagine, I don’t expect to see much of this happening in my lifetime (which is a pity, really, as I’d love to converse with a real Slate). I’d like to think that humanity from one part of the globe or another would have settled bases on other planets, moons, or asteroids while I’m still here to see them, and as regular readers will know, I am very excited about where AI is going. But a century to reach the level of maturity of off-Earth habitats that I propose seems, if anything, over-optimistic.

That’s it for today – over the next few weeks I’ll be talking about other fun things I learned…

News and updates

This week’s blog is a collection of bits and pieces.

Half Sick of Shadows cover

First, a reminder: over at the Before the Second Sleep blog, alongside the review of Half Sick of Shadows, there’s a giveaway copy to be won – just leave a comment to be in with a chance in the draw, which will take place sometime in November.

Secondly, for a bit of fun, here is the link to the Desert Island Books chat which appeared on Prue Batten’s blog. What ten books would you take if you were going to be stranded on a desert island for a period of time? Well, you can find out my choices at that link – it’s a right mixture of fiction and non-fiction. And I got to pick my very own desert island, and with a minor stretch of credulity I selected Bryher, one of the Isles of Scilly. There are a lot worse places that you could get stranded…

The north end of Bryher

What about space news?

Artist’s impression, Dawn at Ceres (NASA/JPL)

Well, there have been recent updates to two of my favourite NASA missions. The future of Dawn – which originally studied Vesta, and has now been orbiting the asteroid Ceres for some time – has been in question. Basically there were two choices: leave the craft in orbit around Ceres until the onboard fuel supply runs out, or move on to a third destination and learn something there. Either way, the plan for the end of life has always been to avoid accidentally contaminating Ceres or anywhere else with debris. Well, the decision was finally made to stay at Ceres, carry out some manoeuvres to increase the scientific and visual return over the next few months, and then shift to a parking orbit late next year. The low point of the orbit should be only about 120 miles from the surface, half the height of the previous approach.

New Horizons badge (NASA)

And finally, New Horizons, which provided great pictures of Pluto and Charon a couple of years ago, has been woken from its standby mode in order to carry out early preparations for a planned encounter in the Kuiper Belt. The target this time goes by the catchy name of 2014 MU69. Pluto is on the inside edge of the Kuiper Belt, whereas 2014 MU69 is in the middle. But although there are a fair number of bits of rock scattered in this disk-like region, it is still vastly empty, and the chances of New Horizons colliding with a previously unknown body are very slim. If all goes according to plan, the craft will navigate rather closer to 2014 MU69 than it did to Pluto – a necessary action, as the light levels are considerably lower. Since we know very little about the body, this does present a level of risk, but one which is considered worth taking. There are a few course corrections planned for late this year, then it’s back into sleep mode for a few months until the middle of next year. Flyby should happen on January 1st, 2019. And after that? More targets are being explored, and the power supply and onboard systems are reckoned to have another twenty years of life, so we could be in for more treats…