Alexa and William Wordsworth

Amazon Dot – Active

Well, a couple of weeks have passed and it’s time to get back to blogging. And for this week, here is the Alexa post that I mentioned a little while ago, back in December last year.

First, to anticipate a later part of this post, here is an extract of Alexa reciting the first few lines of Wordsworth’s Daffodils…

It has been a busy time for Alexa generally – Amazon have extended sales of their various hardware gizmos to many more countries. That’s all well and good for everyone: the bonus for us developers is that they have also extended the range of countries into which custom skills can be deployed. Sometimes with these expansions Amazon helpfully does a direct port to the new locale, and other times it’s up to the developer to do this by hand. So when skills appeared in India, everything I had done to that date was copied across automatically, without me having to duplicate the code myself. From Monday Jan 8th the process of generating default versions for Australia and New Zealand will begin, and Canada is also now in view. Of course, that still leaves plenty of future catch-up work: firstly making sure that the transfer process worked OK, and secondly filling in the gaps for combinations of locale and skill which didn’t get done. The full list of languages and countries to which skills can be deployed is now:

  • English (UK)
  • English (US)
  • English (Canada)
  • English (Australia / New Zealand)
  • English (India)
  • German
  • Japanese
The world, Robinson projection (Wiki)

Based on progress so far, Amazon will simply continue extending this to other combinations over time. I suspect that French Canadian will be quite high on their list, and probably other European languages – for example Spanish would give a very good international reach into Latin America. Hindi would be a good choice, and Chinese too, presupposing that Amazon start to market Alexa devices there. Currently an existing Echo or Dot will work in China if hooked up to a network, but so far as I know the gadgets are not on sale there – instead several Chinese firms have begun producing their own equivalents. Of course, there’s nothing to stop someone in another country accessing the skill in one or other of the above languages – for example a Dutch person might consider using either the English (UK) or German option.

To date I have not attempted porting any skills to German or Japanese, essentially through lack of the necessary language skills. But all of the various English variants are comparatively easy to adapt to, with an interesting twist that I’ll get to later.

Wordsworth Facts Web Icon

So my latest skill out of the stable, so to speak, is Wordsworth Facts. It has two parts – a small list of facts about the life of William Wordsworth, his family, and some of his colleagues, and also some narrated portions from his poems. Both sections will increase over time as I add to them. It was interesting, and a measure of how text-to-speech technology is improving all the time, to see how few tweaks were necessary to get Alexa to read these extracts tolerably well. Reading poetry is harder than reading prose, and I was expecting difficulties. The choice of Wordsworth helped here, as his poetry is very like prose (indeed, he was criticised for this at the time). As things turned out, in this case some additional punctuation was needed to get these sounding reasonably good, but that was all. Unlike some of the previous reading portions I have done, there was no need to tinker with phonetic alphabets to get words sounding right. It certainly helps not to have ancient Egyptian, Canaanite, or futuristic names in the mix!

And this brings me to one of the twists in the internationalisation of skills. The same letter can sound rather different in different versions of English when used in a word – you say tomehto and I say tomarto, and all that. And I necessarily have to dive into custom pronunciations of proper names of characters and such like – Damariel gets a bit messed up, and even Mitnash, which I had assumed would be easily interpreted, gets mangled. So part of the checking process will be to make sure that where I have used a custom phonetic version of someone’s name, it comes out right.
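For the technically curious, this is roughly what those custom pronunciations look like under the hood: a skill can wrap its speech in SSML and pin a name down with a phoneme tag. Below is a minimal sketch in Python; the IPA string for Damariel is an illustrative guess rather than the exact value I settled on.

```python
# Minimal sketch of pinning down the pronunciation of an invented name using an
# SSML phoneme tag in an Alexa skill response. The IPA string below is an
# illustrative guess, not the exact value used in the live skill.

def build_speech_response(ssml_text):
    """Wrap SSML in the outputSpeech structure a custom skill returns."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "SSML", "ssml": ssml_text},
            "shouldEndSession": True,
        },
    }

ssml = (
    "<speak>"
    "The character "
    '<phoneme alphabet="ipa" ph="dəˈmɑːriəl">Damariel</phoneme>'
    " appears in several of the extracts."
    "</speak>"
)

response = build_speech_response(ssml)
```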

Wordsworth Facts is live across all of the English variants listed above – just search in your local Amazon store in the Alexa Skills section by name (or, to see all my skills to date, search for “DataScenes Development”, which is the identity I use for coding purposes). If you’re looking at the UK Alexa Skills store, this is the link.

The next skill I am planning to go live with, probably in the next couple of weeks, is Polly Reads. Those who read this blog regularly – or indeed the Before The Second Sleep blog (see this link, or this, or this) – may well think of Polly as Alexa’s big sister. Polly can use multiple different voices and languages rather than a fixed one, though Polly is focused on generating spoken speech rather than interpreting what a user might be saying (the module in Amazon’s suite that does the comprehension bit is called Lex). So Polly Reads is a compendium of all the various book readings I have set up using Polly, onto which I’ll add a few of my own author readings where I haven’t yet set Polly up with the necessary text and voice combinations. The skill is kind of like a playlist, or maybe a podcast, and naturally my plan is to extend the set of readings over time. More news of that will be posted before the end of the month, all being well.

Kayak logo (from https://www.kayakonline.info/)

The process exposed a couple of areas where I would really like Amazon to enhance the audio capabilities of Alexa. The first was when using the built-in ability to access music (i.e. not my own custom skill). Compared to a lot of Alexa interaction, this feels very clunky – there is no easy way to narrow in on a particular band, for example – “The band is Dutch and they play prog rock but I can’t remember the name” could credibly come up with Kayak, but doesn’t. There’s no search facility built in to the music service. And you have to get the track name pretty much dead on – “Alexa, Play The Last Farewell by Billy Boyd” gets you nowhere except an “I can’t find that” message, since the track is actually called “The Last Goodbye”. A bit more contextual searching would be good. Basically, this boils down to a shortfall in what we technically call context, and what in a person would be short-term memory – the coder of a skill has to decide exactly what snippets of information to remember from the interaction so far, and anything which is not explicitly remembered will be discarded.
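To show what that explicit remembering looks like from the coding side, here is a minimal sketch of a skill handler stashing one snippet of context (the band under discussion) in the session attributes so that it survives to the next utterance. The handler, slot and attribute names are invented for the example rather than taken from any real music skill.

```python
# Minimal sketch of keeping one snippet of context ("the band we were talking
# about") alive between turns by writing it into the session attributes that
# Alexa hands back with the next request. Handler, slot and attribute names
# are hypothetical.

def handle_band_mentioned(event):
    band = event["request"]["intent"]["slots"]["band"]["value"]
    return {
        "version": "1.0",
        # Whatever goes in here comes back as event["session"]["attributes"]
        # on the following request; anything not stored is simply discarded.
        "sessionAttributes": {"last_band": band},
        "response": {
            "outputSpeech": {
                "type": "PlainText",
                "text": f"OK, we are talking about {band}.",
            },
            "shouldEndSession": False,
        },
    }

def handle_follow_up(event):
    remembered = event.get("session", {}).get("attributes", {}).get("last_band")
    text = (f"Still thinking about {remembered}." if remembered
            else "Sorry, I have lost track of which band you meant.")
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": False,
        },
    }
```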

That was a user-moan. The second is more of a developer-moan. Playing audio tracks of more than a few seconds – like a book extract, or a decent length piece of music – involves transferring control from your own skill to Alexa, who then manages the sequencing of tracks and all that. That’s all very well, and I understand the purpose behind it, but it also means that you have lost some control over the presentation of the skill as the various tracks play. For example, on the new Echo Show (the one with the screen) you cannot interleave the tracks with relevant pictures – like a book cover, for example. Basically the two bits of capability don’t work very well together. Of course all these things are very new, but it would be great to see some better integration between the different pieces of the jigsaw. Hopefully this will be improved with time…
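For anyone wondering what “transferring control” means in practice, this is a minimal sketch of the directive a custom skill returns to hand a long track over to Alexa’s built-in player; the URL and token are placeholders rather than a real hosted extract.

```python
# Minimal sketch of handing a long audio track over to Alexa's built-in
# AudioPlayer. Once this directive is returned the skill's own session ends,
# and playback (plus what the device shows) is managed by Alexa itself.
# The URL and token are placeholders, not a real hosted extract.

def play_extract_response(stream_url, token):
    return {
        "version": "1.0",
        "response": {
            "directives": [
                {
                    "type": "AudioPlayer.Play",
                    "playBehavior": "REPLACE_ALL",
                    "audioItem": {
                        "stream": {
                            "token": token,
                            "url": stream_url,  # must be an HTTPS URL
                            "offsetInMilliseconds": 0,
                        }
                    },
                }
            ],
            "shouldEndSession": True,
        },
    }

response = play_extract_response(
    "https://example.com/audio/daffodils-extract.mp3",  # placeholder
    "daffodils-extract-1",
)
```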

That’s it for now – back to reading and writing…

Half Sick of Shadows and IndieBrag

Kindle Cover – Half Sick of Shadows

I was going to write a blog on something to do with Alexa, but that will now appear after the Christmas holiday break. That’s partly because I have been moving rocks and making new gravel paths, and ending the day somewhat fatigued…

So instead, this is just a short post about an email I received last night, saying that Half Sick of Shadows has been awarded an IndieBrag Medallion.

Specifically, I read this:

We have completed the review process for your book “Half Sick of Shadows” and I am pleased to inform you that it has been selected to receive a B.R.A.G. Medallion. We would now like to assist you in gaining recognition of your fine work.
In return, we ask that you permit us to add your book to the listing of Medallion honorees on our website www.bragmedallion.com.

Well, needless to say I haven’t yet had time to do the stuff at their website – that will follow over the next few days – but that was a very nice piece of news just as the holiday break is starting!

Bits and Pieces (2)

A follow-up to my earlier post this week, catching up on some more news. But first, here are a couple of snaps (one enlarged and annotated) that I took early this morning as I walked to East Finchley tube station.

The Moon, Jupiter and Mars, annotated
The Moon, Jupiter and Mars

All very evocative, and leads nicely into my next link, which is a guest post I wrote for Lisl’s Before the Second Sleep blog, on the subject of title. Naturally enough, it’s a topic that really interests me – how will human settlements across the solar system adapt to and reflect the physical nature of the world they are set on?

In particular I look at Mars’ moon Phobos, both in the post and in Timing. So far as we can tell, Phobos is extremely fragile. Several factors cause this, including its original component parts, the closeness of its orbit to Mars, and the impact of whatever piece of space debris caused the giant crater Stickney. But whatever the cause… how might human society adapt to living on a moon where you can’t trust the ground below your feet? For the rest of the post, follow this link.

And also here’s a reminder of the Kindle Countdown offer on most of my books, and the Goodreads giveaway on Half Sick of Shadows. Here are the links…

Half Sick of Shadows is on Goodreads giveaway, with three copies to be won by the end of this coming weekend.

All the other books are on Kindle countdown deal at £0.99 or $0.99 if you are in the UK or US respectively – but once again only until the end of the weekend. Links for these are:

Science fiction series
Far from the Spaceports UK link and US link
Timing UK link and US link

Late Bronze Age historical fiction
In a Milk and Honeyed Land UK link and US link
Scenes from a Life UK link and US link
The Flame Before Us UK link and US link

And I haven’t forgotten about the upcoming Alexa news, following recent activity coding for the new Alexa Show (the one with the screen). But that’s for another day…

Bits and pieces

It’s been an exceptionally busy time at work recently, so I haven’t had time to write much. But happily, lots of other things are happening, so here’s a compendium of them.

Kindle Cover – Half Sick of Shadows

First, Half Sick of Shadows was reviewed on Sruti’s Bookblog, with a follow-up interview. The links are: the review itself, plus the first and second half of the interview. “She wishes for people to value her but they seem to be changing and missing… She can see the world, but she always seemed curbed and away from everything.”

 

Secondly, right now there’s a whole lot of deals available on my novels, from oldest to newest. Half Sick of Shadows is on Goodreads giveaway, with three copies to be won by the end of next weekend.

All the other books are on Kindle countdown deal at £0.99 or $0.99 if you are in the UK or US respectively. Links for these are:

Science fiction series
Far from the Spaceports UK link and US link
Timing UK link and US link

Late Bronze Age historical fiction
In a Milk and Honeyed Land UK link and US link
Scenes from a Life UK link and US link
The Flame Before Us UK link and US link

Pretty soon there’ll be some more Alexa news, as I’ve been busily coding for the new Alexa Show (the one with the screen). But that’s for another day…

December deals

As it’s December, and all the shops are starting to get into Christmas mood, I thought I’d join in. So from December 10th-17th most of my books are on Kindle offers at 99p or 99c.

This means the science fiction series
Far from the Spaceports UK link and US link
Timing UK link and US link

and the Late Bronze Age historical fiction
In a Milk and Honeyed Land UK link and US link
Scenes from a Life UK link and US link
The Flame Before Us UK link and US link

Kindle Cover – Half Sick of Shadows

Amazon rules prevent me from putting Half Sick of Shadows on a countdown deal (it’s already too economically priced) but in order to be more or less consistent there is a Goodreads giveaway of three copies running at the same time – just follow the link on or after December 10th to enter!

Don’t miss out!

The last person to leave Doggerland

A few days ago on The Review Facebook page (look back to December 1st) the question was posed – what person in history would you like to see written about? Naturally enough, most replies focused on historical individuals who had lived interesting lives but had never really had the attention in fact or fiction that the various contributors felt was appropriate.

Now, I kept quiet in this discussion, because my mind had immediately run away down an entirely different avenue, and it didn’t seem the right place to ramble on about that. But here in the blog is a different matter!

Woolly Mammoth skull retrieved from the sea near Holland in 1999-2000, dating from well before the period I have in mind (Wikipedia)

Doggerland is the name we give to the stretch of land which once joined the eastern counties of England to parts of Europe. Nowadays the North Sea covers that whole span, but every so often ancient relics are retrieved, mostly by accident in fishing nets (the first such being a barbed antler tool back in 1931). The name Doggerland comes from the Dogger Bank, which is a large region of sandbanks and shoals in the North Sea, in places no more than 50′ deep.

So nowadays the sea divides Norfolk and the Netherlands, Lincolnshire and Denmark. And with climate change and slowly rising sea levels, this is unlikely to change. But let’s roll back some ten thousand years, and see the changing picture.

Doggerland from space (The Telegraph newspaper, 01 Sep 2015)

When the land warmed after the last ice age, Britain and Europe were united by a broad low-lying tract of land (this was c. 11000BC). This land was never rugged or mountainous – imagine something like present day East Anglia, Holland, or Denmark, and you have the picture. Two arms of seawater divided this from Scandinavia to the north-east, and Scotland and Northumberland to the north-west. Several rivers – including the Thames, the Seine, the Rhine – flowed into this broad plain, and thence into the Atlantic via what was to become the English Channel.

The land was good for hunting and trapping animals, the margins had fish and shellfish, and when early farmers arrived they found the soil to be fertile. It was, I suspect, a pleasant and welcoming place to be, with a climate becoming gradually milder as, decade by decade, the Ice Age retreated. The sea level rose as the ice melted. In some places, the land sank down as the sheer weight of the glaciers further north was released – this is still happening in the Scilly Isles which, very very slowly, are being submerged. Both factors spelled the end for Doggerland.

Doggerland c. 10000BC (Wikipedia)

By now this huge expanse of territory has completely disappeared. This did not happen overnight – best estimates are that it was all gone a little before 6000BC, so it took around five thousand years to dwindle. The occupants, whether living a hunter-gatherer or settled lifestyle, had many generations to adjust to the change. I suppose they had oral traditions which spoke of how this island used to be attached to the land, or that forest used to extend several days’ journey further north. But within that long span of steady reduction, most likely there were also sudden calamities. A storm surge one winter might have taken away miles of coastline. An autumn flood might have demolished a natural barrier to the water, exposing the lower fields beyond. A series of unusually high tides might turn fresh water meadows to salt marsh. A landslide in Norway, resulting in a tsunami, probably did much to finish the process. All of these things have been seen in the low-lying lands which still border the North Sea.

Extensive study has revealed a lot about this drowned land – see this BBC article for a summary of investigations by several Scottish universities. Or this article in the Telegraph newspaper for an account of work to map the surface features which still remain.

So the story I want to tell, one day, is the story of the last person to leave Doggerland. Or, more widely, the last community to abandon its shrinking and increasingly boggy surface. What was it like to leave the places, practical and sacred, which their people had moved through for so long? How were they received by those groups already living in the regions around? Did they look back with relief or regret?

Perhaps one day, when I want to switch back from science fiction to ancient history, it’s a story that I will tell.

Future Possibilities 3

Today is the third and last post based loosely on upcoming techie stuff I learned about at the recent Microsoft Future Decoded conference here in London. It’s another speculative one this time, focusing on quantum computing, which according to estimates by speakers might be about five years away. But a lot has to happen if that five year figure is at all accurate.

Quantum device – schematic (Microsoft.com)

It’s a very technical area, both as regards the underlying maths and the physical implementation, and I don’t intend going far into that. Many groups around the world, both in industry and academia, are actively working on this, hoping to crack both theory and practice. So what’s the deal? Why all the effort?

Conventional computers, of the kind we are familiar with, operate essentially in a linear sequential way. Now, there are ways to fudge this and give a semblance of parallel working. Even on a domestic machine you can run lots of programs at the same time, but at the level of a single computing core you are still performing one thing at a time, and some clever scheduling shares resources between several in-progress tasks. A bigger computer will wire up multiple processors and have vastly more elaborate scheduling, to make the most efficient use of what it’s got. But at the end of the day, present-day logic circuits do one thing at a time.

This puts some tasks out of reach. For example, the security layer that protects your online banking transactions (and such like) relies on a complex mathematical problem, which takes an extremely long time to solve. In theory it could be done, but in practice it is impenetrable. Perhaps more interestingly, there are problems in all the sciences which are intractable not only with present-day systems, but also including any credible speed advances using present-day architecture. It actually doesn’t take much complexity to render the task impossible.

Probability models for a water molecule with different energy levels – the atoms are not at fixed places but smeared out over a wider volume (Stoneybrook University)

Quantum computing offers a way to actually achieve parallel processing on a massive scale. It relies not on binary true/false logic, but on the probability models which are the foundation of the quantum world. It is as though many different variations of a problem all run simultaneously, each (as it were) in its own little world. It’s a perfect solution for all kinds of problems where you would like to find an optimal solution to a complex situation. So to break our online security systems, a quantum computer would simultaneously pursue many different cracking routes, and by doing that the task becomes solvable. And yes, that is going to need a rethink of how we do internet security. But for today let’s look at a couple of more interesting problems.
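As a rough feel for the scale involved: a classical n-bit register sits in just one of its 2^n possible values at any instant, whereas the state of an n-qubit register is described by 2^n amplitudes all evolving together. The toy calculation below simply prints how quickly that number grows; the sizes are illustrative only.

```python
# Back-of-envelope illustration: a classical n-bit register is in exactly one
# of 2**n states at a time, while an n-qubit register's state is described by
# 2**n amplitudes evolving together. The sizes printed are illustrative only.

for n in (8, 32, 64, 128):
    print(f"{n:>3} bits or qubits -> 2**{n} = {2 ** n:.3e} basis states")
```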

Root nodules on a broad bean (Wikipedia)

First, there’s one from farming, or biochemistry if you prefer. To feed the world, we need lots of nitrogen to make fertiliser. The chemical process to do this commercially is energy-intensive, and nearly 2% of the world’s power goes on this one thing. But… there is a family of plants, the leguminosae, which fix nitrogen from the air into the soil using nothing more than sunlight and the symbiotic bacteria housed in nodules on their roots. They are very varied, from peas and beans down to fodder crops like clover, and up to quite sizeable trees. We don’t yet know exactly how this nitrogen fixing works. We think we know the key biochemical involved, but it’s complicated… too complicated for our best supercomputers to analyse. A quantum computer might solve the problem in short order.

Climate science is another case. There are several computer programs which aim to model what is going on globally. They are fearfully complicated, aiming to include as wide a range as possible of contributing factors, together with their mutual interaction. Once again, the problem is too complicated to solve in a realistic time. So, naturally, each group working on this makes what they regard as appropriate simplifications and approximations. A quantum computer would certainly allow for more factors to be integrated, and would also allow more exploration of the consequences of one action rather than another. We could experiment with what-if models, and find effective ways to deploy limited resources.

Bonding measurement wires to a quantum device (Microsoft.com)

So that’s a little of what might be achieved with a quantum computer. To finish this blog post off, what impact might one have on science fiction, and on my own writing in particular? Well, unlike the previous two weeks, my answer here would be “not very much, I think”. Most writers, including myself, simply assume that future computers will be more powerful, more capable, than those of today. The exact technical architecture is of less literary importance! Right now it looks as if a quantum computer will only work at extremely low temperatures, not far above absolute zero. So you are talking about sizeable, static installations. If we manage to find or make materials that would let them run at room temperature, that could change, but that’s way more than five years away.

Far from the Spaceports cover

So in my stories, Slate would not be a quantum computer, just a regular one running some very sophisticated software. Now, the main information hub down in London, Khufu, could possibly be such a thing – certainly he’s a better candidate, sitting statically in one place, processing and analysing vast quantities of data, making connections between facts that aren’t at all obvious on the surface. But as regards the story, it hardly matters whether he is one or the other.

So, interested as I am in the development of a quantum computer, I don’t think it will feature in an important way in the world of Far from the Spaceports!

That’s it for today, and indeed for this little series… until next year.

Future Possibilities 2

The second part of this quick review of the Future Decoded conference looks at things a little further ahead. This was also going to be the final part, but as there’s a lot of cool stuff to chat about, I’ve decided to add part 3…

Prediction of data demand vs supply (IDC.org)

So here’s a problem that is a minor one at the moment, but with the potential to grow into a major one. In short, the world has a memory shortage! Already we are generating more bits and bytes than we have the capacity to store. Right now it’s an inconvenience rather than a crisis, but year by year the gap between wish and actuality is growing. If growth in both these areas continues as at present, within a decade we will only be able to store about a third of what we want. A decade or so later that will drop to under one percent.

Think about it on the individual level. You take a short video clip while on holiday. It goes onto your phone. At some stage you back it up in Dropbox, or iCloud, or whatever your favourite provider is. Maybe you keep another copy on your local hard drive. Then you post it to Facebook and Google+. You send it to two different WhatsApp groups and email it to a friend. Maybe you’re really pleased with it and make a YouTube version. You now have ten copies of your 50 MB video… not to mention all the thumbnail images, cached and backup copies saved along the way by these various providers, which you’re almost certainly not aware of and have little control over. Your ten seconds of holiday fun has easily used 1 GB of the world’s supply of memory! For comparison, the entire Bible would fit in about 3 MB of plain uncompressed text, and taking a wild guess, you would use well under that 1 GB to store every last word of the world’s sacred literature. And a lot of us are generating holiday videos these days! Lots of cyclists wear helmet cameras, cars have dash cams… and so on. We are generating prodigious amounts of imagery.
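For a back-of-envelope check on those figures, here is the sort of sum involved; the 50 MB clip and ten copies come from the example above, while the allowance for thumbnails, caches and provider-side transcodes is pure guesswork.

```python
# Back-of-envelope sum for the holiday video example above. The 50 MB clip and
# ten full copies come from the text; the allowance for thumbnails, caches and
# provider-side transcodes is pure guesswork for illustration.

clip_mb = 50              # the original holiday clip
full_copies = 10          # phone, Dropbox, local drive, Facebook, WhatsApp, ...
hidden_overhead_mb = 500  # guessed thumbnails, caches, backups, transcodes

total_mb = full_copies * clip_mb + hidden_overhead_mb
print(f"roughly {total_mb} MB, i.e. about {total_mb / 1024:.1f} GB for one clip")

# For scale: a plain uncompressed text of the Bible is around 3 MB, so this
# single clip outweighs it by a factor of a few hundred.
print(f"about {total_mb // 3}x the size of a plain-text Bible")
```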

So one solution is that collectively we get more fussy about cleaning things up. You find yourself deleting the phone version when you’ve transferred it to Dropbox. You decide that a lower resolution copy will do for WhatsApp. Your email provider tells you that attachments will be archived or disposed of according to some schedule. Your blog allows you to reference a YouTube video in a link, rather than uploading yet another copy. Some clever people somewhere work out a better compression algorithm. But… even all these workarounds together will still not be enough to make up for the shortfall, if the projections are right.

Amazon Dot – Active

Holiday snaps aside, a great deal of this vast growth in memory usage is because of emerging trends in computing. Face and voice recognition, image analysis, and other AI techniques which are now becoming mainstream use a great deal of stored information to train the models ready for use. Regular blog readers will know that I am particularly keen on voice assistants like Alexa. My own Alexa programming doesn’t use much memory, as the skills are quite modest and tolerably well written. But each and every time I make an Alexa request, that call goes off somewhere into the cloud, to convert what I said (the “utterance”) into what I meant (the “intent”). Alexa is pretty good at getting it right, which means that there is a huge amount of voice training data sitting out there being used to build the interpretive models. Exactly the same is true for Siri, Cortana, Google Home, and anyone else’s equivalent. Microsoft call this training area a “data lake”. What’s more, there’s not just one of them, but several, at different global locations to reduce signal lag.

Far from the Spaceports cover

Hopefully that’s given some idea of the problem. Before looking at the idea for a solution that was presented the other day, let’s think what that means for fiction writing. My AI persona Slate happily flits off to the asteroid belt with her human investigative partner Mitnash in Far from the Spaceports. In Timing, they drop back to Mars, and in the forthcoming Authentication Key they will get out to Saturn, but for now let’s stick to the asteroids. That means they’re anywhere from 15 to 30 minutes away from Earth by signal. Now, Slate does from time to time request specific information from the main hub Khufu on Earth, but necessarily this can only be for some detail not locally available. Slate can’t send a request down to London every time Mit says something, just so she can understand it. Trying to chat with up to an hour’s lag between statements would be seriously frustrating. So she has to carry with her all of the necessary data and software models that she needs for voice comprehension, speech, and defence against hacking, not to mention analysis, reasoning, and the capacity to feel emotion. Presupposing she has the equivalent of a data lake, she has to carry it with her. And that is simply not feasible with today’s technology.
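For the curious, those lag figures are simply light travel time: a radio signal covers one astronomical unit in roughly 499 seconds. The sketch below reproduces the sort of numbers involved; the Earth-to-target distances are approximate.

```python
# Quick sketch of one-way signal lag: light (or radio) covers one astronomical
# unit in roughly 499 seconds. The Earth-to-target distances below are
# approximate, picked to illustrate the ranges mentioned in the text.

LIGHT_SECONDS_PER_AU = 499.0

def one_way_lag_minutes(distance_au: float) -> float:
    return distance_au * LIGHT_SECONDS_PER_AU / 60.0

for label, au in [
    ("Mars at its closest",            0.52),
    ("Mars at its farthest",           2.5),
    ("Near side of the asteroid belt", 1.8),
    ("Far side of the asteroid belt",  3.6),
]:
    print(f"{label:32s} ~{one_way_lag_minutes(au):4.0f} minutes one-way")
```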

DNA Schematic (Wikipedia)

So the research described the other day is exploring the idea of using DNA as the storage medium, rather than a piece of specially constructed silicon. DNA is very efficient at encoding data – after all, a sperm and egg together have all the necessary information to build a person. The problems are how to translate your original data source into the various chemical building blocks along a DNA helix, and conversely how to read it out again at some future time. There’s a publicly available technical paper describing all this. We were shown a short video which had been encoded, stored, and decoded using just this method. But it is fearfully expensive right now, so don’t expect to see a DNA external drive on your computer anytime soon!
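The real encoding described in the paper involves addressing, redundancy and error correction, but the core idea of mapping binary data onto the four-letter alphabet of DNA bases, two bits per base, can be shown in a toy sketch. To be clear, this is an illustration of the principle only, not the researchers’ actual scheme.

```python
# Toy illustration only: packing two bits per DNA base (A, C, G, T).
# The real scheme adds addressing, error correction and constraints on runs
# of identical bases; this sketch just shows the core idea of mapping binary
# data onto a four-letter alphabet.

BASES = "ACGT"  # 00 -> A, 01 -> C, 10 -> G, 11 -> T

def encode(data: bytes) -> str:
    out = []
    for byte in data:
        for shift in (6, 4, 2, 0):              # four 2-bit chunks per byte
            out.append(BASES[(byte >> shift) & 0b11])
    return "".join(out)

def decode(strand: str) -> bytes:
    data = bytearray()
    for i in range(0, len(strand), 4):
        byte = 0
        for ch in strand[i:i + 4]:
            byte = (byte << 2) | BASES.index(ch)
        data.append(byte)
    return bytes(data)

message = b"Daffodils"
strand = encode(message)
assert decode(strand) == message
print(strand)  # 36 bases: 4 per byte of the 9-byte message
```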

Microsoft data centre (ZDNet/Microsoft)

The benefits purely in terms of physical space are colossal. The largest British data centre covers the equivalent of about eight soccer grounds (or four cricket pitches), using today’s technology. The largest global one is getting on for ten times that size. With DNA encoding, that all shrinks down to about a matchbox. For storytelling purposes that’s fantastic – Slate really is off to the asteroids and beyond, along with her data lake in plenty of local storage, which now takes up less room and weight than a spare set of underwear for Mit. Current data centres also use about the same amount of power as a small town (though thanks to judicious choice of technology they are much more ecologically efficient than that sounds), but we’ll cross the power bridge another time.

However, I suspect that many of us might see ethical issues here. The presenter took great care to tell us that the DNA used was not from anything living, but had been manufactured from scratch for the purpose. No creatures had been harmed in the making of this video. But inevitably you wonder if all researchers would take this stance. Might a future scenario play out that some people are forced to sell – or perhaps donate – their bodies for storage? Putting what might seem a more positive spin on things, wouldn’t it seem convenient to have all your personal data stored, quite literally, on your person, and never entrusted to an external device at all? Right now we are a very long way from either of these possibilities, but it might be good to think about the moral dimensions ahead of time.

Either way, the starting problem – shortage of memory – is a real one, and collectively we need to find some kind of solution…

And for the curious, this is the video which was stored on and retrieved from DNA – regardless of storage method, it’s a fun and clever piece of filming (https://youtu.be/qybUFnY7Y8w)…

 

Future possibilities 1

This is the first of two posts in which I talk about some of the major things I took away from the recent Future Decoded conference here in London. Each year they try to pick out some tech trends which they reckon will be important in the next few years.

Disability statistics by age and gender (Eurostat)

This week’s theme is to do with stuff which is available now, or in the immediate future. And the first topic is assisting users. Approximately one person in six in the world is considered disabled in some way, whether from birth or through accident or illness (according to a recent WHO report). That’s about a billion people in total. Technology ought to be able to assist, but often has failed to do so. Now, a variety of assistance technologies have been around for a while – the years-old alt text in images was a step in that direction – but Windows 10 has a whole raft of such support.

Now, I am well aware that lots of people don’t like Win 10 as an operating system, but this showed it at its best. When you get to see a person blind from birth able to use social media, and a lad with cerebral palsy pursuing a career as an author, it doesn’t need a lot of sales hype. Or a programmer who lost the use of all four limbs in an accident, writing lines of code live in the presentation using a mixture of Cortana’s voice control plus an on-screen keyboard triggered by eye movement. Not to mention that the face recognition login feature provided his first opportunity for privacy since the accident, as no one else had to know his password.

But the trend goes beyond disabilities of a permanent kind – most of us have what you might call situational limitations at various times. Maybe we’re temporarily bed-ridden through illness. Maybe we’re simply one-handed through carrying an infant around. Whatever the specific reason, all the big tech companies are looking for ways to make such situations more easily managed.

Another big trend was augmented reality using 3D headsets. I suppose most of us think of these as gaming gimmicks, providing another way to escape the demands of life. But going round the exhibition pitches – most by third-party developers rather than Microsoft themselves – stall after stall was showing off the use of headsets in a working context.

Medical training (Microsoft.com and Case Western Reserve University)

Training was one of the big areas, with trainers and students blending reality and virtual image in order to learn skills or be immersed in key situations. We’ve been familiar with the idea of pilots training on flight simulators for years – now that same principle is being applied to medical students and emergency response teams, all the way through to mechanical engineers and carpet-layers. Nobody doubts that a real experience has a visceral quality lacking from what you get from a headset, but it has to be an advantage that trainees have had some exposure to rare but important cases.

Assembly line with hololens (Microsoft.com)

This also applies to on-the-job work. A more experienced worker can “drop in” to supervise or enhance the work of a junior one without both of them being physically present. Or a human worker can direct a mechanical tool in hostile environments or disaster zones. Or possible solutions can be tried out without having to make up physical prototypes. You can imagine a kind of super-Skype meeting, with mixed real and virtual attendance. Or a better way to understand a set of data than just dumping it into a spreadsheet – why not treat it as a plot of land you can wander round and explore?

Cover, The Naked Sun (Goodreads)

Now most of these have been explored in fiction several times, with both their positive and negative connotations. And I’m sure that a few of these will turn out to be things of the moment which don’t make it into everyday use. And right now the dinky headsets which make it all happen are too expensive to find in every house, or on everyone’s desk at work – unless you have a little over £2500 lying around doing nothing. But a lot of organisations are betting that there’ll be good use for the technology, and I guess the next five years will show us whether they’re right or wrong. Will these things stay as science fiction, or become part of the science of life?

So that’s this week – developments that are near-term and don’t represent a huge change in what we have right now. Next time I’ll be looking at things further ahead, and more speculative…

 

 

Left behind by events, part 3

This is the third and final part of Left Behind by Events, in which I take a look at my own futuristic writing and try to guess which bits I will have got utterly wrong when somebody looks back at it from a future perspective! But it’s also the first of a few blogs in which I will talk a bit about some of the impressions I got of technical near-future as seen at the annual Microsoft Future Decoded conference that I went to the other day.

Amazon Dot – Active

So I am tolerably confident about the development of AI. We don’t yet have what I call “personas” with autonomy, emotion, and gender. I’m not counting the pseudo-gender produced by selecting a male or female voice, though actually even that simple choice persuades many people – how many people are pedantic enough to call Alexa “it” rather than “she”? But at the rate of advance of the relevant technologies, I’m confident that we will get there.

I’m equally confident, being an optimistic guy, that we’ll develop better, faster space travel, and have settlements of various sizes on asteroids and moons. The ion drive I posit is one definite possibility: the Dawn asteroid probe already uses this system, though at a hugely smaller rate of acceleration than what I’m looking for. The Hermes, which features in both the book and film The Martian, also employs this drive type. If some other technology becomes available, the stories would be unchanged – the crucial point is that intra-solar-system travel takes weeks rather than months.

The Sting (PInterest)

I am totally convinced that financial crime will take place! One of the ways we try to tackle it on Earth is to share information faster, so that criminals cannot take advantage of lags in the system to insert falsehoods. But out in the solar system, there’s nothing we can do about time lags. Mars is between 4 and 24 minutes from Earth in terms of a radio or light signal, and there’s nothing we can do about that unless somebody invents a faster-than-light signal. And that’s not in range of my future vision. So the possibility of “information friction” will increase as we spread our occupancy wider. Anywhere that there are delays in the system, there is the possibility of fraud… as used to great effect in The Sting.

Something I have not factored in at all is biological advance. I don’t have cyborgs, or genetically enhanced people, or such things. But I suspect that the likelihood is that such developments will occur well within the time horizon of Far from the Spaceports. Biology isn’t my strong suit, so I haven’t written about this. There’s a background assumption that illness isn’t a serious problem in this future world, but I haven’t explored how that might happen, or what other kinds of medical change might go hand-in-hand with it. So this is almost certainly going to be a miss on my part.

Moving on to points of contact with the conference, there is the question of my personas’ autonomy. Right now, all of our current generation of intelligent assistants – Alexa, Siri, Cortana, Google Home and so on – rely utterly on a reliable internet connection and a whole raft of cloud-based software to function. No internet or no cloud connection = no Alexa.

This is clearly inadequate for a persona like Slate heading out to the asteroid belt! Mitnash is obviously not going to wait patiently for half an hour or so between utterances in a conversation. For this to work, the software infrastructure that imparts intelligence to a persona has to travel along with it. Now this need is already emerging – and being addressed – right now. I guess most of us are familiar with the idea of the Cloud. Your Gmail account, your Dropbox files, your iCloud pictures all exist somewhere out there… but you neither know nor care where exactly they live. All you care about is that you can get to them when you want.

A male snow leopard (Wikipedia)

But with the emerging “internet of things” that is having to change. Let’s say that a wildlife programme puts a trail camera up in the mountains somewhere in order to get pictures of a snow leopard. They want to leave it there for maybe four months and then collect it again. It’s well out of wifi range. In those four months it will capture say 10,000 short videos, almost all of which will not be of snow leopards. There will be mountain goats, foxes, mice, leaves, moving splashes of sunshine, flurries of rain or snow… maybe the odd yeti. But the memory stick will only hold say 500 video clips. So what do you do? Throw away everything that arrives after it gets full? Overwrite the oldest clips when you need to make space? Arrange for a dangerous and disruptive resupply trip by your mountaineer crew?

Or… and this is the choice being pursued at the moment… put some intelligence in your camera to try to weed out non-snow-leopard pictures. Your camera is no longer a dumb picture-taking device, but has some intelligence. It also makes your life easier when you have recovered the camera and are trying to scan through the contents. Even going through my Grasmere badger-cam vids every couple of weeks involves a lot of deleting scenes of waving leaves!

So this idea is now being called the Cloud Edge. You put some processing power and cleverness out in your peripheral devices, and only move what you really need into the Cloud itself. Some of the time, your little remote widgets can make up their own minds what to do. You can, so I am told, buy a USB stick with a trainable neural network on it for sifting images (or other similar tasks) for well under £100. Now, this is a far cry from an independently autonomous persona able to zip off to the asteroid belt, but it shows that the necessary technologies are already being tackled.
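As a sketch of what that edge-side weeding might look like in code: the scoring model, threshold and file layout below are hypothetical stand-ins rather than any real trail-camera product, but the shape of the decision is the point.

```python
# Minimal sketch of "cloud edge" filtering on a trail camera: only clips that
# the on-device model scores as likely snow leopard get kept in the limited
# storage for eventual collection. Model, threshold and paths are hypothetical.

from pathlib import Path
from typing import Callable

KEEP_THRESHOLD = 0.8
MAX_CLIPS = 500

def triage(incoming_dir: Path, keep_dir: Path,
           score: Callable[[Path], float]) -> None:
    """Keep clips the on-device model rates as likely snow leopard; bin the rest."""
    kept = sorted(keep_dir.glob("*.mp4"))
    for clip in sorted(incoming_dir.glob("*.mp4")):
        if score(clip) >= KEEP_THRESHOLD and len(kept) < MAX_CLIPS:
            target = keep_dir / clip.name
            clip.rename(target)   # promote to the limited "keep" storage
            kept.append(target)
        else:
            clip.unlink()         # waving leaves, foxes, the odd yeti...

# Usage with a dummy scorer (a real camera would call its neural network here):
# triage(Path("/camera/incoming"), Path("/camera/keep"), lambda clip: 0.1)
```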

Artist’s Impression of Dawn in orbit (NASA/JPL)

I’ve been deliberately vague about how far into the future Far from the Spaceports, Timing, and the sequels in preparation are set. If I had to pick a time I’d say somewhere around the one or two century mark. Although science fact notoriously catches up with science fiction faster than authors imagine, I don’t expect to see much of this happening in my lifetime (which is a pity, really, as I’d love to converse with a real Slate). I’d like to think that humanity from one part of the globe or another would have settled bases on other planets, moons, or asteroids while I’m still here to see them, and as regular readers will know, I am very excited about where AI is going. But a century to reach the level of maturity of off-Earth habitats that I propose seems, if anything, over-optimistic.

That’s it for today – over the next few weeks I’ll be talking about other fun things I learned…

Writing, both historical and speculative