Over the weekend I came across one of those many internet tropes – a quote from someone, on a pretty background, with no interpretive comment by the poster. I must admit that normally I ignore these and scroll past them to a post which has more engagement with a real person. But this one did actually catch my eye, mainly because it resonated with what I was already thinking about.
Here’s the quote (without the pretty background):
“Science fiction deals with improbable possibilities, fantasy with plausible impossibilities” (Miriam Allen deFord)
Of course I started worrying at this, like a lively dog chewing at a toy. Leaving aside the rather pleasing symmetry of words, did I actually agree with it? The lady to whom the quote is attributed was an American writer whose main activity was in the mid-twentieth century. She was roughly contemporary with EE (Doc) Smith, a generation down from HG Wells, and rather older than Isaac Asimov. Most of her writing was in the form of short stories for magazines, though she wrote a few novels as well. She straddled the genres of mystery writing, true crime accounts, and science fiction – for the curious who don’t want to shell out real money, several of her works are on the Project Gutenberg site.
So, did I end up agreeing with the sentiment? Well, not really. Miriam Allen deFord was writing in a time when genres were quite strictly defined, especially by those individuals who ran the magazines of the day. Those people were hugely influential within their sphere, and were instrumental in founding the writing careers of a lot of people. But their personal likes and dislikes shaped what was written. Allegedly, Isaac Asimov almost never wrote about alien life because John Campbell, editor at Astounding Science Fiction (later called Analog), had a personal antipathy to that kind of storyline. In Asimov’s case, the habit was so strong that, so far as I can recall, aliens appear just twice in his writing – in a parallel universe in The Gods Themselves, and in an enormously far ahead future in The End of Eternity.
We live today in a different world. Genres do not create such important divisions. This is most true in the indie world, but successful authors in the trad world also experiment with crossing genre boundaries. For example, Kazuo Ishiguro has explored several non-standard plotlines and combinations. But many indie authors positively revel in creating books which don’t fit traditional pigeonholes.
Nowadays, science fiction and fantasy are often bundled together under the joint heading “speculative fiction”, with less perceived importance on whether the particular book fits one side or the other of some imaginary line. To be sure, there is still a spectrum of actual content, from “hard” science fiction in which the science bit seeks to be as credible as possible, through to fantasy which does not even seek a rational justification for actions or attributes. Most of my science fiction writing leans towards the geeky end of that spectrum, with Half Sick of Shadows a striking exception. Anyway, within that spectrum there are enormous areas of mixed colour – plot elements for which either a scientific or fantasy explanation might be found, and about which perhaps different characters in the book might hold different opinions. I think that’s fine, and a sign that the whole field has matured from a kind of binary opposition.
Well, a couple of weeks have passed and it’s time to get back to blogging. And for this week, here is the Alexa post that I mentioned a little while ago, back in December last year.
First, to anticipate a later part of this post, here is an extract of Alexa reciting the first few lines of Wordsworth’s Daffodils…
It has been a busy time for Alexa generally – Amazon have extended sales of various of the hardware gizmos to many other countries. That’s well and good for everyone: the bonus for us developers is that they have also extended the range of countries into which custom skills can be deployed. Sometimes with these expansions Amazon helpfully does a direct port to the new locale, and other times it’s up to the developer to do this by hand. So when skills appeared in India, everything I had done to that date was copied across automatically, without me having to do my own duplication of code. From Monday Jan 8th the process of generating default versions for Australia and New Zealand will begin. And Canada is also now in view. Of course, that still leaves plenty of future catch-up work, firstly making sure that their transfer process worked OK, and secondly filling in the gaps for combinations of locale and skill which didn’t get done. The full list of languages and countries to which skills can be deployed is now
English (UK)
English (US)
English (India)
English (Australia / New Zealand)
German
Japanese
Based on progress so far, Amazon will simply continue extending this to other combinations over time. I suspect that French Canadian will be quite high on their list, and probably other European languages – for example Spanish would give a very good international reach into Latin America. Hindi would be a good choice, and Chinese too, presupposing that Amazon start to market Alexa devices there. Currently an existing Echo or Dot will work in China if hooked up to a network, but so far as I know the gadgets are not on sale there – instead several Chinese firms have begun producing their own equivalents. Of course, there’s nothing to stop someone in another country accessing the skill in one or other of the above languages – for example a Dutch person might consider using either the English (UK) or German option.
To date I have not attempted porting any skills in German or Japanese, essentially through lack of necessary language skills. But all of the various English variants are comparatively easy to adapt to, with an interesting twist that I’ll get to later.
So my latest skill out of the stable, so to speak, is Wordsworth Facts. It has two parts – a small list of facts about the life of William Wordsworth, his family, and some of his colleagues, and also some narrated portions from his poems. Both sections will increase over time as I add to them. It was interesting, and a measure of how text-to-speech technology is improving all the time, to see how few tweaks were necessary to get Alexa to read these extracts tolerably well. Reading poetry is harder than reading prose, and I was expecting difficulties. The choice of Wordsworth helped here, as his poetry is very like prose (indeed, he was criticised for this at the time). As things turned out, in this case some additional punctuation was needed to get these sounding reasonably good, but that was all. Unlike some of the previous reading portions I have done, there was no need to tinker with phonetic alphabets to get words sounding right. It certainly helps not to have ancient Egyptian, Canaanite, or futuristic names in the mix!
And this brings me to one of the twists in the internationalisation of skills. The same letter can sound rather different in different versions of English when used in a word – you say tomehto and I say tomarto, and all that. And I necessarily have to dive into custom pronunciations of proper names of characters and such like – Damariel gets a bit messed up, and even Mitnash, which I had assumed would be easily interpreted, gets mangled. So part of the checking process will be to make sure that where I have used a custom phonetic version of someone’s name, it comes out right.
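For the curious, the fix for names like these uses SSML’s phoneme tag, which Alexa supports with IPA pronunciations. Here is a minimal Python sketch of the approach; the IPA strings below are illustrative guesses for demonstration, not the exact ones used in the skills:

```python
# Illustrative only: these IPA strings are guesses for demonstration,
# not the pronunciations actually shipped in the skills.
CUSTOM_NAMES = {
    "Damariel": "ˈdæm.ə.ɹiː.ɛl",
    "Mitnash": "ˈmɪt.næʃ",
}

def to_ssml(text: str) -> str:
    """Wrap known proper names in SSML <phoneme> tags so that every
    English locale speaks them the same way."""
    for name, ipa in CUSTOM_NAMES.items():
        tag = f'<phoneme alphabet="ipa" ph="{ipa}">{name}</phoneme>'
        text = text.replace(name, tag)
    return f"<speak>{text}</speak>"

print(to_ssml("Mitnash glanced at the console."))
```

The same wrapper could hold per-locale variants, so a name that drifts in, say, en-IN can be pinned down separately from en-GB.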
Wordsworth Facts is live across all of the English variants listed above – just search in your local Amazon store in the Alexa Skills section by name (or to see all my skills to date, search for “DataScenes Development”, which is the identity I use for coding purposes). If you’re looking at the UK Alexa Skills store, this is the link.
The next skill I am planning to go live with, probably in the next couple of weeks, is Polly Reads. Those who read this blog regularly – or indeed the Before The Second Sleep blog (see this link, or this, or this) – may well think of Polly as Alexa’s big sister. Polly can use multiple different voices and languages rather than a fixed one, though Polly is focused on generating speech rather than interpreting what a user might be saying (the module in Amazon’s suite that does the comprehension bit is called Lex). So Polly Reads is a compendium of all the various book readings I have set up using Polly, onto which I’ll add a few of my own author readings where I haven’t yet set Polly up with the necessary text and voice combinations. The skill is kind of like a playlist, or maybe a podcast, and naturally my plan is to extend the set of readings over time. More news of that will be posted before the end of the month, all being well.
The process exposed a couple of areas where I would really like Amazon to enhance the audio capabilities of Alexa. The first was when using the built-in ability to access music (ie not my own custom skill). Compared to a lot of Alexa interaction, this feels very clunky – there is no easy way to narrow in on a particular band, for example – “The band is Dutch and they play prog rock but I can’t remember the name” could credibly come up with Kayak, but doesn’t. There’s no search facility built in to the music service. And you have to get the track name pretty much dead on – “Alexa, play The Last Farewell by Billy Boyd” gets you nowhere except for an “I can’t find that” message, since it is called “The Last Goodbye”. A bit more contextual searching would be good. Basically, this boils down to a shortfall in what technically we call context, and what in a person would be short-term memory – the coder of a skill has to decide exactly what snippets of information to remember from the interaction so far, and anything which is not explicitly remembered will be discarded.
That was a user-moan. The second is more of a developer-moan. Playing audio tracks of more than a few seconds – like a book extract, or a decent length piece of music – involves transferring control from your own skill to Alexa, who then manages the sequencing of tracks and all that. That’s all very well, and I understand the purpose behind it, but it also means that you have lost some control over the presentation of the skill as the various tracks play. For example, on the new Echo Show (the one with the screen) you cannot interleave the tracks with relevant pictures – like a book cover, for example. Basically the two bits of capability don’t work very well together. Of course all these things are very new, but it would be great to see some better integration between the different pieces of the jigsaw. Hopefully this will be improved with time…
I was going to write a blog on something to do with Alexa, but that will now appear after the Christmas holiday break. That’s partly because I have been moving rocks and making new gravel paths, and ending the day somewhat fatigued…
So instead, this is just a short post about an email I received last night, saying that Half Sick of Shadows has been awarded an IndieBrag Medallion.
Specifically, I read this:
We have completed the review process for your book “Half Sick of Shadows” and I am pleased to inform you that it has been selected to receive a B.R.A.G. Medallion. We would now like to assist you in gaining recognition of your fine work. In return, we ask that you permit us to add your book to the listing of Medallion honorees on our website www.bragmedallion.com.
Well, needless to say I haven’t yet had time to do the stuff at their website – that will follow over the next few days – but that was a very nice piece of news just as the holiday break is starting!
A follow-up to my earlier post this week, catching up on some more news. But first, here are a couple of snaps (one enlarged and annotated) that I took early this morning as I walked to East Finchley tube station.
The Moon, Jupiter and Mars, annotated
The Moon, Jupiter and Mars
All very evocative, and leads nicely into my next link, which is a guest post I wrote for Lisl’s Before the Second Sleep blog, on the subject of title. Naturally enough, it’s a topic that really interests me – how will human settlements across the solar system adapt to and reflect the physical nature of the world they are set on?
In particular I look at Mars’ moon Phobos, both in the post and in Timing. So far as we can tell, Phobos is extremely fragile. Several factors cause this, including its original component parts, the closeness of its orbit to Mars, and the impact of whatever piece of space debris caused the giant crater Stickney. But whatever the cause… how might human society adapt to living on a moon where you can’t trust the ground below your feet? For the rest of the post, follow this link.
And also here’s a reminder of the Kindle Countdown offer on most of my books, and the Goodreads giveaway on Half Sick of Shadows. Here are the links…
Half Sick of Shadows is on Goodreads giveaway, with three copies to be won by the end of this coming weekend.
All the other books are on Kindle countdown deal at £0.99 or $0.99 if you are in the UK or US respectively – but once again only until the end of the weekend. Links for these are:
It’s been an exceptionally busy time at work recently, so I haven’t had time to write much. But happily, lots of other things are happening, so here’s a compendium of them.
First, Half Sick of Shadows was reviewed on Sruti’s Bookblog, with a follow-up interview. The links are: the review itself, plus the first and second half of the interview. “She wishes for people to value her but they seem to be changing and missing… She can see the world, but she always seemed curbed and away from everything.”
Secondly, right now there’s a whole lot of deals available on my novels, from oldest to newest. Half Sick of Shadows is on Goodreads giveaway, with three copies to be won by the end of next weekend.
All the other books are on Kindle countdown deal at £0.99 or $0.99 if you are in the UK or US respectively. Links for these are:
Amazon rules prevent me from putting Half Sick of Shadows on a countdown deal (it’s already too economically priced) but in order to be more or less consistent there is a Goodreads giveaway of three copies running at the same time – just follow the link on or after December 10th to enter!
A few days ago on The Review Facebook page (look back to December 1st) the question was posed – what person in history would you like to see written about? Naturally enough, most replies focused on historical individuals who had lived interesting lives but had never really had the attention in fact or fiction that the various contributors felt was appropriate.
Now, I kept quiet in this discussion, because my mind had immediately run away down an entirely different avenue, and it didn’t seem the right place to ramble on about that. But here in the blog is a different matter!
Doggerland is the name we give to the stretch of land which once joined the eastern counties of England to parts of Europe. Nowadays the North Sea covers that whole span, but every so often ancient relics are retrieved, mostly by accident in fishing nets (the first such being a barbed antler tool back in 1931). The name Doggerland comes from the Dogger Bank, which is a large region of sandbanks and shoals in the North Sea, in places no more than fifty feet (about fifteen metres) deep.
So nowadays the sea divides Norfolk and the Netherlands, Lincolnshire and Denmark. And with climate change and slowly rising sea levels, this is unlikely to change. But let’s roll back some ten thousand years, and see the changing picture.
When the land warmed after the last ice age, Britain and Europe were united by a broad low-lying tract of land (this was c. 11000BC). This land was never rugged or mountainous – imagine something like present day East Anglia, Holland, or Denmark, and you have the picture. Two arms of seawater divided this from Scandinavia to the north-east, and Scotland and Northumberland to the north-west. Several rivers – including the Thames, the Seine, the Rhine – flowed into this broad plain, and thence into the Atlantic via what was to become the English Channel.
The land was good for hunting and trapping animals, the margins had fish and shellfish, and when early farmers arrived they found the soil to be fertile. It was, I suspect, a pleasant and welcoming place to be, with a climate becoming gradually milder as, decade by decade, the Ice Age retreated. The sea level rose as the ice melted. In some places, the land sank down as the sheer weight of the glaciers further north was released – this is still happening in the Scilly Isles which, very very slowly, are being submerged. Both factors spelled the end for Doggerland.
By now this huge expanse of territory has completely disappeared. This did not happen overnight – best estimates are that it was all gone a little before 6000BC, so it took around five thousand years to dwindle. The occupants, whether living a hunter-gatherer or settled lifestyle, had many generations to adjust to the change. I suppose they had oral traditions which spoke of how this island used to be attached to the land, or that forest used to extend several days’ journey further north. But within that long span of steady reduction, most likely there were also sudden calamities. A storm surge one winter might have taken away miles of coastline. An autumn flood might have demolished a natural barrier to the water, exposing the lower fields beyond. A series of unusually high tides might turn fresh water meadows to salt marsh. A landslide in Norway (the Storegga Slide), resulting in a tsunami, probably did much to finish the process. All of these things have been seen in the low-lying lands which still border the North Sea.
So the story I want to tell, one day, is the story of the last person to leave Doggerland. Or, more widely, the last community to abandon its shrinking and increasingly boggy surface. What was it like to leave the places, practical and sacred, which their people had moved through for so long? How were they received by those groups already living in the regions around? Did they look back with relief or regret?
Perhaps one day, when I want to switch back from science fiction to ancient history, it’s a story that I will tell.
Today is the third and last post based loosely on upcoming techie stuff I learned about at the recent Microsoft Future Decoded conference here in London. It’s another speculative one this time, focusing on quantum computing, which according to estimates by speakers might be about five years away. But a lot has to happen if that five year figure is at all accurate.
It’s a very technical area, both as regards the underlying maths and the physical implementation, and I don’t intend going far into that. Many groups around the world, both in industry and academia, are actively working on this, hoping to crack both theory and practice. So what’s the deal? Why all the effort?
Conventional computers, of the kind we are familiar with, operate essentially in a linear sequential way. Now, there are ways to fudge this and give a semblance of parallel working. Even on a domestic machine you can run lots of programs at the same time, but at the level of a single computing core you are still performing one thing at a time, and some clever scheduling shares resources between several in-progress tasks. A bigger computer will wire up multiple processors and have vastly more elaborate scheduling, to make the most efficient use of what it’s got. But at the end of the day, present-day logic circuits do one thing at a time.
This puts some tasks out of reach. For example, the security layer that protects your online banking transactions (and such like) relies on a mathematical problem – in essence, factoring the product of two very large prime numbers – which takes an extremely long time to solve. In theory it could be done, but in practice it is impenetrable. Perhaps more interestingly, there are problems across the sciences which are intractable not only on present-day systems, but under any credible speed advance within present-day architecture. It actually doesn’t take much complexity to render a task impossible.
Quantum computing offers a way to actually achieve parallel processing on a massive scale. It relies not on binary true/false logic, but on the probability models which are the foundation of the quantum world. It is as though many different variations of a problem all run simultaneously, each (as it were) in their own little world. It’s a perfect solution for all kinds of problems where you would like to find an optimal solution to a complex situation. So to break our online security systems, a quantum computer would simultaneously pursue many different cracking routes to break in. By doing that, the task becomes solvable. And yes, that is going to need a rethink of how we do internet security. But for today let’s look at a couple of more interesting problems.
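As a rough illustration of the scale involved: for an unstructured search, the best-known quantum algorithm (Grover’s) needs only about the square root of the number of steps a classical brute-force search would take. A quick back-of-envelope calculation, using a 128-bit key purely as an example:

```python
import math

# A 128-bit key has 2^128 possibilities.
key_bits = 128
classical_tries = 2 ** key_bits             # worst-case brute force, one at a time
grover_tries = math.isqrt(classical_tries)  # ~sqrt(N) quantum search queries

# The quantum search needs a factor of 2^64 fewer queries. Note it is a
# quadratic speed-up, not a free lunch: 2^64 queries is still enormous,
# which is one reason the usual remedy is simply to double key sizes.
speedup = classical_tries // grover_tries
```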
First, there’s one from farming, or biochemistry if you prefer. To feed the world, we need lots of nitrogen to make fertiliser. The chemical process to do this commercially is energy-intensive, and nearly 2% of the world’s power goes on this one thing. But… there is a family of plants, the Leguminosae, which fix nitrogen from the air into the soil using nothing more than sunlight and the organic molecules in their roots. They are very varied, from peas and beans down to fodder crops like clover, and up to quite sizeable trees. We don’t yet know exactly how this nitrogen fixing works. We think we know the key biochemical involved – the enzyme nitrogenase – but it’s complicated… too complicated for our best supercomputers to analyse. A quantum computer might solve the problem in short order.
Climate science is another case. There are several computer programs which aim to model what is going on globally. They are fearfully complicated, aiming to include as wide a range as possible of contributing factors, together with their mutual interaction. Once again, the problem is too complicated to solve in a realistic time. So, naturally, each group working on this makes what they regard as appropriate simplifications and approximations. A quantum computer would certainly allow for more factors to be integrated, and would also allow more exploration of the consequences of one action rather than another. We could experiment with what-if models, and find effective ways to deploy limited resources.
So that’s a little of what might be achieved with a quantum computer. To finish this blog post off, what impact might one have on science fiction, and on my own writing in particular? Well, unlike the previous two weeks, my answer here would be “not very much, I think”. Most writers, including myself, simply assume that future computers will be more powerful and more capable than those of today. The exact technical architecture is of less literary importance! Right now it looks as if a quantum computer will only work at extremely low temperatures, not far above absolute zero. So you are talking about sizeable, static installations. If we manage to find or make materials that would let them run at room temperature, that could change, but that’s way more than five years away.
So in my stories, Slate would not be a quantum computer, just a regular one running some very sophisticated software. Now, the main information hub down in London, Khufu, could possibly be such a thing – certainly he’s a better candidate, sitting statically in one place, processing and analysing vast quantities of data, making connections between facts that aren’t at all obvious on the surface. But as regards the story, it hardly matters whether he is one or the other.
So, interested as I am in the development of a quantum computer, I don’t think it will feature in an important way in the world of Far from the Spaceports!
That’s it for today, and indeed for this little series… until next year.
The second part of this quick review of the Future Decoded conference looks at things a little further ahead. This was also going to be the final part, but as there’s a lot of cool stuff to chat about, I’ve decided to add part 3…
So here’s a problem that is a minor one at the moment, but with the potential to grow into a major one. In short, the world has a memory shortage! Already we are generating more bits and bytes than we have the capacity to store. Right now it’s an inconvenience rather than a crisis, but year by year the gap between wish and actuality is growing. If growth in both these areas continues as at present, within a decade we will only be able to store about a third of what we want. A decade or so later that will drop to under one percent.
Think about it on the individual level. You take a short video clip while on holiday. It goes onto your phone. At some stage you back it up to Dropbox, or iCloud, or whatever your favourite provider is. Maybe you keep another copy on your local hard drive. Then you post it to Facebook and Google+. You send it to two different WhatsApp groups and email it to a friend. Maybe you’re really pleased with it and make a YouTube version. You now have ten copies of your 50 MB video… not to mention all the thumbnail images, cached and backup copies saved along the way by these various providers, which you’re almost certainly not aware of and have little control over. Your ten seconds of holiday fun has easily used 1 GB of the world’s supply of memory! For comparison, the entire Bible fits in about 3 MB as plain uncompressed text, and at a wild guess you would use well under that 1 GB to store every last word of the world’s sacred literature. And a lot of us are generating holiday videos these days! Add the cyclists wearing helmet cameras, the cars with dash cams… and so on. We are generating prodigious amounts of imagery.
So one solution is that collectively we get more fussy about cleaning things up. You find yourself deleting the phone version when you’ve transferred it to Dropbox. You decide that a lower resolution copy will do for WhatsApp. Your email provider tells you that attachments will be archived or disposed of according to some schedule. Your blog allows you to reference a YouTube video in a link, rather than uploading yet another copy. Some clever people somewhere work out a better compression algorithm. But… even all these workarounds together will still not be enough to make up for the shortfall, if the projections are right.
Holiday snaps aside, a great deal of this vast growth in memory usage is because of emerging trends in computing. Face and voice recognition, image analysis, and other AI techniques which are now becoming mainstream use a great deal of stored information to train the models ready for use. Regular blog readers will know that I am particularly keen on voice assistants like Alexa. My own Alexa programming doesn’t use much memory, as the skills are quite modest and tolerably well written. But each and every time I make an Alexa request, that call goes off somewhere into the cloud, to convert what I said (the “utterance”) into what I meant (the “intent”). Alexa is pretty good at getting it right, which means that there is a huge amount of voice training data sitting out there being used to build the interpretive models. Exactly the same is true for Siri, Cortana, Google Home, and anyone else’s equivalent. Microsoft call this training area a “data lake”. What’s more, there’s not just one of them, but several, at different global locations to reduce signal lag.
Hopefully that’s given some idea of the problem. Before looking at the idea for a solution that was presented the other day, let’s think what that means for fiction writing. My AI persona Slate happily flits off to the asteroid belt with her human investigative partner Mitnash in Far from the Spaceports. In Timing, they drop back to Mars, and in the forthcoming Authentication Key they will get out to Saturn, but for now let’s stick to the asteroids. That means they’re anywhere from 15 to 30 minutes away from Earth by signal. Now, Slate does from time to time request specific information from the main hub Khufu on Earth, but necessarily this can only be for some detail not locally available. Slate can’t send a request down to London every time Mit says something, just so she can understand it. Trying to chat with up to an hour lag between statements would be seriously frustrating. So she has to carry with her all of the necessary data and software models that she needs for voice comprehension, speech, and defence against hacking, not to mention analysis, reasoning, and the capacity to feel emotion. Presupposing she has the equivalent of a data lake, she has to carry it with her. And that is simply not feasible with today’s technology.
So the research described the other day is exploring the idea of using DNA as the storage medium, rather than a piece of specially constructed silicon. DNA is very efficient at encoding data – after all, a sperm and egg together have all the necessary information to build a person. The problems are how to translate your original data source into the various chemical building blocks along a DNA helix, and conversely how to read it out again at some future time. There’s a publicly available technical paper describing all this. We were shown a short video which had been encoded, stored, and decoded using just this method. But it is fearfully expensive right now, so don’t expect to see a DNA external drive on your computer anytime soon!
The benefits purely in terms of physical space are colossal. The largest British data centre covers the equivalent of about eight soccer grounds (or four cricket pitches), using today’s technology. The largest global one is getting on for ten times that size. With DNA encoding, that all shrinks down to about a matchbox. For storytelling purposes that’s fantastic – Slate really is off to the asteroids and beyond, along with her data lake and plenty of local storage, which now takes up less room and weight than a spare set of underwear for Mit. Current data centres also use about the same amount of power as a small town (though because of judicious choice of technology they are much more ecologically efficient), but we’ll cross the power bridge another time.
However, I suspect that many of us might see ethical issues here. The presenter took great care to tell us that the DNA used was not from anything living, but had been manufactured from scratch for the purpose. No creatures had been harmed in the making of this video. But inevitably you wonder if all researchers would take this stance. Might a future scenario play out that some people are forced to sell – or perhaps donate – their bodies for storage? Putting what might seem a more positive spin on things, wouldn’t it seem convenient to have all your personal data stored, quite literally, on your person, and never entrusted to an external device at all? Right now we are a very long way from either of these possibilities, but it might be good to think about the moral dimensions ahead of time.
Either way, the starting problem – shortage of memory – is a real one, and collectively we need to find some kind of solution…
And for the curious, this is the video which was stored on and retrieved from DNA – regardless of storage method, it’s a fun and clever piece of filming (https://youtu.be/qybUFnY7Y8w)…
This is the first of two posts in which I talk about some of the major things I took away from the recent Future Decoded conference here in London. Each year they try to pick out some tech trends which they reckon will be important in the next few years.
This week’s theme is to do with stuff which is available now, or in the immediate future. And the first topic is assisting users. Approximately one person in six in the world is considered disabled in some way, whether from birth or through accident or illness (according to a recent WHO report). That’s about a billion people in total. Technology ought to be able to assist, but often has failed to do so. Now a variety of assistance technologies have been around for a while – the years-old alt text in images was a step in that direction – but Windows 10 has a whole raft of such support.
Now, I am well aware that lots of people don’t like Win 10 as an operating system, but this showed it at its best. When you get to see a person blind from birth able to use social media, and a lad with cerebral palsy pursuing a career as an author, it doesn’t need a lot of sales hype. Or a programmer who lost use of all four limbs in an accident, writing lines of code live in the presentation using a mixture of Cortana’s voice control plus an on-screen keyboard triggered by eye movement. Not to mention that the face recognition login feature provided his first opportunity for privacy since the accident, as no one else had to know his password.
But the trend goes beyond disabilities of a permanent kind – most of us have what you might call situational limitations at various times. Maybe we’re temporarily bed-ridden through illness. Maybe we’re simply one-handed through carrying an infant around. Whatever the specific reason, all the big tech companies are looking for ways to make such situations more easily managed.
Another big trend was augmented reality using 3D headsets. I suppose most of us think of these as gaming gimmicks, providing another way to escape the demands of life. But going round the exhibition pitches – most by third-party developers rather than Microsoft themselves – stall after stall was showing off the use of headsets in a working context.
Training was one of the big areas, with trainers and students blending reality and virtual image in order to learn skills or be immersed in key situations. We’ve been familiar with the idea of pilots training on flight simulators for years – now that same principle is being applied to medical students and emergency response teams, all the way through to mechanical engineers and carpet-layers. Nobody doubts that a real experience has a visceral quality lacking from what you get from a headset, but it has to be an advantage that trainees have had some exposure to rare but important cases.
This also applies to on-the-job work. A more experienced worker can “drop in” to supervise or enhance the work of a junior one without both of them being physically present. Or a human worker can direct a mechanical tool in hostile environments or disaster zones. Or possible solutions can be tried out without having to make up physical prototypes. You can imagine a kind of super-Skype meeting, with mixed real and virtual attendance. Or a better way to understand a set of data than just dumping it into a spreadsheet – why not treat it as a plot of land you can wander round and explore?
Now most of these have been explored in fiction several times, with both their positive and negative connotations. And I’m sure that a few of these will turn out to be things of the moment which don’t make it into everyday use. And right now the dinky headsets which make it all happen are too expensive to find in every house, or on everyone’s desk at work – unless you have a little over £2500 lying around doing nothing. But a lot of organisations are betting that there’ll be good use for the technology, and I guess the next five years will show us whether they’re right or wrong. Will these things stay as science fiction, or become part of the science of life?
So that’s this week – developments that are near-term and don’t represent a huge change in what we have right now. Next time I’ll be looking at things further ahead, and more speculative…