Category Archives: Software

Future Possibilities 2

The second part of this quick review of the Future Decoded conference looks at things a little further ahead. This was also going to be the final part, but as there’s a lot of cool stuff to chat about, I’ve decided to add part 3…

Prediction of data demand vs supply (IDC.org)

So here’s a problem that is a minor one at the moment, but with the potential to grow into a major one. In short, the world has a memory shortage! Already we are generating more bits and bytes that we would like to store, than we have capacity for. Right now it’s an inconvenience rather than a crisis, but year by year the gap between wish and actuality is growing. If growth in both these areas continues as at present, within a decade we will only be able to store about a third of what we want. A decade or so later that will drop to under one percent.

Think about it on the individual level. You take a short video clip while on holiday. It goes onto your phone. At some stage you back it up in Dropbox, or iCloud, or whatever your favourite provider is. Maybe you keep another copy on your local hard drive. Then you post it to Facebook and Google+. You send it to two different WhatsApp groups and email it to a friend. Maybe you're really pleased with it and make a YouTube version. You now have ten copies of your 50 MB video… not to mention all the thumbnail images, cached and backup copies saved along the way by these various providers, which you're almost certainly not aware of and have little control over. Your ten seconds of holiday fun has easily used 1 GB of the world's supply of memory! For comparison, the entire Bible fits in about 3 MB as plain uncompressed text, and at a wild guess you could store every last word of the world's sacred literature in well under that 1 GB. And a lot of us are generating holiday videos these days! Add in the cyclists wearing helmet cameras, the cars with dash cams… and so on. We are generating prodigious amounts of imagery.
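
For the curious, here's that arithmetic as a few lines of NodeJS – a toy calculation, with the copy count and the allowance for thumbnails and caches assumed for illustration rather than measured:

```javascript
// Back-of-envelope footprint of one holiday video (all figures assumed).
const clipMB = 50;        // original clip size, in megabytes
const fullCopies = 10;    // phone, Dropbox, local drive, Facebook, Google+,
                          // two WhatsApp groups, email, YouTube
const overheadFactor = 2; // rough allowance for thumbnails, caches, backups

const totalMB = clipMB * fullCopies * overheadFactor;
console.log(`Estimated footprint: ${totalMB} MB, roughly 1 GB`);
```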

So one solution is that collectively we get more fussy about cleaning things up. You find yourself deleting the phone version when you’ve transferred it to Dropbox. You decide that a lower resolution copy will do for WhatsApp. Your email provider tells you that attachments will be archived or disposed of according to some schedule. Your blog allows you to reference a YouTube video in a link, rather than uploading yet another copy. Some clever people somewhere work out a better compression algorithm. But… even all these workarounds together will still not be enough to make up for the shortfall, if the projections are right.

Amazon Dot – Active

Holiday snaps aside, a great deal of this vast growth in memory usage is because of emerging trends in computing. Face and voice recognition, image analysis, and other AI techniques which are now becoming mainstream use a great deal of stored information to train the models ready for use. Regular blog readers will know that I am particularly keen on voice assistants like Alexa. My own Alexa programming doesn’t use much memory, as the skills are quite modest and tolerably well written. But each and every time I make an Alexa request, that call goes off somewhere into the cloud, to convert what I said (the “utterance”) into what I meant (the “intent”). Alexa is pretty good at getting it right, which means that there is a huge amount of voice training data sitting out there being used to build the interpretive models. Exactly the same is true for Siri, Cortana, Google Home, and anyone else’s equivalent. Microsoft call this training area a “data lake”. What’s more, there’s not just one of them, but several, at different global locations to reduce signal lag.
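
Alexa's real understanding comes from those trained cloud models, so what follows is only a toy NodeJS sketch of the utterance-to-intent idea – the intent names and keyword lists are invented, and bear no relation to how the actual service works:

```javascript
// Toy utterance-to-intent matcher. Real voice assistants use trained
// statistical models, not keyword lists - this only illustrates the mapping
// from what I said (the "utterance") to what I meant (the "intent").
const intents = {
  GetWeatherIntent: ['weather', 'forecast', 'rain'],
  PlayMusicIntent: ['play', 'music', 'song'],
};

function resolveIntent(utterance) {
  const words = utterance.toLowerCase().split(/\s+/);
  for (const [intent, keywords] of Object.entries(intents)) {
    if (keywords.some((k) => words.includes(k))) {
      return intent;
    }
  }
  return 'FallbackIntent'; // nothing matched - ask the user to rephrase
}

console.log(resolveIntent('will it rain tomorrow')); // GetWeatherIntent
console.log(resolveIntent('play some jazz'));        // PlayMusicIntent
```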

Far from the Spaceports cover

Hopefully that’s given some idea of the problem. Before looking at the idea for a solution that was presented the other day, let’s think what that means for fiction writing.  My AI persona Slate happily flits off to the asteroid belt with her human investigative partner Mitnash in Far from the Spaceports. In Timing, they drop back to Mars, and in the forthcoming Authentication Key they will get out to Saturn, but for now let’s stick to the asteroids. That means they’re anywhere from 15 to 30 minutes away from Earth by signal. Now, Slate does from time to time request specific information from the main hub Khufu in Earth, but necessarily this can only be for some detail not locally available. Slate can’t send a request down to London every time Mit says something, just so she can understand it. Trying to chat with up to an hour lag between statements would be seriously frustrating. So she has to carry with her all of the necessary data and software models that she needs for voice comprehension, speech, and defence against hacking, not to mention analysis, reasoning, and the capacity to feel emotion. Presupposing she has the equivalent of a data lake, she has to carry it with her. And that is simply not feasible with today’s technology.

DNA Schematic (Wikipedia)

So the research described the other day is exploring the idea of using DNA as the storage medium, rather than a piece of specially constructed silicon. DNA is very efficient at encoding data – after all, a sperm and egg together have all the necessary information to build a person. The problems are how to translate your original data source into the various chemical building blocks along a DNA helix, and conversely how to read it out again at some future time. There’s a publicly available technical paper describing all this. We were shown a short video which had been encoded, stored, and decoded using just this method. But it is fearfully expensive right now, so don’t expect to see a DNA external drive on your computer anytime soon!
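
The heart of the encoding idea – two bits per DNA base – can be sketched in a few lines of NodeJS. Bear in mind this is the naive version: the published scheme adds error correction and avoids long runs of the same base, which cause trouble for synthesis and reading.

```javascript
// Naive bits-to-bases sketch: 2 bits per base. Real DNA storage schemes add
// error correction and avoid long homopolymer runs - this shows the core idea.
const BASES = ['A', 'C', 'G', 'T']; // 00, 01, 10, 11

function encode(buffer) {
  let strand = '';
  for (const byte of buffer) {
    for (let shift = 6; shift >= 0; shift -= 2) {
      strand += BASES[(byte >> shift) & 0b11]; // four bases per byte
    }
  }
  return strand;
}

function decode(strand) {
  const bytes = [];
  for (let i = 0; i < strand.length; i += 4) {
    let byte = 0;
    for (const base of strand.slice(i, i + 4)) {
      byte = (byte << 2) | BASES.indexOf(base);
    }
    bytes.push(byte);
  }
  return Buffer.from(bytes);
}

const strand = encode(Buffer.from('Hi'));
console.log(strand);                    // CAGACGGC
console.log(decode(strand).toString()); // Hi
```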

Microsoft data centre (ZDNet/Microsoft)

The benefits purely in terms of physical space are colossal. Using today's technology, the largest British data centre covers the equivalent of about eight soccer grounds (or four cricket pitches). The largest in the world is getting on for ten times that size. With DNA encoding, all of that shrinks down to about a matchbox. For storytelling purposes that's fantastic – Slate really is off to the asteroids and beyond, data lake and all, in local storage that takes up less room and weight than a spare set of underwear for Mit. Current data centres also use about the same amount of power as a small town (though, thanks to judicious choices of technology, they are much more ecologically efficient than that suggests), but we'll cross the power bridge another time.

However, I suspect that many of us might see ethical issues here. The presenter took great care to tell us that the DNA used was not from anything living, but had been manufactured from scratch for the purpose. No creatures had been harmed in the making of this video. But inevitably you wonder whether all researchers would take this stance. Might a future scenario play out in which some people are forced to sell – or perhaps donate – their bodies for storage? Putting what might seem a more positive spin on things, wouldn't it be convenient to have all your personal data stored, quite literally, on your person, and never entrusted to an external device at all? Right now we are a very long way from either of these possibilities, but it might be good to think about the moral dimensions ahead of time.

Either way, the starting problem – shortage of memory – is a real one, and collectively we need to find some kind of solution…

And for the curious, this is the video which was stored on and retrieved from DNA – regardless of storage method, it’s a fun and clever piece of filming (https://youtu.be/qybUFnY7Y8w)…


Future possibilities 1

This is the first of two posts in which I talk about some of the major things I took away from the recent Future Decoded conference here in London. Each year they try to pick out some tech trends which they reckon will be important in the next few years.

Disability statistics by age and gender (Eurostat)

This week’s theme is to do with stuff which is available now, or in the immediate future. And the first topic is assisting users. Approximately one person in six in the world is considered disabled in some way, whether from birth or through accident or illness (according to a recent WHO report). That’s about about a billion people in total. Technology ought to be able to assist, but often has failed to do so. Now a variety of assistance technologies have been around for a while – the years-old alt text in images was a step in that direction – but Windows 10 has a whole raft of such support.

Now, I am well aware that lots of people don't like Win 10 as an operating system, but this showed it at its best. When you get to see a person blind from birth able to use social media, and a lad with cerebral palsy pursuing a career as an author, it doesn't need a lot of sales hype. Or a programmer who lost the use of all four limbs in an accident, writing lines of code live in the presentation using a mixture of Cortana's voice control plus an on-screen keyboard triggered by eye movement. Not to mention that the face recognition login feature provided his first opportunity for privacy since the accident, as no one else had to know his password.

But the trend goes beyond disabilities of a permanent kind – most of us have what you might call situational limitations at various times. Maybe we’re temporarily bed-ridden through illness. Maybe we’re simply one-handed through carrying an infant around. Whatever the specific reason, all the big tech companies are looking for ways to make such situations more easily managed.

Another big trend was augmented reality using 3D headsets. I suppose most of us think of these as gaming gimmicks, providing another way to escape the demands of life. But going round the exhibition pitches – most by third-party developers rather than Microsoft themselves – stall after stall was showing off the use of headsets in a working context.

Medical training (Microsoft.com and Case Western Reserve University)

Training was one of the big areas, with trainers and students blending reality and virtual image in order to learn skills or be immersed in key situations. We’ve been familiar with the idea of pilots training on flight simulators for years – now that same principle is being applied to medical students and emergency response teams, all the way through to mechanical engineers and carpet-layers. Nobody doubts that a real experience has a visceral quality lacking from what you get from a headset, but it has to be an advantage that trainees have had some exposure to rare but important cases.

Assembly line with HoloLens (Microsoft.com)

This also applies to on-the-job work. A more experienced worker can “drop in” to supervise or enhance the work of a junior one without both of them being physically present. Or a human worker can direct a mechanical tool in hostile environments or disaster zones. Or possible solutions can be tried out without having to make up physical prototypes. You can imagine a kind of super-Skype meeting, with mixed real and virtual attendance. Or a better way to understand a set of data than just dumping it into a spreadsheet – why not treat it as a plot of land you can wander round and explore?

Cover, The Naked Sun (Goodreads)

Now most of these have been explored in fiction several times, with both their positive and negative connotations. And I’m sure that a few of these will turn out to be things of the moment which don’t make it into everyday use. And right now the dinky headsets which make it all happen are too expensive to find in every house, or on everyone’s desk at work – unless you have a little over £2500 lying around doing nothing. But a lot of organisations are betting that there’ll be good use for the technology, and I guess the next five years will show us whether they’re right or wrong. Will these things stay as science fiction, or become part of the science of life?

So that’s this week – developments that are near-term and don’t represent a huge change in what we have right now. Next time I’ll be looking at things further ahead, and more speculative…


Left behind by events, part 3

This is the third and final part of Left Behind by Events, in which I take a look at my own futuristic writing and try to guess which bits I will have got utterly wrong when somebody looks back at it from a future perspective! It's also the first of a few blogs in which I will talk about some of the impressions of the technical near future that I picked up at the annual Microsoft Future Decoded conference the other day.

Amazon Dot – Active

So I am tolerably confident about the development of AI. We don't yet have what I call “personas” with autonomy, emotion, and gender. I'm not counting the pseudo-gender produced by selecting a male or female voice, though even that simple choice persuades many of us – how many people are pedantic enough to call Alexa “it” rather than “she”? But at the rate of advance of the relevant technologies, I'm confident that we will get there.

I’m equally confident, being an optimistic guy, that we’ll develop better, faster space travel, and have settlements of various sizes on asteroids and moons. The ion drive I posit is one definite possibility: the Dawn asteroid probe already uses this system, though at a hugely smaller rate of acceleration than what I’m looking for. The Hermes, which features in both the book and film The Martian, also employs this drive type. If some other technology becomes available, the stories would be unchanged – the crucial point is that intra-solar-system travel takes weeks rather than months.

The Sting (Pinterest)

I am totally convinced that financial crime will take place! One of the ways we try to tackle it on Earth is to share information faster, so that criminals cannot take advantage of lags in the system to insert falsehoods. But out in the solar system, time lags are inescapable. Mars is between 4 and 24 minutes from Earth in terms of a radio or light signal, and nothing will change that unless somebody invents faster-than-light signalling – which is not in range of my future vision. So the possibility of “information friction” will increase as we spread our occupancy wider. Anywhere that there are delays in the system, there is the possibility of fraud… as used to great effect in The Sting.

Something I have not factored in at all is biological advance. I don't have cyborgs, or genetically enhanced people, or such things. But I suspect that such developments will occur well within the time horizon of Far from the Spaceports. Biology isn't my strong suit, so I haven't written about this. There's a background assumption that illness isn't a serious problem in this future world, but I haven't explored how that might come about, or what other kinds of medical change might go hand-in-hand with it. So this is almost certainly going to be a miss on my part.

Moving on to points of contact with the conference, there is the question of my personas' autonomy. Right now, the whole of our current generation of intelligent assistants – Alexa, Siri, Cortana, Google Home and so on – depend utterly on a reliable internet connection and a whole raft of cloud-based software to function. No internet or no cloud connection = no Alexa.

This is clearly inadequate for a persona like Slate heading out to the asteroid belt! Mitnash is obviously not going to wait patiently for half an hour or so between utterances in a conversation. For this to work, the software infrastructure that imparts intelligence to a persona has to travel along with it. This need is already emerging – and being addressed – right now. I guess most of us are familiar with the idea of the Cloud. Your Gmail account, your Dropbox files, your iCloud pictures all exist somewhere out there… but you neither know nor care where exactly they live. All you care is that you can get to them when you want.

A male snow leopard (Wikipedia)

But with the emerging “internet of things” that is having to change. Let's say that a wildlife programme puts a trail camera up in the mountains somewhere in order to get pictures of a snow leopard. They want to leave it there for maybe four months and then collect it again. It's well out of wifi range. In those four months it will capture, say, 10,000 short videos, almost all of which will not be of snow leopards. There will be mountain goats, foxes, mice, leaves, moving splashes of sunshine, flurries of rain or snow… maybe the odd yeti. But the memory stick will only hold, say, 500 video clips. So what do you do? Throw away everything that arrives after it gets full? Overwrite the oldest clips when you need to make space? Arrange for a dangerous and disruptive resupply trip by your mountaineer crew?

Or… and this is the choice being pursued at the moment… put some intelligence in your camera to try to weed out the non-snow-leopard pictures. Your camera is no longer a dumb picture-taking device. That also makes your life easier when you have recovered the camera and are trying to scan through the contents. Even going through my Grasmere badger-cam vids every couple of weeks involves a lot of deleting scenes of waving leaves!

So this idea is now being called the Cloud Edge. You put some processing power and cleverness out in your peripheral devices, and only move what you really need into the Cloud itself. Some of the time, your little remote widgets can make up their own minds what to do. You can, so I am told, buy a USB stick with a trainable neural network on it for sifting images (or other similar tasks) for well under £100. Now, this is a far cry from an independently autonomous persona able to zip off to the asteroid belt, but it shows that the necessary technologies are already being tackled.
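
By way of illustration, here's a NodeJS sketch of that trail-camera logic. The classifier is a made-up stand-in for whatever model the camera or USB stick actually runs – the point is the shape of the decision, not the detail:

```javascript
// Cloud Edge sketch: classify clips on the device, keep only probable hits.
// classifyClip is a hypothetical stand-in for an on-device neural network.
const KEEP_THRESHOLD = 0.6; // assumed confidence cutoff

async function classifyClip(clip) {
  // In reality: run the clip's frames through a local model (for example via
  // one of those USB inference sticks) and return P(snow leopard).
  return clip.score; // stubbed for illustration
}

async function filterClips(clips, storageSlots) {
  const kept = [];
  for (const clip of clips) {
    const pLeopard = await classifyClip(clip);
    if (pLeopard >= KEEP_THRESHOLD && kept.length < storageSlots) {
      kept.push(clip); // worth the scarce memory
    }
    // otherwise discard: goats, mice, waving leaves... maybe the odd yeti
  }
  return kept;
}

filterClips([{ id: 1, score: 0.9 }, { id: 2, score: 0.1 }, { id: 3, score: 0.7 }], 500)
  .then((kept) => console.log(kept.map((c) => c.id))); // [ 1, 3 ]
```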

Artist's impression of Dawn in orbit (NASA/JPL)

I’ve been deliberately vague about how far into the future Far from the Spaceports, Timing, and the sequels in preparation are set. If I had to pick a time I’d say somewhere around the one or two century mark. Although science fact notoriously catches up with science fiction faster than authors imagine, I don’t expect to see much of this happening in my lifetime (which is a pity, really, as I’d love to converse with a real Slate). I’d like to think that humanity from one part of the globe or another would have settled bases on other planets, moons, or asteroids while I’m still here to see them, and as regular readers will know, I am very excited about where AI is going. But a century to reach the level of maturity of off-Earth habitats that I propose seems, if anything, over-optimistic.

That’s it for today – over the next few weeks I’ll be talking about other fun things I learned…

A review at The Review

I am writing in haste today, as in a few minutes I am off to a technology conference – the annual Microsoft Future Decoded event, held out in the old Docklands area. Last year this was well worth going to, for both the scheduled presentations and the informal chats at booths and stalls. As usual, my main interest is in AI, and there's a fair bit on offer. No doubt I shall relate anything of wider interest in the coming weeks.

Cover – Queen of a Distant Hive

So the main content today is to draw attention to The Review, and my particular review there of Theresa Tomlinson's Queen of a Distant Hive. It's set in 7th-century Britain, when the land was still divided into several different kingdoms coexisting in uneasy truce. The novel is a sequel to A Swarming of Bees, with some overlap of characters, but it can be read separately. I thoroughly enjoyed this book (well, both books), as you can discover by reading the review. Moreover, Theresa is providing a copy as a giveaway prize, and all you have to do to enter is leave a comment at The Review blog page or the linked Facebook page.

That’s it for today…

Can handwriting survive?

I’ve been thinking for a little while now about reading and writing, and decided to convert those thoughts into a blog post. I used to reckon that reading and writing were two sides of the same coin. We teach them at broadly the same time, and it seems natural with a child to talk through the physical process of making a letter shape at the same time as learning to recognise it on a page.

Cartouche of Rameses at Luxor

But lately, I’ve been reconsidering this. My thinking actually goes back several years to when I was studying ancient Egyptian. It is generally understood that alongside the scribes of Egypt – who had a good command of hieroglyphic and hieratic writing, plus Akkadian cuneiform and a few other written scripts and a whole lot of technical knowledge besides – there was a much ĺarger group of people who could read reasonably well, but not write with fluency or competence. A few particularly common signs, like the cartouche of the current pharaoh, or the major deity names, would be very widely recognised even by people who were generally illiterate. You see this same process happening with tourists today, who start to spot common groups of Egyptian signs long before they could dream of constructing a sentence.

Hieratic Scribal Exercise

The ability to write is far more than just knowing letter shapes. You need a wide enough vocabulary to select the right word among several choices, and to know how each word changes with past or future tense, with the number of people involved, or with gender. You need background knowledge of the subject. You need to understand the conventions of the intended audience so as to convey the right meaning. In short, learning to write is more demanding than learning to read (and I'm talking about the production of writing here, not the quality of the finished product).

Roll forward to the modern day, and we are facing a slightly different kind of question. The ability to read is essential for getting, and thriving in, most jobs – and for accessing information, buying various goods, or just navigating from place to place. I'm sure it is possible to live in today's England without being able to read, but it will be difficult, and all sorts of avenues are closed to that person.

But the ability to write – by which I mean to make handwriting – is, I think, much more in doubt. Right now I’m constructing this blog post in my lunch hour on a mobile phone, tapping little illuminated areas of the screen to generate the letters. In a little while I’ll go back to my desk, and enter characters by pressing down little bits of plastic on a keyboard. Chances are I’ll be writing some computer code (in the C# or NodeJS computer languages, if you’re curious) but if I have to send a message to a colleague I’ll use the same mechanical process.

Amazon Dot – Active

Then again, some of my friends use dictation software to “write” emails and letters, and then do a small amount of corrective work at the end. They tell me that dictation technology has advanced to the stage where only minor fix-ups are needed. And, as most blog readers will know, I’m enthusiastic about Alexa for controlling functionality by voice. Although writing text of any great length is not yet feasible on that platform, my guess is that it won’t be long until this becomes real.

All of this means that while the act of reading will most likely remain crucial for a long time to come, maybe this won't be true of writing in the conventional sense. Speaking personally, handwriting is already something I do only for hastily scribbled notes or postcards to older relatives. Or occasionally to sign something. The readability of my handwriting is substantially lower than it used to be, purely because I don't exercise it much (and by pure chance I heard several of my work colleagues saying the same thing today). Do I need handwriting in modern life? Not really, not for anything crucial.

Some devices

I don’t think it’s just me. On my commuting journeys I see people reading all kinds of things – newspapers, books, magazines, Kindles, phones, tablets and so on. I really cannot remember the last time I saw somebody reading a piece of hand-written material on the tube.

Now, to set against that, I have friends and relatives for whom the act of writing is still important. They would say that the nature of the writing surface and the writing implement – pencil, biro, fountain pen – are important ingredients, and that bodily engagement with the process conveys something beyond the simple production of letters. Emphasis and emotion are easier to impart – they say – when you are personally fashioning the outcome. To me, this seems simply a temporary problem of the tools we are using, but we shall see.

Looking ahead, I cannot imagine a time when reading skills won't be necessary – there are far too many situations where you have to pore over things in detail, review what was written a few chapters back, compare one thing against another, or just enjoy the artistry with which the text has been put together. Just to recognise which letter to tap or click requires that I be able to read. But handwriting? I'm not at all sure this will survive much longer.

Perhaps a time will come when teaching institutions will not consider it worthwhile to invest long periods of time in getting children's handwriting to an acceptable standard – after all, pieces of quality writing can be generated by several other means.

Quill pen device for tablet

Mostly about YouTube

Just a short post today to highlight a YouTube video based around one of the Polly conversations from Timing that I have been talking about recently. This one is of Mitnash, Slate, Parvati and Chandrika talking on board Parvati's spaceship, The Parakeet, en route to Phobos. The subject of conversation is the recent wreck of Selif's ship on Tean, one of the smaller asteroids in the Scilly Isles group…

The link is: https://youtu.be/Uv5L0yMKaT0

While we’re in YouTube, here is the link to the conversation with Alexa about Timing… https://youtu.be/zLHZSOF_9xo

It’s slow work, but gradually all these various conversations and readings will get added to YouTube and other video sharing sites.

Bugs, faults, and writing

Kindle Cover – Half Sick of Shadows

Today’s blog looks at bugs – the little things in a system that can go so very wrong. But before that – and entirely unrelated – I should mention that Half Sick of Shadows is now available in paperback form as well as Kindle. You can find the paperback at Amazon UK link, Amazon US link, and every other Amazon worldwide site you favour. So whichever format you prefer, it’s there for you.

So, bugs. In my day job I have to constantly think about what can go wrong with a system, in both small and large ways. No software developer starts out intending to write a bug – they appear, as if by magic, in systems that had been considered thoroughly planned out and implemented. This is just as true of hacking software, viruses and the like, as it is of what you might call positively motivated programs. It’s ironic really – snippets of code designed to take advantage of flaws in regular software are themselves traced and blocked because of their own flaws.

Cover – I, Robot (Goodreads)

But back to the practice of QA – finding the problems and faults in a system thought to be correct. You could liken it, without too much of a stretch, to the process of writing. Authors take a situation, or a relationship, or a society, and find the unexpected weak points in it. Isaac Asimov was particularly adept at doing this in his I, Robot series of stories. At the outset he postulated three simple guidelines which all his robots had to follow – guidelines which rapidly caught on with much wider audiences as the “Three Laws of Robotics”. These three laws seemed entirely foolproof, but proved themselves to be a fertile ground for storytelling as he came up with one logical contradiction after another!

But it’s not just in coding software that bugs appear. Wagon wheels used to fall off axles, and I am told that the root cause was that the design was simply not very good. Road layouts change, and end up causing more delays than they resolve. Mugs and jugs spill drink as you try to pour, despite tens of thousands of years of practice making them. And I guess we have all come across “Friday afternoon” cars, tools, cooking pans and so on.

1947 bug found and taped to the engineering logbook (Wikipedia)

Bugs can be introduced in lots of places. Somebody thinks they've thought up a cool design, but hasn't considered several important features. Somebody thinks they've adequately explained how to turn a design into a real thing, but their explanation is missing a vital step or two – how many of us have foundered on this while assembling flat-pack furniture? Somebody reads a perfectly clear explanation, but skips over bits which they think they don't need. Somebody doesn't quite have the right tool, or the right level of skill, and ploughs on with whatever they have. Somebody realises that a rare combination of factors – what we call an edge case, or corner case – has not been covered in the design, and guesses how it should be tackled rather than going back to the designer. Somebody adds a new feature, but in doing so breaks existing functionality which used to work. Somebody makes a commercial decision to release a product before it's actually ready (as a techie, I find this one particularly frustrating!).

And then you get to actual users. So many systems would work really well if it wasn’t for end-users! People will insist on using the gadget in ways that were never anticipated, or trying out combinations of things that were never thought about. A feature originally intended for use in one way gets pressed into service for something entirely different. People don’t provide input data in the way they’re supposed to, or they don’t stick to the guidelines about how the system is intended to work – and very few of us read the guidelines in the first place!

Timing Kindle cover

All of which have direct analogies in writing. Some of my books are indeed focused on software, and in particular the murky business of exploiting software for purposes of fraud. That world is full of flaws and failures, of the misuse of systems in both accidental and deliberate ways. But any book – past, present or future – is much the same. A historical novel might explore how a battle is lost because of miscommunication, human failings, or simply bad timing. Poor judgement leads to stories in any age. Friction in human relationships is a perennial field of study. So the two worlds I move in, of working life and leisure, are not really so far apart.

Now, engineering disciplines – including software engineering – have codes and guidelines intended to identify bugs at an early stage, before they get into the real world of users. The more critical the system, the more stringent the testing. If you write a mobile phone game, the testing threshold is very low! If you write software that controls an aircraft in flight, you have to satisfy all kinds of regulatory tests to show that your product is fit for purpose. But it's a fair bet that any system at all has bugs in it, just waiting to pop out at an inopportune moment.

As regards writing, you could liken editing to the process of QA. The editor aims to spot slips in the writing – whether simply spelling and grammar, or else more subtle issues of style or viewpoint – and highlight them before the book reaches the general public. We all know that editing varies hugely, whoever carries it out. A friend of mine has recently been disappointed by the poor quality of editing by a professional firm – they didn’t find anywhere near all the bugs that were present, and seem to have introduced a few of their own in the process. But just as no software system can honestly claim to be bug-free, I dare say that no book is entirely without flaw of one kind or another.

Cumbrian voice skills and Martian course corrections

Grasmere Lake

My first piece of news today is by way of celebration that I have been getting some Alexa voice skills active on the Amazon store. These can now be enabled on any of Amazon's Alexa-enabled devices, such as the Dot or Echo. One of these skills has to do with The Review blog, in that it will list out and read the opening lines of the last few posts there (along with a couple of other blogs I'm involved with). So if you're interested in a new way to access blogs, and you've got a suitable piece of equipment, browse along to the Alexa skills page and check out “Blog Reader”. I'll be adding other blogs as time goes by.

Cumbria Events Logo

The second publicly available skill so far relates to my geographical love for England's Lake District. Called “Cumbria Events”, this skill identifies upcoming events from the Visit Cumbria web site, and will read them out for the interested user. You can expect other skills to do with both writing and Cumbria to appear in time as I put them together. It's a pity that Alexa can't be persuaded to use a Cumbrian accent, but to date that is just not possible. Also, the skills are not yet available on the Amazon US site, so far as I know, but that should change before too long.

Amazon Dot – Active

In the process I’ve discovered that writing skills for Alexa is a lot of fun! Like any other programming, you have to think about how people are going to use your piece of work, but unlike much of what I’ve done over the years, you can’t force the user to interact in a particular way. They can say unexpected things, phrase the same request in any of several ways, and so on. Alexa’s current limitation of about 8 seconds of comprehension favours a conversational approach in which the dialogue is kept open for additional requests. The female-gendered persona of my own science fiction writing, Slate, is totally conversational when she wants to be.

It all makes for a fascinating study of the current state of the art of AI. I feel that if we can crack unstructured, open-ended conversation from a device – with all of the subtleties and nuances that go along with speech – then it will be hard to say that a machine cannot be intelligent. Alexa is a very long way from that just now – you reach the constraints and limitations far too early. But even accepting all that, it's exciting that an easily available consumer device has so much capability, and is so easy to extend with new capabilities.

Artist's impression, MAVEN and Mars (NASA/JPL)

But while all that was going on, a couple of hundred million kilometres away NASA ordered a course correction for the Mars MAVEN orbiter. This spacecraft, which has been in orbit for the last couple of years, was never designed to return splendid pictures. Instead, its focus is the Martian atmosphere, and the way this is affected by solar radiation of various kinds. As such, it has provided a great deal of insight into Martian history. So MAVEN was instructed to carry out a small engine burn to keep it well clear of the moon Phobos. Normally they are well separated, but in about a week's time they would have come within a few seconds of one another. This was considered too risky, so the boost ensures that they won't now be too close.

Now this attracted my attention since Phobos plays a major part in Timing – it's right there on the cover, in fact. In the time-frame of Timing, there's a small settlement on Phobos, which is visited by the main characters Mitnash and Slate as they unravel a financial mystery. This moon is a pretty small object, shaped like a rugby ball about 22 km long and about 17 or 18 km across its girth, so my first reaction was to think what bad luck it was that MAVEN should be anywhere near Phobos. But in fact MAVEN is in a very elongated orbit to give a range of science measurements, so every now and again its orbit crosses that of Phobos – hence the precautions. This manoeuvre is expected to be the last one necessary for a very long time, given the orbital movements of both objects. So we shall continue getting atmospheric observations for a long while to come.

Timing Kindle cover

Who is Alexa, where is she?

Hephaestus at his forge (The Louvre, Wiki)

Since as far back as written records go – and probably well before that – we humans have imagined artificial life. Sometimes this has been mechanical, technological, like the Greek tales of Hephaestus’ automata, who assisted him at his metalwork. Sometimes it has been magical or spiritual, like the Hebrew golem, or the simulacra of Renaissance philosophy. But either way, we have both dreamed of and feared the presence of living things which have been made, rather than evolved or created.

The Terminator film (Wiki)

Modern science fiction and fantasy have continued this habit. Fantasy has often seen these made things as intrusive and wicked. In Tolkien's world, the manufactured orcs and trolls (made in mockery of elves and ents) hate their original counterparts, and try to spoil the natural order. Science fiction has positioned artificial life at both ends of the moral spectrum. Terminator and Alien saw robots as amoral and destructive, with their own agenda frequently hostile to humanity. Asimov's writing presented them as a largely positive influence, governed by a moral framework that compelled them to pursue the best interests of people.

But either way, artificial life has been usually conceived as self-contained. In all of the above examples, the intelligence of the robots or manufactured beings went about with them. They might well call on outside information stores – just like a person might ask a friend or visit a library – but they were autonomous.

Amazon Dot – Active

Yet the latest crop of virtual assistants that are emerging here and now – Alexa, Siri, Cortana and the rest – are quite the opposite. For sure, you interact with a gadget, whether a computer, phone, or dedicated device, but that is only an access point, not the real thing. Alexa does not live inside the Amazon Dot. The pattern of communication is more like when we use a phone to talk to another person – we use the device at hand, but we don’t think that our friend is inside it. At least, I hope we don’t…

So where are Alexa and her friends? When you ask for some information, buy something, book a taxi, or whatever, your request goes off across cyberspace to Amazon's servers to interpret the request. Maybe it can be handled immediately, but more likely there will be some additional web calls necessary to track down what you want. All of that is collated and sent back down to your local device, and you get to hear the answer. So the short interval between request and response has been filled with multiple web messages to find out what you wanted to know – plus a whole wrapper of security details to make sure you were entitled to find that out in the first place. The internet is a busy place…
Summary of Alexa Interactions

So part of what I call Alexa is shared with every other Alexa instance on the planet, in a sort of common pool of knowledge. This means that as language capabilities are added or upgraded, they can be rolled out to every Alexa at the same time. Right now Alexa speaks UK and US English, and German. Quite possibly when I wake up tomorrow other languages will have been added to her repertoire – Chinese, maybe, or Hindi. That would be fun.

But other parts of Alexa are specific to my particular Alexa, like the skills I have enabled, the books and music I can access, and a few features, like improved phrase recognition, that I have trained myself. Annoyingly, there are national differences as well – an American Alexa can access the user's Kindle library, but British Alexas can't. And finally, the voice skills that I am currently coding are only available on my Alexa, until the time comes to release them publicly.

Amazon Dot – Inactive

So Alexa is partly individual, and partly a community being. Which, when you think about it, is very like us humans. We are also partly individual and partly communal, though the individual part is a considerably higher proportion of our whole self than it is for Alexa. But the principle of blending personal and social identities into a single being is true both for humans and the current crop of virtual assistants.

So what are the drawbacks of this? The main one is simply that of connectivity. If I have no internet connection, Alexa can’t do very much at all. The speech recognition bit, the selection of skills and entitlements, the gathering of information from different places into a single answer – all of these things will only work if those remote links can be made. So if my connection is out of action, so is Alexa. Or if I’m on a train journey in one of those many places where UK mobile coverage is poor.

Timing Kindle cover

There’s also a longer term problem, which will need to be solved as and when we start moving away from planet Earth on a regular basis. While I’m on Earth, or on the International Space Station for that matter, I’m never more than a tiny fraction of a second away from my internet destination. Even with all the other lags in the system, that’s not a problem. But, as readers of Far from the Spaceports or Timing will know, distance away from Earth means signal lag. If I’m on Mars, Earth is anywhere from about 4 to nearly 13 minutes away. If I go out to Jupiter, that lag becomes at least half an hour. A gap in Alexa’s response time of that long is just not realistic for Slate and the other virtual personas of my fiction, whose human companions expect chit-chat on the same kind of timescale as human conversation.  The code to understand language and all the rest has to be closer at hand.

So at some point down the generations between Alexa and Slate, we have to get the balance between individual and collective shifted more back towards the individual. What that means in terms of hardware and software is an open problem at the moment, but it’s one that needs to be solved sometime.

Hacking, QA, and software bugs

Swordfish film – Wiki

A few weeks ago I wrote about software development and hacking, and this is a loose follow-up. The image of hackers presented in films – Swordfish is a fair example, or GoldenEye – is of rather scruffy individuals who type incredibly quickly, keyboard at arm's length, undeterred by all kinds of enticing distractions around them.

But most often, a successful hack is the result of careful analysis of some existing code, and a good dollop of insight into what kinds of precautions developers forget to take. In that, it shares a great deal in common with my own trade of QA. Effective software testing is not really about repeating hundreds of test cases which regularly pass – there are automated ways of doing those – it's about finding the odd situations where proper execution fails. This might be because some developer has copied and pasted the wrong code, but it's much more often because some rare but important set of circumstances was overlooked.

Trojan horse illustration (Wiki)

Missing values, extra-long pieces of text, duplicate entries where only one was expected, dates in weird formats – all these and many more keep us QA folk in work. And problems can creep in during the whole life of a product, not just at the start. Every time some change is carried out to a piece of software, there is the risk of breaking some existing behaviour, or introducing some new vulnerability which can be exploited by somebody.
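
In day-to-day QA, that list translates directly into test data. Here's a minimal NodeJS sketch of the idea – the parseDate function under test is a made-up example:

```javascript
// Table-driven edge cases for a hypothetical parseDate() under test.
function parseDate(text) {
  // Naive implementation: accepts only ISO yyyy-mm-dd.
  const m = /^(\d{4})-(\d{2})-(\d{2})$/.exec((text ?? '').trim());
  if (!m) return null;
  const date = new Date(`${m[0]}T00:00:00Z`);
  return Number.isNaN(date.getTime()) ? null : date;
}

// The interesting test data is the awkward stuff, not the happy path.
const edgeCases = [
  '',                      // missing value
  '   ',                   // whitespace only
  '31/12/2017',            // weird (non-ISO) format
  '2017-02-30',            // impossible date
  '2017-12-31 2017-12-31', // duplicate entries where one was expected
  '2017-12-31'.repeat(99), // extra-long input
  null,                    // no input at all
];

for (const input of edgeCases) {
  const result = parseDate(input);
  console.log(`${JSON.stringify(String(input).slice(0, 24))} -> ${result ? result.toISOString() : 'rejected'}`);
}
```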

It has been said that a great many of these things persist through laziness. One particular hack exploit – “SQL injection” – has been around for something over 15 years, in essentially unchanged form. You would think that by now defences would be so automatic that it would no longer be an issue. But it is, and systems still fall prey to a relatively simple trick. I have worked with a lot of different computer languages, and find that pretty much the same problems turn up in any of them. As computer languages get more sophisticated and more robust, we expect them to do more interesting and more complicated things.
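
For anyone who hasn't met it, SQL injection in miniature looks like this – a NodeJS sketch with an invented users table, showing both the trick and the long-established defence of parameterised queries:

```javascript
// SQL injection in miniature. A login check builds its query by pasting
// user input straight into the SQL string:
const name = "anyone' OR '1'='1"; // attacker-supplied "username"

const vulnerable = `SELECT * FROM users WHERE name = '${name}'`;
console.log(vulnerable);
// => SELECT * FROM users WHERE name = 'anyone' OR '1'='1'
// The WHERE clause is now always true, so every row matches.

// The defence, essentially unchanged for those 15+ years: parameterised
// queries, where the driver keeps data strictly separate from the SQL text.
// The shape, in sqlite3-style placeholders (any mainstream driver offers
// an equivalent):
//   db.get('SELECT * FROM users WHERE name = ?', [name], callback);
// Here the quote characters inside `name` are just data, never SQL syntax.
```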

Estimated cost of data breaches in Germany (Wiki)

QA and hacking are at different parts of a spectrum, and a fair proportion of hackery goes on specifically to help firms and charities find weaknesses in their own systems. The legal distinction of when an activity crosses a line has to do with intention of malice, though a number of governments take a much stricter line where their own systems are concerned.

What has this to do with fiction? Well, Mitnash and Slate spend a lot of their time tracking down and defending against hacking in the area of finance. Their added complication is that the physical locations they travel to are scattered all around the solar system, with journey times of weeks or months, and signal times of hours. It is interesting to think about how hacking – and the defence against it – might evolve in such a situation.

In Timing, due for release in the late summer or early autumn, they are first sent to Jupiter to resolve a minor issue. It doesn’t seem very interesting or important to them. But then a much larger and more serious matter intrudes. To their dismay, the hackers – malicious ones in this case – have designed a new form of attack which our two heroes don’t really understand. They need help, and aren’t very sure they can trust their new-found helper.

To finish with, I can’t resist adding one of NASA’s pieces of artwork concerning the Juno probe, now successfully in orbit around Jupiter. It’s a great achievement, and we can look forward to some great science emerging from it.

Juno at Jupiter – NASA/JPL