Category Archives: Science

Can handwriting survive?

I’ve been thinking for a little while now about reading and writing, and decided to convert those thoughts into a blog post. I used to reckon that reading and writing were two sides of the same coin. We teach them at broadly the same time, and it seems natural with a child to talk through the physical process of making a letter shape at the same time as learning to recognise it on a page.

Cartouche of Rameses at Luxor

But lately, I’ve been reconsidering this. My thinking actually goes back several years to when I was studying ancient Egyptian. It is generally understood that alongside the scribes of Egypt – who had a good command of hieroglyphic and hieratic writing, plus Akkadian cuneiform and a few other written scripts and a whole lot of technical knowledge besides – there was a much larger group of people who could read reasonably well, but not write with fluency or competence. A few particularly common signs, like the cartouche of the current pharaoh, or the major deity names, would be very widely recognised even by people who were generally illiterate. You see this same process happening with tourists today, who start to spot common groups of Egyptian signs long before they could dream of constructing a sentence.

Hieratic Scribal Exercise

The ability to write is far more than just knowing letter shapes. You need a wide enough vocabulary to select the right word among several choices, and to know how to inflect each word for tense, number, or gender. You need background knowledge of the subject. You need to understand the conventions of the intended audience so as to convey the right meaning. In short, learning to write is more demanding than learning to read (and I’m talking about the production of writing here, not the quality of the finished product).

Roll forward to the modern day, and we are facing a slightly different kind of question. The ability to read is essential for getting, and thriving in, most jobs – and for accessing information, buying various goods, or just navigating from place to place. I’m sure it is possible to live in today’s England without being able to read, but it will be difficult, and all sorts of avenues are closed to that person.

But the ability to write – by which I mean to make handwriting – is, I think, much more in doubt. Right now I’m constructing this blog post in my lunch hour on a mobile phone, tapping little illuminated areas of the screen to generate the letters. In a little while I’ll go back to my desk, and enter characters by pressing down little bits of plastic on a keyboard. Chances are I’ll be writing some computer code (in the C# or NodeJS computer languages, if you’re curious) but if I have to send a message to a colleague I’ll use the same mechanical process.

Amazon Dot - Active

Then again, some of my friends use dictation software to “write” emails and letters, and then do a small amount of corrective work at the end. They tell me that dictation technology has advanced to the stage where only minor fix-ups are needed. And, as most blog readers will know, I’m enthusiastic about Alexa for controlling functionality by voice. Although writing text of any great length is not yet feasible on that platform, my guess is that it won’t be long until this becomes real.

All of this means that while the act of reading will most likely remain crucial for a long time to come, maybe this won’t be true of writing in the conventional sense. Speaking personally, handwriting is already something I do only for hastily scribbled notes or postcards to older relatives. Or occasionally to sign something. The readability of my handwriting is substantially lower than it used to be, purely because I don’t exercise it much (and by pure chance I heard several of my work colleagues saying the same thing today). Do I need handwriting in modern life? Not really, not for anything crucial.

Some devices

I don’t think it’s just me. On my commuting journeys I see people reading all kinds of things – newspapers, books, magazines, Kindles, phones, tablets and so on. I really cannot remember the last time I saw somebody reading a piece of hand-written material on the tube.

Now, to set against that, I have friends and relatives for whom the act of writing is still important. They would say that the nature of the writing surface and the writing implement – pencil, biro, fountain pen – are important ingredients, and that bodily engagement with the process conveys something extra beyond simply the production of letters. Emphasis and emotion are easier to impart – they say – when you are personally fashioning the outcome. To me, this seems simply a temporary problem of the tools we are using, but we shall see.

Looking ahead, I cannot imagine a time when reading skills won’t be necessary – there are far too many situations where you have to pore over things in detail, review what was written a few chapters back, compare one thing against another, or just enjoy the artistry with which the text has been put together. Just to recognise which letter to tap or click requires that I be able to read. But handwriting? I’m not at all sure this will survive much longer.

Perhaps a time will come when teaching institutions will not consider it worthwhile investing long periods of time in getting children’s handwriting to an acceptable standard – after all, pieces of quality writing can be generated by several other means.

Quill pen device for tablet

Polly and Half Sick of Shadows

Saturn, from Cassini (NASA)

Today’s blog is primarily about the latest addition to book readings generated using Amazon’s Polly text-to-speech software, but before getting to that it’s worth saying goodbye to the Cassini space probe. This was launched nearly twenty years ago, has been orbiting Saturn and its moons since 2004, and is now almost out of fuel. By the end of the week, following a deliberate course change to avoid polluting any of the moons, Cassini will impact Saturn and break up in the atmosphere there.

So, Half Sick of Shadows and Polly. Readers of this blog, or the Before the Second Sleep blog (first post and second post) will know that I have been using Amazon’s Polly technology to generate book readings. The previous set were for the science fiction book Timing (Far from the Spaceports Book 2). Today it is the turn of Half Sick of Shadows.

Without further ado, and before getting to some technical stuff, here is the result. It’s a short extract from late on in the book, and I selected it specifically because there are several speakers.

OK. Polly is a variation of the text-to-speech capability seen in Amazon Alexa, but with a couple of differences. First, it is geared purely to voice output, rather than the mix of input and output needed for Alexa to work.

Kindle Cover - Half Sick of Shadows

Secondly, Polly allows a range of gender, voice and language, not just the fixed voice of Alexa. The original intention was to provide multi-language support in various computer or mobile apps, but it suits me very well for representing narrative and dialogue. For this particular reading I have used four different voices.

If you want to set up your own experiment, you can go to this link and start to play. You’ll need to set up some login credentials to get there, but you can extend your regular Amazon ones to do this. This demo page allows you to select which voice you want and enter any desired text. You can even download the result if you want.

Amazon Polly test console

But the real magic starts when you select the SSML tab, and enter more complex examples. SSML is an industry standard way of describing speech, and covers a whole wealth of variations. You can add what are effectively stage directions with it – pauses of different lengths, directions about parts of speech, emphasis, and (if necessary) a phonetic letter by letter description. You can speed up or slow down the reading, and raise or lower the pitch. Finally, and even more usefully for my purposes, you can select the spoken language as well as the language of the speaker. So you can have an Italian speaker pronouncing an English sentence, or vice versa. Since all my books are written in English, that means I can considerably extend the range of speakers. Some combinations don’t work very well, so you have to test what you have specified, but that’s fair enough.
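To give a flavour, here is the sort of thing you can paste into that SSML tab. The tags shown are standard SSML as Polly supports it; the particular sentences, and the phonetic spelling, are just made-up examples of mine:

```xml
<speak>
  Here is an ordinary sentence.
  <break time="500ms"/>
  This next word is <emphasis level="strong">really</emphasis> emphasised.
  <prosody rate="slow" pitch="low">This part is read slowly, in a lower pitch.</prosody>
  <lang xml:lang="en-GB">This sentence is pronounced as British English.</lang>
  And you can spell a tricky word out:
  <phoneme alphabet="ipa" ph="təˈmɑːtəʊ">tomato</phoneme>.
</speak>
```

Paste a fragment like that into the demo page, pick a voice, and compare it with the plain-text reading of the same words.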

If you’re comfortable with the coding effort required, you can call the Polly libraries with all the necessary settings and generate a whole lot of text all at once, rather than piecemeal. Back when I put together the Timing extracts, I wrote a program which was configurable enough that now I just have to specify the text concerned, plus the selection of voices and other sundry details. It still takes a little while to select the right passage and get everything organised, but it’s a lot easier than starting from scratch every time. Before too much longer, there’ll be dialogue extracts from Far from the Spaceports as well!
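For anyone curious what that sort of program looks like, here is a minimal NodeJS sketch of the idea. The voice IDs are real Polly voices, but the speakers, dialogue format and file naming are illustrative assumptions of mine, not the actual program:

```javascript
// Map each speaker in the extract to a Polly voice.
const voices = { narrator: "Brian", Lady: "Amy", Mirror: "Emma" };

// Turn an array of { speaker, text } segments into one synthesizeSpeech
// request per segment, ready to hand to the AWS SDK.
function buildRequests(segments) {
  return segments.map((seg, i) => ({
    OutputFormat: "mp3",
    VoiceId: voices[seg.speaker],
    TextType: "ssml",
    Text: `<speak>${seg.text}</speak>`,
    // illustrative output naming, handled by my own code rather than Polly
    outputFile: `extract_${i}.mp3`,
  }));
}

const requests = buildRequests([
  { speaker: "narrator", text: "The mirror clouded over." },
  { speaker: "Lady", text: "Tell me what you saw." },
  { speaker: "Mirror", text: "A rider, coming up from the river." },
]);

// With the AWS SDK you would then loop over the requests, something like:
//   const polly = new AWS.Polly();
//   polly.synthesizeSpeech(request, callback);
```

The point of splitting it this way is that adding a new extract only means writing a new segments array and voice map, not touching the plumbing.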

Far from the Spaceports cover

 

Voyager at forty…

Voyager 1 takes off, Sept 5th 1977 (NASA)

Forty years ago, the Voyager probes 1 and 2 were launched. I remember it happening, along with the feelings of pride and excitement that mankind had been able to construct and launch such things. It was less than a decade from the first moon landing, and it felt as though space was progressively, and quite rapidly, opening up to us all. Those were optimistic days.

Jupiter as seen from Voyager 1 (NASA/JPL)

The launch time was chosen very carefully, so as to take advantage of a rare planetary line-up: each close pass gained the craft extra acceleration. This manoeuvre has come to be known as the slingshot, or gravity assist, and is used extensively in films and books as well as for real. This series of relatively close passes also meant that we were treated, at increasing intervals, to images of the planets at a level of detail never seen before. These remote places, mere points of light to the naked eye, suddenly became real places, and we saw how familiar things like weather patterns, volcanoes, and water appeared throughout our solar system.

The two probes are still travelling outwards, still gathering new information, and still sending signals back to Earth. These signals now take 16 hours to reach us from Voyager 2, and nearly 20 hours from Voyager 1, and are fantastically weak compared to the strength at take-off. One of the many scientific spinoffs has been the development of ever more accurate equipment to listen to these distant voices. But the lifetime of the on-board power supply – plutonium-fuelled radioisotope thermoelectric generators, rather than batteries – is finite. From 2020 onwards, the scientific instruments will be turned off one by one to prolong the remaining power, and after 2025 none will be operational. From then on, the spacecraft will simply continue on as complicated pieces of metal. Their velocity will stay broadly the same, as there is hardly any gravitational drag.

Where is Voyager? (NASA/JPL)

Since 2013, Voyager 1 has officially been classed as travelling through interstellar space, as opposed to the volume of space directly linked to our sun – a region called the heliosphere. You could liken this to the atmosphere which surrounds a planet, attenuating in stages to interplanetary space. The very fact that such a boundary region exists was not recognised before Voyager 1’s instrument data was analysed. Our present understanding is that in this zone, the constant stream of particles pouring outwards from our sun – the solar wind – ceases to have a clear direction of flow and becomes turbulent. You could liken it to air flow around the speed of sound, but the density of particles is so thin that there is no hazard to navigation! In this region, Voyager 1 is encountering as many particles from other stars as it does from ours – the boundary zone acts as a buffer shielding our entire solar system from too much stuff passing casually in.

Neptune, as seen by Voyager 2 (NASA/JPL)

Voyager 2, though launched first, has taken a slightly different trajectory, and is now a few years behind Voyager 1. Currently it is still in the heliosheath – the boundary zone – and will cross out of it in a few years. Both craft will – in around 300 years or so – begin to traverse a region called the Oort Cloud. This is a vague and fuzzy shell largely inhabited by comets and similar celestial debris, which occasionally get disturbed enough to drop down to the inner system and make themselves known. It is possible that one of the Voyagers will get close enough to interact with one of these objects, but it is hugely unlikely given the sheer volume of space concerned.

Other things being equal, they will come out the other side of the Oort Cloud in about 30,000 years… and still the nearest star will be our sun. It will take about 40,000 years, give or take, before they are nearer another star than our own sun. This last figure highlights just how far it is from one star to the next, compared with the distances from a star to its planets. Right now, the closest star to us is Proxima Centauri, in the Alpha Centauri system, but by that time another star will be our nearest neighbour, Gliese 445. But even that won’t be very close – the point of nearest approach is about 1.6 light years.

The Golden Record (NASA/JPL)

Both Voyager craft carry “The Golden Record”, looking not unlike an old LP vinyl record, containing a diverse collection of information about us. I remember there being considerable controversy about this as launch time approached back in 1977. There were earnest debates about the content – should Johnny B. Goode be part of our interstellar welcome pack? Was it improper to have pictures of a naked man and woman? But there were also more basic questions. Did we want to make our existence known to other possible life forms? Should we include what are effectively navigation instructions telling whoever finds them how to find us? Those who are curious can look up the exact list of what we sent on these golden disks here, and even listen to the audio content here.

For me, the Voyager craft have been a background feature of life from my late teens. For some people, they have been the focus of their entire working lives. And so far, they are among the mere handful of objects we have built – together with Pioneers 10 and 11, and New Horizons – which are escaping the gravity well of our sun, and are now at large in the galaxy.

“Pale Blue Dot” – Earth from Voyager 1, 4 billion miles away (NASA/JPL)

A research snippet

I thought today I’d share some research I have been doing for my WIP science fiction book, The Liminal Zone.

Full moon (NASA/JPL)

For various plot reasons I needed to know the answer to the following problem. Suppose you were standing on the surface of Pluto’s moon Charon, looking up at Pluto, fully lit by the sun… how bright would that be compared to looking up at the full moon from Earth?

This depends on a few factors:

  1. How bright is Pluto compared to our Moon?
  2. How big are Pluto and Charon compared to Earth and the Moon?
  3. What is the separation between Pluto and Charon compared to that between Earth and Moon?
  4. How much light from the sun falls on Pluto or Charon compared to Earth and Moon?

The relationship between these factors boils down to a fairly simple equation – comparing everything to the full moon brightness, which is fairly familiar to us, you have to:

  1. Scale up by the ratio of intrinsic reflectivity of the two bodies (called the albedo)
  2. Scale up by the ratio of the apparent area of sky covered by the two bodies
  3. Scale down by the square of the relative distance from the sun.

The apparent area can be calculated relatively easily knowing the radius of the body in question and the distance apart.

At this point you start looking up the raw figures from any of several science sites (a handy list follows below).

Earthrise from lunar orbit (NASA/JPL)

Let’s first think about the simpler problem of how bright a “Full Earth” is as seen from the Moon. The Earth is, on average, 2.5 times as reflective as the Moon (that’s averaging over cloudy and clear skies, land and water, etc), and the area of sky it covers is about 14 times that of the Moon. So a Full Earth as seen from the Moon is about 35 times as bright as the Full Moon as seen from Earth. Quite a sight.

Charon from New Horizons spacecraft (NASA/JPL)

Let’s move out to Pluto, and imagine we are standing looking up at a “Full Charon”. Charon is brighter than the Earth, is much smaller, much closer to Pluto than our Moon is to us, and much much further away from the sun (forty times further on average).

When you put all those figures together you find that the apparent diameter of Charon in Pluto’s sky is nearly eight times that of our Moon, so nearly sixty times the apparent area. Scale up for the extra brightness and down for the distance from the sun, and you find that Charon has about 1/6 of the brightness of our full moon. Probably still just enough to cast shadows.

Pluto from New Horizons spacecraft (NASA/JPL)

And finally, looking up at a “Full Pluto” from Charon. Pluto is about twice the size of Charon so about four times the area. By way of comparison, that means Pluto would nicely fit inside either the top or bottom half of the constellation Orion – between belt and shoulders, or belt and feet. Pluto is also brighter than Charon. Put that all together and you find that Pluto’s full light is about two thirds that of a full moon here.

I found this quite a remarkable fact when I crunched the numbers. Go all the way out from our Earth to the furthest of the standard nine planets, and the experience of standing on Charon looking up at Pluto is almost the same – in terms of brightness – as standing here looking up at the Moon. A useful comparison for my character, who is doing just that.

Facts and figures for the curious…
Albedo values (average)
  • Moon 0.12
  • Earth 0.3
  • Charon 0.45
  • Pluto 0.6
Radius values
  • Moon 1737 km
  • Earth 6371 km
  • Charon 606 km
  • Pluto 1187 km
Distances from planet to moon
  • Earth-Moon distance 384,400 km
  • Pluto-Charon distance 18,384 km
Apparent angular size
  • Moon from Earth 0.5 degrees
  • Earth from Moon 1.9 deg
  • Charon from Pluto 3.8 deg
  • Pluto from Charon 7.4 deg
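For anyone who wants to check the working, the whole calculation fits in a few lines of NodeJS using the figures above. The angular sizes here are recomputed from the radii and separations, so the results come out close to, but not exactly equal to, the rounded values quoted in the text:

```javascript
// Apparent angular diameter in degrees, via the small-angle approximation.
function apparentDiameterDeg(radiusKm, distanceKm) {
  return (2 * radiusKm / distanceKm) * (180 / Math.PI);
}

// Brightness relative to our full moon: scale up by the albedo ratio,
// up by the ratio of apparent sky areas, and down by the square of
// the relative distance from the sun.
function relativeBrightness(albedo, angularDiameterDeg, sunDistanceRatio) {
  const moonAlbedo = 0.12;
  const moonDiameterDeg = apparentDiameterDeg(1737, 384400); // ~0.52 deg
  const areaRatio = (angularDiameterDeg / moonDiameterDeg) ** 2;
  return (albedo / moonAlbedo) * areaRatio / sunDistanceRatio ** 2;
}

// Full Earth seen from the Moon: in the mid-thirties times a full moon.
const fullEarth = relativeBrightness(0.3, apparentDiameterDeg(6371, 384400), 1);

// Full Charon seen from Pluto (40x further from the sun): a modest
// fraction of a full moon, roughly an eighth to a sixth depending on
// exactly which rounded inputs you use.
const fullCharon = relativeBrightness(0.45, apparentDiameterDeg(606, 18384), 40);

// Full Pluto seen from Charon: roughly two thirds of a full moon.
const fullPluto = relativeBrightness(0.6, apparentDiameterDeg(1187, 18384), 40);
```

Swap in different albedo or distance estimates and the answers shift a little, but the overall conclusion stands.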


More about Polly… and Pluto besides

Today’s blog is mainly about the mp3 conversation extracts from Timing, which I talked about last week. And right up front here are links to my favourite two…

  • On board Rydal’s ship, the Heron… http://datascenesdev.com/Alexa/voicefiles/All_Extract_A.mp3
  • On board Parvati’s ship, the Parakeet… http://datascenesdev.com/Alexa/voicefiles/All_Extract_C.mp3

 

While talking about Timing, it seems a good idea to remind everyone about a recent review which neatly captured a great deal of what I was trying to convey in the story: “a story that provides questions as well as answers, thrill and satisfaction, and an adventure that can’t be beat“.

Sometime in the next couple of weeks they’ll be uploaded to YouTube, but for now they are just audio links included below and on the appropriate blog page. You’ll find more about this below. In passing, there’s a small prize available for the first person who correctly spots what’s wrong with the voice selection for Chandrika! Also, and unrelated to that, you’ll hear that not all of the voices are equally successful. I shall continue to tweak them, so hopefully the quality will steadily improve.

But before that, NASA just released two YouTube videos to celebrate the two year anniversary of when the New Horizons probe was at nearest approach to Pluto and Charon. They have turned the collection of images and other telemetry into flyby simulations of the dwarf planet and its moon, as though you were manoeuvring over them. Both the colours and the vertical heights of surface features have been exaggerated so you can get a better sense of what you are seeing, but that aside, it’s as close as most of us will get to personally experiencing these places.

  • Pluto: https://youtu.be/g1fPhhTT2Oo

  • Charon: https://youtu.be/f0Q7O7TZ7Ks

OK, back to Polly. As well as specifying which of several different voices you want, you can give Polly some metadata about the sentence to help generate correct pronunciation. Last week I talked about getting proper nouns correct, like Mitnash. But in English you also get lots of words which are spelled the same but pronounced differently – heteronyms. The one which I ran into was “minute”, which can either be a unit of time (min-nit) or something very small (my-newt). Another problem case I found was “produce” – was I expecting the noun form (prod-yuce) or the verb (pro-deuce)?

In all such cases, Polly tries to guess from context which you mean, but sometimes guesses wrong. Happily you can simply add some metadata to say which you want. Sometimes this is simply a matter of adding in a tag saying “I want the noun”. Other times you can say which of several alternate senses of the word you want, and simply check the underlying list until you find the right one. And if all else fails, there’s always the option of spelling it out phonetically…
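In SSML terms this is done with the `<w>` tag and a role attribute. Something like the following fragment conveys the idea, though the exact role values for a given sense (noun, verb, alternate meaning) are worth checking against the documented list before relying on them:

```xml
<speak>
  Give me a <w role="amazon:NN">minute</w> to check the figures.
  The difference between them is <w role="amazon:SENSE_1">minute</w>.
  The asteroid mines <w role="amazon:VB">produce</w> rare metals.
</speak>
```

Here `amazon:NN` asks for the noun reading, `amazon:VB` for the verb, and `amazon:SENSE_1` for an alternate sense of the word.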


More about AI and voice technology

A couple of weeks ago I went to a day event put on by Amazon showcasing their web technologies. My own main interests were – naturally – in the areas of AI and voice, but there was plenty there if instead you were into security, or databases, or the so-called “internet of things”.

Amazon Dot - Active

Readers of this blog will know of my enthusiasm for Alexa, and perhaps will also know about the range of Alexa skills I have been developing (if you’re interested, go to the UK or the US sites). So I thought I’d go a little bit more into both Alexa and the two building blocks which support Alexa – Lex for language comprehension, and Polly for text-to-speech generation.

Alexa does not in any substantial sense live inside your Amazon Echo or Dot – that simply provides the equivalent of your ears and mouth. Insofar as the phrase is appropriate, Alexa lives in the cloud, interacting with you by means of specific convenient devices. Indeed, Amazon are already moving the focus away from particular pieces of hardware, towards being able to access the technology from a very wide range of devices including web pages, phones, cars, your Kindle, and so on. When you interact with Alexa, the flow of information looks a bit like this (ignoring extra bits and pieces to do with security and such like).

Alexa information flows (simplified)

And if you tease that apart a little bit then this is roughly how Lex and Polly fit in.

Lex and Polly information flows (simplified)

 

So for today I want to look a bit more at the two “gateway” parts of the jigsaw – Lex and Polly. Lex is there to sort out what it is you want to happen – your intent – given what it is you said. Of course, given the newness of the system, every so often Lex gets it wrong. What entertains me is not so much those occasions when you get misunderstood, but the extremity of some people’s reaction to this. Human listeners make mistakes just like software ones do, but in some circles each and every failure case of Lex is paraded as showing that the technology is inherently flawed. In reality, it is simply under development. It will improve, but I don’t expect that it will ever get to 100% perfection, any more than people will.

Anyway, let’s suppose that Lex has correctly interpreted your intent. Then all kinds of things may happen behind the scenes, from simple list lookups through to complex analysis and decision-making. The details of that are up to the particular skill, and I’m not going to talk about that.

Instead, let’s see what happens on the way back to the user. The skill as a whole has decided on some spoken response. At the current state of the art, that response is almost certainly defined by the coder as a block of text, though one can imagine that in the future, a more intelligent and autonomous Alexa might decide for herself how to frame a reply. But however generated, that body of text has to be transformed into a stream of spoken words – and that is Polly’s job.
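To make that concrete, here is roughly what a custom skill hands back, assuming the standard custom-skill JSON response format; the reply text itself is invented for the example:

```javascript
// Build the JSON payload a custom skill returns; Polly then turns the
// SSML inside outputSpeech into the voice you hear from the device.
function buildSpeechResponse(text, endSession = true) {
  return {
    version: "1.0",
    response: {
      outputSpeech: {
        type: "SSML",
        ssml: `<speak>${text}</speak>`,
      },
      shouldEndSession: endSession,
    },
  };
}

const reply = buildSpeechResponse("The next launch window opens in three days.");
```

The skill's back-end logic can be as simple or as elaborate as you like; from Polly's point of view, all that matters is the text (or SSML) that ends up in that outputSpeech field.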

A standard Echo or Dot is set up to produce just one voice. There is a certain amount of configurability – pitch can be raised or lowered, the speed of speech altered, or the pronunciation of unusual words defined. But basically Alexa has a single voice when you use one of the dedicated gadgets to access her. But Polly has a lot more – currently 48 voices (18 male and 30 female), in 23 languages. Moreover, you can require that the speaker language and the written language differ, and so mimic a French person speaking English. Which is great if what you want to do is read out a section of a book, using different voices for the dialogue.

Timing Kindle cover

That’s just what I have been doing over the last couple of days, using Timing (Far from the Spaceports Book 2) as a test-bed. The results aren’t quite ready for this week, but hopefully by next week you can enjoy some snippets. Of course, I rapidly found that even 48 voices are not enough to do what you want. There is a shortage of some languages – in particular Middle Eastern and Asian voices are largely absent – but more will be added in time. One of the great things about Polly (speaking as a coder) is that switching between different voices is very easy, and adding in customised pronunciation is a breeze using a phonetic alphabet. Which is just as well. Polly does pretty well on “normal” words, but celestial bodies such as Phobos and Ceres are not, it seems, considered part of a normal vocabulary! Even the name Mitnash needed some coaxing to get it sounding how I wanted.

The world of Far from the Spaceports and Timing (and The Authentication Key, currently in preparation) is one where the production of high quality and emotionally sensitive speech by artificial intelligences (personas in the books) is taken for granted. At present we are a very long way from that – Alexa is a very remote ancestor of Slate, if you like – but it’s nice to see the start of something emerging around us.

Friday June 30th was International Asteroid Day!

Artist's impression of asteroid (NASA/JPL)

And no, I hadn’t realised this myself until a couple of days before… but NASA and others around the world had a day’s focus on asteroids. To be sure, most of that focus was on the thorny question of Near Earth Objects, both asteroids and comets, and what we might be able to do if one were on a collision course.

Far from the Spaceports cover

But it seemed to me that this was as good a time as any to celebrate my fictional Scilly Isle asteroids, as described in Far from the Spaceports and Timing (and the work in progress provisionally called The Authentication Key). In those stories, human colonies have been established on some of the asteroids, and indeed on sundry planets and moons. These settlements have gone a little beyond mining stations and are now places that people call home. A scenario well worth remembering on International Asteroid Day!

Kindle Cover - Half Sick of Shadows

While on the subject of books, some lovely reviews for Half Sick of Shadows have been coming in.

Hoover Reviews said:
“The inner turmoil of The Lady, as she struggles with the Mirror to gain access to the people she comes in contact with, drives the tale as the Mirror cautions her time and again about the dangers involved.  The conclusion of the tale, though a heart rending scene, is also one of hope as The Lady finally finds out who she is.”

The Review said:
“Half Sick of Shadows is in a genre all its own, a historical fantasy with some science fiction elements and healthy dose of mystery, it is absolutely unique and a literary sensation. Beautifully written, with an interesting storyline and wonderful imagery, it is in a realm of its own – just like the Lady of Shalott… It truly is mesmerising.”

Find out for yourself at Amazon.co.uk or Amazon.com.

Half Sick of Shadows Alexa skill icon

Or chat about the book with Alexa by enabling the skill at the UK or US stores.

Colonising Mars?

Elon Musk, founder and CEO of SpaceX, has long made no secret of his plans for facilitating a colony on Mars. But last September, in a public presentation, he explained it all in considerably more detail. The reasoning, and the raw logistical figures behind it, are still available. His credibility is built on the SpaceX programme, which in turn is based on reusing equipment rather than throwing it away after each launch, and which has had a string of successes lately. The initial booster stage now returns to a landing platform, there to go through a process which recommissions it for another launch.

SpaceX booster stage returning to land (space.com credit SpaceX)

Quite apart from any recycling benefits, this then allows SpaceX to seriously undercut other firms’ prices of putting satellites into orbit. It still couldn’t be called cheap – one set of figures quotes $65 million – but that’s only about one sixth of the regular cost. If you’re happy to know that your equipment is going into orbit on a rocket that is not brand new, it’s a huge saving. Every successful launch, return to base, and relaunch, adds to buyers’ confidence that the procedure can be trusted.

But the big picture goes well beyond Earth orbit. Musk believes that the best way to mitigate the risks of life on Earth – global warming, conflict, extremist views of all kinds, and so on – is to spread out more widely. In a recent lecture, Stephen Hawking said essentially the same thing. And in Musk’s vision, Mars is a better bet than the moon for this, for a whole cluster of reasons including the presence of an atmosphere (albeit a thin one compared to here) and a greater likeness to Earth in terms of gravity and size.

So reusable rockets into Earth orbit are simply a starting point. Once you have a reasonably-sized fleet of such things, you can assemble larger craft already in space, and fly them over to Mars when the orbital positions are ideal. The logic of gravitational pull around a planet means that the hardest, most energy-intensive part of any journey is getting from the surface up to a stable orbit. Once there, much gentler and longer-lasting means of propulsion will get you onward bound.

Artist's Impression of Dawn in orbit (NASA/JPL)

To take a contemporary situation, NASA’s Dawn probe is currently orbiting the asteroid Ceres. Its hydrazine fuel, which powers the little manoeuvring and attitude thrusters, is nearly exhausted. The mission control team are trying to decide on the best course of action. In its current high orbit only a few months of fuel remain. A closer orbit, which would give better quality pictures, would use it up in a matter of weeks. But using the main ion drive, a different power source altogether, to go somewhere else would probably give a few years of science. Fairly soon we should hear which option they have chosen, and where they consider the best balance is between risk and reward. The message for here is that staying close to a planet, or taking off from one, is costly in terms of fuel.

So Musk reckons that over the course of a century or so, he can arrange transportation for a million Martian colonists. In terms of grand sweep, it is so far ahead of anyone else’s plans as to seem impossible at first sight. But if all goes according to his admittedly ambitious plan, the first of many journeys could take place ten years from now. He – and I for that matter – might not live to see the Martian population reach a million, but he certainly expects to see it firmly established.

Far from the Spaceports cover

With Far from the Spaceports, its sequel Timing, and the work-in-progress provisionally called The Authentication Key, I deliberately did not fix a future date. It’s far enough ahead of now that artificial intelligence is genuinely personal and relational – sufficiently far ahead that it is entirely normal for a human investigator to be partnered long-term on an equal basis with an AI persona. None of the present clutch of virtual assistants has any chance at all of this, and my guess is that we are talking many generations of software development before it could happen. It’s also far enough ahead that there are colonies in many locations – certainly out as far as the moons of Saturn, and I am thinking about a few “listening post” settlements further out (watch this space – the stories aren’t written yet!). However, I hadn’t really thought in terms of a million colonists on Mars, and it may well be that, as happens so often in science fiction, real events overtake my scenario a lot quicker than I thought likely.

Back with Musk’s proposal, one obvious consequence of the whole reuse idea is that the cost per person of getting there drops hugely. The buy-in figure is typically quoted as something like $10 billion, but the SpaceX plan drops this to around $20,000 – cheaper than the average house price in the UK. I wonder how many people, given the chance, would sell up their belongings here in exchange for a fresh start on another planet?

I was wondering what image to finish with, and then came across this NASA/JPL picture of the Mars Curiosity Rover as seen from the Mars Orbiter (the little blue dot roughly in the middle)… a fitting display of the largeness of the planet compared to what we have sent there so far.

Mars Curiosity (blue dot) as seen from Mars Reconnaissance Orbiter (NASA/JPL)

Birds of intelligence

Cover, The Genius of Birds (Goodreads)

I often think about – and blog about – machine intelligence, both its current state and future possibilities. But artificial intelligence is only one small field of study in a very large and open-ended terrain. News articles on the topic of possible extraterrestrial intelligence are relatively common, even though we have not yet detected anything that can confidently be ascribed to alien sources. Closer to home, we still don’t really understand the spectrum of human intelligence in all its different manifestations, including emotional and social astuteness as well as problem solving and pattern matching.

To add to that, I’ve been reading a fascinating book exploring the various kinds of intelligence seen in the bird world – The Genius of Birds, by Jennifer Ackerman. Perhaps many of us have watched videos of tool-using corvids such as the New Caledonian crows (https://www.youtube.com/watch?v=cbSu2PXOTOc), or grey parrots demonstrating feats of speech and language comprehension going way beyond simple repetition. But avian intelligence goes well beyond these exploits, which we instantly relate to because they mirror human acts and occupations.

For a long time it was thought that since birds have no cerebral cortex, they were necessarily incapable of reasoning and abstract thought – the cortex appears in the tree of life after mammals and birds parted company. But recently it has become clear that birds simply use a different organ in their brain: the dorsal ventricular ridge, which in fact develops from the same part of the embryonic brain in a bird as the cortex does in a mammal. The way that neurons cluster, connect, and participate in learning is the same in a bird brain as in a mammal’s. Basically, both the birds and the mammals had to adapt to new circumstances after the natural disaster that killed off the dinosaurs – and they did so using remarkably similar strategies. The different biological frames of the two families disguise many places where a common solution has emerged.

What has this to do with writing? Well, birds routinely find their way into my science fiction stories – I assume that at minimum the more adaptable ones would find ways to survive as we spread out beyond Earth. It’s interesting to speculate which ones will accompany us.

Robin near Dungeon Ghyll, Langdale

This post is far too short to describe in any detail all the various ways in which birds display intelligence. If you want an overview of that, I recommend the book! But in brief, birds show their intelligence in a variety of ways, just like humans do. There are huge differences between species – corvids are good at problem solving, sparrows and members of the tit family are excellent at group dynamics, chickadees can remember and accurately mimic hundreds of sounds, Arctic terns are prodigiously good at navigation, herons spend considerable time and effort training their young in the art of catching fish. And so on. We tend to notice the exploits of birds which most resemble our own – like crows and parrots – but it’s always worth taking a step back to question our own blind spots. Even the birds we often dismiss as particularly stupid have some faculty at which they excel.

But as well as variation between species, individual birds of the same kind differ in particular ways. One is bolder, another more cautious. One solves particular problems much more easily than his or her siblings. Again, not very different from human beings.

Lord Vishnu riding Garuda in the form of a bird, by Raja Ravi Varma (Wikipedia)

It’s a sobering thought. Along with a handful of animals, a few birds have found their way into folklore. Odin had his ravens. Several Egyptian and Indian deities have bird emblems or companions. Hawks and eagles have frequently been used as symbols, though more often for their martial prowess than their wits. But by and large, we have rather looked down on birds, especially in the last century or so, imagining that their behaviour was driven purely by instinct rather than rationality. With the cumulative weight of evidence that has emerged over the last few decades, ancient anecdotal tales are metamorphosing into a consistent picture.

So while we’re trying to find intelligence out elsewhere in the galaxy, or to build it with our own hardware and software, let’s also spare a thought for the surprisingly clever and adaptable creatures who already share our environment.

And as for play? This final video shows a snowboarding crow in Russia (https://www.youtube.com/watch?v=3dWw9GLcOeA).

Language and pronunciation

Half Sick of Shadows Alexa skill icon

I’ve been thinking these last few days, once again, about language and pronunciation. This was triggered by working on some more Alexa skills to do with my books. For those who don’t know, I have such things already in place for Half Sick of Shadows, Far from the Spaceports, and Timing. That leaves the Bronze Age series set in Kephrath, in the hill country of Canaan. And here I ran into a problem. Alexa does pretty well with contemporary names – I did have a bit of difficulty with getting her to pronounce “Mitnash” correctly, but solved that simply by changing the spelling of the text I supplied. If instead of Mitnash I wrote Mitt-nash, the text-to-speech engine had enough clues to work out what I meant.

So far so good, but you can only go part of the way down that road. You can’t keep fiddling around with weird spellings just to trick the code into doing what you want. Equally, it’s hardly reasonable to suppose that the Alexa coding team would have considered how to pronounce ancient Canaanite or Egyptian names. Sure enough the difficulties multiplied with the older books. Even “Kephrath” came out rather mangled, and things went downhill from there.
Amazon Dot - Inactive

So I took a step back, did some investigation, and found that you can define the pronunciation of unusual words using symbols from the International Phonetic Alphabet (IPA). Instead of trying to guess how Alexa might pronounce Giybon, or Makty-Rasut, or Ikaret, I can simply work out what symbols I need for the consonants and vowels, and provide these details in a specific format. Instead of Mitnash, I write mɪt.næʃ. Ikaret becomes ˈIk.æ.ˌɹɛt.
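Under the hood this uses SSML’s phoneme tag, which Alexa supports with the IPA alphabet. A minimal sketch of the substitution (the helper function name here is my own, not part of any Amazon SDK):

```python
def ipa_phoneme(word, ipa):
    """Wrap a word in an SSML <phoneme> tag so the text-to-speech
    engine speaks the supplied IPA string instead of guessing."""
    return f'<phoneme alphabet="ipa" ph="{ipa}">{word}</phoneme>'

print(ipa_phoneme("Mitnash", "mɪt.næʃ"))
# <phoneme alphabet="ipa" ph="mɪt.næʃ">Mitnash</phoneme>
```

The tagged text then goes into the skill’s spoken response in place of the bare word.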

So that solved the immediate problem, and over the next few days my Alexa skills for In a Milk and Honeyed Land, Scenes from a Life, and The Flame Before Us will be going live. Being slightly greedy about such things, of course I now want more! Ideally I want a pronunciation dictionary – a list of standard pronunciations that Alexa can tap into at need, rather like a custom word list for a spelling checker. Basically, I want to be able to teach Alexa how to pronounce new words that aren’t in the out-of-the-box setup. I suspect that such a thing is not too far away, since I can hardly be the only person to have come across this: just about every specialised area of interest has words which aren’t part of everyday speech.
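In the meantime, that dictionary can be faked at the application level: keep your own lookup table of IPA strings and substitute the tags when building each response. A sketch under that assumption – the IPA for Mitnash comes from above, while the Kephrath entry is my own guess for illustration:

```python
# A hand-maintained pronunciation table. "mɪt.næʃ" is from the post;
# the Kephrath IPA is an assumed value, for illustration only.
PRONUNCIATIONS = {
    "Mitnash": "mɪt.næʃ",
    "Kephrath": "ˈkɛf.ɹæθ",
}

def apply_pronunciations(text, table=PRONUNCIATIONS):
    """Replace each known word with an SSML phoneme tag before the
    response text is handed to the speech engine."""
    for word, ipa in table.items():
        text = text.replace(
            word, f'<phoneme alphabet="ipa" ph="{ipa}">{word}</phoneme>')
    return text

print(apply_pronunciations("Welcome back to Kephrath."))
```

A simple string replacement like this is enough for proper names; a production skill would want word-boundary matching so that short entries don’t fire inside longer words.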

Amazon Dot - Active

But also, this brought me into contact with the perennial issue of UK and US pronunciation. Sure, a particular phonetic symbol means whatever it means, but the examples of typical words vary considerably. As a Brit, I just don’t pronounce some words the same as my American friends, so there has to be a bit of educated guesswork going into deciding what sound I’m hoping for. Of course it’s considerably more complicated than just two nations – within those two there are also large numbers of regional and cultural shifts. And of course there are plenty of countries which use English but sound quite different to either “standard British” or “standard American”.

That’s one for some future, yet-to-be-invented, dialect-aware Alexa! Right now it’s enough to code for two variations, and rely on the fact that the standard forms are recognisable enough to get by. But wouldn’t it be cool to insert some extra tags into dialogue so that one character speaks – say – Cumbrian, and another Somerset?