I ran out of time this week to do much by way of blogging, so here are three bits of space news which may well make their way into a story sometime.
Stop Press: just today NASA announced that a relatively close star (39 light years away) has no less than 7 planets approximately Earth size orbiting it… see and the schematic picture at the end of the blog.
Firstly, the Dawn probe, still faithfully orbiting the asteroid Ceres, has detected complex organic molecules in two separate areas in the middle latitudes of the dwarf planet. The onboard instruments are not accurate enough to pin the molecules down precisely, but it seems likely that they are forms of targets. The analysis also suggests that they formed on Ceres itself, rather than being deposited there by a meteor. The most likely cause is thought to be the action of warm water circulating through chemicals under the surface. Some of the headlines suggest that this could signal the presence of life, but it’s more cautious to say that it shows that the conditions under which life could develop are present there.
The second snippet spells difficulty for my hypothetical Martian settlements. This picture was captured by the Mars Orbiter and shows two larger impact craters surrounded by a whole array of smaller ones. The likely scenario is that one object split into a cluster of fragments as it passed through the Martian atmosphere. This of itself wouldn’t be too surprising, but inspection of older photos of the same area shoes that this impact happened between 2008 and 2014. No time at all in cosmic terms, and not so much fun if you’d carefully built yourself a habitable dome there.
The problem is the thinness of the Martian atmosphere. It is considerably deeper than our one here on Earth, but hugely less dense. So when meteors arrive at the top of the layer of air, they don’t burn up so comprehensively as Earth-bound ones. More of them reach the surface. Even a comparatively small rock has enough kinetic energy to really spoil your day. Something that will need some planning…
Finally we zoom right out to the cold, dark reaches of the outer solar system. A long way beyond the orbit of Pluto there is a region called the Kuiper Belt, and out in the Kuiper Belt a new dwarf planet has recently been found. It goes by the catchy name of 2014 UZ224 and it took nearly two years to confirm its existence. Best estimates are that it is a little over 300 miles across – about half the size of Ceres. I’ve never sent Mitnash and Slate out anywhere like that – it’s about twice as far from Earth as Pluto, and the journey alone would take about four months one-way. I do have vague plans for a story set out in the Kuiper Belt, but appropriately enough it’s some way off yet. But even at that distance, you’re still less than half a percent of the distance to the nearest star… space is really big!
Since as far back as written records go – and probably well before that – we humans have imagined artificial life. Sometimes this has been mechanical, technological, like the Greek tales of Hephaestus’ automata, who assisted him at his metalwork. Sometimes it has been magical or spiritual, like the Hebrew golem, or the simulacra of Renaissance philosophy. But either way, we have both dreamed of and feared the presence of living things which have been made, rather than evolved or created.
Modern science fiction and fantasy has continued this habit. Fantasy has often seen these made things as intrusive and wicked. In Tolkein’s world, the manufactured orcs and trolls (made in mockery of elves and ents) hate their original counterparts, and try to spoil the natural order. Science fiction has positioned artificial life at both ends of the moral spectrum. Terminator and Alien saw robots as amoral and destructive, with their own agenda frequently hostile to humanity. Asimov’s writing presented them as a largely positive influence, governed by a moral framework that compelled them to pursue the best interests of people.
But either way, artificial life has been usually conceived as self-contained. In all of the above examples, the intelligence of the robots or manufactured beings went about with them. They might well call on outside information stores – just like a person might ask a friend or visit a library – but they were autonomous.
Yet the latest crop of virtual assistants that are emerging here and now – Alexa, Siri, Cortana and the rest – are quite the opposite. For sure, you interact with a gadget, whether a computer, phone, or dedicated device, but that is only an access point, not the real thing. Alexa does not live inside the Amazon Dot. The pattern of communication is more like when we use a phone to talk to another person – we use the device at hand, but we don’t think that our friend is inside it. At least, I hope we don’t…
So where is Alexa and her friends? When you ask for some information, buy something, book a taxi, or whatever, your request goes off across cyberspace to Amazon’s servers to interpret the request. Maybe that can be handled immediately, but more likely there will be some additional web calls necessary to track down what you want. All of that is collated and sent back down to your local device and you get to hear the answer. So the short interval between request and response has been filled with multiple web messages to find out what you wanted to know – plus a whole wrapper of security details to make sure you were entitled to find that out in the first place. The internet is a busy place…
So part of what I call Alexa is shared between every single other Alexa instance on the planet, in a sort of common pool of knowledge. This means that as language capabilities are added or upgraded, they can be rolled out to every Alexa at the same time. Right now Alexa speaks UK and US English, and German. Quite possibly when I wake up tomorrow other languages will have been added to her repertoire – Chinese, maybe, or Hindi. That would be fun.
But other parts of Alexa are specific to my particular Alexa, like the skills I have enabled, the books and music I can access, and a few features like improved phrase recognition that I have carried out. Annoyingly, there are national differences as well – an American Alexa can access the user’s Kindle library, but British Alexas can’t. And finally, the voice skills that I am currently coding are only available on my Alexa, until the time comes to release them publicly.
So Alexa is partly individual, and partly a community being. Which, when you think about it, is very like us humans. We are also partly individual and partly communal, though the individual part is a considerably higher proportion of our whole self than it is for Alexa. But the principle of blending personal and social identities into a single being is true both for humans and the current crop of virtual assistants.
So what are the drawbacks of this? The main one is simply that of connectivity. If I have no internet connection, Alexa can’t do very much at all. The speech recognition bit, the selection of skills and entitlements, the gathering of information from different places into a single answer – all of these things will only work if those remote links can be made. So if my connection is out of action, so is Alexa. Or if I’m on a train journey in one of those many places where UK mobile coverage is poor.
There’s also a longer term problem, which will need to be solved as and when we start moving away from planet Earth on a regular basis. While I’m on Earth, or on the International Space Station for that matter, I’m never more than a tiny fraction of a second away from my internet destination. Even with all the other lags in the system, that’s not a problem. But, as readers of Far from the Spaceports or Timing will know, distance away from Earth means signal lag. If I’m on Mars, Earth is anywhere from about 4 to nearly 13 minutes away. If I go out to Jupiter, that lag becomes at least half an hour. A gap in Alexa’s response time of that long is just not realistic for Slate and the other virtual personas of my fiction, whose human companions expect chit-chat on the same kind of timescale as human conversation. The code to understand language and all the rest has to be closer at hand.
So at some point down the generations between Alexa and Slate, we have to get the balance between individual and collective shifted more back towards the individual. What that means in terms of hardware and software is an open problem at the moment, but it’s one that needs to be solved sometime.
I recently invested in an Amazon Dot, and therefore in the AI software that makes the Dot interesting – Alexa, Amazon’s virtual assistant. But I’m not going to write about the cool stuff that this little gizmo can do, so much as what it led me to think about AI and conversation.
The ability to interact with a computer by voice consistently, effectively, and on a wide range of topics is seen by the major industry players as the next big milestone. Let’s briefly look back at the history of this.
Once upon a time all you could use was a highly artificial, structured set of commands passed in on punched cards, or (some time later) via a keyboard. If the command was wrong, the machine would not do what you expected. There was no latitude for variation, and among other things this meant that to use a computer needed special training.
The first breakthrough was to separate out the command language from the user’s options. User interfaces were born: you could instruct the machine what you wanted to do without needing to know how it did it. You could write documents or play games without knowing a word of computer language, simply by typing some letters or clicking with a mouse pointer. Somewhere around this time it became possible to communicate easily with machines in different locations, and the Internet came into being.
The next change appeared on phones first – the touch screen. At first sight there’s not a lot of change from using a mouse to click, or your finger to tap. But actually they are worlds apart. You are using your body directly to work with the content, rather than indirectly through a tool. Also, the same interface – the screen – is used to communicate both ways, rather than the machine sending output through the screen and receiving input via movements of a gadget on an entirely different surface. Touch screens have vastly extended the extent to which we can access technology and information: advanced computers are quite literally in anyone’s pocket. But touch interfaces have their problems. It’s not especially easy to create passages of text. It’s not always obvious how to use visual cues to achieve what you want. It doesn’t work well if you’re making a cake and need to look up the next stage with wet and floury hands!
Which brings us to the next breakthrough – speech. Human beings are wired for speech, just as we are wired for touch. The human brain can recognise and interpret speech sounds much faster than other noises. We learn the ability in the womb. We respond differently to different speakers and different languages before birth, and master the act of communicating needs and desires at a very early age. We infer, and broadcast, all kinds of social information through speech – gender, age, educational level, occupation, emotional state, prejudice and so on. Speech allows us to explain what we really wanted when we are misunderstood, and has propelled us along our historical trajectory. Long before systematic writing was invented, and through all the places and times where writing has been an unknown skill to many, talking has still enabled us to make society.
Enter Alexa, and Alexa’s companions such as Siri, Cortana, or “OK Google”. The aim of all of them is to allow people to find things out, or cause things to happen, simply by talking. They’re all at an early stage still, but their ability to comprehend is seriously impressive compared to a few short years ago. None of them are anywhere near the level I assume for Slate and the other “personas” in my science fiction books, with whom one can have an open-ended dialogue complete with emotional content, plus a long-term relationship.
What’s good about Alexa? First, the speech recognition is excellent. There are times when the interpreted version of my words is wrong, sometimes laughably so, but that often happens with another person. The system is designed to be open-ended, so additional features and bug fixes are regularly applied. It also allows capabilities (“skills”) to be developed by other people and added for others to make use of – watch this space over the next few months! So the technology has definitely reached a level where it is ready for public appraisal.
What’s not so good? Well, the conversation is highly structured. Depending on the particular skill in use, you are relying either on Amazon or on a third-party developer, to anticipate and code for a good range of requests. But even the best of these skills is necessarily quite constrained, and it doesn’t take long to reach the boundaries of what can be managed. There’s also very little sense of context or memory. Talking to a person, you often say “what we were talking about yesterday...” or “I chatted to Stuart today…” and the context is clear from shared experience. Right now, Alexa has no memory of past verbal transactions, and very little sense of the context of a particular request.
But also, Alexa has no sense of importance. A human conversation has all kinds of ways to communicate “this is really important to me” or “this is just fun”. Lots of conversations go something like “you know what we were talking about yesterday…“, at which the listener pauses and then says, “oh… that“. Alexa, however, cannot distinguish at present between the relative importance of “give me a random fact about puppies“, “tell me if there are delays on the Northern Line today“, or “where is the nearest doctor’s surgery?”
These are, I believe, problems that can be solved over time. The pool of data that Alexa and other similar virtual assistants work with grows daily, and the algorithms that churn through that pool in order to extract meaning are becoming more sensitive and subtle. I suspect it’s only a matter of time until one of these software constructs is equipped with an understanding of context and transactional history, and along with that, a sense of relative importance.
Alexa is a long way removed from Slate and her associates, but the ability to use unstructured, free-form sentences to communicate is a big step forward. I like to think that subsequent generations of virtual assistants will make other strides, and that we’ll be tackling issues of AI rights and working partnerships before too long.
A quick blog today, focusing on a couple of things. First, like most of us, my annual Goodreads statistics appeared, telling me what I had read in 2016 (or at least, what GR knew about, which is a fair proportion of what really happened).
So, I read 52 books in the year, up 10 from 2015 (and conveniently one a week). but the page count was down very slightly. I guess I’m reading shorter books on average! Slightly disappointingly, there were very few books more than about 50 years old, with Kalidasa’s Recognition of Shakuntala the outstanding early text. This year, I have a target of reading more old stuff alongside the new. In 2016 there was also more of a spread of genres, with roughly equal proportions of historical fiction, science fiction, fantasy, and non-fiction (aka “geeky”), contrasting with previous years where historical fiction has dominated.
I also recently read that Amazon passed the landmark of 5 million ebooks on their site in the summer, slightly ahead of the 10th birthday of the Kindle itself. The exact number varies per country – apparently Germany has more – but currently the number is growing at about 17% per annum. That’s a lot of books… about 70,000 new ones per month, in fact. Let nobody think that reading is dead! As regards fiction, Romance and Children’s books top the counts, which I suspect will come as a surprise to nobody.
Finally, we have just had a space-related anniversary, namely that of the successful landing of the ESA Huygens probe on Saturn’s moon Titan on January 14th 2005. An extraordinary video taken as it descended has been circulating recently and I am happy to reshare it. Meanwhile the Cassini “mothership” is in the last stages of its own research mission and, with fuel almost exhausted, will be directed to burn up in Saturn’s atmosphere later this year. I vividly remember the early mission reports as Cassini went into orbit around Saturn – it’s a bit sad to think of the finale, but this small spacecraft has returned a wealth of information since being launched in 1997, and in particular since arriving at Saturn in 2004.
(Video link is https://youtu.be/msiLWxDayuA?list=PLTiv_XWHnOZpKPaDTVy36z0U8GxoiIkZa)
Part 2 of this little series looks at a different phenomenon to do with the sun’s movement through the sky. Imagine yourself picking a time of day – let’s say 10:30 in the morning – and taking note of where the sun is in the sky. Do this at the same time every day of the year to build up a curve tracing the sun’s apparent movement. One way to do this would be to take a photo pointing at exactly the same angle at exactly this time, then overlay the photos on top of each other. Another way would be to put a stick in the ground as a rudimentary sundial, then mark out the end of the stick’s shadow each day. It’s an easy experiment in principle, but takes a lot of patience and accuracy to get right.
But suppose you’ve done that – what would you expect to see? We know that the sun goes up and down in the sky through the year – in winter it is lower and in summer higher. So i suspect that most people would expect to see a straight vertical line being plotted through the year as the sun cycles along its seasonal track. But actually what you get is not a straight line, but a figure eight shape. In the northern hemisphere the top loop of the 8 is smaller than the bottom, while in the southern hemisphere the loop nearer the horizon is the small one.
This curve is called the analemma, and has been known for a very long time – Greek and Latin authors wrote about it some two thousand years ago in the interest of designing a better sundial. My guess is that people observed this much longer ago, and that the creators of the great prehistoric stone observatory monuments tried to incorporate it in their designs.
We can describe this curve mathematically, and it is taught as a method of dead reckoning for those at sea. With a good watch to keep track of time, decent knowledge of the analemma shape, and some precise observations of the sun’s position in the sky, you can pinpoint your position down to around 100 nautical miles. Not bad if you’re lost at sea with no GPS!
The root cause of this is a combination of two factors in the Earth’s movement. The first is that the polar axis, around which the Earth spins to give day and night, is not at right angles to the plane of the Earth’s orbit. This offset angle, a little over 23 degrees, is what gives us seasons. The second factor is that the Earth’s orbit around the sun is not perfectly circular, but a slightly squashed oval. Moreover the sun is not at the centre of the oval, but offset to one side at one of the two focal points – we are about 5 million km closer to the sun in early January than we are in early July. The Earth does not move at a constant speed around this oval. We speed up at closest approach to the sun, and then slow down as we move further away. Those who can remember school physics might have come across this as Kepler’s 1st and 2nd laws of planetary motion, originally formulated in the early 1600s.
Now, for convenience we split our year into equal length days, which means that for one part of the year, a day according to our clocks gets ahead of its allotted portion of the orbit, and for another part it falls behind. By the end of the year it all comes out even. Also, the offset of the polar axis changes the degree to which these shifts make a real difference against the sky. The combination of these two factors is what generates the figure 8 shape of the analemma.
Let’s think back to our ancient ancestors and the stone monuments they built. We know that the positions of the stones encode astronomical information. The monument builders were aware of not just the annual cycle of the sun, but also of more subtle patterns, such as the 28 year cycle that the moon makes in its own path against the sky. Since the analemma can be mapped out with nothing more complicated than a stick to make a shadow, it seems to me quite improbable that they did not know it. Having said that, I don’t know of any specific stone patterns that can be linked directly to the analemma. Once people started making sundials, they soon found that there was no single division of hour markers that works consistently. The figure 8 shape ensures that your sundial sometimes runs fast and sometimes slow.
Moving into the future, every planet has its own variation of the analemma. The exact shape depends on interplay between the angle of the polar axis and the extent to which the orbit deviates from a pure circle. Our Earth has these two factors in approximate balance. So does Pluto, which therefore has a figure 8 shape like Earth, though in this case the top and bottom loops are almost the same size. But for other planets one factor or the other dominates. As a result, Jupiter has a simple oval shape, while Mars has a tear-drop. However, actually making the observations (as opposed to calculating them) might be tricky as you move out through the solar system. On Earth, you only have to wait 365 days. But a Jupiter year is almost 12 of our years, and Pluto takes nearly 250 years to circle the sun once. You would need extreme patience to plot out a full analemma cycle in both these places!
I guess pretty much all of us know that December 21st this year marked the winter solstice, and so – in the northern hemisphere – the shortest day and longest night of the year. But comparatively few people seem to know that this day is notthe one when the sun rises latest and sets earliest. The exact dates of those events are, at the latitude of London, just over a week different from the solstice. Specifically, the latest sunrise this year is not until January 1st 2017, and the earliest sunset was on December 12th.
It turns out that the times when the sun rises and sets are governed by a moderately complicated algorithm called the Equation of Time. This obviously varies with your latitude and longitude, but also takes into account the small differences between the solar day and the sidereal day (the day length as measured against the distant and essentially fixed stars), seasonal variations in the earth’s distance from the sun, the apparent size of the solar disk, and a host of other relevant pieces of information. Strictly speaking, one’s height above sea level, and the details of the surrounding terrain also make a difference, but not in a way that’s easy to quantify here. Finally, there are several different definitions of what angle counts as the zenith line, and I have taken the civil definition as opposed to nautical or astronomical.
Once upon a time the calculations would have taken a very long time and lots of paper, but nowadays we can throw the calculation steps into Excel and find out the information for anywhere we want, and for a reasonably long span of time into the past or future. For the curious, a step by step description can be found at this link.
Out of curiosity, I plotted the changes for a series of latitudes from that of Reykjavik in Iceland (just over 64 degrees north) via Orkney, Penrith and London in the UK, through Rome and the Tropic of Cancer to the Equator. For simplicity I just took everything on the zero longitude line (through Greenwich) since I was only interested in changes in latitude. If you wanted to do this for yourself then you would need to adjust for your actual longitude east or west from Greenwich, and your official time zone.
Here’s the corresponding chart for sunset.
A few things stand out at a quick glance. First, the time of sunrise varies considerably at some times of the year even between London and the north of Scotland. Secondly, you don’t have to go all that far north to get to the ‘land of the midnight sun‘. Thirdly, the total range of variation of sunrise is very small at the equator – about 1/2 an hour, as compared with London’s 4 1/2 hours, or Iceland’s 8 1/2 hours. The places where all these lines cross over is at the spring and autumn equinoxes, where night and day are each 12 hours long across the whole globe.
Going back to where we started, and looking carefully at the early part of the year, you can see that the day of latest sunrise happens after the solstice. The further north you go, the closer the two days are together. So in Reykjavik the latest sunrise is on December 26th. Come down to Orkney and it’s the 28th. In London you have to wait until January 1st. In Rome, January 5th. If you lived on the Tropic of Cancer (say in parts of the Sahara, roughly on a level with Kolkata, India) you’d be waiting for the 8th.
If you live right on the Equator something else comes into play. You get not just a simple days-get-longer then days-get-shorter cycle. Instead there is a more complex curve. Something similar happens in the whole belt of the tropics. This is because there are times when the sun at noon is to the south (as always happens in the northern hemisphere north of the Tropic of Cancer(, but then times when the noonday sun passes overhead and is, for a while, to the north. As it swings over and past you, the day length lengthens and then shortens again – as you can see in the graph.
OK, that’s enough of the Equation of Time for this week. Next time – another oddity about solar movements through the year, together with some thoughts about what this all means for us humans as we have observed the sun through the years. I am convinced that our remote ancestors knew about these patterns (though probably didn’t dress them up in the sines and cosines used by modern maths) and incorporated this knowledge into their monuments and observatories. But more of that next time…
There have been some great pictures of Mars coming out recently from the Indian Mars Orbiter spacecraft so I thought I’d include a few here, together with an ESA video of a simulated flyby of one of the great valleys on Mars, the Mawrth Vallis.
So here is Phobos, tiny against the curve of Mars and very close in its orbit. Most of chapter 2 of Timing takes place on this moon, partly at Asaph, a (hypothetical) settlement facing away from the planet. and partly at a sort of industrial estate in the Stickney crater facing inwards.
And here is a three-d representation of Olympus Mons, the second highest mountain in the solar system. In the book, there’s a financial training college on the lower slopes of the mountain, roughly in the foreground as you are looking at the picture.
To celebrate all this I am running a science fiction Kindle Countdown offer right now – prices start at £0.99 / $0.99 and slowly increase to the normal price by next Monday. So don’t delay… Links are:
Finally, here’s the ESA video flyby of Mawrth Vallis. It’s one of the various places where – long ago – liquid water most likely ran and shaped the terrain we see. Now it is of course dry, but it’s a place that will be the focus of science at some point in the international effort to explore the red planet.
Many years ago I read a short science fiction story by Isaac Asimov called The Martian Way, which he published in 1952. In this, planet Earth maintained control over ambitious colonies elsewhere in the solar system by means of controlling the water supply. At the start of the story everyone assumed that Earth’s vast oceans were the only source of water available. Whoever controlled the water was in charge. The plot is resolved by the retrieval of a piece of Saturn’s rings the size of a small mountain, made largely of ice. With some modest engineering work this was propelled back to Mars where it was needed. The possibility of autocratic rule based on control of the necessities of life was gone.
It was a good story, and highlights our changing comprehension of the place of water in the universe at large. Go back only a century or two, and there was a widespread assumption that whatever other worlds might exist would be pretty much like Earth. Features on the Moon were called seas, bays, lakes and marshes, presuming that they held open water. Early science fiction writers like Jules Verne (From the Earth to the Moon and Around the Moon) and HG Wells (The War of the Worlds and The First Men in the Moon) took for granted that interplanetary travel would be relatively easy, and that once you landed, you would need no special protection except against low temperatures comparable to the Arctic. When in 1877 Italian astronomer Giovanni Schiaparelli named features on Mars canali (the Italian word for ‘channels’), nobody hesitated to use the English word canal.
Then came the early days of space travel, along with a dramatic increase in the power and accuracy of telescopes. The lunar seas turned out to be open plains with no running water at all. The surface features on Mars ceased to be seen as artificial water channels, and were reinterpreted as the result of natural weathering on dry rock. The language we used for the planets changed. In 1961, Arthur C Clarke wrote A Fall of Moondust, where the plot hinged on the total absence of water. In 1969, Buzz Aldrin referred to the “Magnificent Desolation” that he saw on stepping out of the Apollo 11 lunar module. Imagery from the Apollo missions – and the personal accounts of astronauts – established the idea in the popular consciousness that the vivid blue of Earth’s oceans was something unique and precious in a starkly barren universe. The image was reinforced by the “Blue Dot” picture taken from the Voyager I probe.
But after that, there was another wave of observations and information. Perhaps water was not so rare after all. The first target was the Moon, and a careful study of places which are permanently shadowed regions. It turned out that ice will tend to aggregate anywhere which is in shadow most of the time. Buzz Aldrin, turning to fiction in Encounter with Tiber, positioned an early lunar settlement at the Moon’s south pole, specifically because of this new-found source of water. The search for ice spread wider, and now it seems that pretty much everywhere we look we find it.
The asteroids have significant amounts scattered here and there, with some impressive finds by NASA’s Dawn probe. Mars itself shows every sign that open stretches of water once shaped the terrain, though accessing it nowadays might be tricky. As I was writing this, NASA reported the discovery of an underground body of ice just under the Martian surface. It seems that Asimov’s water-seeking Martian settlers would not have needed to trek out to Saturn after all. If they did go there anyway, they would find no mile-high ice mountains since the rings are largely made of tiny granules. However, several moons of both Jupiter and Saturn apparently have ice as their surface crust, and liquid water below.
So wherever we look in the solar system we find water, usually in the form of ice. Tomorrow’s space travellers and colonists will not have to worry about having access to water, though they will have to construct specialised equipment to access it. In Far from the Spaceports and Timing, my own fictional inhabitants of the Scilly Isles, somewhere out in the asteroid belt between Mars and Jupiter, will have to import many of life’s necessities, but not water – they will be able to find their own local supply.
Asimov wrote The Martian Way just as our scientific understanding was changing – indeed, as in so many other things, he was ahead of his time. Although some of the details of his account would need updating, the basic theme remains sound. If and when we spread around the solar system, finding water is not going to be a problem.
There’s been a whole rush of space news these last few days, and what better place to gather some of it together than here? Most of it has some relevance to the Far from the Spaceports series…
The first item I saw was an update from the Dawn spacecraft, going through a series of changes to its orbit around Ceres. For a long time it was orbiting closer to the asteroid than the International Space Station is to Earth, less than 400km from the surface, and now it is returning to a much higher orbit to complete some science measurements from about 1500km. And as a treat we got back this picture of the Occator crater, one of the main locations for the bright white spots scattered here and there on the surface. More details can be found at the NASA site.
In between I read how Elon Musk is pushing ahead with his plans for a privately funded settlement on Mars – the announcement was made back at the end of September but I had not previously followed the details through. His idea is ambitious, involving a fleet of reusable rockets working towards a colony of a million individuals, sent in groups of 100-200 at a time. More details can be found at several places including space.com. According to his figures, the price per individual will drop to around $100,000-$200,000 – a lot of money, to be sure, but not unreachable. His current aim is to get an unmanned version sent on its way in about 18 months, and manned flights within a decade. We shall see…
Then Cassini sent back this splendid picture of Saturn’s north pole. I was especially interested in that, since the planned book 3, following Far from the Spaceports and Timing, will include Saturn – or at least its moons – as a destination. Cassini has returned vast amounts of information about Saturn since 2004, but will run out of fuel late next summer and will be deliberately rerouted to burn up in Saturn’s atmosphere. This picture was taken at something like 1.4 million km from Saturn – 3 or 4 times the distance from the Earth to the Moon. More details can be found at the NASA site.
Finally a science article on a potential new form of spaceship engine has now been peer reviewed and published… Called the EmDrive, it was first worked on about 15 years ago by a British scientist, Roger Shawyer, and has now been taken up by NASA for serious study. The theoretical problem is that nobody has come up with a satisfactory explanation of how it could work: however several teams in the US and China have reported success, so maybe it’s going somewhere. Have a look at this link for the latest news, or this link for some very sketchy details.
That’s all for today, but I’m sure there will be much more to come…
The first part of this blog talks about background, so if you’re keen to read instead about my new chat-bot Blakeley Raise, just skip down a few paragraphs… I’m very excited about Blakeley Raise, and hope you’ll check out the new possibilities. If you can’t wait to give it a go, click here.
So, the background… I had the great pleasure of going to the technical day of the Microsoft London Future Decoded conference last week. It was packed with all kinds of interesting stuff – far too much to take in in the course of a single day, in fact. There were cool presentations of 3D technology – the new HoloLens device, enhanced ways to visualise 3D objects within a computer, and how 3D printing is shaking up some parts of the manufacturing industry. And lots of other stuff.
But it all threatened to be a bit overwhelming, so I kept my focus quite narrow and stayed mostly with the AI stream of presentations. Top level summary: Slate (in Far from the Spaceports and Timing) has no need to worry about the competition just yet – it will take many generations of development before anything like her emerges. But the work that is being done is genuinely exciting, and a mixture of faster hardware, reliable communications, and good programming practice means that some tasks are now trickling into general everyday use.
One speaker used the phrase “slices of intelligence” to capture this, recognising that real intelligence involves not only a capacity to learn tasks and communicate visually and in words, but also to reflect on success and failure, set new challenges and move into new environments, interact with others, be aware of moral and ethical dimensions of an action, and so on. We are a very long way from producing artificial intelligence which can do most of that.
But within particular slices lots of progress has been made. Natural language parsing is now tolerably good rather than being merely laughable. Face recognition, including both identity and emotion, is reasonably accurate – though the site http://how-old.net/ produces such a vast range of potential ages from different pictures of the same person that one can be both flattered and disappointed very quickly (give it a try and you will soon find the limitations of the art at present). On a philanthropic note, image recognition software has been used to provide blind people with a commentary on interesting things in their immediate neighbourhood: see the YouTube snip at the end of this blog.
Here’s the bit about Blakeley Raise… For those of us who develop our own software, it is an exciting time. It is extremely easy now to develop a small program called a chat-bot which can be incorporated not just into web pages, but also message applications like Skype, Facebook Messenger, and a host of others. So inspired by all this I have started developing Blakeley Raise, a bot who is designed to introduce potential readers to my books. You can think of Blakeley Raise as a great-great-ancestor of Slate herself, if you like, though I don’t think Slate will be feeling anxious about the competition for a long time yet.
But one of the great things about these bots is that they can be endlessly reconfigured and upgraded. Right now, Blakeley Raise just works by recognising keywords and responding accordingly. Type in “Tell me about Timing” – or any other sentence containing the word “Timing” – and you’ll get some information about that book. To find out more, navigate your browser to http://www.kephrath.com/trial/BlakeleyRaise.aspx and see what happens. All being well – meaning if I can solve a few technical problems – Blakeley Raise will soon appear on other distribution channels as well. (For those who remember the episode where a Microsoft bot quickly learned how to repeat racist and other inflammatory material, don’t worry – Blakeley Raise does not learn like that.)
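For the curious, the keyword-matching idea is very simple to sketch in code. The snippet below is not Blakeley Raise’s actual implementation – the keywords and replies are purely illustrative – but it shows the general shape of a bot that scans a message for known words and picks a canned response:

```python
# Illustrative sketch of a keyword-matching chat-bot responder.
# The keywords and reply text here are made up for the example.

RESPONSES = {
    "timing": "Timing is the second book in the Far from the Spaceports series.",
    "spaceports": "Far from the Spaceports introduces Mitnash and his AI partner Slate.",
}

DEFAULT_REPLY = "Sorry, I didn't catch that. Try asking about one of the books."

def reply(message: str) -> str:
    """Return the first canned response whose keyword appears in the message."""
    text = message.lower()
    for keyword, response in RESPONSES.items():
        if keyword in text:
            return response
    return DEFAULT_REPLY
```

So “Tell me about Timing” matches the “timing” keyword, while anything unrecognised falls back to the default reply. The appeal of this approach is exactly that it can be endlessly reconfigured: adding a new topic is just a matter of adding another keyword and response.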
Finally, here’s a video of one of the more philanthropic spinoffs from Microsoft’s enthusiasm about AI in practical use…