Today’s topic is anticipation and context, two themes which drive a great deal of our human interactions. Even when two people don’t know each other very well, a shared context gives them an enormous head start in mutual understanding. As I started to write this, I carried out a little experiment. I typed into a Google search page “who won the cricket world cup?”, and Google correctly guessed that I was interested in the most recent one – last year – and gave me the right answer.
Then I typed in “where was it played?” – a question that any human conversationalist would recognise as directly referencing the first question. Of course the Google web search engine has no such context, and the first page of suggested links included some pages on tabletop board games, medieval music, the date when the musical Hair was released from censorship (1968), and a song by the Dave Matthews band. Can you imagine a set of answers like that turning up in a pub conversation about cricket?
Was this a fair test? Clearly not, one might say. Yet it shows how even a hugely successful search engine can be woefully inadequate at recognising context. If we are going to think of software as intelligent, it has to be able to offer suggestions which are contextually appropriate – answers to questions, for sure, but also behavioural changes that adjust to changing situations and an awareness of what I am actually seeking. If something cannot adjust like this, we are unlikely to think of it as intelligent.
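To make the idea of “carrying context” concrete, here is a toy sketch in Python. It is purely illustrative – nothing remotely like the machinery inside a real search engine – and the `remember` heuristic is entirely made up for this example. All it shows is the shape of the problem: the word “it” in a follow-up question has to be resolved against whatever topic the previous question established.

```python
def remember(query: str, context: dict) -> None:
    """Crudely remember the last query's topic (a made-up heuristic:
    treat everything after the word 'won' as the topic)."""
    if "won" in query:
        context["topic"] = query.split("won", 1)[1].strip(" ?")

def resolve(query: str, context: dict) -> str:
    """Substitute a bare 'it' with the remembered topic, if we have one."""
    topic = context.get("topic")
    if topic and " it " in f" {query} ":
        query = query.replace("it", topic, 1)
    return query

context = {}
remember("who won the cricket world cup?", context)
followup = resolve("where was it played?", context)
# followup is now "where was the cricket world cup played?"
```

Even this trivial version captures the asymmetry in my experiment: with the shared context, the follow-up becomes answerable; without it, “where was it played?” could be about anything at all.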
Now, all of the major players in the online world are aware of this, especially when the results are being delivered to a mobile phone where my patience level, reliability of connection, and willingness to trawl through dozens of potential matches are all pretty low. We want the system to know enough about us that we don’t have to pedantically explain the same stuff over and over again. Mobile search is going this way, and mobile route planning is already there.
But… the flip side of that is security, or trust, if you prefer. Do you want the major online players to know so much about you that they can anticipate your every whim? Who else gets to know that much about your actions? In the debate about convenience versus privacy, many – perhaps most – people actually want convenience. Privacy is hard work: it is technically difficult and, usually, inconvenient. Lots of people can’t really be bothered, and simply trust that what they are doing is insufficiently interesting to attract the wrong kind of attention.
Which brings us to Far from the Spaceports. Mitnash and Slate are, in part, ethically motivated hackers. They have expertise in the art of cracking into someone else’s code, but they’re doing it in the interest of tackling crime rather than committing it. But it’s a fine distinction, and some of their actions probably cross a moral line somewhere:
“I just don’t know what the pair of them will get up to once they get in to the system. They’re going to take this a lot further than I would”…
A purist would say that what we planned to do was not exactly legal, any more than the code inspection at the relay buoy was. But then, there were not exactly any laws that applied to this situation…
But hacking aside, one of the most prominent features of the relationship between Mitnash and Slate is their ability to anticipate what the other wants next. For all the differences in their respective hardware, as coworkers and friends they operate with each other on an almost unconscious level.
As an aside, and I’ll be saying more about it later in the week, there’s a Facebook launch event for Far from the Spaceports next Monday, December 7th, 7pm-9pm UK time.