We’ve got decades to go… decades!

Sometimes I simply cannot avoid the childish delight that erupts from my mind when I switch on my mobile phone in the Maldives and find email messages starting to arrive.  The magic of it — the fact I’m on the other side of the planet and this stuff still works — never ceases to amaze me.

Other times, though, I’m reminded of just how far we’ve got to go.  I’m reminded that for all the hurrahs, whooping iPhone adverts and tweets about the gorgeous HTC Desire, the reality is, we’re no further forward than we were perhaps 4-5 years ago.

4-5 years ago I could download an application and stick it on my Nokia.  I could get off the plane in the Maldives and my phone worked, magically.  I could get my email pushed to me.  I could get IM anywhere on the planet with the fantastic BlackBerry Gtalk integration.

Not much has changed, sadly.  Faster devices, better ‘experiences’ — but only incrementally so.

Does this bus go past Great Windmill Street?

It was a simple question.  I was walking along Holborn with the camera equipment the other day, heading to a Nokia briefing about 10 blocks away.  As I walked past an unfamiliar bus stop, I briefly wondered if the number 38 that had just pulled up went past Great Windmill Street (the location of my meeting).

I looked around slowly.  Moving whilst you’re carrying the camera equipment is not a fun experience.

The bus stop signage was blank with a ‘sorry there’s no information here‘ message from Transport for London. I considered getting my iPhone out — for I knew the London Bus app would tell me what I wanted to know in a few clicks.

And that’s when it dawned on me

That’s when I was reminded of just how limited the current mobile interfaces are today.  What I really wanted to do was point my phone at the bus and for my phone’s screen to immediately change to a contextual menu of possibilities related to the bus.

Right away I wanted my phone to ‘do a Terminator‘ — you know — start scrolling a pertinent list of information about the bus, about my location, about the possibilities associated with this bus.  I wanted a route-map from my location to the end of the line, with a convenient flashing dot putting me in context. I wanted the system to have automatically calculated that by taking *this* bus, I would arrive at Great Windmill Street in 5.8 minutes.  The system would have naturally been aware of my schedule and the location of my next meeting and used that data accordingly.

And I wanted more!

Standing there I wondered why I couldn’t immediately order a taxi from the handset.  I’d have liked the phone or the cloud or the ‘mobile network’ to have already polled the available taxis in the area and automatically determined that I could be at my destination in 3 minutes for a cost of 6 pounds. Plus a network operator ‘booking fee’ of 20p. Or something like that.

I’d have liked to have seen the “Book taxi? 15 seconds arrival time” option on screen.  I’d have liked the device to have presented the taxi option to me because it knows I prefer, all things being equal, to use pre-taxed income as a business expense and thus 6 quid wouldn’t have bothered me too much, especially given my heightened blood pressure, heart rate and stress indicators (from carrying all the camera gear).

Further, I’d have liked the phone to have automatically prioritised my preferred taxi providers.

This should all have been displayed to me dynamically and *immediately*. No clicking, no data lag, no messing around.

Alas, I’m dreaming

For all the ‘my Desire is better than your N900 which is better than your N8 and your X10‘ discussions, for all that hot air drifting around the industry about ‘LTE’ and ‘LBS’, the sad reality is that this kind of user experience is decades away.  Decades.  Oh, the basic ‘apps’ exist, especially on the iPhone.  I can piece together this experience in about 5 minutes with multiple, multiple apps and some phone calls.  But goodness me, the industry can’t even collectively ensure I can make a 4k/sec voice call without it dropping as I walk down Oxford Street.

The industry can’t even sort itself out with an ecosystem that enables app developers to plug-in to some kind of greater mobile consciousness that would deliver this kind of experience to everyone. This is the same industry that collectively demands 30-40% of ‘anything’ that goes through its transactional systems. Dear me.

It’s going to be years and years before we can get this kind of thing.

By Ewan

Ewan is Founder and Editor of Mobile Industry Review. He writes about a wide variety of industry issues and is usually active on Twitter most days. You can read more about him or reach him with these details.

11 replies on “We’ve got decades to go… decades!”

@Ewan… two thoughts: (1) isn't this the essence of what “enhanced reality” is supposed to be able to do (whether you have to wear trendy (nerdy?) glasses or not…) and (2) what you are describing could (broadly) be delivered by a mashup if the various services expose an API (and most do expose a REST API these days). So your camera takes a pic… the OCR app captures the text “38” which gets pumped into your bus app… and at the same time your real-time location-based service determines where you are, where the nearest cab is, reads your diary for the destinations etc.

The only bit of that that is a bit doubtful at present is the “recognise this is a bus and get the number bit” but all the rest should be fairly doable with existing applications.
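The mashup Stephen describes could be glued together roughly like this. A minimal Python sketch, assuming entirely made-up inputs and a hypothetical route table — none of the names here are real APIs:

```python
# Hypothetical mashup glue: OCR text + GPS fix + diary entry -> one contextual answer.
# Every data source here is a stand-in for a real service's API.

from dataclasses import dataclass

@dataclass
class Context:
    bus_number: str
    location: tuple        # (lat, lon) from the phone's GPS
    destination: str       # pulled from the next diary appointment

def build_context(ocr_text: str, gps_fix: tuple, next_appointment: str) -> Context:
    """Glue step: each input would come from a separate service."""
    digits = "".join(ch for ch in ocr_text if ch.isdigit())
    return Context(bus_number=digits, location=gps_fix, destination=next_appointment)

def answer(ctx: Context, route_lookup: dict) -> str:
    """Does this bus pass the destination? route_lookup maps bus number -> stops."""
    stops = route_lookup.get(ctx.bus_number, [])
    if ctx.destination in stops:
        return f"Yes - the {ctx.bus_number} stops at {ctx.destination}"
    return f"No - the {ctx.bus_number} does not pass {ctx.destination}"

# Example with invented data:
routes = {"38": ["Holborn", "Great Windmill Street", "Victoria"]}
ctx = build_context("Route 38", (51.5174, -0.1196), "Great Windmill Street")
print(answer(ctx, routes))  # "Yes - the 38 stops at Great Windmill Street"
```

The point is that the hard part isn't any single step — it's that nobody stitches the steps together behind one gesture.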

So I think you are being a bit pessimistic in the “years and years” estimate… when something can be created from broadly “off the shelf” stuff, sooner rather than later some hobbyist will start gluing the stuff together in exciting and fun ways… and they will gather a cutting-edge group of fans… and sometime after that someone (e.g. a clueless operator) will throw a lot of money at them in an attempt to jump on the bandwagon… and they will then butcher and mutilate the concept in their official v1.0 release, but at least it makes the concept mainstream… and then the original developers will leave and re-launch a mainstream working version under another name (or the open-source version takes off and leaves the operator-locked-in version for dead)…

Anyway, the key is that people with a “product vision” evangelise those ideas, and eventually someone will build it!

As I read through your use case here I started thinking practically about how I could actually implement this in London Bus. All the technology exists today that could make this happen. The bus would need some sort of near-field communication, or at least a realtime tracking system (they are all already fitted with GPS systems) that I could reference based on the bearing of your phone. There are security implications of being allowed access to appointment data etc, but that said, all of this can sort-of be done. Or go for a cortex view of what you are looking at (i.e. I know you are looking at a bus… perhaps you want to know where it goes) like what the crew at WINEFindr did. That technology took 5 years to develop from primary research.
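The bearing idea above is sketchable: given the phone's position and compass heading plus a realtime feed of tracked bus positions, pick the bus closest to the direction the phone is pointing. A rough Python illustration, with invented bus data and a tolerance I chose arbitrarily:

```python
# Match the phone's compass heading against tracked bus positions.
# tracked_buses is a hypothetical feed: list of (bus_id, lat, lon).
import math

def bearing_to(lat1, lon1, lat2, lon2):
    """Initial compass bearing in degrees from point 1 to point 2."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return (math.degrees(math.atan2(y, x)) + 360) % 360

def bus_in_view(phone, heading, tracked_buses, tolerance=15.0):
    """Return the bus id best aligned with the heading, or None."""
    best = None
    for bus_id, lat, lon in tracked_buses:
        # Signed angular difference folded into [0, 180]
        diff = abs((bearing_to(phone[0], phone[1], lat, lon) - heading + 180) % 360 - 180)
        if diff <= tolerance and (best is None or diff < best[1]):
            best = (bus_id, diff)
    return best[0] if best else None

# Phone facing due north; the 38 is directly north, the 73 is off to the east.
print(bus_in_view((51.5174, -0.1196), 0.0,
                  [("38", 51.5184, -0.1196), ("73", 51.5174, -0.1100)]))  # "38"
```

GPS and compass error would make this fuzzier in practice, which is presumably part of why nobody ships it.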

Stepping back from all the technical stuff, you touch on a wider point here. One user commented to me that getting real-time information for all 20,000 bus stops & where their buses are in London is a no-brainer and could not understand why I’d left such an obvious piece of functionality out; I must be stupid. And therein lies the problem: more & more, people’s expectations of technology are starting to outweigh what’s currently possible, or even simply put together.

BTW TfL are working on making real-time bus information available, go live is about mid 2011.

Stephen, aye — indeed — however I don't want to mess around taking photos. I want the whole thing to just work. Indeed I suppose I need an 'always on' camera. I shouldn't have to tell the device that I need it to switch on its camera so it can do some OCR, do you see what I mean?

Yup, you shouldn't need to take a pic per se, but you would have to “initiate the scan” somehow I think (simply because of the battery implications of constant camera use + network lookups for data).

The ideal would be eyeball tracking (e.g. like they do with website usability test equipment) with an “attention threshold” – stare at something for > n seconds and a context menu appears.
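That “attention threshold” is a simple piece of state to implement. A hedged sketch in Python (the class name and threshold value are my own, not from any real gaze-tracking kit):

```python
# Fire a context menu when the gaze stays on one target for >= threshold seconds.
import time

class DwellDetector:
    def __init__(self, threshold=2.0):
        self.threshold = threshold
        self.target = None
        self.since = None

    def update(self, target, now=None):
        """Feed the current gaze target each frame; returns the target when dwell fires."""
        now = time.monotonic() if now is None else now
        if target != self.target:
            self.target, self.since = target, now   # gaze moved: restart the clock
            return None
        if target is not None and now - self.since >= self.threshold:
            self.since = now                        # fire once, then start counting again
            return target
        return None
```

The hard engineering is the eyeball tracking itself; once you have a stream of “what is being looked at” events, the threshold logic is trivial.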

Again, the core tech components are COTS stuff; it's not theoretical stuff still on a breadboard in someone's basement. It just needs stitching together in a coherent way.

Probably the best analogy I can think of is the home automation world, X10, all that stuff. This has been floating around for ages and has broadly been enthusiast only… but increasingly major manufacturers are bringing out “home servers” to run this stuff, and people are building it into “smart homes” for both eco reasons and to control their multimedia systems… pretty quickly this will become mainstream and unremarkable. I guess it's the “Tipping Point” argument?

Anyway – how about MIR gets Nokia or Google or Apple (or Samsung or HTC) to sponsor a competition for the best integrated enhanced/augmented reality app? 😉

“Make it so!”

I must say I really enjoyed reading this article and I must agree with quite a lot of the points that you have made. The tech is obviously there with the likes of Google Goggles, however it just seems that it is not being applied across the board. Maybe someone will pick up on this article and create something that useful 🙂

I agree with your article but you are going a bit too far when you say that you should just be able to point your phone at something and all of that stuff should happen. Is pushing one icon to make all of the magic happen too much to ask? With that kind of logic I guess you are unhappy with having to press the mail icon to read your email messages.
