This week, Layar, a mobile augmented reality browser for Android phones, launched globally to an incredible amount of excitement in the tech community. So, what exactly is Layar? Well, I’ll let Mashable describe it, because they can do a much more efficient job than I can:
“Layar is a Reality Browser, which means it displays real time digital meta data on top of the physical world around you, as seen through the camera of your mobile phone. Point the camera anywhere, and you’ll see layers of information on top of real world objects; these layers can be real estate info, bars and shops, tourist information, tweets from users etc. Imagine sitting in an internet cafe and seeing what the folks around you are tweeting through your camera? Well, that’s exactly how it works”.
Only, here’s the problem. That may be how it’s being pitched/positioned, but that’s not “exactly” how it will work when consumers get their hands on it.
Let me be honest: like most interactive marketers, when I first read about this tech I felt like Steve Martin (in “The Jerk”) when the new phone books showed up. My mind was swimming with possibilities – I mean, think about all the ways we could use AR to provide real value to consumers. Think of the amazing experiences we could create! Then I talked to our Senior Software Engineer, who unceremoniously brought me back down to earth with a “thud”. The problem, as it turns out, is not with the software that’s being developed (I firmly believe that companies like Layar deserve a serious “hat-tip” for pushing the industry forward) but with the hardware (i.e. the phones) that consumers will be running these technologies on. The fact is, the hardware just isn’t accurate enough to deliver the types of precise experiences that are being showcased/promised in videos around the net. Here are the two primary reasons:
- THE GPS – No matter what handset you’re using, we aren’t dealing with military-grade GPS. The fact is, even in the best conditions (please note the word “best”), civilian GPS is only accurate to within about 50 feet. With that in mind, we spent the other day playing with Layar on a Google G1 (note: the phone’s hardware is not Google’s but HTC’s) to see what we were dealing with, and we noticed something. Typically we experienced a GPS accuracy level of somewhere between 100 and 250 feet. Now, remember, that error could run 250 feet to the left, right, forward, or backward… So really, the device was telling us that the piece of data overlaid on the phone’s screen was somewhere within a 31,400 square foot (if accuracy was 100 feet) to 196,250 square foot (if accuracy was 250 feet) area. That means you won’t be able to swing your phone around an Internet café and match a tweet with a face. In fact, you can’t even safely assume that the person whose tweets you’re reading is in the café with you (not to mention you may feel a touch silly holding up your phone and spinning around in an internet café anyway, but that’s a conversation for another day).
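For anyone who wants to check the back-of-the-envelope math above: the uncertainty zone is just the area of a circle whose radius is the reported GPS accuracy. A minimal sketch (it uses π ≈ 3.14, which is how the figures in the text appear to have been rounded):

```python
def uncertainty_area_sq_ft(accuracy_ft):
    """Area (in square feet) of the circle the true position could
    fall anywhere inside, given a GPS accuracy radius in feet.
    Uses pi ~ 3.14 to match the rounding in the text above."""
    return 3.14 * accuracy_ft ** 2

print(uncertainty_area_sq_ft(100))  # 31400.0 sq ft at 100 ft accuracy
print(uncertainty_area_sq_ft(250))  # 196250.0 sq ft at 250 ft accuracy
```

For scale, 196,250 square feet is roughly three and a half football fields of ground your “nearby” tweeter could be standing on.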
- THE IPHONE 3GS’ COMPASS – As of right now you can’t get AR apps on the iPhone, but that’s going to change next month. So we should probably discuss this now, as the new phone’s compass will be a major component of most GPS-related executions. Let’s check out the screenshot below. Do you see that “V” coming out from the user’s location? That “V” indicates that they are facing somewhere within those parameters. So, if the device doesn’t know exactly which direction you’re facing, how can you tell whether the real estate data you’re looking at is actually for the house you’re viewing through your phone’s camera? Not to mention that as you get farther away, the margin of error increases. So that subway station you think you’re walking towards… it could actually end up being three blocks over and two blocks up. And that’s just what you want when wandering around New York, right? A nice game of “hot or cold”.
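To get a feel for why that margin of error grows with distance: the sideways offset of an overlaid marker is roughly distance × tan(heading error). A quick sketch; note that the 15-degree heading error is purely an illustrative assumption on my part, not a published spec for the 3GS compass:

```python
import math

def lateral_error_ft(distance_ft, heading_error_deg):
    """Sideways offset (in feet) of an AR overlay caused by a
    compass heading error, at a given distance from the user.
    Small error angles near the user become big misses far away."""
    return distance_ft * math.tan(math.radians(heading_error_deg))

# Hypothetical 15-degree heading error at increasing distances:
for distance in (100, 500, 1000):
    print(distance, round(lateral_error_ft(distance, 15), 1))
```

At a few hundred feet that hypothetical error is already wide enough to point you at the wrong house; at half a mile it can point you at the wrong block entirely.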
At the end of the day, if the hardware can’t accurately tell: where you are, where the data that’s being overlaid is anchored, or where you’re facing… how reliable and useful an experience can you possibly have?
Now, you may be thinking that those two inaccuracies noted above aren’t that bad. Certainly, when you’re driving a car you never really noticed any issues with your GPS, right? Here’s the thing: an exit ramp being a couple hundred feet off is not a big deal. Between the street signs, the fact that there’s only one ramp in the area, and your speed, you just don’t notice the inaccuracies. But if you’re going to overlay data on a precise location (e.g. real estate information about a house, or information about a city’s historical landmarks) via a phone’s video screen, those inaccuracies make a huge difference in the consumer experience. And here’s the corker: if, at the end of the day, the data isn’t accurate or reliable, why will consumers use it beyond that initial “this is cool/different” moment the first time they try it? So, I guess the real question is: why would a consumer continue to use it?
Now, don’t get me wrong; my hope is that there are developers out there who are seeing this technology and having an “A-ha!” moment. I’d love it if some fantastic and useful apps get built even with the hardware’s limitations. I’m starting to doubt that it will happen, given everything I’ve been discussing, but a guy can always hope, right? All I’m trying to do today is manage your expectations, because the experiences being promised are not what you’re going to get when the application is actually in your hands.
So, enough blabbering out of me, what do you think?