

How augmented reality is augmenting its own future

People have been talking about augmented reality (AR) for years. In fact, the conversation goes back to 1901, when L. Frank Baum’s The Master Key: An Electrical Fairytale was published.

In it he described how someone wearing a pair of glasses would be able to see whether the person they are talking to is good or evil, wise or foolish, kind or cruel:

I give you the Character Marker. It consists of this pair of spectacles. While you wear them every one you meet will be marked upon the forehead with a letter indicating his or her character. The good will bear the letter ‘G,’ the evil the letter ‘E.’ The wise will be marked with a ‘W’ and the foolish with an ‘F.’ The kind will show a ‘K’ upon their foreheads and the cruel a letter ‘C.’ Thus you may determine by a single look the true natures of all those you encounter.

Fast-forward 100 years, give or take, to the year 2000 and you have ARQuake [PDF], an augmented reality version of the game Quake created in the Wearable Computer Lab, a part of the computer and information science department [PDF] at the University of South Australia.

ARQuake

Now, we might not quite be at the stage of determining whether someone is truly good or evil using augmented reality, or indeed of playing Quake in the streets, but the idea of looking at someone, or something, through a smartphone or tablet and finding out additional information is becoming more common.

However, with all the talk of AR, I’ve found the reality disappointing. With companies like Blippar, it’s currently most often used for marketing and advertising, and while I might appreciate being able to see your latest ad come to life on my magazine pages, I was ultimately hoping for something a little more exciting from the future.

Part of the problem, as I see it, is that the term is so broad and the possibilities so varied that it creates no area of focus, and no clear understanding from users or businesses.

In pursuit of having my faith restored, I caught up with notable businesses in the space, such as Layar, Taggar, Metaio and Infinity AR, to see what the future holds through their eyes. Here are just a few of the current and near-term possibilities for augmented reality.

A sticky problem

Stickiness is the term used to define how successful a company or a service is at getting a user to return in future. In publishing, sticky content means repeat readers. In the context of apps and augmented reality, it means the need to convince people to open the app more than once or twice. Augmented reality might have been mooted for a long while, but its key problem today is the same as when Wikitude launched for Android back in 2008 – it’s still mostly gimmicks.
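
One rough way product teams quantify stickiness is the ratio of returning users to all recent users, for example daily versus monthly actives. The snippet below is a generic Python sketch of that idea, not a metric any of the companies mentioned here have published.

```python
from datetime import date

# Hypothetical app-open events: (user_id, date of session)
sessions = [
    ("alice", date(2014, 1, 20)), ("alice", date(2014, 1, 21)),
    ("bob",   date(2014, 1, 5)),  ("carol", date(2014, 1, 21)),
]

def stickiness(sessions, day, month_days=30):
    """Share of the month's active users who also opened the app on `day`."""
    monthly = {u for u, d in sessions if 0 <= (day - d).days < month_days}
    daily = {u for u, d in sessions if d == day}
    return len(daily) / len(monthly) if monthly else 0.0

print(stickiness(sessions, date(2014, 1, 21)))  # 2 of 3 monthly users returned -> ~0.67
```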

One company looking to get around this problem is Taggar, launched in December for iOS, which thinks the answer to the stickiness problem lies in social integration and cloud hosting of content. The former should ensure word spreads about the service and adds a level of interaction, while cloud hosting removes the need for on-device processing, potentially opening the service up to virtually any device.

What this means for users of the Taggar app is that they can ‘tag’ any real-world object with pictures, images or videos. Other people can then come along and leave a response, or ‘tag over’ the original tag.
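
As a rough mental model of how a cloud-hosted tag service like that could be structured – a hypothetical sketch, not Taggar’s actual API – each recognised object maps to a pile of tags that anyone can add to or respond to:

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical cloud-side store: one list of tags per recognised object.
# In a real service the object_id would come from image recognition and the
# media would live in cloud storage rather than in memory.
tags_by_object = defaultdict(list)

def add_tag(object_id, author, content, media_type="image"):
    """Append a tag (picture, video, message...) to a real-world object."""
    tags_by_object[object_id].append({
        "author": author,
        "content": content,
        "media_type": media_type,
        "created": datetime.utcnow(),
        "likes": 0,
    })

def scan(object_id):
    """What a user would see after scanning the object: newest tags first."""
    return sorted(tags_by_object[object_id], key=lambda t: t["created"], reverse=True)

add_tag("album-cover-123", "fan_1", "https://example.com/my-cover-art.jpg")
add_tag("album-cover-123", "fan_2", "Saw him live last week!", media_type="text")
print([t["author"] for t in scan("album-cover-123")])  # ['fan_2', 'fan_1']
```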

So far, the company has worked with brands on creating campaigns, like one for a Jason Derulo album cover that will allow fans to leave tags, automatically ranked by ‘likes’. At the end of, say, two months (the timeframe is currently undetermined), the fan with the most-liked tag will win a prize, like seeing him in concert. It’s definitely a step beyond the standard point-and-watch-a-movie-trailer uses we’ve seen from AR in the past, and the idea of layering multiple tags on the same content hasn’t really been seen much before, but so far, so promo.

However, Charlotte Golunski, head of marketing for Taggar, said that there are plenty of other uses for it too:

The secret messaging has been really popular with the schoolchildren we’ve tested with so far, who have been leaving secret tags [on the pictures of teachers in the hallways].

Other people have wanted to use it as more of a review service. So, on something like Foursquare where someone wants to become the expert on something, we’ve seen people do that with different objects or a piece of artwork, to become the experts. So as soon as you scan it, you get to find out about it and they want to be the person at the top of the leader board.

While that review scenario could apply to anything from artwork to restaurants, the app is also being used in more creative ways, Golunski explained.

One blogger took a photograph and then used it to show all the other photographs he took of the same place from different angles, and to create a making-of video to show what went into taking the photo. Other bloggers then came along and tagged the image with shots of the same place taken at a different time or in different weather conditions, Golunski said:

It’s like a bit of a tutorial, a bit of fan engagement. It was really about creating a deeper layer to the content without creating a whole new video and uploading loads more pictures. As soon as you scan one thing you can get all the different kinds of content, videos, stickers, pictures, drawings, without having to host them all on your Instagram or somewhere.

Clearly, ‘secret’ tags could also be used for things like a hidden guided tour around a city, a treasure hunt of some kind, or maybe even a creative marriage proposal. Golunski explained that people in their twenties and thirties have been using it for, ahem, adult content too.

Ruder content is particularly popular… a secret message you send to just one person, so maybe it’s your boyfriend or girlfriend and you’d have an image that you knew that was for them only, and they scan it and they see what the new thing is you’ve left for them.

This content can be updated – or deleted – at will, she added. However, it’s probably best to make sure it’s only a picture that you both have – while the tags are called secret, they are in fact all public. Next month, Taggar will be releasing an update that introduces the option to have private secret tags.

While Taggar already seems to offer a more compelling reason to return than many of its rivals, it starts to make a lot more sense when you remove the smartphone from the equation. I can’t imagine constantly pulling my phone out of my pocket just to check whether there are any secret tags around me, but transpose the idea onto something like Google Glass, where discovering tags and interacting with the content takes no extra effort, and I’m far more likely to engage with it.

The app only exists on iOS today, but the team is at work on versions for Android and Glass already, though it’s still early days on the latter. At the demo, I saw nutritional information pop out from a packet of popcorn simply by looking at it – combine this with something like Fitbit and you have an instant nutritional adviser telling you whether you’re okay to go ahead and treat yourself today or whether, actually, you’ve been treating yourself a bit too much lately and it might be better to pop it back on the shelf.
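
That pairing is, at its core, simple arithmetic over a calorie budget. Here’s a toy, entirely hypothetical illustration – neither Taggar nor Fitbit offers this integration today:

```python
def treat_advice(daily_budget_kcal, eaten_kcal, burned_kcal, treat_kcal):
    """Decide whether a scanned snack fits today's remaining calorie budget.

    daily_budget_kcal: target intake for the day
    eaten_kcal: calories logged so far
    burned_kcal: active calories reported by a fitness tracker
    treat_kcal: calories read off the scanned packet
    """
    remaining = daily_budget_kcal + burned_kcal - eaten_kcal
    if treat_kcal <= remaining:
        return f"Go ahead - {remaining - treat_kcal} kcal would still be left today."
    return f"Maybe pop it back on the shelf: you'd be {treat_kcal - remaining} kcal over."

print(treat_advice(2000, 1800, 350, 420))  # fits, with 130 kcal to spare
```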

The next version of the app for Glass-like devices would use a full overlay display to place content directly over the real world, rather than the top-corner display afforded by the current generation of Glass.

Evolving technology

Layar is one of the best-known and most downloaded (36 million downloads and counting) augmented reality browsers for smartphones. With it, you can overlay your view of the real world with points of interest, Instagram photos and all sorts of other ‘Layars’ of information and content. The company also works with publishers to create interactive content for magazines, websites, promotions and the like. While it’s clearly well suited to publishers looking to deliver extra value – and that’s where the company is most successful – Layar works with the automotive, real estate and education sectors as well.

“Augmented reality as a term is pretty tough right now, not everybody understands it and those that do expect a lot from it and it doesn’t always deliver, because it’s hyped so much,” Maarten Lens-FitzGerald, co-founder of Layar explained.

As one of the longer-standing businesses in the space, Layar has seen a lot of change since it began back in 2009. And frustrating early user experiences aside, 36 million downloads shows some appetite for the technology from consumers, in addition to the company’s 70,000 B2B users. But what are Lens-FitzGerald and the company looking ahead to now?

We were great in the beginning at telling you what’s next, because there was only what’s next. We started calling augmented reality the new mass medium, and we still do see it as the new Web and a very powerful medium. It just needs content and the right formats… It’s still a medium where the formats are very young, we are seeing early successes with campaigns, like magazines in the US that are doing full interactive issues that are one link per page.

With Layar we were always looking up and away and everybody was looking with us and loved to dream with us… and now we’ve learned to look down and be grounded and see what people need right now. The future is what you want right now, and that’s what we’re working on.

[Looking ahead though] I was just making a video of Layar on Glass, where I was reading a magazine and seeing how that would work, and that’s just so logical to see it come alive without using your hands… That is where I think right now, with iBeacon, with Glass, with all these connected devices and that new Web that is coming – AR is a very logical interface to lay over all these beacons or WiFi points or whatever else to connect to.

Ultimately, it’s about balancing the removal of technology (to make the experience more seamless) with what’s possible and genuinely useful for users.

“It’s about the context the user is in, and that is something that we, brought up on TV mass media, are not used to. We’re used to being fed all the content automatically… and now all of a sudden the content will only become available if I’m in the right context,” Lens-FitzGerald added.

Context is king

Layar may well be focusing on delivering interactive, contextual published content, but Israeli startup Infinity AR is working on bringing the idea of context together with augmented reality to essentially create a cheat sheet for life.

Infinity AR’s core proposition is a software platform designed to connect all manner of devices. However, rather than just plug everything together in the most basic of ways, it includes things like facial, voice and mood recognition – the latter two of which are powered through integration with Beyond Verbal’s Moodies ‘emotion engine’. By combining information from different sources, it can present you with timely, context-relevant information, like whether you need to fill up your gas tank before you even set foot outside. Enon Landenberg, CEO of Infinity AR, explained:

Some of it [is gathered] by GPS, some is computer vision, some of it is by contacts [but] what most of the augmented reality field is talking about is how we present it.

If you want to move from the gimmicky aspect of pointing your smartphone at a newspaper and seeing an article in video [form], you have to do something that seamlessly, in your day-to-day life, gives you another layer that you’re not exposed to. So we went back and looked at how we could bring you that other layer. The first thing we did is see what information we could gather from around you to understand what you need and what you want before you even ask for it.

Achieving this is no mean feat, but even mapping all the different types of information that could be gathered is only half of the story – devices have a key role in the equation. Landenberg explained that the company divided each device into inputs and outputs: your smartphone would be both an input (microphone, GPS, etc.) and an output device (screen, speaker), while a smart TV – or even your car – could also act as both. Your Nike FuelBand, on the other hand, can only be an input device, as it lacks a screen.
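
A minimal sketch of that input/output mapping might look something like the following – the device names and capabilities here are illustrative assumptions, not Infinity AR’s actual data model:

```python
# Hypothetical capability map: which of a user's devices can feed the platform
# data (inputs) and which can present information back (outputs).
DEVICES = {
    "smartphone":    {"inputs": {"microphone", "gps", "camera"}, "outputs": {"screen", "speaker"}},
    "smart_tv":      {"inputs": {"microphone"},                  "outputs": {"screen", "speaker"}},
    "nike_fuelband": {"inputs": {"accelerometer"},               "outputs": set()},  # input only: no screen
    "google_glass":  {"inputs": {"camera", "microphone"},        "outputs": {"prism_display", "audio"}},
    "meta_glass":    {"inputs": {"camera", "microphone", "gestures"}, "outputs": {"stereo_display"}},
}

def output_devices():
    """Devices the platform could use to present information right now."""
    return [name for name, caps in DEVICES.items() if caps["outputs"]]

def input_sensors():
    """Every sensor feed available across the user's devices."""
    return sorted({s for caps in DEVICES.values() for s in caps["inputs"]})

print(output_devices())
print(input_sensors())
```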

We mapped all the digital devices, including of course Google Glass, which is input and output, and Meta Glass which is a more complex input and output because it also has gestures.

Then what we understood was that once we had the input and output devices around us, we needed to understand the information. As an example, take a picture, once you take a picture from the input device [whatever that is], it’s just a collection of pixels, it’s nothing, unless you’re a human. If we want a computer to understand it we need to take the picture, the input and move it through different technologies like object recognition… If it’s not an object it’s a human, and we want to know who it is etc.

Once we understand the input we are receiving from your device, now we can start working and do our magic, and our magic is really connecting all these devices into one big brain. In parallel to receiving all the feeds from all the sensors, we are also using all your public information, social feeds, Facebook, Twitter, Foursquare, whatever. We’re using more and more layers of information about you and we’re learning [about] you; on our back-end we have an artificial intelligence platform that learns very well what you need and what you want. So I know that six out of the last ten restaurants you ate in are Italian so I understand you like Italian food, for example. And then we present back, using the most comfortable [appropriate] screen for you right now, the information you need right now. So if you’re leaving a meeting, I can tell you the best Italian restaurant on your Glass or on your smartphone.
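
The restaurant example in that quote boils down to frequency counting over recent check-ins. A toy version, with made-up data rather than anything from Infinity AR’s actual learning platform, might look like this:

```python
from collections import Counter

# Hypothetical feed of the user's last ten restaurant check-ins,
# e.g. pulled from public sources like Foursquare.
recent_checkins = ["italian", "italian", "sushi", "italian", "italian",
                   "burger", "italian", "italian", "thai", "burger"]

def preferred_cuisine(checkins, threshold=0.5):
    """Return the dominant cuisine if it accounts for over `threshold` of visits."""
    cuisine, count = Counter(checkins).most_common(1)[0]
    share = count / len(checkins)
    return (cuisine, share) if share >= threshold else (None, share)

cuisine, share = preferred_cuisine(recent_checkins)
print(cuisine, share)  # ('italian', 0.6) -> suggest an Italian place after the meeting
```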

In another scenario, Landenberg painted a picture of sitting in front of your TV at home and being informed via your smartphone that you have a meeting in the morning and that you don’t have enough fuel to reach your destination, so the alarm on your phone has been moved forward by 15 minutes to give you enough time to get to the nearest petrol station.

Take a look for yourself at what Infinity AR is trying to achieve in the concept video below. It’s at this point that I can’t help but think of L. Frank Baum’s ‘Character Marker’ glasses again. It’s pretty impressive, and perhaps concerning, stuff.

Landenberg argues, however, that the onus is on people to properly understand what they’re sharing publicly, and points out that most people have far more pictures of themselves on social media channels than the government officially holds.

The face-database of what Facebook has about you as a person is 500 times better than what the government has, but the funny thing is that all your pictures were uploaded by you or your friends, they’re not official pictures. So you update when you go to a restaurant or to a conference and you’re sharing that information out there, and now the main thing for you is to understand better what you’re sharing and what the outcomes of that sharing could be.

Hindered by hardware, not humans

It’s not a potential privacy backlash that’s holding back some of the most exciting uses for AR; it’s partly down to the hardware, Landenberg says:

To get the best out of augmented reality you need hardware that allows you to present augmented reality properly. And although I’m a big fan, Google Glass is not an augmented reality device because it’s not a [full] screen through… you have to look to the top of your forehead to see the screen. It’s not an additional layer or visualization on your day-to-day life.

Nonetheless, Landenberg isn’t worried about that, as he thinks the rapid pace of technological evolution will make up for any deficits we have right now. Plus, Google isn’t the only company working on smart glasses; he also sings the praises of Meta’s SpaceGlasses, which have been designed for a full AR experience.

25 years ago when you started to see people walking and talking into big plastic devices with an antenna it looked really strange for you… and once people started to touch their screens five years ago that also looked weird for you. The changes are coming so fast that we get used to technology we like really, really fast, so from the hardware point of view I’m really not concerned. There are lots of formats – some of them are watches, some of them are glasses, some of them are wristbands – there are lots of formats of wearable computers. Everyone will find the device that is most suited to them.

Vuzix is another company that’s been working hard to bring the potential of augmented reality to life through development of hardware and software.

Other companies, like Metaio, have been working hand-in-hand with hardware partners to try to solve the hardware challenges facing AR today, but without building their own devices. Metaio’s Trak Lord explained a little more:

The barriers to [AR] are mostly in hardware, at the silicon level. Not unkindly, but the devices that we use, although amazing and powerful with lots of processors, were never optimized for augmented reality. It’s third-party software so everything just runs at the top in the CPU and what that does it slows things down, heats up the CPU, which means the phone gets hotter, and the battery dies quicker.

To some degree we need to work more to optimize this kind of technology at a silicon level. Metaio is actually [already] involved in that, we have a small team internally that works with OEMs and chipset vendors to try and [do] collaborative research to understand what the process is to optimize AR for silicon.

Out of this research, the company has designed an accelerator it calls ‘the AR engine’ that would (ideally) go into any SoC (e.g. Apple’s A7 chip found in the iPhone 5s) to speed up initialization (boot time) and decrease battery consumption. Lord said that in initial tests on a regular mobile silicon system, that kind of accelerator can reduce power consumption by more than 60 percent and speed up initialization by “about 6,000 percent”.
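
Taking those figures at face value, the arithmetic works out roughly as follows – a back-of-the-envelope reading of the quoted numbers against a made-up baseline, not Metaio’s own benchmark data:

```python
# "More than 60 percent" lower power and "about 6,000 percent" faster initialization,
# interpreted against a nominal, hypothetical baseline.
baseline_power_mw = 1000   # hypothetical baseline power draw
baseline_init_ms = 3000    # hypothetical baseline initialization time

power_with_accelerator = baseline_power_mw * (1 - 0.60)  # >=60% reduction -> <=400 mW
speedup = 1 + 6000 / 100                                 # a 6,000% increase ~= 61x faster
init_with_accelerator = baseline_init_ms / speedup       # ~49 ms

print(power_with_accelerator, round(init_with_accelerator, 1), f"{speedup:.0f}x")
```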

Much of Metaio’s work is undertaken in partnerships with companies and brands. In the second half of last year, the company created an augmented reality servicing and diagnostics manual for Volkswagen’s XL1 concept car. Lord explained that the idea behind something like this is that Volkswagen needs to be able to fix this very specialist car at any of its national dealerships, but training every technician to repair a car they would most likely never see would be a waste.

Augmented reality has potential in a lot of industries in this way, and is already being put to productive use in prototyping, product design, and precision and efficiency measurements – essentially anywhere a virtual asset can stand in for something that would be impossible, or just plain impractical, to conjure up or transport in real life, like large industrial components.

With augmented reality able to deliver information and instructions in a hands-free manner, is there potential for it in surgery? Not for Metaio, at least not now, Lord explained.

There have been some forays into hands-free healthcare and surgery, but we at Metaio don’t do that, we don’t believe the operating table is a great place to experiment with cutting edge technology, no pun intended. We’d much rather try and repair an engine than an organ, because when lives are counting on it it’s not something we’re comfortable with. The technology is going that way [though]…

That being said, I think there’s a lot of opportunity for it in medical education.

It’s through businesses and institutions embracing AR for things like training, education and maintenance tasks that Lord thinks the technology might gradually make its way to mainstream, non-tech audiences – although he doesn’t believe we’re yet at a point where everyday consumers are ready to walk around wearing AR glasses.

Gimmicky or genuinely useful?

We’ve seen the possibilities of secret messages and hidden virtual guided tours around a city, and the prospect of a future where any of your devices will be able to deliver context-relevant, timely information, perhaps before you even knew you needed it.

It’s undeniable that a lot of the uses we’ve seen for augmented reality so far have been gimmicky, but with other technology maturing around it (batteries, GPS precision, processing power across a range of devices, etc.), it feels like we’re now beginning to see some of its true early potential.

It might not quite be ‘augmented reality’ in the same way as some of the other uses covered here, but OrCam’s system for people with severe sight difficulties looks like it could change lives. By using glasses equipped with a small camera, object-recognition technology, gesture recognition, a whole bunch of processing and a bone-conduction earpiece, it can allow someone who is visually impaired to navigate the world with relative ease. It might not fit the term AR quite as neatly as we’ve become accustomed to, but it’s certainly augmenting the reality of its users – the output in this case just happens to be audible rather than visual.
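
Conceptually, the flow it describes is ‘camera frame in, spoken description out’. A heavily simplified, hypothetical sketch of that pipeline – OrCam’s real system is, of course, far more sophisticated and proprietary – might be structured like this:

```python
def recognise(frame):
    """Placeholder for on-device object / text recognition."""
    # A real system would run trained vision models here; this stub just
    # pretends the camera is pointed at a street sign.
    return "Bus stop, route 42"

def speak(text):
    """Placeholder for audio output via a bone-conduction earpiece."""
    print(f"[earpiece] {text}")

def on_pointing_gesture(frame):
    """When the wearer points at something, describe it out loud."""
    speak(recognise(frame))

on_pointing_gesture(frame=None)  # -> [earpiece] Bus stop, route 42
```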

The potential of augmented reality represents different things for different users. For consumers, it could one day be the ultimate in convenience – delivering information on everything you could ever want to know about people, places, objects, politics (RedEdge is already working on its Augmented Advocacy app for Glass) and anything else right in front of your eyes, on your wrist, or via any other kind of device. For manufacturing or heavy industries, it’s the ultimate dry run. For publishers and marketeers, it’s an opportunity to engage with their target audience beyond the page. And for people who can barely see, it’s a chance to experience the world in ways they couldn’t before.

With applications of the technology now diversifying in many different directions, it’s clear that AR as an industry is making no bones about augmenting its own future.

Featured Image Credit – Robyn Beck/AFP/Getty Images
