
This article was published on May 28, 2013

Instinctive innovation: 10 experimental projects that completely rethink computer interfaces

Story by Nick Summers

Nick Summers is a technology journalist for The Next Web. He writes on all sorts of topics, although he has a passion for gadgets, apps and video games in particular. You can reach him on Twitter, circle him on Google+ and connect with him on LinkedIn.

  • This post is brought to you by Jaguar. Experience F-TYPE.

    Minority Report. The sci-fi flick was released in cinemas over a decade ago, but viewers are still captivated by the idea of accessing and moving data with their hands.

    The idea still feels like a pipe dream, however, given that many of us still work with a traditional keyboard and mouse or trackpad setup at the office and at home. The dream of swiping through the air or talking to a personal assistant like Jarvis from Iron Man feels exactly that – a dream, but nothing more.

    Many firms are exploring these ideas, however, and pushing the limits of what’s possible with our present technological capabilities. The results are fascinating and often deviate from the inputs and interfaces that we’ve come to long for in Star Trek (holodeck, please!).

    These are useful, experimental platforms that will influence the way people engage and work with technology. Better yet, some of them are available right now.

    Leap Motion

    It’s refreshing to point or swipe at a screen and see content react accordingly. There’s an immediate connection of both action and reaction that clearly reflects how the user would interact with a physical object in the ‘real’ world.

    Leap Motion is this and more. The device, announced last year, is unique because it’s tiny and unobtrusive. A small metallic bar sits between the keyboard and monitor, which anyone can then approach and start using simply by waving a hand.

    It’s incredibly accurate, offering detailed handwriting, precision pinch-to-zoom and all sorts of intuitive hand gestures that are both natural and concise. No standing up or arm-flapping required. Just quick and effortless interaction.
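To get a feel for what gesture code involves, here is a rough Python sketch – an illustration of the general idea, not Leap Motion’s actual SDK – that classifies a horizontal swipe from a series of tracked fingertip positions:

```python
def classify_swipe(x_positions, threshold=80.0):
    """Classify a horizontal swipe from a series of fingertip
    x-coordinates (one sample per tracking frame).

    Returns 'right', 'left', or None if the net movement is too
    small to count as a deliberate gesture (threshold is an
    assumed value, tuned per application)."""
    if len(x_positions) < 2:
        return None
    displacement = x_positions[-1] - x_positions[0]
    if displacement > threshold:
        return "right"
    if displacement < -threshold:
        return "left"
    return None
```

A real tracker feeds in dozens of such samples per second; the threshold is what separates an intentional swipe from idle hand movement.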


    MYO

    A casual onlooker might not see the advantages of MYO straight away. The user attaches an inconspicuous armband, which measures muscle activity as they wave and point at the screen.

    The clear advantage over Leap Motion is that it isn’t location-specific. Promotional videos have shown the user walking away from a desktop computer and then altering the volume on the other side of the room, simply by moving his wrist in a circular motion.

    Professionals delivering a presentation can forgo a remote and simply swipe two fingers in the air to move to the next slide. The use cases are almost endless, although the obvious limitation is that it tracks only one part of the body. One arm, one control input.
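On the software side, an application built around an armband like this mostly reduces to mapping recognized gestures to actions. A minimal Python sketch, with entirely hypothetical gesture labels and bindings:

```python
# Hypothetical dispatch table: gesture labels an armband's recognizer
# might emit, bound to presentation and media actions.
ACTIONS = {
    "two_finger_swipe": "next_slide",
    "wave_out": "previous_slide",
    "fist": "pause",
}

def handle_gesture(label):
    """Return the action bound to a recognized gesture, or None
    for gestures the application doesn't care about."""
    return ACTIONS.get(label)

def volume_from_rotation(current_volume, rotation_degrees, sensitivity=0.5):
    """Map a circular wrist motion (degrees, positive = clockwise)
    to a new volume level, clamped to 0-100. The sensitivity
    factor is an assumed tuning value."""
    new_volume = current_volume + rotation_degrees * sensitivity
    return max(0.0, min(100.0, new_volume))
```

The second function captures the volume-knob demo from the promotional video: continuous wrist rotation maps to a continuous control, while discrete gestures trigger one-off commands.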

    Oculus Rift

    The applications for hardware-enabled virtual reality experiences are mouth-watering. Being able to walk through a digitally rendered field and look left and right, at will, to see what’s around lends an entirely new level of depth and immersion. The opportunity to combine this with sound and touch feedback also hints at fully realized worlds for the user to explore.

    Oculus Rift is a head-mounted virtual reality headset being developed by Oculus VR, a company that raised $2.4 million on Kickstarter to develop and release the product to the public.

    It’s still early days and in truth, the hardware is far from perfect. Yet the promise of building an immersive, one-to-one first person perspective has been realized and that’s fascinating in its own right. These futuristic goggles are being aimed at gamers in particular, but the opportunity to use it for personal computing is also plain to see.

    Google Glass

    From the moment Google formally unveiled Glass at its I/O developer conference in 2012, people couldn’t stop talking about it. The idea of wearing a pair of perfectly normal glasses, fitted with a powerful computer and head-mounted display, seemed impossible.

    Yet Google seems to have nailed it. Even better, the company plans to release the device to the public in the not-so-distant future. It offers a point-of-view camera capable of shooting photos and 720p HD video, as well as a small touchpad for navigating menus and the onscreen interface.

    Glass appears to be the epitome of mobile computing. The device can be taken anywhere and is accessible at any time. It’s also small and relatively inconspicuous, which means the device is out of the way when the user wants to focus on their surroundings.

    Kinect, version 2.0

    Motion controls have had a tough old time in the video game industry. When the Nintendo Wii launched, it heralded a new age of remote-waggling in the living room. Third-party developers struggled to take advantage of the technology in a meaningful way, however, and had to compete with counter-offers from both Microsoft and Sony.

    Kinect, a motion sensor that uses an infrared projector and camera to analyze the player’s movements, was a novel idea when it launched in 2010. The ability to track the entire body produced a couple of memorable experiences such as Dance Central and Child of Eden, but it suffered from frequent accuracy issues.


    Microsoft unveiled the next version of Kinect alongside the Xbox One, its new video game console launching this year. The resolution has been upped to 1080p, and an ultra wide-angle lens means that it can be used in even the smallest apartments.

    The kicker, however, is that, like the original Kinect, the new sensor will also be launched for Windows next year. The previous version was embraced by the modding community and resulted in a number of innovative and off-the-wall experiments. Even better hardware should produce more of the same.
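A staple of those hobbyist Kinect projects is smoothing the noisy skeletal-tracking data before using it – the accuracy issues mentioned above show up as jitter in the reported joint positions. A simple exponential filter, sketched in Python (illustrative; the real SDK delivers frames via its own API):

```python
def smooth_joint(samples, alpha=0.3):
    """Exponentially smooth a noisy series of 3D joint positions,
    given as (x, y, z) tuples, one per tracking frame.

    Lower alpha means smoother but laggier output; 0.3 is an
    assumed tuning value. Returns the final smoothed position."""
    sx, sy, sz = samples[0]
    for x, y, z in samples[1:]:
        sx = alpha * x + (1 - alpha) * sx
        sy = alpha * y + (1 - alpha) * sy
        sz = alpha * z + (1 - alpha) * sz
    return (sx, sy, sz)
```

The trade-off is the classic one for motion input: too little smoothing and the cursor shakes, too much and the interface feels sluggish.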


    G-speak

    Still looking to re-enact Minority Report? G-speak, built by Oblong Industries, is the closest working product to realizing that dream. Users don a pair of specialized gloves that can then be used to interact with data through various arm movements and hand gestures.

    It integrates with large screens and multiple surfaces, encouraging large-scale collaborative projects and a more direct approach to problem-solving.

    The gloves themselves are a little unattractive, but they mean that anyone can use the system without re-calibrating the hardware. It’s a little way off Tony Stark’s personal lab in Iron Man 3, but the groundwork is there to realize this pioneering form of interaction.

    Google Talking Shoe

    At South by Southwest (SXSW) 2013, Google showed up with an interactive playground and a pair of talking sneakers, nicknamed ‘The Talking Shoe’.

    It’s not a consumer product – which is probably a good thing – but it does highlight the sort of wacky, off-the-wall thinking that even high-profile companies such as Google are coming up with.

    This high-tech pair of trainers comes equipped with a pressure sensor, accelerometer and gyroscope, which track the user’s movements to deliver progress reports, advice and general abuse, such as: “Sitting down on the job, are we?”
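The logic behind the banter is easy to imagine: sensor readings feed simple rules that pick a quip. A toy Python sketch (hypothetical thresholds and phrases throughout, apart from the line quoted from Google’s demo):

```python
def shoe_comment(steps_last_minute, idle_minutes):
    """Pick a quip from activity data, the way the Talking Shoe's
    sensors might drive its commentary. Thresholds are invented
    for illustration."""
    if idle_minutes >= 5:
        # The taunt from Google's SXSW demo.
        return "Sitting down on the job, are we?"
    if steps_last_minute > 120:
        return "Now that's what I call running!"
    return "Keep it moving."
```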

    It’s all a bit silly, but isn’t that what experimental projects are all about?

    Microsoft: Live, Work, Play

    Microsoft has built what it likes to call an ‘Envisioning Center’, where employees develop and prototype ideas that could be used by consumers in the next five to ten years. Back in March, the company released a promotional video offering a glimpse of the future, which involved an awful lot of touchscreens that are connected with one another.

    Forget wallpaper, as some of these screens will take up the entire wall in your living room, kitchen or bedroom. Users are seen clamping a Surface tablet into a large desk – similar to what an architect might use – which is equipped with a huge touchscreen for cross-platform creation.

    The same device, but fitted in the kitchen, expands on the concept of pinning artwork and notes to the refrigerator door, enabling users to bring up photos and Word documents created on any device around the home.

    Each wall screen is also fitted with a Kinect-style webcam for analyzing objects in the room. One demonstration has the user holding up an ingredient and asking what he should cook with it; cue a series of recipes and step-by-step instructions, displayed on a kitchen table-top.

    Touch screen devices are nothing new, but the idea of seamlessly combining them into one unified surface, alongside huge mounted wall screens, could easily change users’ behavior and workflow around the house.


    Pebble

    Remember those amazing wristwatches that James Bond used to wear? The Rolex with a small laser beam in Never Say Never Again, or the Seiko Quartz watch with a built-in telex for sending mission-critical messages? Well, unfortunately those don’t exist.

    What we do have, however, is Pebble. It’s the first truly successful smartwatch, combining expansive functionality with attractive, robust hardware. Funded via Kickstarter – and breaking a few records in the process – Pebble offers a small e-ink display that communicates with an Android or iOS device over Bluetooth.

    The Pebble comes with a few apps pre-installed, but the company’s open SDK means that anyone should be able to push the platform forward with new and interesting software.


    Voice recognition

    One of the more interesting trends of the last few years has been the development of voice recognition software for various hardware ecosystems. Siri is one of the most notable, launched by Apple in October 2011 as a personal assistant for the iPhone and iPad. Users can speak everyday words and phrases to execute tasks in a number of different applications, including reminders, email and weather.

    Google has introduced its own interpretation as part of its standalone Google Search app for iOS and Android. It’s fast, accurate and intuitive, to the point where Google has also decided to introduce it as part of the Chrome web browser on the desktop.

    Barking commands at a nearby smartphone or tablet can feel a little jarring at first, but the applications are numerous. Being able to prepare a dish in the kitchen and ask for the next instruction, for instance, without washing your hands and swiping across the screen is rather helpful.
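Once speech has been transcribed to text, the remaining step is mapping a phrase to an intent. A toy Python matcher – a deliberate oversimplification of what Siri or Google actually do, with invented intents – shows the shape of the problem:

```python
def parse_command(utterance):
    """Map a transcribed phrase to an (intent, slot) pair by simple
    keyword rules. Returns (None, None) when nothing matches.
    Intent names and rules are invented for illustration."""
    text = utterance.lower().strip()
    prefix = "remind me to "
    if text.startswith(prefix):
        return ("reminder", text[len(prefix):])
    if "weather" in text:
        return ("weather", None)
    if text.startswith("next step"):
        return ("recipe_next", None)
    return (None, None)
```

Real assistants replace these brittle string rules with statistical language models, but the input and output are the same: a phrase in, an action out.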

    The aforementioned Kinect sensor is also starting to use this technology in the living room; users will be able to simply say “Xbox, ESPN” to switch over to a live sports game without rooting around for the controller.

    Touch, speech, gestures. It’s a brave new world

    The emergence of all these platforms points to a future where a traditional keyboard and mouse might be the exception, rather than the rule, for interacting with technology.

    That’s not to say we’ll stop using laptops in the next 12 months, or that we’ll all be shouting at our monitors in the office for nine hours straight, but there’s a clear opportunity to try new, experimental ideas at our current rate of technological advancement.

    The future of user interface is therefore bright, unknown and also pretty darn exciting.
