

Will the future of UI design turn us all into cyborgs?

Alex M. Chong and Jacky Li are product designers at Pivotal Labs. This post was originally published on the blog of Xtreme Labs, which was acquired by Pivotal Labs in October 2013.


For an adult with no prior exposure, learning how to use a desktop computer can be a confusing challenge.

The desktop computing experience is neither intuitive nor innate to human beings – it requires significant training, time, and ideally early-age immersion to grasp the paradigms of computing, both how to interact with the machine physically and how to make sense of its virtual environments.

Since the adoption of the personal desktop computer in the 1980s, our efforts to naturalize the personal computing experience have kept humans as the variable factor: it is people who adapt to computing environments, not the reverse. It's remarkable that the paradigm of how to sit at a computer has survived since the early '80s – desktop computing today uses the exact same monitor/keyboard/mouse setup, with little physical interactive variation.

To illustrate how unnatural and unintuitive this archaic experience is, imagine the learning curve a first-time user in their 50s faces in order to understand this interface paradigm. There is nothing fundamentally “natural” about the desktop computing experience – if anything, it is the furthest thing from a natural human function.

We are living in a fascinating time – only recently have we begun to break the traditional interface constraints set in the '80s. We are finally seeing new forms of portable computing devices: multitouch surfaces, powerful and lightweight mobile devices, and now the emerging market of wearable technology. We are entering an exciting world beyond the old constraints of physical and virtual environments.

It’s about time we return to our natural world.

The history of the user interface

A quick history lesson: up until the mid-70s, computers were not much more than large, glorified calculators. It was in the late '70s and early '80s that personal computing took a drastic leap forward: moving from the command-line interface (CLI), where typing was the primary way to communicate with the machine, to the graphical user interface (GUI), a more natural and emotionally compelling way to interact with a computer.

This made personal computing dramatically more accessible to average folk – giving people the ability to “see” into the computer world, creating virtual environments and live visual feedback, pushing us one step closer to a more human-like computing environment.

But there's something odd here – in the 30 years since the release of the Apple Lisa (Apple's first personal computer to offer a GUI), little has changed about the physical experience of interacting with personal computers. Sure, we built the Internet, progressed from the first iteration of HTML to HTML5, and developed transformative Web platforms to connect everyone – but our rigid adherence to the monitor/keyboard/mouse legacy kept the physical desktop experience of the early 2010s the same as it was in the early 1980s.

Further evidence of how far we've deviated from the natural world: the entire field of ergonomics emerged to save our bodies from injury as we adapt them to this unnatural environment.

The future of UI: Returning to nature

Flash forward to the late 2000s – or, as Apple describes it, the “Post-PC era.” The widespread introduction of multi-touch and mobile computing marks the largest leap forward yet in human-computer interfaces. Mobile devices are designed to be lightweight, portable, and seamless with everyday life. Suddenly, we are in the middle of one of the most important technological revolutions: our computing devices are beginning to adapt to natural human behavior.

The first step into this realm has been the wide adoption of the smartphone – checking your schedule is now as easy as pulling out a piece of paper with your day's plans scribbled on it, and knowing where to go (however bad your sense of direction) is as easy as pulling out a paper with directions written down. Knowledge appears the moment you pull out your phone.

We are only beginning to discover the possibilities of a world where devices adapt to natural human behavior – a stark departure from the old model of humans adapting to computers, toward technology that supports and amplifies natural human function.

Three components of natural user interfaces

In the next wave of this revolution, hardware will virtually disappear.

Take Google Glass as an example. Its intention is to liberate us from having to compute within the confines of a personal computer, by placing an unobtrusive overlay onto our natural sight. This is computing as a support to natural human function, and an example of Invisible Computing – the first of the three components of natural user interfaces.

  1. Invisible Computing
    Invisible computing is when hardware virtually disappears, as computing technology unobtrusively integrates with everyday, natural human function.
  2. Supportive Computing
    Supportive computing is computing technology that supports natural human function, rather than requiring humans to adapt to computing functions.
  3. Adaptive Computing
    Adaptive computing and machine learning intelligently recognize and interpret human patterns to produce output based on relative context (a toy sketch follows this list).
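
To make that third component concrete, here is a minimal, hypothetical sketch in Python of what “adaptive” behavior looks like in code: the device interprets contextual signals and chooses what to surface, rather than waiting for an explicit command. Every name below is invented for illustration – this is not any real product's API.

```python
# Toy sketch of adaptive computing: output is derived from observed
# context rather than explicit user commands. All names are hypothetical.
from dataclasses import dataclass

@dataclass
class Context:
    hour: int        # local hour of day, 0-23
    location: str    # coarse label inferred from GPS, e.g. "home", "office"
    walking: bool    # inferred from accelerometer data

def suggest(ctx: Context) -> str:
    """Interpret the context and decide what to surface."""
    if ctx.walking:
        return "turn-by-turn directions"         # support the activity in progress
    if ctx.location == "office" and 9 <= ctx.hour <= 17:
        return "next meeting and agenda"         # learned workday pattern
    if ctx.hour >= 21:
        return "preview of tomorrow's schedule"  # learned evening pattern
    return "general notifications"

print(suggest(Context(hour=10, location="office", walking=False)))
# -> next meeting and agenda
```

A real adaptive system would learn these rules from behavioral data rather than hard-coding them, but the shape is the same: context in, relevant output out.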

An example of a technology that has matured over time is corrective optics. Just think about corrective lenses: a contact lens is a thin film placed directly on the cornea, bending incoming light rays so they converge precisely onto the retina. Suddenly, with very little effort, we have perfect vision.
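
As an aside (this relation is standard thin-lens optics, not from the original post): for a nearsighted eye whose uncorrected far point sits a distance $d$ away, the required contact-lens power is

$$P = -\frac{1}{d}\ \text{diopters},$$

so an eye that sees clearly only out to 0.5 m needs about a $-2.0$ D lens. A single number, worn invisibly, restores normal function – exactly the kind of seamlessness this analogy points to.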

We tend to forget how phenomenal corrective lenses are because they integrate so seamlessly into everyday life. To put our current state of computing in perspective: imagine if correcting your vision required a keyboard and mouse to adjust your focus every time you looked at something.

Mature technological applications seamlessly disappear as they integrate into our lives.

[Infographic: the future of UI/UX]

What does this mean for UX/UI designers?

As computing technology advances, it keeps us UX/UI designers on our toes, adapting accordingly. And as any UX/UI designer will tell you, becoming a leader in this field means discussing and evaluating technological trends so we can evolve with the industry's rapid growth and change.

To put this into context: UX/UI has seen unprecedented growth as an industry in the past few years. This is the result of designers adapting to technological developments – and as new challenges arrive, we will have to think about natural human experiences beyond the everyday screen.

Perhaps one day the screen will no longer be relevant. Perhaps one day we will adopt a human-centric interface that can’t be mocked up in Photoshop. But when that day comes, we’ll be ready to meet new challenges – envisioning and designing for the future of technology.

Image credit: tokyoimagegroups/Shutterstock
