The Pipeline of Human-Machine Interfacing

Technology keeps getting closer to us. It's more efficient and immersive that way. But where will this march toward a narrower gap between man and machine inevitably lead?

Toni Witt

September 3, 2021

The History of Human-Machine Interfaces

You could argue the relationship between humans and their creations started with the first stone tools. But soon came painting and writing, the first cases of extracting information from the mind onto external media. Then numbers and mathematics came about and society started organizing itself, eventually accelerated by the printing press.

When the telegraph and radio were invented in the 1800s, humans used technology to unlock new direct interactions with each other for the first time. Phonographs and cameras in the late 1800s allowed us to store different types of information in machines.

But when the computer arrived in the 1950s, we weren't just talking to each other or with our own stored creations. We started to interact with an external non-living 'mind' we created, at first through inefficient batch processing and punch cards, then eventually through the command line. Ever since then, the world of human-machine interfacing has exploded.

The personal computer was invented in the 1970s, and shortly after came the internet, along with GUIs (graphical user interfaces) and the underappreciated WIMP model (window, icon, menu, and pointer). Devices became smaller; phones weren't smart yet, but they had buttons. Machines entered the home.

The birth of smartphones and touchscreens in the 2000s made interacting with smaller devices more flexible. The internet made all forms of data available even on small devices and thus spawned the phone zombie. Here in the Netherlands, I see people riding bikes while using their phones with both hands more often than I'd like.

Moore's law marched on, and in the 2010s we got smartwatches and more powerful phones. Artificial intelligence made machines smarter than ever, recommending movies to watch or canteens to buy on Amazon and even talking to us through chatbots (which were all the rage in 2016; now popup chatbots are usually just in the way). We also got VUIs, voice user interfaces, combined with large language models like OpenAI's GPT-3 to speak to us. We can now converse with a glorified cylinder (Alexa, Echo Dot, etc.) to manage our data, search the internet, ask it dumb questions to try to trick it, listen to music, or control the functions of our houses. The COVID-19 pandemic digitalized services we weren't expecting to leave the physical sphere for a while, from doctor's appointments to conferences to art gallery exhibitions.

TED talk by Jeff Han on multi-touch interfaces. Note the audience reactions. Screens are nice but ageing rapidly - this talk is from 2006.

But we're still using touchscreens, QWERTY keyboards, and WIMP GUIs, some of us still typing with only our index fingers. Pattie Maes, an MIT professor who runs the Media Lab's Fluid Interfaces research group, describes this as a low-bandwidth problem. How can a machine understand your situation if you can only communicate through a tiny keyboard?

So, what's next?

VR, while more mature than AR or haptics, is still considered a fancy toy for dedicated gamers. Augmented reality has only started to weave its way into screens with Pokémon Go, Snapchat's filters, and Zoom backgrounds.

But there's a lot of room to grow. Instead of having to pull out a phone and interrupt living in the moment, more developed AR will let you blend the real and digital worlds seamlessly. You'll no longer be limited to small 2-dimensional boxes that, while marketed as 'smart', know little about the context in which they're placed.

AR glasses will be able to help you find which lane to take, which items in the grocery store aisle fit your personal health goals, or the names and basic facts of people you've just met. You'll be able to work in Photoshop and watch immersive 3-dimensional Netflix shows while sitting on the bus to work.

They'll be able to direct you to the nearest bathroom, as reported by other users. If you want to rewind and relive the last minute of that lecture you just half-dozed through, you can do that, too. Or share your live experience with friends.

Redoing your living room? Get the IKEA AR app to see how different furniture items actually look in your house.

Ever wanted to be inside Pac-Man or The Walking Dead, fighting zombies and running from ghosts in real spaces at real scale, without the worry of getting eaten? Or connect with your friends' glasses and play multiplayer.

A network of interconnected AR glasses can make a city truly smart: knowing whether the approaching bus is actually full, what a restaurant's ratings are, or calling 911 or 112 if there's an accident or crime in front of you before you can even react.

Or, if you want to take a vacation from reality altogether, you can put on a VR headset (or even have an AR headset darken enough to become fully immersive).

Moving the screen from your pocket to your face doesn't seem like a big difference, but it is. Augmented and virtual reality represent a fundamental shift in the dynamics of interaction between humans and machines. Even without invading the brain with wires, we can effectively enhance its functions: better memory, better spatial awareness, even literal eyes in the back of your head (mini cameras that stream to your field of view). Soon, creators will likely be able to upload their custom applications or AR overlays to an 'AR app store' for everyone to use.

Haptics aren't quite there yet - simulating physical experiences without invading the nervous system is challenging. There are still interesting developments in the area, though, including arrays of ultrasound speakers whose waves interfere at a chosen point in mid-air, strongly enough to feel. But it's still only a light sensation, far from the physical experience of diving into a cold ocean, lifting weights in the gym, or being romantic with your significant other.

Ultraleap's haptic ultrasound array
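
To make the focusing trick concrete, here's a minimal sketch of the underlying math: a wave travelling distance d to the focal point lags in phase by 2πd/λ, so giving each transducer an equal phase lead makes every wave arrive in phase and interfere constructively. The 16x16 grid, 1 cm pitch, and 40 kHz frequency below are illustrative assumptions, not Ultraleap's actual specifications.

```python
import numpy as np

# Toy sketch of mid-air haptic focusing with a phased ultrasound array.
# Grid size, pitch, and frequency are illustrative assumptions.

SPEED_OF_SOUND = 343.0            # m/s in air at room temperature
FREQ = 40_000.0                   # Hz, typical for airborne ultrasound
WAVELENGTH = SPEED_OF_SOUND / FREQ

# Transducer positions: a flat 16x16 grid in the z=0 plane, 1 cm apart.
pitch = 0.01
xs, ys = np.meshgrid(np.arange(16) * pitch, np.arange(16) * pitch)
emitters = np.stack([xs.ravel(), ys.ravel(), np.zeros(xs.size)], axis=1)

def focus_phases(focal_point):
    """Phase lead (radians) per transducer so that all waves arrive at
    focal_point in phase: the lead cancels the travel lag 2*pi*d/lambda."""
    dists = np.linalg.norm(emitters - focal_point, axis=1)
    return (2 * np.pi * dists / WAVELENGTH) % (2 * np.pi)

# Focus 20 cm above the center of the array; each transducer i is then
# driven with sin(2*pi*FREQ*t + phases[i]).
phases = focus_phases(np.array([0.075, 0.075, 0.20]))
print(phases[:5])
</code>
```

In practice such arrays also modulate the focal point over time, since skin is far more sensitive to changing pressure than to a constant push, which is part of why the sensation stays light.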

The More Distant Future

Dream engineering is already being tinkered with: a research group at MIT made Dormio, a system that uses EEG, skin conductance, and heart rate to track your sleep state. When you enter a particularly creative phase between waking and sleep called hypnagogia, Dormio prompts your irrational yet creative half-asleep brain and records your answers. It then generates a dream report to go with your morning coffee, so you can see all the crazy crap your mind bubbled up.

Courtesy of Dormio/Oscar Rosello, MIT Media Lab
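
Dormio's actual software isn't reproduced here, but the closed loop described above is simple to sketch: watch the biosignals, and when they look hypnagogic, play a prompt and record the reply. Everything below, from the sensor stubs to the thresholds, is a hypothetical stand-in rather than the MIT group's implementation.

```python
import random
import time

# Conceptual sketch of a Dormio-style loop, not the MIT group's code.
# The sensor and audio functions are hypothetical stubs standing in for
# real EEG / skin-conductance / heart-rate hardware and a microphone.

def read_biosignals():
    """Stub: a real system would sample EEG, skin conductance, heart rate."""
    return {"heart_rate": random.uniform(45, 75),
            "skin_conductance": random.uniform(1.0, 4.0)}

def looks_hypnagogic(s):
    # Illustrative thresholds only; detecting hypnagogia in practice is a
    # much harder classification problem than two fixed cutoffs.
    return s["heart_rate"] < 55 and s["skin_conductance"] < 2.0

def prompt_and_record(question, seconds=30):
    """Stub: play an audio prompt, then record the sleeper's reply."""
    print(f"[prompt] {question}")
    return f"(transcript of a {seconds}s reply)"

reports = []
while len(reports) < 3:            # gather a few entries per nap
    if looks_hypnagogic(read_biosignals()):
        reports.append(prompt_and_record("What were you thinking about?"))
    time.sleep(0.1)                # a real system polls continuously

print("Morning dream report:", reports)
```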

If you have too much money, you can also buy a sensory deprivation tank: a pitch-black, silent tank full of extremely salty, body-temperature water that cuts off your body's senses. Combine this with psychedelics or a VR headset to truly leave this world (see the article from Matrise if you're interested; it's a great blog about VR, philosophy, and consciousness).

Machines Inside Humans

Finally, we come to the next step: machines inside humans. We already have semi-smart devices inside bodies for things like tracking gut or blood chemistry and heartbeats. Brain implants today are limited to treating neurological and cognitive disorders, but will this always be the case? If we want an ever more immersive, intelligent, and efficient experience, the only way forward is inward.

The pipeline of human-machine interfacing is a slippery slope if we're not careful.

The inevitable end of this pipeline is 'uploading consciousness,' where the difference between machine and human is eliminated. The human body becomes a mere redundancy, and whatever processes previously occurred in the brain migrate to other media and mix with processes and intelligences that originally had nothing to do with the brain. Individual identity may not survive if there's a large networked intelligence unbounded by human bodies, existing only as information stored in a server somewhere. But we could also travel at the speed of light, live sustainably and luxuriously, and explore and exist in any place or form we want.

Is this a future you want? If not for yourself, for humanity?

Sources

  • Thumbnail and main image courtesy of Dave Parker
  • https://www.ultraleap.com/enterprise/
  • Human-machine interface by Desney Tan, Britannica 2014
  • https://www.media.mit.edu/projects/sleep-creativity/overview/#faq-in-laymans-terms-how-does-dormio-work