Creative Orbit
Free-flow thinking of Wigy Ramadhan. It is a collection of my borderless thinking about phenomena in the areas where I operate.




03. The Search for Less-Distracting Interaction


15 July 2025 13:19 BST
London, UK



Emerging trend: screenless mobile device interaction

It has come to my attention recently that many designers and tech developers are experimenting with reducing or removing screens on mobile devices. Mobile devices are not merely mobile phones: many new devices are designed with smaller screens, no screens, or other characteristics that distinguish them from traditional phones. This trend is likely a result of recent AI developments, which have made it possible to interact with a device without needing to look at a screen.

This phenomenon is worth examining more deeply, but this writing is not intended as a deep dive into the reasoning behind it. I'm still searching for answers myself and, while I have some assumptions, I won't use them as a baseline for sharing my opinions publicly.

Rather, this piece reviews how the latest screenless mobile device technology has evolved.
 





Scene from the movie Her, in which Theodore (the main character) interacts with a screenless device
The Signals

Over the last two years, AI development has gained significant momentum, with tools like ChatGPT and voice assistants becoming more mainstream. The improved reliability of features like voice commands and smart cameras with image processing has made screenless devices more viable, helping users interact with technology without needing to look at a screen.

But there's another driving force behind this trend: a growing awareness that screens themselves have become sources of distraction. While smartphones were designed to be helpful tools, many users now find themselves constantly pulled into endless scrolling, notifications, and digital rabbit holes that fragment their attention.

Reducing Screen Dependence
Some people believe reducing screen time helps with fatigue and mental exhaustion, sparking creative solutions like Special Projects' interface system, which flips a phone case backwards to cover most of the screen, leaving only a small window as the interface space. This type of intervention represents a fascinating middle ground: people aren't abandoning their smartphones entirely, but they are actively seeking ways to reduce their visual dependence on screens while keeping essential functionality.

The Promise and Reality of Screenless Devices
Could screenless devices offer an alternative relationship with technology? The concept evokes scenarios like the movie Her, where voice-based interaction replaces visual interfaces entirely.

From my limited observation of the current market, fully screenless devices haven't yet achieved the cultural penetration of transformative technologies like the Walkman, iPod, or iPhone. However, this may be an unfair comparison given different market conditions and technological contexts.

What seems more promising is the potential for screenless devices to improve accessibility, potentially offering more inclusive ways for people with visual impairments or motor difficulties to interact with technology. Additionally, they address the growing desire among some users to maintain connectivity while reducing screen-based distractions.

But I believe we are witnessing the early stages of a meaningful shift in human-computer interaction.






The development so far
There has been intriguing development in screenless devices lately, ranging from simple hacks like Special Projects' phone-case approach that blocks the main screen, to devices with smaller secondary screens just for showing notifications, to AI-focused devices that make artificial intelligence the central value proposition.

The radical design
Let's start with the most radical approach: AI-focused devices. Most of these are developed by independent companies or startups, such as Friend and Humane. While Friend has no screen at all, the Humane Pin features a projector that beams the interface onto surfaces. Despite poor reviews[1], Humane was acquired by HP and now operates under the HP IQ brand.

These devices primarily use built-in microphones to capture ambient sound and voice commands, plus cameras for image processing. The Humane Pin's projection system, which beams light onto the user's hand, offers an intriguing screen alternative, though the technology still needs significant optimisation to be reliable.

Conceptually, these devices feel hyper-futuristic and somewhat utopian, which is exciting. However, AI in small devices still faces limits across such broad functionality; they can't yet match what computers can accomplish. Still, they demonstrate possibilities for moving beyond traditional screens and handheld devices.

Subtle transitions
A larger company like Meta doesn't start by making radical devices like the Humane Pin or Friend; instead, it builds incrementally, so the product doesn't feel alien to us. Meta partnered with Ray-Ban to create its AI glasses, which were initially marketed not as an AI device but simply as glasses with a smart camera. The latest Meta x Ray-Ban AI glasses are equipped with a microphone and speakers, allowing users to give voice commands and receive voice feedback immediately, reducing the need for a screen; communication is also more private, since the speakers work directly by your ears. According to reviews, Meta has delivered a more reliable device than the Humane Pin. The major features Meta promotes are live language translation, song identification, capturing photos and videos, and sending messages. According to Meta designers I managed to meet, there is a case study in which people with visual impairments use the glasses to do things they otherwise couldn't.

Smart glasses are not a new thing: Google tried to release its smart glasses much earlier, in 2012[3], but failed to commercialise them. One reason the commercialisation failed is that the design didn't look comfortable enough to be worn in public daily. Today, Meta is moving faster, and at the right time, with a design that is more appealing and acceptable to the market.

Is this story of Google's failure and Meta's success in launching smart glasses proof that radical design can hurt commercialisation, and that markets are more open to something incremental, subtle, and familiar, which they can adopt more easily?

Deconstruct the norm
Another approach I find more playful is deconstruction: the development of a screenless device can also start without creating a new device at all, or without embedding technology into a wearable product and starting to call it smart.

Instead of coming up with a whole new device or system, Special Projects has come up with an intervention that looks very hacky. I find it very interesting and radical in some sense, but also incremental, in that we can all adopt it right away.

Special Projects' concept, which they call Apperture, aims to reduce phone usage. The way it works is surprisingly simple yet playful: flipping the phone case covers almost the whole screen, leaving only the small window that normally exposes the camera as the only interface left to interact with.

This concept of a small secondary screen already exists: Samsung's foldable phone, the Samsung Galaxy Fold, has one. I'm not sure of Samsung's intention, but the additional small screen gives quick access to some of the phone's features without having to interact with the whole big screen.

Nothing has also just released its new flagship phone, which has an additional small screen on the back. The screen is not high resolution; it is more of a pixelated matrix, which they call the Glyph interface. The main objective is to make the phone less distracting and playful at the same time. Less distracting in the sense that the feature is adjustable: you can allow it to notify you only for, say, a message or call from your mum, so no other notifications appear.
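To make the allowlist behaviour above concrete, here is a minimal sketch. This is my own hypothetical illustration, not Nothing's actual API; the `should_light_glyph` function and the contact names are invented for the example.

```python
# Hypothetical sketch of an allowlist-based notification filter,
# in the spirit of the Glyph interface described above. Only senders
# on the allowlist trigger the rear light; everything else is suppressed.

ALLOWLIST = {"Mum"}  # e.g. only messages or calls from your mum

def should_light_glyph(notification: dict) -> bool:
    """Return True only for notifications from allowed senders."""
    return notification.get("sender") in ALLOWLIST

inbox = [
    {"sender": "Mum", "type": "call"},
    {"sender": "Work Group Chat", "type": "message"},
]
# Only the call from Mum passes the filter.
visible = [n for n in inbox if should_light_glyph(n)]
```

The design choice worth noticing is that it is an allowlist, not a blocklist: the default is silence, and attention has to be explicitly granted.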

Surprisingly, all of these interventions are inspired by our habit of flipping the phone face-down on the table when we are in the middle of a social interaction. I find it very inspiring how observing people's behaviour can be translated into a design. It may not be as cool as designing a new device, but in a certain way it is cleverer.

Kid-friendly devices
These devices are not just developed for adults; some kids' devices also try to limit screen use so children aren't overstimulated. One example is Yoto, a screen-free audio device that plays audiobooks, music, activities, sound effects, radio, and podcasts for children, using physical cards as input to play specific content. Although the design includes a 16x16-pixel colour display for interactive content, it only shows simple visualisations, much like the Nothing Phone's Glyph interface.
 

Friend’s first commercial ads                                     

Humane Pin: uses projection to show the interface
  

Meta x Ray-Ban: AI glasses


Special Projects - Apperture, 2025            



 

Nothing Phone - Glyph Interface



Yoto: audio player device for kids

 

Notes:

1. MKBHD, a famous tech reviewer on YouTube, gave the Humane Pin a bad review, calling it the worst product he had ever reviewed.

3. Google unveiled its first smart glasses prototype in 2012: a missed opportunity, or development that came too early?
https://www.ft.com/content/ab339580-c08f-11e1-9372-00144feabdc0.




                                       
Terra, designed by Modem Works, 2024. A companion device for wandering without distraction





Iris, designed by Harry Colbert, 2025



How does this drive further development?  
In the context of product design and technology, trends are often shaped by science fiction. But in practice there are two ways of shaping a trend: we either rework and reimagine what we have, or we develop something new.

Reworking means building on what already exists; it tends to be incremental rather than radical, re-emerging in today's context with some modification. But that doesn't mean incremental development can't be radical or novel. In this approach, we change, adapt, and improve something that already exists, modifying older designs and technologies to fit new contexts. Special Projects' Apperture proves that deconstructing what is available can still yield novelty. Reworking also offers familiarity, making a new product easier to adopt.

Radical development typically begins by examining the technology itself, creatively and imaginatively, starting with "what if" questions. Designers who take this approach think differently: rather than starting from user pain points and needs, they ask what would happen if users could act in a certain way, how users would respond, and what limits such a technology would break for them.

Within the scope of screenless devices, there is a clear direction in how the new generation of designers and studios has recently started exploring screenless interaction. Some focus on helping the user enjoy the moment more and capture it as memories that can be revisited: a kind of memento device.

Others focus on more intuitive interaction, embedding AI in the device to help the elderly or people with disabilities.

One of the most attractive projects to me is Iris, designed by Harry Colbert. Iris is a device that captures not just pictures and video but also ambient light, movement, colour depth, directional sound, and dynamic range, aiming to record the full sensory context of a moment. The project responds to the traditional camera, which captures only pictures and sound and therefore feels quite flat. Instead, Iris is designed as an intuitive tool for staying present while recording richer memories. The camera itself has no screen, but its docking device, kept at home, does; the camera must be placed on the dock to watch the recordings.

Memory has also become an inspiration to my dear friend Adonis Christodolou, who has designed a thought-activated, pendant-shaped camera that lets users retroactively capture meaningful moments with their minds. The device incorporates EEG sensors worn on the forehead, whose signals trigger the camera when the wearer thinks the word "capture". It is a very interesting interaction approach, going beyond voice recognition or hand gestures. While this clearly leans towards transhumanism, ethics is his main concern: he designed the software to process the video and image content so that the faces of unregistered bystanders are blurred and their voices muted.
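"Retroactive" capture implies the camera must already be recording into a rolling buffer, so that thinking "capture" saves the moment that just passed. Here is a minimal sketch of that mechanism under my own assumptions: the EEG classification is reduced to a boolean flag, and the buffer length, frame rate, and function names are all hypothetical.

```python
from collections import deque

BUFFER_SECONDS = 10   # assumed look-back window
FPS = 30              # assumed frame rate

# Ring buffer holding the most recent frames; old frames fall off the end.
buffer = deque(maxlen=BUFFER_SECONDS * FPS)

def on_new_frame(frame, thought_is_capture: bool, saved_clips: list) -> None:
    """Append the frame; if the EEG classifier fired, save the buffer."""
    buffer.append(frame)
    if thought_is_capture:
        # The clip covers the seconds *before* the trigger:
        # that is what makes the capture retroactive.
        saved_clips.append(list(buffer))

saved = []
for i in range(600):  # 20 seconds of simulated frames
    on_new_frame(f"frame-{i}", thought_is_capture=(i == 450), saved_clips=saved)
# One clip is saved, containing the 300 frames preceding the trigger.
```

The `deque(maxlen=...)` makes the privacy trade-off visible too: the device is always recording the last few seconds, which is exactly why bystander blurring matters so much in Adonis's design.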

Another designer worth a look is Yue Wang, who designed a torch-like device with embedded AI that connects to smart-home devices to help the elderly interact with objects around them. Most elderly people struggle to interact with a mobile phone, so something familiar and very intuitive helps them interact with technology. A torch was chosen for its shape and interaction because most people already know how to use one, making it easier to adopt. The user points the torch at an object they want to interact with, and the AI sets up a task for it. For example, the user points at a lamp; the AI identifies it as a lamp and asks whether the user wants to turn it on or off. The device can also help users identify objects and give them information and instructions about them.

These kinds of projects will obviously shape how we design tech products in the near future. All of these projects have been proven both conceptually and technically.

The question remains: are we really ready to adapt to this world?


Hot Takes: Enjoying the Moment
All of these developments in technology products aim to create fewer distractions from the digital world so that we can enjoy moments in the physical world more. The question is: are we desperate enough to run away from the digital world?

We may feel the need to step away from it because it increasingly controls our lives, yet we rely on it more than ever.

And if our goal is to live more in the moment, why do we still need a device to take a picture or record the moment? Why can’t we truly leave it behind? If we mean to share the moment with others, will this device actually heal our loneliness?

If mindfulness is the goal, can screenless interaction truly offer us that balance?

I began looking at these current trends for hints about where this development will lead us. Instead, they have given me a lot of intriguing questions.

Maybe we just want an excuse to merge with technology: not to carry it, but to wear it. Wearables are only the beginning. The next step is implants, and yes, they're already here… but I'll leave that for another discussion.

Returning to my earlier point: we are moving closer to transhumanism, and this is simply a step along that path. It feels contradictory when people say they want to escape the digital world because of their screen time, yet panic the moment their phone dies. Big Tech no longer hides it; they openly state that we are moving toward embedding technology into our bodies.

I think that, rather than trying to escape it, we want more control, more agency to choose; but we are all hopeless. We can't get away from it.


Fin.

>> Next Article: Learning from community-led forestry