The materials we work with — the gap between design and development hasn’t really changed
I replied to a post on Discord about responsive design and it rekindled an age-old topic of discussion around device-based breakpoints. Since Ethan first posted about responsive design, way back in 2010, it always seemed clear to me that the web was a fluid medium and that devices and use cases could occur at pretty much any width and orientation (as OpenSignal’s work on device fragmentation around 2013 showed). It also always seemed counterproductive, in the days of device sniffing, to try to serve a separate mobile website. So why bring this up again?
Being wedded to this notion of fluidity and loving the universality of the web, I always felt in the minority, and I think part of that is a problem we still very much have now: our tools. Going back to those formative days of experimenting with responsive design, why did having ‘mobile’, ‘tablet’ and ‘desktop’ make sense when they’re all the same codebase, just aimed at different viewport widths?
There are a few things at play: the design tools we use still haven’t moved beyond some form of artboard all these years later, and, rightly or wrongly, people outside of design and engineering understand and are comfortable with these terms. While Figma was undoubtedly a massive step up from Photoshop, it kept the same model we had in Fireworks and other tools of the day. Easy for folks to adapt to, for sure, but still maintaining distance from the end product: it’s a picture of a website or app.
When designing with code directly in the browser, I’ve always found embracing this fluidity to be really empowering, working with the constraints of the platform rather than against it through abstraction. Having a separate ‘mobile’ and ‘desktop’ design for a responsive site takes away any sense of how the design evolves or flows as more space is available. More than that, the language bothered me.
Even in those times of trying to detect and segment audiences, this was done by user agent, a notoriously flaky bit of metadata sent in a header with every request to a server. With an up-to-date library of them you could be reasonably certain what was a phone and what wasn’t…reasonably. When we wanted to do more with the ‘mobile web’, we tried to detect touch and found that flaky at best. None of the things the industry tried were certain, and they probably made our lives harder as a result. Even in recent years, things haven’t improved all that much.
On the engineering side, even today, when presented with separate ‘mobile’ and ‘desktop’ designs, objects might move around to better suit the constraints of the available screen real estate, ignoring the impact that can have. Thinking fluidly, the underlying document (the DOM in HTML) doesn’t change, and we largely rely on CSS to move things around within reason. Without that relationship with the underlying code, we end up showing and hiding elements, or reaching for some other unwieldy response. Showing and hiding, or using JS to effectively change elements’ order in the DOM, can often cause accessibility issues and, more than anything, just feels unnecessary.
You’ll notice I wrap the terms mobile and desktop in quotes and that’s deliberate. Because we don’t know much about the user’s browser from their initial request, we don’t know what they’re using or what it can do. We tend to work with media queries in CSS to add in a ‘breakpoint’…and here’s the issue…this is just the width of their viewport, nothing else…There’s loads more we can do with media queries other than just look at viewport width but that’s the dominant method even now. So by ‘mobile’ we mean ‘a narrow viewport that most likely would be on some kind of phone but not exclusively’ and by ‘desktop’ we mean ‘wide viewport that’s bigger than a common iPad so we’re going to assume it’s a big monitor’. With that comes other assumptions.
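To make that concrete, here’s a minimal CSS sketch (the selector names are illustrative, not from any real project): a classic width breakpoint tells us only the viewport’s width, while media queries can also ask about orientation, user preferences and more.

```css
/* A classic 'desktop' breakpoint: all we actually know is the width. */
@media (min-width: 64em) {
  .layout {
    display: grid;
    grid-template-columns: 1fr 3fr;
  }
}

/* Media queries can ask about far more than width: */
@media (orientation: landscape) and (max-height: 30em) {
  /* a wide but shallow viewport, e.g. a phone turned sideways */
}

@media (prefers-reduced-motion: reduce) {
  /* the user has asked for less animation */
}

@media (prefers-color-scheme: dark) {
  /* the user prefers a dark interface */
}
```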
With the sense that a narrow viewport is some kind of mobile device, we assume it will have a touch interface. In all likelihood it probably will, but not exclusively. Unless we’re checking and can verify that, we don’t know. As viewports get bigger, we assume a ‘tablet’. As phones have diverged into all kinds of sizes, rotating a larger one to landscape orientation crosses into many ‘tablet’ breakpoints while being really shallow, and we typically don’t check for that in media queries. We still assume it’s touch-based, though. Entering the ‘desktop’ space, purely on viewport width alone, we project onto this layout the notion that everyone will be sat at a desk with a mouse and keyboard…possibly in an ergonomic chair and possibly with a snack. The range of what we might consider desktop devices is broad, from smaller, older and more limited machines (as I understand you might find in parts of the NHS) through to a designer’s massive 4K HDR Apple display, not forgetting the range of laptops available. Some are touch-based. Some higher-end tablets position themselves as lightweight laptops, so they have a keyboard but are still essentially tablets. There’s huge variety in the ecosystems our audience uses, and yet by trying to oversimplify we actually make things more difficult. So why do we make this so hard on ourselves?
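Rather than inferring input from width, the `pointer` and `hover` interaction media features let us ask the browser directly. A sketch, again with illustrative selectors:

```css
/* The primary input is imprecise (most likely a finger):
   enlarge hit targets regardless of viewport width. */
@media (pointer: coarse) {
  button {
    min-height: 44px;
    min-width: 44px;
  }
}

/* The primary input can genuinely hover (mouse, trackpad):
   hover affordances are safe to rely on here. */
@media (hover: hover) and (pointer: fine) {
  .card:hover {
    box-shadow: 0 2px 8px rgb(0 0 0 / 0.2);
  }
}
```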
Some kind of innovation comes along, like ‘snap mode’, where you can have browser windows side by side; being narrow doesn’t automatically make the viewport a phone in this case. If you have a huge cinema display or wide gaming monitor, you’re unlikely to have your browser window the full width of your screen, so the viewport you use might fall into either ‘tablet’ or ‘desktop’ territory. Foldable phones arrive and land in ‘tablet’ territory with a more square-like presentation. The point here is that change is inevitable.
There always felt like there were two paths: worrying about and trying to cater directly for devices (as you would with an app), or building in a more resilient, universal way that anything could use. The latter always felt more natural and real. You can talk in terms like ‘mobile’ with stakeholders because we have a shared understanding of what’s meant, but when it comes to designing for the web, we have a viewport that can be many things, so we should roll with it.
If every device could be touch-enabled, would you design anything differently? If you couldn’t guarantee a fast connection, would that make you reevaluate any decisions? If a new device is released, do you scramble to design for it (as often happened: Twitter would be full of people asking “what’s the media query for the new iPhone??”) or just check your work in it? Way back in the early responsive days at the BBC we had the ‘cut the mustard’ test to work out what fidelity of content we could serve a device; it was about capability detection, not what the device claimed to be. While there are far fewer low-powered feature phones around these days, that approach is inclusive: we make fewer assumptions. Even if our data shows X% use the best devices, is that because those without have a bad experience and stay away? It doesn’t tell the whole story. Typically it doesn’t take an elaborate strategy to cater for pretty much all use cases: a universal layer that serves your content to everyone, which you can then layer up. Progressive Enhancement, the notion of layering up our interfaces based on capability, is far from new but seems to have fallen by the wayside.
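The BBC’s original ‘cut the mustard’ test was JavaScript feature detection, but the same layered thinking exists in CSS via `@supports`. A sketch with illustrative class names: every browser gets a working base layer, and only those that understand grid get the enhancement.

```css
/* Base layer: simple stacked content every browser can render. */
.gallery {
  display: block;
}

/* Enhancement layer: applied only where the capability exists. */
@supports (display: grid) {
  .gallery {
    display: grid;
    grid-template-columns: repeat(auto-fill, minmax(12em, 1fr));
    gap: 1em;
  }
}
```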
Over the years, alongside all of this, the browser scene has changed. Gone are the dark times of Internet Explorer, and in some ways the monoculture of Chromium (and related browsers) largely dominates. Some of the issues and discussions we used to have are no longer relevant, but the one that still bothers me is this divide between the product people use and the one we design in boxes in some software, because that hasn’t really changed, for all the bells and whistles our tools have gained.
We haven’t touched upon the fact that we can use container queries for a far more interesting, dynamic responsive web…but this inherent disconnect between our design tools and the material of the web makes it difficult for a designer to work this way, to work out how a component might show up in different contexts. It’s a web-native concept that can’t easily be translated into the traditional design tool model.
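For the curious, a minimal container query sketch (selector names illustrative): the component responds to the space its container gives it, not to the viewport, so the same card can lay itself out differently in a sidebar and in the main column.

```css
/* Any element can become a queryable container. */
.sidebar,
.main {
  container-type: inline-size;
}

.card {
  display: block;
}

/* When the *container* is wide enough, not the viewport... */
@container (min-width: 28em) {
  .card {
    display: grid;
    grid-template-columns: auto 1fr;
    gap: 1em;
  }
}
```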
Why does this matter?
There’s no sign that this will change. This difference, or in some cases gulf, exists and will continue into the world of AI, considering A2UI, MCP Apps and whatever comes next, which will likely be far more dynamic and contextual. Maybe this is a sign that design tools as we know them might die out because they haven’t adapted to the materials we’ve been working with for a while, and so might not be needed in the next evolution. I think we need to acknowledge that this gap has always been here and for some reason hasn’t closed much at all; for all the tokenisation and allusion to components we’ve layered on to how we design, we’re still not close to the material and are working in abstractions.