In text boxes in some applications, Enter submits the entered text and Ctrl+Enter forces a newline (I'm not at my computer, but I think Slack does this). In others, it's the other way around (I'm pretty sure GitHub does this for comments).
I don't know how we got here and I don't know how to fix it, but "bring back idiomatic design" doesn't help when we don't have enough idioms. I'm not even sure it's wrong for those two behaviors to be inconsistent: you're probably more likely to want fancier formatting in a PR review comment than in a chat message. But as a user, it's frustrating to have to keep track of which is which.
Decades ago, Return and Enter were two different keys for that reason: Return to insert a line break, Enter to submit your input.
Given the reduction to a single key, the traditional GUI rule is that Enter in a multiline/multi-paragraph input doesn’t submit like it does in other contexts, but inserts a line break (or paragraph break), while Ctrl+Enter submits.
Chat apps, where single-paragraph content is the typical case, tend to reverse this. Good apps make this configurable.
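The two conventions described above boil down to a small decision table. A minimal sketch in JavaScript (the function name and `mode` values are made up for illustration, not taken from any real app):

```javascript
// Decide what Enter should do, given the input's convention.
// mode: "chat" (Enter sends, Shift+Enter breaks the line) or
//       "compose" (Enter breaks the line, Ctrl+Enter sends).
function enterAction(mode, { shiftKey = false, ctrlKey = false } = {}) {
  if (mode === "chat") {
    // Chat convention: plain Enter submits, Shift+Enter inserts a newline.
    return shiftKey ? "newline" : "submit";
  }
  // Traditional GUI convention for multiline inputs:
  // Enter inserts a newline, Ctrl+Enter submits.
  return ctrlKey ? "submit" : "newline";
}
```

A configurable app, as suggested above, would just let the user pick which `mode` their message box uses.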
Before that, page-mode terminals used <Return> to move to the first field on the next line (like a line-based <Tab>) and sent the page only on <Enter> or a function key. This made for quick navigation with zero ambiguity.
Teams does both: normally it’s Enter to submit and Shift+Enter for a new line, but when you open the formatting tools it switches. At least they show a message indicating which key combo inputs a new line, but it still gets me on occasion.
The Signal desktop app does both too, I guess, but in a way that actually makes sense. Enter sends a message since IMs tend to be short one-liners. Shift-Enter inserts a line break.
But if you click an arrow on the top of the text box, it expands to more than half of the height of the window, and now Enter does a line break and Shift-Enter sends. Which makes a lot of sense because now you're in "message composer" / "word processor" mode.
If you turn on Markdown formatting, shift+enter adds a new line, unless you’re in a multi-line code block started with three backticks, and then enter adds a new line and shift+enter sends the message.
I can see why someone thought this was a good idea, but it’s just not.
That's funny, because I thought it was Shift-Enter that creates a newline in a field where Enter submits. It just shows the fractured nature of this whole thing.
This is my thinking. Ctrl-Enter is usually "submit the form this input is a part of" in my experience, especially if you're in a multiline text input (or textarea).
I've seen Enter, Shift-Enter, Ctrl-Enter, and Alt-Enter (and on macOS, Cmd-Enter and Option-Enter), depending on the application. Total circus. I think this is actually a weakness of the standard keyboard: keyboards should at the very least separate "submit form / enter" from "newline / carriage return" with different physical keys, but good luck changing that one, given the strong legacy of combining these functions.
For Slack at least, you have the option to change that back to use Enter for a new line (which is what I do), but other software is not that generous. I think Grafana introduced yet another way, Shift-Enter to submit, that I always mix up.
macOS is slightly more consistent among apps that use system controls, but the more custom the app (or the more React Native or Electron it is), the less predictable it gets.
Infuriatingly, some apps try to be smart: with only one line, Return submits; with more than one line, Return is a new line and Command-Return submits; but Command-Return on just one line beeps an error.
Years of muscle memory are useless, so now I’m reaching for the mouse when I need to be clear about my intent.
So much is solved when developers just use the provided UI controls; so much well-studied and carefully implemented behavior comes for free.
In a single-line text box that has no possibility of multi-line text (so, not a chat interface): search, an address bar, something that's obviously "submit one item" (e.g. "submit a word"), etc.
In single-text-input contexts, like search fields and the browser address field, and things like Save As dialogs. It’s the general expectation for dialogs with an OK or default button, just like Escape cancels the dialog.
Anything which supports multi-line input shouldn't submit on Enter; it should submit on a button press, so anyone can use it instantly without learning or remembering anything.
Then make it easy for users to learn that they can submit more quickly with Ctrl+Enter, which you can advertise via a tooltip or adjacent text.
Better that 100% find it trivially usable, even if only 75% learn they can do it faster.
That isn’t workable for chat apps, at the very least on mobile. And that’s the most-used text entry interface that users nowadays grow up with. So I think you need to make an exception for such applications.
Most software is not designed by intelligent and thoughtful people anymore. It is designed by hastily promoted middle manager PM/Product type people who, as has been mentioned elsewhere, simply were not around when thoughtful human interface design was borderline mandatory for efficiency’s sake.
There is incompetence and there is also malevolence in the encouragement of dark patterns by the revenue side of the business.
It’s amazing how many blank stares I get when I, as mobile engineer, tell stakeholders that we shouldn’t just implement some random interface idea they thought up in the shower and we instead need design input!
“But why can’t you just do it?” Because I recognise the importance of consistent UX and an IA that can actually be followed.
Just like developers, (proper) designers solve problems, and we need to stop asking them for faster bikes.
There's a time and a place for it. If you already know exactly what the program needs to do, then sure, design a user interface. If you are still exploring the design space, then it's better to try things out as quickly as possible, even if the UI is rough.
The latter is an interesting mindset to advocate for. In almost every other engineering discipline, this would be frowned upon. I suspect wisdom could be gained by not discounting forethought, to be honest.
However, I really wonder how formula 1 teams manage their engineering concepts and driver UI/UX. They do some crazy experimental things, and they have high budgets, but they're often pulling off high-risk ideas on the very edge of feasibility. Every subtle iteration requires driver testing and feedback. I really wonder what processes they use to tie it all together. I suspect that they think about this quite diligently and dare I say even somewhat rigidly. I think it quite likely that the culture that led to the intense and detailed way they look at process for pit-stops and stuff carries over to the rest of their design processes, versioning, and iteration/testing.
Racing like in Formula 1 is extremely different from normal product design: each Formula 1 car has a user base of exactly 1: the driver that is going to use it. Not even the cars from the same team are identical for that reason. The driver can basically dictate the UX design because there is never any friction with other users.
Also, turnaround times from idea to final product can be insane at that level. These teams often have to accomplish in days what normally takes months. But they can pull it off by having every step of the design and manufacturing process in house.
There exist other ways to do the research. "Try things out" is often not just a signal of "we don't know what to do", but also a signal of "we have no idea how to properly measure the outcomes of the things we try".
But that’s the point, no? Prototyping is useful but beyond a proof of concept, you still need a suitable user interface. I have no problems if there’s a rationale behind UI changes, but often we have stakeholders telling us to do something inconsistent just so their pet project can be presented to the user. That’s not design.
Yep, there are some bad incentives and some rushed work, but calling it mostly incompetence or malice kind of ignores how much the underlying system has changed.
This is reductionist and myopic. I've personally built forms online, and it's hell trying to find consensus on perhaps the most common forms used online.
Let's take a credit card form:
- Do I let the user copy and paste values in?
- Do I let them use IE6?
- Do I need to test for the user using an esoteric browser (Brave) with an esoteric password manager (KeePassXC)?
- Do I make it accessible for someone's OpenClaw bot to use it?
- Do I make it inaccessible to a nefarious actor who uses OpenClaw to use it?
Cybernetic natural selection should take care of this over time, but the rate of random mutations in software systems is much higher than in biological systems. I'd be interested in modeling the equilibrium dynamics of this.
As the author identifies, the idioms come from the use of system frameworks that steer you towards idiomatic implementations.
The system UI frameworks are tremendously detailed and handle so many corner cases you'd never think of. They allow you to graduate into being a power user over time.
Windows has Win32, and it was easier to use its controls than rolling your own custom ones. (Shame they left the UI side of Win32 to rot.)
macOS has AppKit, which enforces a ton. You can't change the height of a native button, for example.
iOS has UIKit, similar deal.
The web has nothing. You gotta roll your own, and it'll be half-baked at best. And since building for modern desktop platforms is horrible, the framework-less web is being used there too.
The author may have identified that "the idioms come from the use of system frameworks", but they absolutely got wrong just about everything about why apps are not consistent on the web (e.g. I was baffled by their reasons listed under "this lack of homogeneity is for two reasons" section).
First, what he calls "the desktop era" wasn't so much a desktop era as a Windows era - Windows ran the vast majority of desktops (and furthermore, there were plenty of inconsistencies between Windows and Mac). So, as you point out regarding the Win32 API, developers had essentially one way to do things, or at least the far easiest way to do things. Developers weren't so much "following design idioms" as "doing what is easy to do on Windows".
The web started out as a document sharing system, and it only gradually and organically turned over to an app system. There was simply no single default, "easiest" way to do things (and despite that, I remember when it seemed like the web converged all at once onto Bootstrap, because it became the easiest and most "standard" way to do things).
In other words, I totally agree with you. You can have all the "standard idioms" that you want, but unless you have a single company providing and writing easy to use, default frameworks, you'll always have lots of different ways of doing things.
Well, and worse, Windows was itself a hive of inconsistency. The most obvious example of UI consistency failing as an idea was that Microsoft's own teams didn't care about it at all. People my age always have rose tinted glasses about this. Even the screenshot of Word the author chose is telling because Office rolled its own widget toolkit. No other Windows apps had menus that looked like that, with the stripe down the left hand side, or that kind of redundant menu-duplicating sidebar. They made many other apps that ignored or duplicated core UI paradigms too. Visual Studio, Encarta, Windows Media Player... the list went on and on.
The Windows I remember was in some ways actually less consistent than what we have now. It was common for apps to be themeable, to use weirdly shaped windows, to have very different icon themes or button colors, etc. Every app developer wanted to have a strong brand, which meant not using the default UI choices. And Microsoft's UI guidelines weren't strong enough to generate consistency - even basic things like where the settings window could be found weren't consistent. Sometimes it was Edit > Preferences. Sometimes File > Settings. Sometimes zooming was under View, sometimes under Window.
The big problem with the web and the newer web-derived mobile paradigms is the conflation of theme and widget library under the name "design system". The native desktop era was relatively good at keeping these concepts separated, but the web isn't; the result is a morass of very low-effort and crappy widgets that often fail at the subtle details MS/Apple got right. And browsers can't help, because every other year designers decide that the basic behaviors of e.g. text fields need to change in ways that wouldn't be supported by the browser's own widgets.
“Brand” and “branding” is arguably the most important thing -not- mentioned in the article. The commercial incentives to differentiate are powerful enough to kick a lot of UX out of the way.
Now that all we do is “experience” a “journey,” it’s more about the user doing what the app wants instead of the other way around.
> First, what he calls "the desktop era" wasn't so much a desktop era as a Windows era - Windows ran the vast majority of desktops (and furthermore, there were plenty of inconsistencies between Windows and Mac).
That's overemphasising the differences considerably: on the whole Windows really did copy the Macintosh UI with great attention to detail and considerable faithfulness, the fact that MS had its own PARC people notwithstanding. MS was among other things an early, successful and enthusiastic Macintosh ISV, and it was led by people who were appropriately impressed by the Mac:
> This Mac influence would show up even when Gates expressed dissatisfaction at Windows’ early development. The Microsoft CEO would complain: “That’s not what a Mac does. I want Mac on the PC, I want a Mac on the PC”.
https://books.openbookpublishers.com/10.11647/obp.0184/ch6.x... It probably wouldn't be exaggerating all that wildly to say that '80s-'90s Microsoft was at the core of its mentality a Mac ISV, a good and quite orthodox Mac ISV, with a DOS cash-cow and big ambitions. (It's probably also not a coincidence that pre-8 Windows diverges more freely from the Mac model on the desktop and filesystem UI side than in regards to the application user interface.) And where Windows did diverge from the Mac those differences often ended up being integrated into the Macintosh side of the "desktop era": viz. the right-click context menu and (to a lesser extent) the old, 1990s Office toolbar. And MS wasn't the only important application-software house which came to Windows development with a Mac sensibility (or a Mac OS codebase).
I partially agree with you, but additionally there's a whole set of employees who would be clearly redundant in any given company if that company decided to just use a simple, idiomatic, off the shelf UI system. Or even to implement one but without attempting to reinvent well understood patterns.
One reason so many single-person products are so nice is because that single developer didn't have the time and resources to try to re-think how buttons or drop downs or tabs should work. Instead, they just followed existing patterns.
Meanwhile, when you have 3 designers and 5 engineers, with the natural ratio of Figma sketches to production-ready implementations being at least an order of magnitude, the only way to justify the design headcount is to make shit complicated.
But every company I worked at in the past 10 years or so eventually coalesced around a singular "design system" managed by one person or a small core team. But that just goes back to my original point - every company had their own design system, and there is not a single, industry-wide set of "rails".
The bigger issue I see with the "got to keep lots of designers employed" problem is the series of pointless, trend-following redesigns you'd see all the time. That said, I've seen many design departments get absolutely slaughtered at a lot of web/SaaS companies in the past 3 years. A lot of the issues designers were working on in web and mobile for the 25 years prior are now essentially "solved problems", and so, except for the integration of AI (where I've seen nearly every company just add a chat box and that AI star icon), it looks like there is a lot less to do.
Conventions already existed in DOS (CUA) and MacOS. The point is, every operating system had its user interface conventions, and there was a strong move from at least the mid-1980s to roughly the mid-2000s that applications should conform to the respective OS conventions. The cross-platform aspect of the web and then of mobile destroyed that.
Yeah the author conveniently ignores the fact that the UX of Mac apps was radically different to that of PC apps, so it’s not that designers/developers were somehow more enlightened back then, it’s just that they were “on rails”
> Developers weren't so much "following design idioms" as "doing what is easy to do on Windows".
Most people only use one computer. Inconsistency between platforms has no bearing on users. But inconsistency of applications on one platform is a nightmare for training, and accessibility suffers.
I don't disagree, but my point was about the author's incorrect diagnosis of the reason (and solution) for the problem, not that the problem doesn't exist.
As a sibling commenter put it, previously developers had "rails" that were governed by MS and Apple. The very nature of the web means no such rails exist, and saying "hey guys, let's all get back to design idioms!" is not going to fix the problem.
That’s not the only reasons. When you are used to how your operating system does things consistently, as a developer you naturally want your application to also behave like you’re used to in that environment.
This eroded on the web, because a web page was a bit of a different “boxed” environment, and completely broke down with the rise of mobile, because the desktop conventions didn’t directly translate to touch and small screens, and (this goes back to your point) the developers of mobile OSs introduced equivalent conventions only half-heartedly.
For example, long-press could have been a consistent idiom for what right-click used to be on desktop, but that wasn’t done initially and later was never consistently promoted, competing with Share menus, ellipsis menus and whatnot.
> The web has nothing. You gotta roll your own, and it'll be half-baked at best. And since building for modern desktop platforms is horrible, the framework-less web is being used there too.
This feels like the root cause to me as well. Or more specifically, the web does have idioms, the problem is that those idioms are still stuck in 1980 and assume the web is a collection of science papers with hyperlinks and the occasional image, data table and submittable form.
This is where the "favourites" list and the ability to select any text on a web page came from.
Web apps not only have to build an application UI completely from scratch, they also have to do it on top of a document UI that "wants" to do something completely different.
Modern browsers have toned down those idioms and essentially made it "easier to fight them", but didn't remove or improve them.
"The Web" has evolved into a pretty bad UI API. I kind of wish that the web stuck to documents with hyperlinks, and something else emerged as a cross-platform application SDK. Combining them both into HTTP/CSS/JS was a mistake IMO.
The web was designed for interactive documents, not desktop applications. The layout engine was inspired by typesetting (floats, blocks), and a lot of elements only make sense for text (<i>, <span>, <strong>, ...). There's also no allowance for dynamic data (virtualization of lists) or custom components (canvas and SVGs are not great in that regard).
> building for modern desktop platforms is horrible, the framework-less web is being used there too.
I think it's more related to PM wanting to "brand" their product and developers optimizing things for themselves (in the short term), not for their users.
Guys, I found out about this technology called Cascading Style Sheets recently and I think it's the missing piece we've been looking for. It lets you declaratively specify layout in a composable, hierarchical system based on something called the Document Object Model in a way that minimizes both clientside and serverside processing, based on these things called "stylesheets".
The best part is, it's super easy to customize them, read others for inspiration or to see how they did something, or even ship multiple per site to deal with different user preferences. Through the "forms" API, and little-known browser features like URL fragments, target/attribute selectors, and style combinators, plus "the checkbox hack", you can build extremely responsive UIs out of it by "cascading" UI updates through your site! When do you think they're going to add it to Next.js?
I'm tentatively calling this new UI paradigm "no-framework" or "no package manager", not sure yet https://i.imgur.com/OEMPJA8.png
> There are hundreds of ways that different websites ask you to pick dates
Ugh, date pickers. So many of these violently throw up when I try to do the obvious thing: type in the damn date. Instead they force me to click through their inane menu, as if the designer wanted to force me into a showcase of their work. Let your power users type. Just call your user’s attention back to the field if they accidentally typed 03/142/026.
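The "let your power users type" approach asked for here is not much code. A minimal sketch, assuming US-style MM/DD/YYYY input (the function name is illustrative, and a real form would respect the user's locale rather than hardcoding the field order):

```javascript
// Accept a typed MM/DD/YYYY string; return a Date, or null if invalid.
// Deliberately lenient on digit counts so that inputs like "03/142/026"
// reach the validation step instead of being silently mangled.
function parseTypedDate(text) {
  const m = /^(\d{1,2})\/(\d{1,3})\/(\d{2,4})$/.exec(text.trim());
  if (m === null) return null;
  const [month, day, year] = [Number(m[1]), Number(m[2]), Number(m[3])];
  const d = new Date(year, month - 1, day);
  // JS Date silently rolls over out-of-range parts (Feb 30 -> Mar 1),
  // so reject any input whose parts didn't survive the round trip.
  if (d.getFullYear() !== year || d.getMonth() !== month - 1 || d.getDate() !== day) {
    return null;
  }
  return d;
}
```

On a `null` result, the form can simply call the user's attention back to the field, as suggested above, instead of forcing them into the click-through calendar.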
No no, I find that having to click back through almost 40 years’ worth of months to get to my birthday allows for a nice pause to consider the fleeting and ever-accelerating nature of life.
> You can usually click the year and then pick that first.
Even then, clicking the year will often lead to a tiny one-page list of 10 years, which you can either page back in or click the decade to get shown a list of decades to pick from. So: click 2026, click 2020s, click 19XXs, click a year, click a month, click a birthday.
Such an interface makes at least some sense for "pick a date in the near future". When I'm booking an airline flight, I usually appreciate having a calendar interface that lets me pick a range for the departure and return dates. But it makes no sense for a birthday.
This is still a partial solution as the user needs to know that their locale is being used and know how their locale is configured to understand the format. This is most problematic on shared computers or kiosks, especially when traveling.
Is it the device display language, the keyboard input language, my geolocation, my browser language, my legal location, my browser-preferred website language, the language I set last time, the language of the domain (looking at amazon.co.uk), the language that was auto-selected for me last time on mobile, or... something else entirely?
I mean, once in a different country, you either experience the locale shock once then adapt, or you've seen it before and kind of know what to expect.
And for the rest of the users who have no idea about locales, using whatever locale they have on their computer might be technically incorrect for some of them, but at least they're somewhat used to that incorrectness already, as it's likely been their locale for a while and will remain so.
This is the equivalent of requiring all your text to be in Esperanto because dealing with separate languages is a pain.
"Normal" people never use YYYY-MM-DD format. The real world has actual complexity, tough, and the reason you see so many bugs and problems around localization is not that there aren't good APIs to deal with it, it's that it's often an after thought, doesn't always provide economic payoff, and any individual developer is usually focused on making sure it "looks good" I'm whatever locale they're familiar with.
I hate how websites that are trying to verify my age make me scroll through 13, 18, or 21 years that I could not legitimately select if I want to use the site.
> Every single button is clearly visually a button and says exactly what it does. And each one has a little underline to indicate its keyboard shortcut. Isn’t that nice?
Something not mentioned here (that came from the Mac world as I understand it): everywhere that the text ends with an ellipsis, choosing that action will lead to further UI prompts. The actions not written this way can complete immediately when you click the button, as they already have enough information.
UX has really gone downhill. This is particularly true of banking websites.
Also, the trends of hiding scrollbars, huge wasted space, making buttons look really flat, confusing icons, and confusing custom dropdowns rather than the native select/option HTML controls have all made the whole experience far inferior to where desktop UI was even decades ago.
The number of JavaScript dropdown replacements that don't work correctly with the keyboard is stunning. It always amazes me how many forms fail at this basic usability aspect. The browser has homogeneous form controls built in, just use them!
> Prefer words to icons. Use only icons that are universally understood.
Underrated. Except for dyslexic people, and the most obvious icon forms, I am pretty sure most people are just better and faster at recognising single words at a glance than icons.
I'm somewhat dubious about that for icons with actual recognizable pictures, but a lot of icon attempts today are stylized to death, with just a line, bent and broken in a couple places and maybe if you're lucky juxtaposed with the occasional dot.
If there's no text description even on mouseover (or touchscreen, with no cursor...) discovery is more or less trial and error (or perhaps more akin to Russian Roulette if the permissions involve being able to do real damage).
Scratch your head and hope there are existing support questions searchable about what on Earth the programmer could have meant to convey...
...except for HN "unvote"/"undown" feedback which is especially unfortunate due to the shared prefix. Every time I upvote something I squint at the unvote/undown to make sure I didn't misclick.
I am pretty sure icons are easier and faster to recognize, except when you make them (too) small. In particular, they probably are easier in the long run, as long as they don't change position. But in a context where things change or you need a lot of buttons, words probably win.
This is why you need both. Icons are faster to recognize, but words tell you what the icons need. So you need the words at first to discover the icons, then the icons serve as valuable tools for scanning and quickly locating the click target that you are looking for.
> This is why you need both. Icons are faster to recognize, but words tell you what the icons need. So you need the words at first to discover the icons, then the icons serve as valuable tools for scanning and quickly locating the click target that you are looking for.
Only if there are few icons. If every item in that menu in the screenshot of Windows had an icon, and all icons were monochrome only, you'd never quickly find the one you want.
The reason icons in menu items work is because they are distinctive and sparse.
That's what I tend to do too, but sometimes space requirements win.
But of course, a good design is adapted to its user: frequent/infrequent is an important dimension, as is the time willing to learn the UI. E.g., many (semi) pro audio and video tools have a huge number of options, and they're all hidden under colorful little thingies and short-cuts.
Space is important there, because you want as many tracks and Vu meters and whatever on your screen as possible. Their users are interested in getting the most out of them, so they learn it, and it pays off.
Behavioral science also changed a lot of things. People study behavior, patterns, what can sell more, what looks more intuitive. If something looks a bit different from the others, it will sell better. If something looks the same as the previous one, why should the client buy it? The client needs to see a difference; it can be only a little bit more flashy, but it must be different.
20 years later, this is the result.
Especially now, in the AI era, where each person can make a relatively working app from the sofa, without any knowledge of UI/UX principles.
Much of this is foisted upon us by visual designers who wandered into product design. It's a category error the profession has never quite corrected. (maybe more controversially, it's caused by having anyone with the word "designer" in their title on a project that doesn't need such a person - this category is larger than anyone thinks)
At some point UX became a synonym of manipulating users into doing things, and I wonder if it can ever go back.
It might have started in an innocent way, all those A/B tests about call-to-action button color, etc. But it became a full-scale race between products and product managers (whose landing page is best at converting users?, etc.), and somewhere in this race we just lost the sense of why UX exists. Product success is measured in conversion rates, net promoter score, bounce rates, etc. (all pretty much short-term metrics, by the way), which are optimized with disregard for the end-user experience, i.e. what was originally meant by UX. It is now completely turned on its head.
Like I said, I wonder if there is a way back or if we are stuck in the rat race. The question is how to quit it.
Y'all remember mystery meat navigation (https://en.wikipedia.org/wiki/Mystery_meat_navigation)? Back in the 2004-ish era, there was an explosion of very creative interaction methods, due to Flash, browser performance improvements, and general hardware improvements, which led to "mystery meat navigation" and the community's pushback.
Since then, the "idiomatic design" seems to have been completely lost.
When Apple transitioned from skeuomorphic to flat design this was a huge issue. It was difficult to determine what was a button on iOS and whether you tapped it (and the removal of loading gifs across platforms further aggravated problems like double submits).
Another absurdity with iOS is the number of ways you can gesture. It started simply, now it is complex to the point where the OS can confuse one gesture for another.
I have a lot of gripes with Apple's various design decisions over the years, but they're at least consistent across their apps, which is the point of TFA.
Mystery gesture navigation is also now on by default and terrible on Android, too. It's awful with children or older folks (or even me!) who trigger it by accident all the time. Some of it I was able to disable on my children's iPads. It's still frustrating that easy-to-trigger-by-accident but impossible-to-discover gestures are the default, and also frustrating that we have the very last iPad generation with a button.
> Suppose you’re logging into a website, and it asks: “do you want to stay logged in?”
Then the website has made its first mistake, and should delete that checkbox entirely, because the correct answer is always "yes". If you don't want to be logged in, either hit the logout button, or use private browsing. It is not the responsibility of individual websites to deal with this.
>using GMail is nothing like using GSuites is nothing like using Google Docs
G Suite (no s) was the old name for Google Workspace. Google Workspace includes GMail, Google Docs, Google Sheets, Google Calendar, etc., so it doesn't really make sense to say that Google Workspace has a different UX than Google Docs, if Google Docs is part of Google Workspace.
Disclosure: I work at Google, but not one of the listed products.
designers are creatives and will always believe the visual elements of a design need to be updated, refreshed, modernized etc.. then we get flavour of the month nand new trends in visual language and ui design that things must be updated to.
As soon as UI design became a creative visual thing rather than a functional thing, everything started to go crazy in UI land.
That is because they know the users. Users are very sensitive to this: if the outside wasn't changed, then the internals can't have been much improved. You see this with cars: cars need a new design, otherwise customers will think nothing much changed. Customers will usually buy newer over better, because they think newer must have improvements, and styling signals new. Same with computers; witness all the disappointment when Apple releases a new MacBook without changing the exterior.
One of my pet peeves is that increasingly frequently, pressing Enter to submit a web form doesn’t even universally work anymore. Instead you have to tab to the submit button, and (depending on the web page) have to press Space or Enter to actuate it.
Another annoyance is that many web forms (and desktop apps based on web tech) don’t automatically place the keyboard focus in an input field anymore when first displayed. This is also an antipattern on mobile, that even on screens that only have one or two text inputs, and where the previous action clearly expressed that you want to perform a step that requires entering something, you first have to tap on the input field for the keyboard to appear, so that you can start entering the requested information.
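The focus complaint above has a one-line native fix (the `autofocus` attribute), or a tiny script when the view is built dynamically. A minimal sketch in plain DOM JavaScript; the selector and function name are illustrative, not any framework's API:

```javascript
// Focus the first text-entry field when a view appears, so the user can
// start typing immediately. (In static HTML, <input autofocus> does this
// with no script at all.)
function focusFirstInput(root) {
  const input = root.querySelector('input[type="text"], textarea');
  if (input) input.focus();
  return input; // null if the view has no text input
}
```

Called once after the form is rendered, this restores the old desktop behavior the comment above misses.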
> One of my pet peeves is that increasingly frequently, pressing Enter to submit a web form doesn’t even universally work anymore. Instead you have to tab to the submit button, and (depending on the web page), have to press Space or Enter to actuate it.
The other day I used Safari on a newly set up macOS machine for the first time in probably a decade. Of course I wanted to browse HN, and eventually wanted to write a comment. Wrote a bunch of stuff, and by muscle memory, hit Tab then Enter.
Guess what happened instead of "submitted the comment"? Tab on macOS Safari apparently jumps up to the address bar (???), and then of course you press Enter, so it reloads the page and everything you wrote disappears. I'll admit, I did the same thing again just minutes later; then I gave up using Safari for any sort of browsing and downloaded Firefox instead.
I would argue that behavior is idiomatic for macOS but not idiomatic for web browsers. Keyboard navigation of all elements has never been the default in macOS. Tab moves between input fields, but without turning on other settings, almost never moved between other elements because macOS was a mouse first OS from its earliest days. Web browsers often broke this convention, but Safari has from day one not used tab for full keyboard navigation by default.
And this highlights something that I think the author glosses over a little, but which is part of why idioms break for a lot of web applications. A lot of the keyboard commands we're used to are really commands to the OS, so their idioms are generally defined by the idioms of the OS. A web application, by nature of being an application within an application, has to try to intercept or override those commands. It's the same problem that Linux (and Windows) face with key commands shared by their terminals and their GUIs. Is Ctrl-C copy or interrupt? Depends on what has focus right now, and both are "idiomatic" for their particular environments. macOS neatly sidesteps this for terminals because Ctrl-C was never used for copy; it was always Cmd-C.
Incidentally, what you're looking for in Safari is the "Press Tab to highlight each item on a webpage" setting in the Advanced settings tab. With that off (the default), you use Opt-Tab to navigate to all elements.
I’m a decade+ linux power user and I still do insane things like pipe outputs into vim so I can copy paste without having to remember tmux copy paste modes when I have vertical panes open.
Terminal UX existed before the CUA guidelines from IBM. People complain about Ctrl+Shift+C behavior when it exists in only one category of application: terminal emulators.
My hope is that since tools like Google Stitch have made fancy looking design free that it will become obvious how functionally worthless fancy looking design always was. It used to signal that a site paid a lot of money and was therefore legitimate. Now it signals nothing.
All of these people who keep saying that webapps can replace desktop applications were simply never desktop power users. They don’t know what they don’t know.
Yeah it would be nice if the web accessibility guidelines also focused on actually using the thing normally. For example: offsetting the scrollbar from the right edge of the screen by 1px should be punishable by death.
> Avoid JavaScript reimplementations of HTML basics, e.g. React Button components instead of styled <button> elements.
I've been hearing that for the entire Internet era yet people continue to reinvent scrollbars, text boxes, buttons, checkboxes and, well, every input element. And I don't know why.
What this article is really talking about is conventions not idioms (IMHO). You see a button and you know how it works. A standard button will behave in predictable ways across devices and support accessibility and not require loading third-party JS libraries.
Also:
> Notwithstanding that, there are fashion cycles in visual design. We had skeuomorphic design in the late 2000s and early 2010s, material design in the mid 2010s, those colorful 2D vector illustrations in the late 2010s, etc.
I'm glad the author brought this up. Flat design (often called "material design" as it is here) has usability issues and this has been discussed a lot eg [1].
The concept here is called affordances [2], which is where the presentation of a UI element suggests how it's used, like being pressed or grabbed or dragged. Flat design and other kinds of minimalism tend to hide affordances.
It seems like this is a fundamental flaw in human nature that crops up everywhere: people feel like they have to do something different because it's different, not because it's better. It's almost like people have this need to make their mark. I see this all the time in game sequels that ruin what was liked by the original, like they're trying to keep it "fresh".
Not sure how you can put the genie back in the bottle, every app wants to have its own design so how can you enforce them to all obey the same design principles? You simply can't.
Am I the only one who doesn't know what that "Keep me signed in" checkbox is for? I was a web developer for many years, and I rarely encountered this checkbox in the wild; I don't remember implementing it even once. The choice itself is very ambiguous. It is supposed to mean that the login session will only live for the duration of the current browser session if I uncheck it. But for a user (and for me too) that does not mean much: what is the duration of the session if my browser stays open for weeks? What if we are on mobile, where tabs never close and tabs and history are basically the same thing (UX-wise)? If I decide to uncheck it for security reasons (for example, when I'm on someone else's device), I want to at least know when exactly, or after what action, the session will be cleared out, and as a user I have zero awareness or control there.
I don't advocate for removal of this checkbox but I would at least re-consider if that pattern is truly a common knowledge or not :)
Worked at Figma for 5 years. The author uses Figma as an example, but I think misses the point. They're so close though. Note these quotes:
> Both are very well-designed from first principles, but do not conform to what other interfaces the user might be familiar with
> The lack of homogeneous interfaces means that I spend most of my digital time not in a state of productive flow
There are generally two types of apps - general apps and professional tools. While I highly agree with the author that general apps should align with trends, from a pure time-spent PoV Figma is a professional tool. The design editor in particular is designed for users who are in it every day for multiple hours a day. In this scenario, small delays in common actions stack up significantly.
I'll use the Variables project in Figma as an example (mainly because that was my baby while I was there). Variables were used on the order of magnitude of billions. An increase in 1s in the time it took to pick a variable was a net loss of around 100 human years in aggregate. We could have used more standardized patterns for picking them (ie illustrator's palette approach), or unified patterns for picking them (making styles and variables the same thing), but in the end we picked slightly different behavior because at the end of the day it was faster.
In the end it's about minimizing friction of an experience. Sometimes minimizing friction for one audience impacts another - in the case of Figma minimizing it for pro users increased the friction for casual users, but that's the nature of pro tools. Blender shouldn't try and adopt idiomatic patterns - it doesn't make sense for it, as it would negatively impact their core audience despite lowering friction for casual users. You have to look at net friction as a whole.
This is a really huge and fundamental flaw in AI-driven design: it is completely inconsistent. If you re-run an AI-generated layout, even with the same prompt, the user interface will look completely different between the two runs.
Shows a picture of Office 2000 and says "The visuals feel a little ugly and dated: it’s blocky, the font isn’t great, and the colors are dull."
Are you serious? Nothing has come close to it. Yeah we have higher resolution screens, but everything else is much less legible and accessible than that screenshot.
UIs are inconsistent even in the same app. Nevermind plugins or suites. It would be great if menus were customizable so you could plug in your own template.
I prefer to avoid customizing apps. I want to be able to sit down at a fresh install (or someone else's) and not spend time learning their preferences.
When someone asks me for a checkbox so they can have my app work their way instead and everyone else can do theirs, the hair stands up on the back of my neck. The check boxes are hard to discover unless you put them front and center, in which case they remain there forever serving no purpose.
I would rather redesign the entire interface, either to find the right answer that works for everyone, or to learn what makes one class of users different from another. The check box is a mode, and modes are to be avoided if I possibly can.
I realize that this puts me at odds with a whole class of users who want to make their box do their thing. It's your box and you should do what you want. And I really love style sheets for that. Rather than cobbling together my own set of possible preferences you should have something Turing complete. Go nuts with it.
I think most non-Linux users haven't made a fresh install in 5-10 years. Preferences files and apps get transferred when you buy a new computer or update your os.
I was pleased how much was passed over from my last phone. I got the same brand so it's not surprising, but wow it is so much better than The Good Old Days (tm).
I remember the old days being surprisingly smooth. There was some verizon tool that transferred all my contacts from the dumb phone to my first smart phone.
Idiomatic design will never come back. The reason being that companies believe (correctly) that their design language is part of their brand. The uniqueness is, basically, the point.
That was one of the problems with the original Material framework: every app looked too similar, making it hard to distinguish one from another. Google was concerned about people associating a bad third-party app with Google itself.
They added more customizability in Material 2 (or was it 3?), but yeah at that point some of the damage was done.
"Avoid JavaScript reimplementations of HTML basics, e.g. React Button components instead of styled <button> elements."
Tell me you know nothing about web development without saying you know nothing about web dev ...
1. React is an irrelevant implementation detail. You can have a plain HTML button in a button component, or you can have an image or whatever else. React has nothing to do with the design choices.
2. React is also how you get consistent design across a major web app. Can you imagine if every button on every site was the same Windows button gray color, regardless of the site's color? It'd be awful! React components (with CSS classes) are a way for a site like Amazon to make all their buttons orange (although I don't actually know if Amazon uses React specifically). But again, whether they look and act like standard buttons comes down to Amazon's design choices ... not whether their tech stack includes React or not.
Look idiomatic design is incredibly important to web design. One of the most popular web design/usability books, Don't Make Me Think, is all about idiomatic design!
But ultimately it's a design choice, which has very little, if anything at all, to do with which development tools you use.
> React is also how you get consistent design across a major web app. Can you imagine if every button on every site was the same Windows button gray color, regardless of the site's color? It'd be awful! React components (with CSS classes) are a way for a site like Amazon to make all their buttons orange (although I don't actually know if Amazon uses React specifically).
I don't understand this point specifically. I make all buttons on a site have the same theme without needing a framework, library or build-step!
Why is React (or any other framework) needed? I mean, you say specifically "React is also how you get consistent design across a major web app.", but that ain't true.
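The point above can be shown in a few lines: one shared stylesheet rule themes every native <button> on the site, with no framework or build step. The class name and colors here are made up for illustration:

```javascript
// A single shared rule keeps every button consistent; each page just uses
// <button class="site-button">…</button> and inherits the theme.
const theme = `
  .site-button {
    background: #ff9900;
    color: #111;
    border: 1px solid #cc7a00;
    border-radius: 4px;
    padding: 6px 12px;
  }
`;

// Inject the shared styles once at startup (or ship them as a .css file).
function injectTheme(doc) {
  const style = doc.createElement("style");
  style.textContent = theme;
  doc.head.appendChild(style);
  return style;
}
```

Crucially, the elements stay native <button>s, so keyboard activation, focus, and accessibility semantics come for free; a framework component can layer on top of this, but doesn't replace it.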
It depends on the type of site/app you are building. If you are building a basic website (not a web application), or a simple application, you don't need React (or a similar framework like Vue or Angular). You might not even need Javascript at all.
However, as you build more complex and interactive applications, you need a framework like React. It's essential simply to handle the complexity of such applications. You will not find a major web app built without a framework (or if you do, the owners will essentially have had to create their own framework).
When you're using such tools, they are how you enforce consistent UI. Take Tailwind, the hugely popular CSS framework (I believe it's #1). It has nothing to do with JavaScript ... but even Tailwind will tell you (https://v3.tailwindcss.com/docs/reusing-styles#extracting-co...):
"If you need to reuse some styles across multiple files, the best strategy is to create a component if you’re using a front-end framework like React, Svelte, or Vue ..."
The author is completely mistaken in thinking React ... or even that layer of web technology at all (the development layer) ... has anything to do with what he is complaining about. It has everything to do with design choices, which are almost completely separate from which framework a site picks.
Yes you can, on a small/simple site. But on a serious web application sticking to plain HTML/CSS will be far too limiting, in many ways.
There's a reason why 99.9% of web apps use JavaScript, and with it a tool (framework) like React, Astro, Angular, or Vue. And if you're using such tools, you use them (eg. you use React "components") to create a consistent UI across the site.
But again, which tool you use to develop a site has very little to do with what design choices you make. A React dev with no designer to guide him might pick the most popular date picker component for React, and have the React community influence design that way, but ... A) if everyone picks the most popular tool, it becomes more idiomatic (it's not doing this that creates divergence), and B) if there is a human designer, they can pick from 20+ date picker libraries AND they can ask the dev team to further customize them.
It's designers (or developers playing at being designers) that result in wacky new UI that's not idiomatic. It has (almost) nothing to do with React and that layer of tooling, and if anything those tools lead to more idiomatic design.
> Tell me you know nothing about web development without saying you know nothing about web dev
This Twitterism really bugs me.
You took the time to write a really detailed response (much appreciated, you convinced me). There’s no need to explicitly dunk on the OP. Though if you really want to be a little mean (a little bit is fair imo), I think it should be closer to level of creativity of the rest of your comment. Call them ignorant and say you can’t take them seriously or something. The twitterism wouldn’t really stand on its own as a comment.
It bugs me that the author is "dunking on" React without knowledge on the matter (React is the tool you use to enforce consistent UI on a site; it has almost nothing at all to do with a design decision to have inconsistent UI). So I guess I "dunked on him" in response.
But ... two wrongs don't make a right. I'd remove the unneeded smarminess, if it weren't already too late to edit.
In text boxes in some applications, enter submits the entered text, and ctrl-enter forces a newline (not at my computer, but I think Slack does this). In others, it's the other way around (pretty sure GitHub does this for comments).
I don't know how we got here and I don't know how to fix it, but "bring back idiomatic design" doesn't help when we don't have enough idioms. I'm not even sure if those two behaviors are wrong to be inconsistent: you're probably more likely to want fancier formatting in a PR review comment than a chat message. But as a user, it's frustrating to have to keep track of which is which.
Decades ago, Return and Enter were two different keys for that reason: Return to insert a line break, Enter to submit your input.
Given the reduction to a single key, the traditional GUI rule is that Enter in a multiline/multi-paragraph input doesn’t submit like it does in other contexts, but inserts a line break (or paragraph break), while Ctrl+Enter submits.
Chat apps, where single-paragraph content is the typical case, tend to reverse this. Good apps make this configurable.
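The convention above can be written down as a small keydown policy. This is a hedged sketch, not any particular app's behavior: `multiline` marks a multi-paragraph editor, `chatStyle` marks the reversed chat-app convention, and the returned action names are illustrative:

```javascript
// Decide what Enter should do in a text input, following the rules above:
// in a multiline input, plain Enter inserts a break and Ctrl+Enter submits;
// chat-style inputs (and plain single-line fields) reverse that.
function enterAction(event, { multiline = false, chatStyle = false } = {}) {
  if (event.key !== "Enter") return "ignore";
  const modified = Boolean(event.ctrlKey || event.metaKey);
  const plainEnterSubmits = chatStyle || !multiline;
  if (plainEnterSubmits) return modified ? "newline" : "submit";
  return modified ? "submit" : "newline";
}
```

Making `chatStyle` a user preference, as suggested above, is then a one-flag change.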
Before that, page-mode terminals used <Return> to move to first field on a subsequent line (like a line-based <Tab>) and sent the page only on <Enter> or <Fn-key>. This made for quick navigation w/ zero ambiguity.
don't get me started on backspace vs delete...
^H^H^H^H^?^?^?
Carriage return and line feed go way back. Tty stands for teletype. A computer was the job description of a person.
It’s turtles all the way down.
Teams does both - normally it’s Enter to submit and Shift+Enter for a new line, but when you open the formatting tools it switches. They at least do have a message indicating which key combo inputs a new line, but it still gets me on occasion.
Slack is similar: Shift-Enter inserts a newline in normal text, but inside a code block Enter inserts the newline and Shift-Enter sends.
The Signal desktop app does both too, I guess, but in a way that actually makes sense. Enter sends a message since IMs tend to be short one-liners. Shift-Enter inserts a line break.
But if you click an arrow on the top of the text box, it expands to more than half of the height of the window, and now Enter does a line break and Shift-Enter sends. Which makes a lot of sense because now you're in "message composer" / "word processor" mode.
In Slack it can get even worse.
If you turn on Markdown formatting, shift+enter adds a new line, unless you’re in a multi-line code block started with three backticks, and then enter adds a new line and shift+enter sends the message.
I can see why someone thought this was a good idea, but it’s just not.
This is a user preferences setting for what it's worth.
That's funny, because I thought it was Shift-Enter that creates a newline in a field where Enter submits. Just shows the fractured nature of this whole thing.
This is my thinking. Ctrl-Enter is usually "submit the form this input is a part of" in my experience, especially if you're in a multiline text input (or textarea).
I've seen Enter, Shift-Enter, Ctrl-Enter, and Alt-Enter, (and on macOS, Cmd-Enter and Option-Enter), depending on the application. Total circus. I think this is actually a weakness of the standard keyboard: Keyboards should at the very least separate "submit form / enter" from "newline / carriage return" with different physical keys, but good luck changing that one, given the strong legacy of combining these functions.
For Slack, at least, you have the option to change that back to use Enter for a new line (which is what I do), but other software is not that generous. I think Grafana introduced yet another way, Shift-Enter to submit, that I always mix up.
macOS is slightly more consistent among apps that use system controls, but the more custom the app, or the more React Native or Electron it is, the less predictable it is
Infuriatingly, some apps try to be smart — only one line, return submits; more than one line, return is a new line, and command-return submits; but command-return on just one line beeps an error.
Years of muscle memory are useless, so now I’m reaching for the mouse when I need to be clear about my intent
So much is solved when developers just use the provided UI controls, so much well-studied and carefully implemented behavior comes for free
Apart from a chat interface, when should enter ever submit your text?
A single-line text box that has no possibility of multi-line text (so, not a chat interface), such as search, an address bar, something that's obviously "submit one item" (e.g. "submit a word"), etc.
In a multiline text box, enter should NOT submit the form. Chat interfaces violate this rule and it results in lots of premature chat submissions.
Precisely. 'member CUA?
In single-text-input contexts, like search fields and the browser address field, and things like Save As dialogs. It’s the general expectation for dialogs with an OK or default button, just like Escape cancels the dialog.
A search box, I think
Anything which supports multi-line input shouldn't submit on Enter; it should submit on button press, so anyone can use it instantly without learning or remembering anything.
Then make it easy for users to learn that they can submit more quickly with Ctrl+Enter, which you can advertise via a tooltip or adjacent text.
Better that 100% find it trivially usable even if only 75% learn they can do it faster
That isn’t workable for chat apps, at the very least on mobile. And that’s the most-used text entry interface that users nowadays grow up with. So I think you need to make an exception for such applications.
Most software is not designed by intelligent and thoughtful people anymore. It is designed by hastily promoted middle manager PM/Product type people who, as has been mentioned elsewhere, simply were not around when thoughtful human interface design was borderline mandatory for efficiency’s sake.
There is incompetence and there is also malevolence in the encouragement of dark patterns by the revenue side of the business.
It’s amazing how many blank stares I get when I, as mobile engineer, tell stakeholders that we shouldn’t just implement some random interface idea they thought up in the shower and we instead need design input!
“But why can’t you just do it?” Because I recognise the importance of consistent UX and an IA that can actually be followed.
Just like developers, (proper) designers solve problems, and we need to stop asking them for faster bikes.
> “But why can’t you just do it?”
The answer should be "because users will hate it and use a competing product that's better designed".
A shame that it isn't actually true any more.
There's a time and a place for it. If you already know exactly what the program needs to do, then sure, design a user interface. If you are still exploring the design space then it's better to try things out as quickly as possible even if the ui is rough.
The latter is an interesting mindset to advocate for. In almost every other engineering discipline, this would be frowned upon. I suspect wisdom could be gained by not discounting better forethought to be honest.
However, I really wonder how formula 1 teams manage their engineering concepts and driver UI/UX. They do some crazy experimental things, and they have high budgets, but they're often pulling off high-risk ideas on the very edge of feasibility. Every subtle iteration requires driver testing and feedback. I really wonder what processes they use to tie it all together. I suspect that they think about this quite diligently and dare I say even somewhat rigidly. I think it quite likely that the culture that led to the intense and detailed way they look at process for pit-stops and stuff carries over to the rest of their design processes, versioning, and iteration/testing.
Racing like in Formula 1 is extremely different from normal product design: each Formula 1 car has a user base of exactly 1: the driver that is going to use it. Not even the cars from the same team are identical for that reason. The driver can basically dictate the UX design because there is never any friction with other users.
Also, turnaround times from idea to final product can be insane at that level. These teams often have to accomplish in days what normally takes months. But they can pull it off by having every step of the design and manufacturing process in house.
There exist other ways to do the research. "Try things out" is often not just a signal of "we don't know what to do", but also a signal of "we have no idea how to properly measure the outcomes of the things we try".
But that’s the point, no? Prototyping is useful but beyond a proof of concept, you still need a suitable user interface. I have no problems if there’s a rationale behind UI changes, but often we have stakeholders telling us to do something inconsistent just so their pet project can be presented to the user. That’s not design.
Yep, there's some bad incentives and some rushed work, but calling it mostly incompetence or malice kind of ignores how much the underlying system has changed
This is reductionist and myopic. I've personally been through building forms online and it's hell to try to find consensus on perhaps the most common forms used online.
Let's take a credit card form:
- Do I let the user copy and paste values in?
- Do I let them use IE6?
- Do I need to test for the user using an esoteric browser (Brave) with an esoteric password manager (KeePassXC)?
- Do I make it accessible for someone's OpenClaw bot to use it?
- Do I make it inaccessible to a nefarious actor who uses OpenClaw to use it?
I could go on...
Balancing accessibility and usability is hard.[0]
[0] Steve Yegge's platform rant - https://gist.github.com/chitchcock/1281611
Cybernetic natural selection should take care of this over time, but the rate of random mutations in software systems is much higher than in biological systems. Would be interested in modeling the equilibrium dynamics of this
Software is now media, not tooling. Media tends to come with a lot of baked in perverse incentives.
As the author identifies, the idioms come from the use of system frameworks that steer you towards idiomatic implementations.
The system UI frameworks are tremendously detailed and handle so many corner cases you'd never think of. They allow you to graduate into being a power user over time.
Windows has Win32, and it was easier to use its controls than rolling your own custom ones. (Shame they left the UI side of win32 to rot)
macOS has AppKit, which enforces a ton. You can't change the height of a native button, for example.
iOS has UIKit, similar deal.
The web has nothing. You gotta roll your own, and it'll be half-baked at best. And since building for modern desktop platforms is horrible, the framework-less web is being used there too.
The author may have identified that "the idioms come from the use of system frameworks", but they absolutely got wrong just about everything about why apps are not consistent on the web (e.g. I was baffled by their reasons listed under "this lack of homogeneity is for two reasons" section).
First, what he calls "the desktop era" wasn't so much a desktop era as a Windows era: Windows ran the vast majority of desktops (and furthermore, there were plenty of inconsistencies between Windows and Mac). So, as you point out regarding the Win32 API, developers had essentially one way to do things, or at least one way that was far easier than the rest. Developers weren't so much "following design idioms" as "doing what is easy to do on Windows".
The web started out as a document sharing system, and it only gradually and organically turned over to an app system. There was simply no single default, "easiest" way to do things (and despite that, I remember when it seemed like the web converged all at once onto Bootstrap, because it became the easiest and most "standard" way to do things).
In other words, I totally agree with you. You can have all the "standard idioms" that you want, but unless you have a single company providing and writing easy to use, default frameworks, you'll always have lots of different ways of doing things.
Well, and worse, Windows was itself a hive of inconsistency. The most obvious example of UI consistency failing as an idea was that Microsoft's own teams didn't care about it at all. People my age always have rose tinted glasses about this. Even the screenshot of Word the author chose is telling because Office rolled its own widget toolkit. No other Windows apps had menus that looked like that, with the stripe down the left hand side, or that kind of redundant menu-duplicating sidebar. They made many other apps that ignored or duplicated core UI paradigms too. Visual Studio, Encarta, Windows Media Player... the list went on and on.
The Windows I remember was in some ways actually less consistent than what we have now. It was common for apps to be themeable, to use weirdly shaped windows, to have very different icon themes or button colors, etc. Every app developer wanted to have a strong brand, which meant not using the default UI choices. And Microsoft's UI guidelines weren't strong enough to generate consistency - even basic things like where the settings window could be found weren't consistent. Sometimes it was Edit > Preferences. Sometimes File > Settings. Sometimes zooming was under View, sometimes under Window.
The big problem with the web and the newer web-derived mobile paradigms is the conflation between theme and widget library, under the name "design system". The native desktop era was relatively good at keeping these concepts separated but the web isn't, the result is a morass of very low effort and crappy widgets that often fail at the subtle details MS/Apple got right. And browsers can't help because every other year designers decide that the basic behaviors of e.g. text fields needs to change in ways that wouldn't be supported by the browser's own widgets.
“Brand” and “branding” is arguably the most important thing -not- mentioned in the article. The commercial incentives to differentiate are powerful enough to kick a lot of UX out of the way.
Now that all we do is “experience” a “journey,” it’s more about the user doing what the app wants instead of the other way around
> First, what he calls "the desktop era" wasn't so much a desktop era as a Windows era - Windows ran the vast majority of desktops (and furthermore, there were plenty of inconsistencies between Windows and Mac).
That's overemphasising the differences considerably: on the whole Windows really did copy the Macintosh UI with great attention to detail and considerable faithfulness, the fact that MS had its own PARC people notwithstanding. MS was among other things an early, successful and enthusiastic Macintosh ISV, and it was led by people who were appropriately impressed by the Mac:
> This Mac influence would show up even when Gates expressed dissatisfaction at Windows’ early development. The Microsoft CEO would complain: “That’s not what a Mac does. I want Mac on the PC, I want a Mac on the PC”.
https://books.openbookpublishers.com/10.11647/obp.0184/ch6.x... It probably wouldn't be exaggerating all that wildly to say that '80s-'90s Microsoft was at the core of its mentality a Mac ISV, a good and quite orthodox Mac ISV, with a DOS cash-cow and big ambitions. (It's probably also not a coincidence that pre-8 Windows diverges more freely from the Mac model on the desktop and filesystem UI side than in regards to the application user interface.) And where Windows did diverge from the Mac those differences often ended up being integrated into the Macintosh side of the "desktop era": viz. the right-click context menu and (to a lesser extent) the old, 1990s Office toolbar. And MS wasn't the only important application-software house which came to Windows development with a Mac sensibility (or a Mac OS codebase).
I partially agree with you, but additionally there's a whole set of employees who would be clearly redundant in any given company if that company decided to just use a simple, idiomatic, off the shelf UI system. Or even to implement one but without attempting to reinvent well understood patterns.
One reason so many single-person products are so nice is because that single developer didn't have the time and resources to try to re-think how buttons or drop downs or tabs should work. Instead, they just followed existing patterns.
Meanwhile, when you have 3 designers and 5 engineers, with the natural ratio of Figma sketches to production-ready implementations being at least an order of magnitude, the only way to justify the design headcount is to make shit complicated.
But every company I worked at in the past 10 years or so eventually coalesced around a singular "design system" managed by one person or a small core team. That just goes back to my original point: every company had its own design system, and there is no single, industry-wide set of "rails".
The bigger issue I see with the "got to keep lots of designers employed" problem is the series of pointless, trend-following redesigns you'd see all the time. That said, I've seen many design departments get absolutely slaughtered at a lot of web/SaaS companies in the past 3 years. A lot of the issues designers were working on in web and mobile for the 25 years prior are now essentially "solved problems", and so, except for the integration of AI (where I've seen nearly every company just add a chat box and that AI star icon), it looks like there is a lot less to do.
Conventions already existed in DOS (CUA) and MacOS. The point is, every operating system had its user interface conventions, and there was a strong move from at least the mid-1980s to roughly the mid-2000s that applications should conform to the respective OS conventions. The cross-platform aspect of the web and then of mobile destroyed that.
Yeah the author conveniently ignores the fact that the UX of Mac apps was radically different to that of PC apps, so it’s not that designers/developers were somehow more enlightened back then, it’s just that they were “on rails”
> Developers weren't so much "following design idioms" as "doing what is easy to do on Windows".
Most people only use one computer. Inconsistency between platforms has no bearing on users. But inconsistency of applications on one platform is a nightmare for training. And accessibility suffers.
I don't disagree, but my point was about the author's incorrect diagnosis of the reason (and solution) for the problem, not that the problem doesn't exist.
As a sibling commenter put it, previously developers had "rails" that were governed by MS and Apple. The very nature of the web means no such rails exist, and saying "hey guys, let's all get back to design idioms!" is not going to fix the problem.
That’s not the only reason. When you are used to how your operating system does things consistently, as a developer you naturally want your application to also behave like you’re used to in that environment.
This eroded on the web, because a web page was a bit of a different “boxed” environment, and completely broke down with the rise of mobile, because the desktop conventions didn’t directly translate to touch and small screens, and (this goes back to your point) the developers of mobile OSs introduced equivalent conventions only half-heartedly.
For example, long-press could have been a consistent idiom for what right-click used to be on desktop, but that wasn’t done initially and later was never consistently promoted, competing with Share menus, ellipsis menus and whatnot.
> The web has nothing. You gotta roll your own, and it'll be half-baked at best. And since building for modern desktop platforms is horrible, the framework-less web is being used there too.
This feels like the root cause to me as well. Or more specifically, the web does have idioms, the problem is that those idioms are still stuck in 1980 and assume the web is a collection of science papers with hyperlinks and the occasional image, data table and submittable form.
This is where the "favourites" list and the ability to select any text on a web page came from.
Web apps not only have to build an application UI completely from scratch, they also have to do it on top of a document UI that "wants" to do something completely different.
Modern browsers have toned down those idioms and essentially made it "easier to fight them", but didn't remove or improve them.
"The Web" has evolved into a pretty bad UI API. I kind of wish that the web stuck to documents with hyperlinks, and something else emerged as a cross-platform application SDK. Combining them both into HTTP/CSS/JS was a mistake IMO.
> You can't change the height of a native button, for example.
You can definitely do so, it's just not obvious or straightforward in many contexts.
The web was designed for interactive documents, not desktop applications. The layout engine was inspired by typesetting (floats, blocks), and a lot of elements only make sense for text (<i>, <span>, <strong>, ...). There's also no allowance for dynamic data (virtualization of lists) or custom components (canvas and SVGs are not great in that regard).
> building for modern desktop platforms is horrible, the framework-less web is being used there too.
I think it's more related to PM wanting to "brand" their product and developers optimizing things for themselves (in the short term), not for their users.
Bootstrap was nice.
Guys, I found out about this technology called Cascading Style Sheets recently and I think it's the missing piece we've been looking for. It lets you declaratively specify layout in a composable, hierarchical system based on something called the Document Object Model in a way that minimizes both clientside and serverside processing, based on these things called "stylesheets".
The best part is, it's super easy to customize them, read others for inspiration or to see how they did something, or even ship multiple per site to deal with different user preferences. Through this "forms" api, and little-known browser features like url-fragments, target/attribute selector, and style combinators, plus "the checkbox hack" you can build extremely responsive UIs out of it by "cascading" UI updates through your site! When do you think they're going to add it to next.js?
I'm tentatively calling this new UI paradigm "no-framework" or "no package manager", not sure yet https://i.imgur.com/OEMPJA8.png
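In the same spirit, here is a minimal sketch of "the checkbox hack" mentioned above; the ids and class names are made up for illustration:

```html
<!-- A hidden checkbox holds the open/closed state; the label toggles it. -->
<style>
  #menu-toggle { display: none; }  /* hide the actual checkbox */
  #menu { display: none; }         /* menu starts closed */
  /* The sibling selector "cascades" the checkbox state to the menu: */
  #menu-toggle:checked ~ #menu { display: block; }
</style>

<input type="checkbox" id="menu-toggle">
<label for="menu-toggle">Menu</label>
<nav id="menu">
  <a href="#home">Home</a>
  <a href="#about">About</a>
</nav>
```

No JavaScript, no framework: clicking the label toggles the checkbox, and the `:checked ~` selector shows or hides the menu purely in CSS.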
> There are hundreds of ways that different websites ask you to pick dates
Ugh, date pickers. So many of these violently throw up when I try to do the obvious thing: type in the damn date. Instead they force me to click through their inane menu, as if the designer wanted to force me into a showcase of their work. Let your power users type. Just call your user’s attention back to the field if they accidentally typed 03/142/026.
No no, I find that having to click back through almost 40 years’ worth of months to get to my birthday allows for a nice pause to consider the fleeting and ever-accelerating nature of life.
You can usually click the year and then pick that first. But the fact that so many people don't instantly get that shows how poorly designed it is.
> You can usually click the year and then pick that first.
Even then, clicking the year will often lead to a tiny one-page list of 10 years, which you can either page back in or click the decade to get shown a list of decades to pick from. So: click 2026, click 2020s, click 19XXs, click a year, click a month, click a birthday.
Such an interface makes at least some sense for "pick a date in the near future". When I'm booking an airline flight, I usually appreciate having a calendar interface that lets me pick a range for the departure and return dates. But it makes no sense for a birthday.
Is 03/04/2026 March 4th or the 3rd of April?
If you have an international audience that’s going to mess someone up.
Better yet require YYYY-MM-DD.
As they type it, start displaying what it is. If, as you type "03/", it says "March", and that's not what you want, you now know what format it wants.
(And yes, always accept YYYY-MM-DD format, please.)
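The suggestion above is cheap to implement; a sketch, assuming the field expects a month-first format (the function and its wiring names are invented here):

```javascript
// Month names for echoing back the user's input.
const MONTHS = ["January", "February", "March", "April", "May", "June",
                "July", "August", "September", "October", "November", "December"];

// Return a hint as soon as the first date component is typed, assuming
// a month-first format; empty string if nothing can be inferred yet.
function monthHint(typed) {
  const m = typed.match(/^(\d{1,2})[\/\-.]/); // first number followed by a separator
  if (!m) return "";
  const n = Number(m[1]);
  return n >= 1 && n <= 12 ? MONTHS[n - 1] : "";
}

// Hypothetical wiring to a text field and a hint element:
// dateField.addEventListener("input", e => {
//   hintEl.textContent = monthHint(e.target.value);
// });
```

Typing "03/" would immediately show "March", so a user who meant the 3rd of a month finds out about the expected order before finishing the field.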
<input type="date"> is automatically formatted based on the user's locale.
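For reference, the element in question; the min/max bounds here are illustrative:

```html
<!-- The browser renders this with a locale-appropriate picker and display
     format, but the value submitted to the server is always YYYY-MM-DD. -->
<label for="birthday">Date of birth</label>
<input type="date" id="birthday" name="birthday"
       min="1900-01-01" max="2026-12-31">
```

So the display adapts to the user while the wire format stays unambiguous.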
This is still a partial solution as the user needs to know that their locale is being used and know how their locale is configured to understand the format. This is most problematic on shared computers or kiosks, especially when traveling.
I don't even know my locale.
Is it the device display language, the keyboard input language, my geo location, my browser language, my legal location, my browser-preferred website language, the language I set last time, the language of the domain (looking at amazon.co.uk), the language that was auto-selected last time for me on mobile or... something else entirely?
I mean, once in a different country, you either experience the locale shock once then adapt, or you've seen it before and kind of know what to expect.
And for the rest of the users who have no idea about locales, using whatever locale they have on their computer might be technically incorrect for some of them, but at least they're somewhat used to that incorrectness already, as it's likely been their locale for a while and will remain so.
> Better yet require YYYY-MM-DD
This is the equivalent of requiring all your text to be in Esperanto because dealing with separate languages is a pain.
"Normal" people never use YYYY-MM-DD format. The real world has actual complexity, tough, and the reason you see so many bugs and problems around localization is not that there aren't good APIs to deal with it, it's that it's often an after thought, doesn't always provide economic payoff, and any individual developer is usually focused on making sure it "looks good" I'm whatever locale they're familiar with.
ISO 8601! [0]
[0] https://en.wikipedia.org/wiki/ISO_8601
Or:
- Use localization context to show the right order for the user
- Display context to the user that makes it obvious what the order is
- Show the month name during/immediately after input so the user can verify
I've seen some that had a drop-down for the month name. But since it was native, I could type the month name and my browser selected the right one.
This has been a solved problem for a long time.
I hate how scrolling through a list of years to enter my birthday forces me to confront my mortality
I hate how websites that are trying to verify my age make me scroll through 13, 18, or 21 years that I could not legitimately select if I want to use the site.
Most of these I just say I am 200 years old or so.
> Every single button is clearly visually a button and says exactly what it does. And each one has a little underline to indicate its keyboard shortcut. Isn’t that nice?
Something not mentioned here (that came from the Mac world as I understand it): everywhere that the text ends with an ellipsis, choosing that action will lead to further UI prompts. The actions not written this way can complete immediately when you click the button, as they already have enough information.
UX has really gone downhill. This is particularly true of banking websites.
Also, the trends of hiding scrollbars, huge wasted space, making buttons look really flat, confusing icons, and confusing drop-down replacements instead of the native select/option HTML controls have all made the whole experience far inferior to where desktop UI was even decades ago.
Hiding scrollbars is a deeply annoying trend. I don't understand the rationale. Because someone thought it looks aesthetically cooler?
The number of JavaScript dropdown replacements that don't work correctly with the keyboard is stunning. It always amazes me how many forms fail at this basic usability aspect. The browser has homogeneous form controls built in, just use them!
> Prefer words to icons. Use only icons that are universally understood.
Underrated. Except for dyslexic people, and the most obvious icon forms, I am pretty sure most people are just better and faster at recognising single words at a glance than icons.
I'm somewhat dubious about that for icons with actual recognizable pictures, but a lot of icon attempts today are stylized to death, with just a line, bent and broken in a couple places and maybe if you're lucky juxtaposed with the occasional dot. If there's no text description even on mouseover (or touchscreen, with no cursor...) discovery is more or less trial and error (or perhaps more akin to Russian Roulette if the permissions involve being able to do real damage). Scratch your head and hope there are existing support questions searchable about what on Earth the programmer could have meant to convey...
...except for HN "unvote"/"undown" feedback which is especially unfortunate due to the shared prefix. Every time I upvote something I squint at the unvote/undown to make sure I didn't misclick.
I'm still shocked that the links are so dang close together on mobile. You don't even need the proverbial fat fingers.
I am pretty sure icons are easier and faster to recognize, except when you make them (too) small. In particular, they probably are easier in the long run, as long as they don't change position. But in a context where things change or you need a lot of buttons, words probably win.
This is why you need both. Icons are faster to recognize, but words tell you what the icons need. So you need the words at first to discover the icons, then the icons serve as valuable tools for scanning and quickly locating the click target that you are looking for.
> This is why you need both. Icons are faster to recognize, but words tell you what the icons need. So you need the words at first to discover the icons, then the icons serve as valuable tools for scanning and quickly locating the click target that you are looking for.
Only if there are few icons. If every item in that menu in the screenshot of Windows had an icon, and all icons were monochrome only, you'd never quickly find the one you want.
The reason icons in menu items work is because they are distinctive and sparse.
That's what I tend to do too, but sometimes space requirements win.
But of course, a good design is adapted to its user: frequent/infrequent is an important dimension, as is the time willing to learn the UI. E.g., many (semi) pro audio and video tools have a huge number of options, and they're all hidden under colorful little thingies and short-cuts.
Space is important there, because you want as many tracks and Vu meters and whatever on your screen as possible. Their users are interested in getting the most out of them, so they learn it, and it pays off.
Behavioral science also changed a lot of things. People study behavior, patterns, what can sell more, what looks more intuitive. If something looks a bit different from the others, it will sell better. If something looks the same as the previous one, why should the client buy it? The client needs to see a difference; it can be only a little bit more flashy, but it must be different. 20 years later, this is the result.
Especially now, in the AI era, where each person can make a relatively working app from the sofa, without any knowledge of UI/UX principles.
We've lost some common features:
* Undo & redo
* Help files & context sensitive F1
* Hints on mouse hover
* Keyboard shortcuts & shortcut customisation
* Main menus
* Files & directories
* ESC to close/back
* Drag n drop
Revelation features when they first became common. Now mostly gone on mobile and websites.
Much of this is foisted upon us by visual designers who wandered into product design. It's a category error the profession has never quite corrected. (maybe more controversially, it's caused by having anyone with the word "designer" in their title on a project that doesn't need such a person - this category is larger than anyone thinks)
But I'm not convinced the old consistency was purely a design victory... it was also a result of heavy constraints
Lately I've occasionally been running into round check boxes that look like radio buttons. Why????
UX want to put their own spin on things. I’ve noticed this repeatedly.
UX has gotten from something with a cause to being the cause for something
I think the answer is they just don't know.
iOS decided square checkboxes were ugly, and design patterns are flowing from mobile->desktop these days.
Squircles[1].
[1] https://en.wikipedia.org/wiki/Squircle
At some point UX became a synonym of manipulating users into doing things, and I wonder if it can ever go back.
It might have started in an innocent way, all those A/B tests about call-to-action button color, etc. But it became a full scale race between products and product managers (Whose landing page is best at converting users?, etc.) and somewhere in this race we just lost the sense of why UX exists. Product success is measured in conversion rates, net promoter score, bounce rates, etc. (all pretty much short-term metrics, by the way), and are optimized with disregard to the end-user experience. I mean, what was originally meant by UX. It is now completely turned on its head.
Like I said, I wonder if there is a way back or if we are stuck in the rat race. The question is how to quit it.
Yall remember https://en.wikipedia.org/wiki/Mystery_meat_navigation? Back in 2004-ish era, there was an explosion of very creative interaction methods due to flash and browser performance improvements, and general hardware improvements which led to "mystery meat navigation" and the community's pushback.
Since then, the "idiomatic design" seems to have been completely lost.
Is this what the hamburger button is made of?
I mean, your guess is as good as mine as to what options the corresponding menu will actually contain, so....
Interesting that Apple is praised.
> that a link? Maybe!
When Apple transitioned from skeuomorphic to flat design this was a huge issue. It was difficult to determine what was a button on iOS and whether you tapped it (and the removal of loading gifs across platforms further aggravated problems like double submits).
Another absurdity with iOS is the number of ways you can gesture. It started simply, now it is complex to the point where the OS can confuse one gesture for another.
I have a lot of gripes with Apple's various design decisions over the years, but they're at least consistent across their apps, which is the point of TFA.
Mystery gesture navigation is also now on by default and terrible on Android, too. It's awful with children or older folks (or even me!) who trigger it by accident all the time. Some of it I was able to disable on my children's iPads. It's still frustrating that easy-to-accidentally-trigger but impossible-to-discover gestures are the default, and also frustrating that we have the very last iPad generation with a button.
> Suppose you’re logging into a website, and it asks: “do you want to stay logged in?”
Then the website has made its first mistake, and should delete that checkbox entirely, because the correct answer is always "yes". If you don't want to be logged in, either hit the logout button, or use private browsing. It is not the responsibility of individual websites to deal with this.
>using GMail is nothing like using GSuites is nothing like using Google Docs
G Suite (no s) was the old name for Google Workspace. Google Workspace includes GMail, Google Docs, Google Sheets, Google Calendar, etc., so it doesn't really make sense to say that Google Workspace has a different UX than Google Docs, if Google Docs is part of Google Workspace.
Disclosure: I work at Google, but not one of the listed products.
If GSuites was a typo for GSites (e.g., informal for Google Sites classic), then the sentence in TFA could work.
IDK if such was the intent, of course.
Designers are creatives and will always believe the visual elements of a design need to be updated, refreshed, modernized, etc. Then we get flavour-of-the-month trends in visual language and UI design that things must be updated to.
As soon as UI design became a creative visual thing rather than a functional thing, everything started to go crazy in UI land.
That is because they know the users. Users are very sensitive to this: if the outside wasn't changed, then the internals can't have been much improved. You see this with cars: cars need a new design, otherwise customers will think nothing much changed. Customers will usually buy newer over better, because they think newer must have improvements, and styling signals new. Same with computers: all the disappointment when Apple releases a new MacBook without changing the exterior...
One of my pet peeves is that increasingly frequently, pressing Enter to submit a web form doesn’t even universally work anymore. Instead you have to tab to the submit button, and (depending on the web page) have to press Space or Enter to actuate it.
Another annoyance is that many web forms (and desktop apps based on web tech) don’t automatically place the keyboard focus in an input field anymore when first displayed. This is also an antipattern on mobile, that even on screens that only have one or two text inputs, and where the previous action clearly expressed that you want to perform a step that requires entering something, you first have to tap on the input field for the keyboard to appear, so that you can start entering the requested information.
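Both behaviors missed here come for free with plain HTML; a sketch (the action URL and field names are made up):

```html
<form action="/search" method="get">
  <!-- autofocus: the caret lands here when the page loads,
       no script required. -->
  <input type="text" name="q" autofocus>
  <!-- A real submit button enables implicit submission:
       pressing Enter in the text field submits the form. -->
  <button type="submit">Search</button>
</form>
```

It takes custom JavaScript (or a div-based "form") to break Enter-to-submit and initial focus; the defaults already do the idiomatic thing.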
> One of my pet peeves is that increasingly frequently, pressing Enter to submit a web form doesn’t even universally work anymore. Instead you have to tab to the submit button, and (depending on the web page), have to press Space or Enter to actuate it.
The other day I used Safari on a newly setup macOS machine for the first time in probably a decade. Of course wanted to browse HN, and eventually wanted to write a comment. Wrote a bunch of stuff, and by muscle memory, hit tab then enter.
Guess what happened instead of "submitted the comment"? Tab on macOS Safari apparently jumps up to the address bar (???), and then of course you press Enter, so it reloads the page and everything you wrote disappears. I'm gonna admit, I did the same thing again just minutes later; then I gave up using Safari for any sort of browsing and downloaded Firefox instead.
I would argue that behavior is idiomatic for macOS but not idiomatic for web browsers. Keyboard navigation of all elements has never been the default in macOS. Tab moves between input fields, but without turning on other settings, almost never moved between other elements because macOS was a mouse first OS from its earliest days. Web browsers often broke this convention, but Safari has from day one not used tab for full keyboard navigation by default.
And this highlights something that I think the author glosses over a little but is part of why idioms break for a lot of web applications. A lot of the keyboard commands we're used to issue commands to the OS and so their idioms are generally defined by the idioms of the OS. A web application, by nature of being an application within an application, has to try to intercept or override those commands. It's the same problem that linux (and windows) face with key commands shared by their terminals and their GUIs. Is "ctrl-c" copy or interrupt? Depends on what has focus right now, and both are "idiomatic" for their particular environments. macOS neatly sidesteps this for terminals because "ctrl-c" was never used for copy, it was always "cmd-c".
Incidentally, what you're looking for in Safari is the "Press Tab to highlight each item on a webpage" setting in the Advanced settings tab. By default, with that off, you would use Opt-Tab to navigate to all elements.
System Settings -> Keyboard -> and toggle Keyboard navigation.
I'm not sure why this isn't the default, but this allows for UI navigation via keyboard on macOS, including Safari.
> You don’t want to have to remember to use CTRL + Shift + C in certain circumstances or right-click → copy in others, that’d be annoying.
*laughs in Linux* Wouldn’t that be nice.
I’m a decade+ linux power user and I still do insane things like pipe outputs into vim so I can copy paste without having to remember tmux copy paste modes when I have vertical panes open.
This is the kind of thing why I still prefer Windows as a UI.
Terminal UX existed before the CUA guidelines from IBM. People complain about the Ctrl+Shift+C behavior when it exists only in one category of application: terminal emulators.
Plan 9 fixes this.
With some irony, one thing Substack doesn't afford is zooming in to images on mobile.
Firefox on Android can override this via a toggle in the Accessibility settings. Maybe other browsers have something similar?
My hope is that since tools like Google Stitch have made fancy looking design free that it will become obvious how functionally worthless fancy looking design always was. It used to signal that a site paid a lot of money and was therefore legitimate. Now it signals nothing.
This is a good point, but there's usually a long tail on transitions like this.
This kinda hurt. The world is in a rush to ship ASAP, so it's in nobody's interest to do design well; it needs to be fast. And now we have this sh*tshow.
The web needs a HIG.
All of these people who keep saying that webapps can replace desktop applications were simply never desktop power users. They don’t know what they don’t know.
Yeah it would be nice if the web accessibility guidelines also focused on actually using the thing normally. For example: offsetting the scrollbar from the right edge of the screen by 1px should be punishable by death.
I think HIG means "Human Interface Guidelines" here. Seems to be an Apple thing.
I wish more people would avoid or at least introduce abbreviations that may be unfamiliar to the audience.
Microsoft had one too: WIG!
https://news.ycombinator.com/item?id=22475521
There are quite a number of them: <https://en.wikipedia.org/wiki/Human_interface_guidelines#Exa...>
I had to laugh when I read this:
> Avoid JavaScript reimplementations of HTML basics, e.g. React Button components instead of styled <button> elements.
I've been hearing that for the entire Internet era yet people continue to reinvent scrollbars, text boxes, buttons, checkboxes and, well, every input element. And I don't know why.
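For what it's worth, following that advice costs little; a sketch of a branded native <button> (the class name and colors are made up):

```html
<style>
  /* All the branding lives in CSS; keyboard focus, Enter/Space activation,
     form submission, and screen-reader semantics stay native. */
  .brand-btn {
    padding: 0.5em 1.25em;
    border: 1px solid #1a5dbb;
    border-radius: 6px;
    background: #2b7de9;
    color: white;
    cursor: pointer;
  }
  .brand-btn:hover { background: #1a5dbb; }
</style>

<button type="submit" class="brand-btn">Save changes</button>
```

A React (or any framework) component can still wrap this, as long as what it renders is an actual `<button>` underneath.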
What this article is really talking about is conventions not idioms (IMHO). You see a button and you know how it works. A standard button will behave in predictable ways across devices and support accessibility and not require loading third-party JS libraries.
Also:
> Notwithstanding that, there are fashion cycles in visual design. We had skeuomorphic design in the late 2000s and early 2010s, material design in the mid 2010s, those colorful 2D vector illustrations in the late 2010s, etc.
I'm glad the author brought this up. Flat design (often called "material design" as it is here) has usability issues and this has been discussed a lot eg [1].
The concept here is called affordances [2], which is where the presentation of a UI element suggests how it's used, like being pressed or grabbed or dragged. Flat design and other kinds of minimalism tend to hide affordances.
It seems like this is a fundamental flaw in human nature that crops up everywhere: people feel like they have to do something different because it's different, not because it's better. It's almost like people have this need to make their mark. I see this all the time in game sequels that ruin what was liked by the original, like they're trying to keep it "fresh".
[1]: https://www.nngroup.com/articles/flat-design/
[2]: https://geekyants.com/blog/affordances-in-ui-design
And while we're at it, stop with the popups and notifications.
I don't care about the new features in a browser update. Ideally, nothing at all has changed.
I don't want a "tour" of the software I just installed. I, presumably, installed it to do something, and I just want to do that thing.
I don't want to have to select a preference for how a specific action is performed in your software. If it's not what I expected, I will learn it.
And for the love of GOD, nobody wants to subscribe to your newsletter.
Day-to-day usability doesn't bring much "wow" factor to a sales pitch.
Not sure how you can put the genie back in the bottle, every app wants to have its own design so how can you enforce them to all obey the same design principles? You simply can't.
Am I the only one who doesn't know what that "Keep me signed in" checkbox is for? I mean, I was a web developer for many years, and I rarely encountered this checkbox in the wild; I don't remember implementing it even once. The choice itself is very ambiguous. It is supposed to mean that the login session will only live for the duration of the current browser session if I uncheck it. But for a user (and for me too) that does not mean much: what is the duration of the session if my browser runs open for weeks? What if we are on mobile, where tabs never close and tabs and history are basically the same thing (UX-wise)? If I decide to uncheck it for security reasons (for example, when I'm on someone else's device), I want to at least know when exactly, or after what action, the session will be cleared out, and as a user I have zero awareness or control there.
I don't advocate for removal of this checkbox but I would at least re-consider if that pattern is truly a common knowledge or not :)
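For context, the checkbox usually just toggles the lifetime of the login cookie; a sketch of the two Set-Cookie variants a server might send (cookie name and duration are illustrative):

```
# Unchecked: a session cookie (no Expires/Max-Age). The browser drops it
# when the browser session ends, which on a browser left open for weeks
# may effectively be never.
Set-Cookie: sid=abc123; Path=/; Secure; HttpOnly; SameSite=Lax

# Checked: a persistent cookie that survives restarts for 30 days
# (Max-Age is in seconds: 30 * 86400 = 2592000).
Set-Cookie: sid=abc123; Path=/; Secure; HttpOnly; SameSite=Lax; Max-Age=2592000
```

Which explains the ambiguity: "duration of the current browser session" is whatever the browser decides it is, and the site has no say beyond omitting Max-Age.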
I've never seen one that actually works. It seems like whether or not I check them, the next time I visit [every site] I have to log in again.
Really? I don't think I've ever seen one that doesn't work. What sites are you using where you encounter this? Have you checked your cookie settings?
Worked at Figma for 5 years. The author uses Figma as an example, but I think misses the point. They're so close though. Note these quotes:
> Both are very well-designed from first principles, but do not conform to what other interfaces the user might be familiar with
> The lack of homogeneous interfaces means that I spend most of my digital time not in a state of productive flow
There are generally two types of apps - general apps and professional tools. While I highly agree with the author that general apps should align with trends, from a pure time-spent PoV Figma is a professional tool. The design editor in particular is designed for users who are in it every day for multiple hours a day. In this scenario, small delays in common actions stack up significantly.
I'll use the Variables project in Figma as an example (mainly because that was my baby while I was there). Variables were used on the order of billions of times. An increase of 1s in the time it took to pick a variable was a net loss of around 100 human years in aggregate. We could have used more standardized patterns for picking them (i.e. Illustrator's palette approach), or unified patterns for picking them (making styles and variables the same thing), but in the end we picked slightly different behavior because at the end of the day it was faster.
In the end it's about minimizing friction of an experience. Sometimes minimizing friction for one audience impacts another - in the case of Figma minimizing it for pro users increased the friction for casual users, but that's the nature of pro tools. Blender shouldn't try and adopt idiomatic patterns - it doesn't make sense for it, as it would negatively impact their core audience despite lowering friction for casual users. You have to look at net friction as a whole.
This is a huge and fundamental flaw in AI-driven design: it is completely inconsistent. If you re-ran an AI-generated layout, even with the same prompt, the user interface would look completely different between two runs.
You can steer it towards reusable components, though.
Find a run you like, and build off that.
Apple was doing a pretty good job until whatever happened with v 26.
On the web, the rise of component libraries and consistent theming is promising.
They were not. Their own apps on iOS are wildly inconsistent.
That windows 2000/win 95 interface was peak windows design.
Shows a picture of Office 2000 and says "The visuals feel a little ugly and dated: it’s blocky, the font isn’t great, and the colors are dull."
Are you serious? Nothing has come close to it. Yeah we have higher resolution screens, but everything else is much less legible and accessible than that screenshot.
UIs are inconsistent even in the same app. Nevermind plugins or suites. It would be great if menus were customizable so you could plug in your own template.
I prefer to avoid customizing apps. I want to be able to sit down at a fresh install (or someone else's) and not spend time learning their preferences.
When someone asks me for a checkbox so my app can work their way while everyone else keeps theirs, the hair stands up on the back of my neck. Checkboxes are hard to discover unless you put them front and center, in which case they remain there forever serving no purpose.
I would rather redesign the entire interface, either to find the right answer that works for everyone, or to learn what makes one class of users different from another. The checkbox is a mode, and modes are to be avoided if I possibly can.
I realize that this puts me at odds with a whole class of users who want to make their box do their thing. It's your box and you should do what you want. And I really love style sheets for that. Rather than cobbling together my own set of possible preferences you should have something Turing complete. Go nuts with it.
I think most non-Linux users haven't made a fresh install in 5-10 years. Preferences files and apps get transferred when you buy a new computer or update your os.
I was pleased how much was passed over from my last phone. I got the same brand so it's not surprising, but wow it is so much better than The Good Old Days (tm).
I remember the old days being surprisingly smooth. There was some Verizon tool that transferred all my contacts from the dumb phone to my first smartphone.
... and please stop doing parallax ...
Such a nice way to give more depth to your content. </s>
(2023)
Idiomatic design will never come back. The reason is that companies believe (correctly) that their design language is part of their brand. The uniqueness is, basically, the point.
That was one of the problems with the original Material framework: every app looked too similar, making it hard to distinguish one from another. Google was concerned about people associating bad third-party apps with Google itself.
They added more customizability in Material 2 (or was it 3?), but yeah at that point some of the damage was done.
"Avoid JavaScript reimplementations of HTML basics, e.g. React Button components instead of styled <button> elements."
Tell me you know nothing about web development without saying you know nothing about web dev ...
1. React is an irrelevant implementation detail. You can have a plain HTML button in a button component, or you can have an image or whatever else. React has nothing to do with the design choices.
2. React is also how you get consistent design across a major web app. Can you imagine if every button on every site was the same Windows button gray color, regardless of the site's color? It'd be awful! React components (with CSS classes) are a way for a site like Amazon to make all their buttons orange (although I don't actually know if Amazon uses React specifically). But again, whether they look and act like standard buttons comes down to Amazon's design choices ... not to whether their tech stack includes React.
Look, idiomatic design is incredibly important to web design. One of the most popular web design/usability books, Don't Make Me Think, is all about idiomatic design!
But ultimately it's a design choice, which has very little, if anything at all, to do with which development tools you use.
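To make point 1 concrete: a typical "Button component" is just a thin wrapper around the native element, and the look comes from a CSS class, not from React (illustrative sketch; the names are made up):

```jsx
// Hypothetical design-system button: React only attaches a shared class.
// The underlying element is still a real, idiomatic <button>, so keyboard
// focus, Enter/Space activation, and form submission all work for free.
export function BrandButton({ className = "", ...rest }) {
  return <button className={`brand-button ${className}`} {...rest} />;
}

// Usage: <BrandButton onClick={save}>Save</BrandButton>
```

Whether that button then behaves idiomatically is the designer's call, not the framework's.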
> React is also how you get consistent design across a major web app. Can you imagine if every button on every site was the same Windows button gray color, regardless of the site's color? It'd be awful! React components (with CSS classes) are a way for a site like Amazon to make all their buttons orange (although I don't actually know if Amazon uses React specifically).
I don't understand this point specifically. I make all the buttons on a site share the same theme without needing a framework, library, or build step!
Why is React (or any other framework) needed? I mean, you say specifically "React is also how you get consistent design across a major web app.", but that ain't true.
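A single rule is all it takes (hypothetical brand color):

```css
/* Theme every native <button> site-wide -- no framework, no build step */
button {
  background: #e47911;  /* made-up brand orange */
  color: #fff;
  border: none;
  border-radius: 4px;
  padding: 0.5em 1em;
  font: inherit;
  cursor: pointer;
}
```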
It depends on the type of site/app you are building. If you are building a basic website (not a web application), or a simple application, you don't need React (or a similar framework like Vue or Angular). You might not even need Javascript at all.
However, as you build more complex and interactive applications, you need a framework like React. It's essential simply to handle the complexity of such applications. You will not find a major web app built without a framework (or if one is, the owners have essentially had to create their own framework).
When you're using such tools, they are how you enforce consistent UI. Take Tailwind, the hugely popular CSS framework (I believe it's #1). It has nothing to do with Javascript ... but even its docs will tell you (https://v3.tailwindcss.com/docs/reusing-styles#extracting-co...):
"If you need to reuse some styles across multiple files, the best strategy is to create a component if you’re using a front-end framework like React, Svelte, or Vue ..."
The author is completely mistaken in thinking React ... or even that layer of web technology at all (the development layer) ... has anything to do with what he is complaining about. It has everything to do with design choices, which are almost completely separate from which framework a site picks.
>>> Can you imagine if every button on every site was the same Windows button gray color, regardless of the site's color? It'd be awful!
Speaking as a user not a developer, it'd be lovely.
> Can you imagine if every button on every site was the same Windows button gray color, regardless of the site's color?
Not a webdev, but can't you just use CSS on the <button> element for that?
Yes you can, on a small/simple site. But on a serious web application, sticking to plain HTML/CSS will be far too limiting, in many ways.
There's a reason why 99.9% of web apps use JavaScript, and with it a framework like React, Astro, Angular, or Vue. And if you're using such tools, you use them (e.g. React "components") to create a consistent UI across the site.
But again, which tool you use to develop a site has very little to do with what design choices you make. A React dev with no designer to guide him might pick the most popular date picker component for React, and have the React community influence design that way, but ... A) if everyone picks the most popular tool, it becomes more idiomatic (it's not doing this that creates divergence), and B) if there is a human designer, they can pick from 20+ date picker libraries AND they can ask the dev team to further customize them.
It's designers (or developers playing at being designers) that result in wacky new UI that's not idiomatic. It has (almost) nothing to do with React and that layer of tooling, and if anything those tools lead to more idiomatic design.
> Tell me you know nothing about web development without saying you know nothing about web dev
This Twitterism really bugs me.
You took the time to write a really detailed response (much appreciated, you convinced me). There’s no need to explicitly dunk on the OP. Though if you really want to be a little mean (a little bit is fair imo), I think it should be closer to level of creativity of the rest of your comment. Call them ignorant and say you can’t take them seriously or something. The twitterism wouldn’t really stand on its own as a comment.
Sorry for the nitpicky rant.
I think that's a fair criticism.
It bugs me that the author is "dunking on" React without knowledge of the matter (React is the tool you use to enforce consistent UI on a site; it has almost nothing at all to do with a design decision to have inconsistent UI). So I guess I "dunked on him" in response.
But ... two wrongs don't make a right. I'd remove the unneeded smarminess, if it weren't already too late to edit.