Tagged “progressiveenhancement”
How would you build Wordle with just HTML & CSS? by Scott Jehl
Scott proposes an interview question relating to web standards and intelligent use of JavaScript.
How would you attempt to build Wordle (...or some other complex app) if you could only use HTML and CSS? Which features of that app would make more sense to build with JavaScript than with other technologies? And, can you imagine a change or addition to the HTML or CSS standard that could make any of those features more straightforward to build?
Discussing any approaches to this challenge will reveal the candidate's broad knowledge of web standards–including new and emerging HTML and CSS features–and as a huge benefit, it would help select for the type of folks who are best suited to lead us out of the JavaScript over-reliance problems that are holding back the web today.
I hate interviews (and the mere thought of interviews), but I could handle being asked a question like this.
The new HTML search element
My work colleague Ryan recently drew my attention to the new HTML search element. This morning I read Scott O’Hara’s excellent primer. Scott worked on implementing <search>, and his article cleared up my questions around what it is and when we can start using it.
Firstly, <search> is not a “search input” – it’s not a replacement for any existing input elements. Instead it’s a native HTML element for creating a search landmark, something that until now we could only achieve by applying role="search" to another element.
Landmarks are an important semantic structure allowing screen reader users to orient themselves and jump to important areas of a web page. Existing landmark-signalling elements you might know include <header>, <main> and <footer>. So you would use <search> to wrap around a search function, thus providing additional accessibility. And it lets you do so with a native HTML element instead of repurposing another element by adding ARIA properties, per the first rule of ARIA use. It’d look something like this (the form itself is illustrative):
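```html
<!-- a search landmark wrapping an ordinary search form -->
<search>
  <form action="/search">
    <label for="search-term">Search this site</label>
    <input type="search" id="search-term" name="q">
    <button>Search</button>
  </form>
</search>
```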
So as Scott himself admits:
To be brutally honest, this is not the most important element that’s ever been added to the HTML specification. It is however a nice little accessibility win.
Do I have a use for this?
If you have a search function or search page but currently miss the opportunity to offer a search landmark, you could add one and improve the user experience.
Can I use the <search> element today?
As Scott mentions, it’s not yet available in browsers (although it will likely arrive soon). So if you added <search> (just as I’ve typed it there) to a page, it wouldn’t currently create a search landmark. You could simply wait a while before using the element. Alternatively, because HTML’s design is intentionally geared toward a progressive enhancement mindset, you could take Jeremy Keith’s approach and safely start using it today.
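Reconstructing it from Jeremy’s description below, the snippet amounts to something like this:

```html
<search role="search">
  <!-- search form in here -->
</search>
```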
Jeremy knows that when browsers encounter an HTML element they don’t recognise, they don’t break but rather treat it as an anonymous element and carry on. So he includes <search> to start adopting the new element today, but bolts on role="search" temporarily to provide the landmark manually until browsers understand <search>. He’ll then remove the role="search" part once support for <search> is widespread.
Lean “plugin subscription form” by Chris Ferdinandi
I enjoyed this two-part tutorial from Chris, which arose from his critique of a subscription form plugin that includes the entire React library to achieve what could be done with a lightweight HTML and vanilla JavaScript solution. Chris advocates a progressively enhanced approach. Instead of rendering the form with JavaScript he renders it in HTML, and argues that not only is there no need for the former approach – forms natively work without JavaScript – but also that it only introduces fragility where we could provide resilience.
Part one: Using a wrecking ball when a hammer would suffice
Part two: More HTML, less JavaScript
I also recreated the front-end parts in a codepen to help this sink in.
Lastly, as someone who always favours resilience over fragility, I wanted to take a moment to consider the part that Chris didn’t cover – why JavaScript is required at all. I guess it’s because, being a plugin, this is intended for portability, and as the author you’ve decided you want the error and success messaging to be “white-labelled” on the consuming website rather than having the user taken to a separate website. You don’t know the consuming context’s stack, so to provide a universally supported solution you use JavaScript to handle the messaging, which then means you need JS to orchestrate everything else: preventing default submission, validating, fetch-based submission and API response handling.
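To make that concrete, the orchestration looks roughly like this (a sketch with illustrative element IDs, field names and messages, not Chris’s actual code):

```js
// Illustrative sketch only - IDs, field names and messages are made up
const form = document.querySelector('#subscribe-form');
const status = document.querySelector('#form-status');

form.addEventListener('submit', async (event) => {
  // Take over from the native form submission
  event.preventDefault();

  // Validate
  if (!form.elements.email.value) {
    status.textContent = 'Please enter an email address.';
    return;
  }

  // Submit via fetch and "white-label" the API's response messaging
  try {
    const response = await fetch(form.action, {
      method: 'POST',
      body: new FormData(form),
    });
    status.textContent = response.ok
      ? 'Thanks for subscribing!'
      : 'Something went wrong. Please try again.';
  } catch {
    status.textContent = 'Something went wrong. Please try again.';
  }
});
```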
Let's talk about web components (by Brad Frost)
Brad breaks down the good, bad and ugly of web components but also makes a compelling argument that we should get behind this technology.
I come from the Zeldman school of web standards, am a strong proponent of progressive enhancement, care deeply about accessibility, and want to do my part to make sure that the web lives up to its ideals. And I bet you feel similar. It’s in that spirit that I want to see web components succeed. Because web components are a part of the web!
I’m with you, Brad. I just need to find practical ways to make them work.
WebC
WebC, the latest addition to the Eleventy suite of technologies, is focused on making Web Components easier to use. I have to admit, it took me a while to work out the idea behind this one, but I see it now and it looks interesting.
Here are a few of the selling points for WebC, as I see them.
With WebC, web components are the consumer developer’s authoring interface. So for example you might add a badge component into your page with <my-badge text='Lorem ipsum'></my-badge>.
Using web components is great – especially for Design Systems – because unlike with proprietary component frameworks, components are not coupled to a single technology stack but rather are platform and stack-agnostic, meaning they could be reused across products.
From the component creator perspective: whereas web components normally, and frustratingly, require writing accompanying JavaScript and depend on JavaScript availability even for “JavaScript-free” components, with WebC this is not the case. You can create “HTML-only” components and rely on them being rendered to screen. This is because WebC takes your .webc component file and compiles it to the simplest output HTML required.
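As a sketch of what that authoring experience looks like (my reconstruction rather than an official example – I believe @text and webc:scoped work along these lines):

```html
<!-- my-badge.webc -->
<!-- Attributes passed in (like text) are available inside the
     component; @text sets the element's text content from that value -->
<span class="badge" @text="text"></span>

<!-- Scoped styles are hashed and aggregated, and can be loaded
     only on pages that actually use the component -->
<style webc:scoped>
  .badge {
    display: inline-block;
    padding: 0.25em 0.5em;
  }
</style>
```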
WebC is progressive enhancement friendly. As mentioned above, your no-JS HTML foundation will render. But going further, you can also colocate your foundational baseline beside the scripts and CSS that enhance it within the same file.
This ability to write components within a single file (rather than separate template and script files) is pretty nice in general.
There are lots of other goodies too such as the ability to scope your styles and JavaScript to your component, and to set styles and JS to be aggregated in such a way that your layout file can optionally load only the styles and scripts required for the components in use on the current page.
The ARIA presentation role
I’ve never properly understood when you would need to use the ARIA presentation role. This is perhaps in part because it is often used inappropriately, for example in situations where aria-hidden would be more appropriate. However, I think the penny has finally dropped.
It’s fairly nuanced stuff so I’ll forgive myself this time!
You might use role="presentation" in JavaScript which progressively enhances a basic, JS-independent HTML foundation into a more advanced, JS-dependent experience. In such cases you might want to reuse the baseline HTML but remove semantics which are no longer appropriate.
As an example, Inclusive Components’ Tabbed Interfaces starts with a table of contents marked up as an unordered list of links. In enhanced mode, however, the links take on a new role as tabs in a tablist, so role="presentation" is applied to their containing <li> elements so that the tab list is announced appropriately and not as a plain list.
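Here’s a stripped-back sketch of that enhanced state (adapted from the pattern rather than copied from Heydon’s code):

```html
<!-- After enhancement: the <li>s are neutralised with role="presentation"
     so only the tablist/tab semantics are announced -->
<ul role="tablist">
  <li role="presentation">
    <a role="tab" id="tab-1" href="#section-1" aria-selected="true">Section one</a>
  </li>
  <li role="presentation">
    <a role="tab" id="tab-2" href="#section-2" tabindex="-1">Section two</a>
  </li>
</ul>
```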
Thoughts on HTML over the wire solutions
Max Böck just tweeted his excitement about htmx:
htmx (and similar "HTML over the wire" approaches) could someday replace Javascript SPAs. Cool to see a real-world case study on that, and with promising results
There’s similar excitement at my place of work (and among the Rails community in general) about Turbo. It promises:
the speed of an SPA without having to write any JS
These new approaches are attractive because they let us create user interfaces that update the current page sans reload – usually communicating with the server to get swap-in content or update the database – but by writing HTML rather than JavaScript. Developers have long wished for HTML alone to handle common interactive patterns, so a set of simple, declarative conventions really appeals. Writing less JavaScript feels good for performance and for lightening the maintenance burden. Furthermore, the Single Page App (SPA) approach via JS frameworks like React and Vue is heavier and more complicated than most situations call for.
However I have some concerns.
I’m concerned about the “no javascript” language being used, for example in articles titled Hotwire: reactive Rails with no JavaScript. Let’s be clear about what Turbo and htmx are in simple material terms. As Reddit user nnuri puts it in do Hotwire and htmx have a commitment to accessibility? the approach is based on:
a JavaScript library in the client's browser that manipulates the DOM.
Your UI that uses htmx or Turbo depends on that JS library, and JS is the most brittle part of the stack, so you need to think about resilience and access. The htmx docs have a section on progressive enhancement, but I’m not convinced it’s part of the design.
Secondly, if you have client-side JS that changes content and state, that brings added accessibility responsibilities: when the content or state of a page changes, you need to convey this programmatically, which we normally handle in JavaScript. Do these solutions cover the requirements of accessible JS components, or even let you customise them to add the necessary state changes yourself? For example, when replacing HTML you need to add aria-live (see also Léonie Watson on accessible forms with ARIA live regions).
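As a minimal sketch of what I mean:

```html
<!-- A live region: screen readers announce changes to its contents,
     so content swapped in over the wire isn't silently missed -->
<div id="results" aria-live="polite">
  <!-- server-returned HTML gets swapped in here -->
</div>
```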
Another concern relates to user expectations. Just because you can do something doesn’t mean you should. For example, links should not be used to do the job of a button. If you do, they need role="button", however this is inadvisable because you then need to recreate (and will likely miss) the other features of a button, and will also likely confuse people due to mismatches between perceived affordance and actual functionality. Additionally, as Jeremy Keith has written, links should not delete things.
In general I feel the message of the new HTML over the wire solutions is heavily weighted toward developer experience and doesn’t make the user experience considerations and implications clear. Given the unanswered questions around accessibility, I worry firstly that they’re not mature in their understanding and approach on that front, and secondly that their framing of the benefits makes it likely accessibility will be ignored, with engineers assuming they can rely entirely on the library.
I’d be really pleased if my concerns could be allayed because in general I like the approach.
Update 30/1/22
I decided to revisit a book I read back in 2007 – Jeremy Keith’s Bulletproof Ajax. I had forgotten this, but it actually contains a section titled “Ajax and accessibility”. It acknowledged that reconciling the two is challenging and despite listing ideas for mitigating issues, it admitted that the situation was not great. However since 2007 – specifically since around 2014 – WAI-ARIA has been a completed W3C recommendation and provides a means of making web pages more accessible, particularly when dealing with dynamic content.
I don’t often have cause to use more than a few go-to ARIA attributes, however here’s my understanding of how you might approach making Ajax-driven content changes accessible by using ARIA.
To do: write this section.
Sites which don’t work without JavaScript enabled still benefit from progressive enhancement
At work, our team just had an interesting realisation about a recent conversation. We had been discussing progressive enhancement for custom toggles, and a colleague mentioned that the web app in question breaks at a fundamental level if the user has disabled JavaScript, displaying a message telling them to change their settings in order to continue. He used this to suggest that any efforts to provide a no-JavaScript experience would be pointless. This fairly absolute (and on the surface, sensible) statement caught me off guard and sent me and the others down a blind alley.
I remember replying “yes, but even still we should try to improve the code by introducing good practices” and that feeling a little box-ticky.
However in retrospect I realise that we had temporarily made the mistake of conflating “JavaScript enabled” with “JavaScript available” – which are separate possibilities.
When considering resilience around JavaScript, we can consider the “factors required for JavaScript to work” as layers:
1. is JavaScript enabled in the user’s browser?
2. is the JavaScript getting through firewalls? (it recently didn’t for one of our customers on the NHS’s network)
3. has the JavaScript finished loading?
4. does the user’s browser support the JavaScript features the developers have used, i.e. does the browser “cut the mustard”? (see the sketch after this list)
5. is the JavaScript error-free? It’s easy for some malformed JSON to creep in and break it…
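For layer 4, a “cuts the mustard” check is a small feature test that gates the enhancement – something like this sketch (the feature list and entry point are illustrative):

```js
// Only run the enhancement if the browser supports the features it uses
if ('fetch' in window && 'IntersectionObserver' in window) {
  initEnhancements(); // hypothetical entry point for the enhanced experience
}
```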
The point my colleague made relates to layer 1 only. And that layer – JavaScript being disabled by the user – is actually the least likely reason for a JavaScript-dependent feature not working.
So it’s really important to remember that when we build things with progressive enhancement we are not just addressing layer 1, but layers 2 to 5 too (as well as other layers I’ve probably forgotten!).
How we think about browsers, on GitHub’s blog
Keith Cirkel of GitHub has written about how they think about browsers, and it’s interesting. In summary, GitHub achieves:
- improved performance;
- exploiting new native technologies; and
- universal user access/inclusion
…via a progressive enhancement strategy that ensures a basic experience for all but delivers an enhanced experience to most. Their tooling gets a bit deep/exotic in places but I think the basic premise is:
- decide on what our basic experience is, then use native HTML combined with a bare minimum of other stuff to help old browsers deliver that; and
- exploit new JS features in our enhanced experience (the one most people will get) to make it super lean and fast
Pretty cool.
Does the HTML details element solve progressively-enhanced disclosures?
The HTML details element continues to gain fans and get developers’ juices flowing. Scott Jehl recently tweeted:
I love the details/summary HTML elements. So versatile. My favorite part is being able to show a collapsed state from the start without worrying about potential operability issues if JavaScript fails to run (since its behavior doesn't need it).
Scott goes on to describe how creating disclosure widgets (controls that hide and show stuff) with resilience in mind is so much more difficult when not using <details>, since it can require complex progressive enhancement techniques. At the very least these involve making content available by default in case JavaScript fails, then hiding it when the disclosure widget script loads successfully, ideally without a jarring flash of content in between.
Like Scott says, the <details> element is different because you can have the content collapsed (hidden) by default without worrying about JavaScript and workarounds, since the hidden content can be toggled open natively. That’s a real superpower… and it also makes you wonder: how many different places and different ways might we use this super-element?
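For reference, the entire baseline pattern is just this – no script required:

```html
<details>
  <summary>Show more</summary>
  <p>This content is collapsed by default, and the browser toggles
     it open and closed natively when the summary is activated.</p>
</details>
```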
GitHub’s use of details
Back in 2019, GitHub caused a flutter by going all-in on <details> to make various interesting UI elements interactive without JS. Muan has co-created a number of components for GitHub where <details> is used to, for example, open menus. They also shared notes from a talk on this subject. And Chris Coyier, for one, was impressed and intrigued.
Zach Leatherman’s details-utils
I’ve previously noted Zach Leatherman’s details-utils – a great example of using a web component to enhance an existing HTML element, in this case <details>. The enhancements include:
- animated open/close
- a quantum aspect ideal for responsive design – closed by default on narrow screens, open by default on wide
- and more
And Zach has already used it on the navigation menus on jamstack.org and netlify.com, amongst other use cases.
Notes of caution
- The details element and in-page search by Manuel Matuzovic
- A details element as a burger menu is not accessible on Cloud Four’s blog
- The details and summary elements, again by Scott O’Hara
- Disclosure widgets by Adrian Roselli
- Details and summary are not…
- Details content showing up in find (Ctrl+F)
Alternative approaches
Using a custom disclosure widget put together with JavaScript and ARIA is not the end of the world. In fact I recently tried my hand at a disclosure widget web component and early impressions are that the combination of fast, async ES modules plus native DOM discovery (which you get with web components) might alleviate the “flicker of content” issue I mentioned at the start.
Summing up
I’d been cautious about using details for more than cases matching its intended usage, but had started thinking the time was right to take it further – possibly using Zach’s web component. However, based on the findings shared in the Notes of caution section above, I’ve decided to stay safe and keep the user experience predictable and accessible. The behaviour when the user does an in-page search, and the current browser inconsistencies in announcing the summary as an expand/collapse button, tell me that a custom JS-and-ARIA disclosure widget is better for the custom UI cases.
Web components as progressive enhancement, by Cloud Four
By enhancing native HTML instead of replacing it, we can provide a solid baseline experience, and add progressive enhancement as the cherry on top.
Great article by Paul Herbert of Oregon’s Cloud Four. Using a web component to enhance an existing HTML element such as <textarea> (rather than always creating a custom element from scratch) feels very lean, resilient and maintainable.
Off the top of my head I could see this being a nice approach for other custom form controls.
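As a sketch of the general pattern (the element name and enhancement here are illustrative, not Paul’s exact example):

```html
<char-count>
  <textarea name="message"></textarea>
</char-count>

<script type="module">
  // The textarea works with or without this script; the wrapper
  // only adds behaviour once the component is defined
  customElements.define('char-count', class extends HTMLElement {
    connectedCallback() {
      const textarea = this.querySelector('textarea');
      const output = this.appendChild(document.createElement('p'));
      const update = () => {
        output.textContent = `${textarea.value.length} characters`;
      };
      textarea.addEventListener('input', update);
      update();
    }
  });
</script>
```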
Zach Leatherman also took this approach with the <details> element in quite exciting ways and is using it in production on Netlify’s marketing websites. I’m a bit cautious of jumping on that one just yet, though, because it plays more fast and loose with the intended purpose of the element and in doing so might present some accessibility issues. Still really interesting, though.
Enhance! by Jeremy Keith—An Event Apart video (on Vimeo)
A classic talk by Jeremy Keith on progressive enhancement and the nature of the web and its technologies.
Collapsible sections, on Inclusive Components
It’s a few years old now, but this tutorial from Heydon Pickering on how to create an accessible, progressively enhanced user interface comprised of multiple collapsible and expandable sections is fantastic. It covers using the appropriate HTML elements (buttons) and ARIA attributes, how best to handle icons (minimal inline SVG), turning it into a web component and plenty more besides.
Progressively enhanced burger menu tutorial by Andy Bell
Here’s a smart and comprehensive tutorial from Andy Bell on how to create a progressively enhanced narrow-screen navigation solution using a custom element. Andy also uses Proxy for “enabled” and “open” state management, ResizeObserver on the custom element’s containing header for a container query-like solution, and puts some serious effort into accessible focus management.
One thing I found really interesting was that Andy was able to style child elements of the custom element (as opposed to just elements present in the original, unenhanced markup) from his global CSS. My understanding is that you can’t get styles other than inheritable properties through the shadow boundary, so this had me scratching my head. I think the explanation is that Andy is not attaching the elements he creates in JavaScript to the shadow DOM, but rather rewriting and re-rendering the element’s innerHTML. This is an interesting approach and solution for getting around web component styling issues. I see elsewhere online that the innerHTML-based approach is frowned upon, however Andy doesn’t throw out the original markup but instead augments it.
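My loose approximation of that technique (not Andy’s actual code):

```js
// No shadow root is attached, so global CSS can style everything the
// component renders, including the elements added here
class BurgerMenu extends HTMLElement {
  connectedCallback() {
    // Augment the server-rendered markup rather than discarding it
    this.innerHTML = `
      <button class="burger-menu__trigger" type="button">Menu</button>
      ${this.innerHTML}
    `;
  }
}

customElements.define('burger-menu', BurgerMenu);
```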
Adapting Stimulus usage for better Progressive Enhancement
A while back, Jake Archibald tweeted:
Don't render buttons on the server that require JS to work.
The idea is that user interface elements which depend on JavaScript (such as buttons) should be rendered on the client-side, i.e. with JavaScript.
In the context of a progressive enhancement mindset, this makes perfect sense. Our minimum viable experience should work without JavaScript, given the fragility of a JavaScript-dependent approach, so it should not include script-triggering buttons which might not work. The JavaScript which applies the enhancements should not only listen for and act upon button events, but should also be responsible for actually rendering the button.
This is how I used to build JavaScript interactions as standard, however sadly due to time constraints and framework conventions I don’t always follow this best practice on all projects.
At work, we use Stimulus. Stimulus has a pretty appealing philosophy:
Stimulus is designed to enhance static or server-rendered HTML—the “HTML you already have”
However in their examples they always render buttons on the server; they always assume the JavaScript-powered experience is the baseline experience. I’ve been pondering whether that could easily be adapted toward better progressive enhancement and it seems it can.
My hunch was that I should use the connect() lifecycle method to render a button into the component (and introduce any other script-dependent markup adjustments) at the earliest opportunity. I wasn’t sure whether creating new DOM elements at this point and fitting them with Stimulus-related attributes such as action and target would make them available via the standard Stimulus APIs like server-rendered elements, but I was keen to try. I started by checking if anyone was doing anything similar and found a thread where Stimulus contributor Javan suggested that DIY target creation is fine.
I then gave that a try and it worked! Check out my pen Stimulus with true progressive enhancement. It’s a pretty trivial example for now, but proves the concept.
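The gist of the pen is along these lines (simplified; “expander” is an illustrative controller identifier):

```js
import { Controller } from "stimulus"

export default class extends Controller {
  connect() {
    // Render the JS-dependent button only once we know the JS is running
    this.element.insertAdjacentHTML(
      "beforeend",
      `<button type="button" data-action="click->expander#toggle">
         Show more
       </button>`
    )
  }

  toggle() {
    // the enhancement itself
  }
}
```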
Container Queries in Web Components | Max Böck
Max’s demo is really clever and features lots of interesting web component related techniques.
I came up with this demo of a book store. Each of the books is draggable and can be moved to one of three sections, with varying available space. Depending on where it is placed, different styles will be applied to the book.
Some of the techniques I found interesting (condensed into a sketch after this list) included:
- starting with basic HTML for each book and its image, title, and author elements rather than an empty custom element, thereby providing a resilient baseline
- wrapping each book in a custom book-element tag (which the browser would simply treat like a div in the worst-case scenario)
- applying the slot attribute to each of the nested elements, for example slot="title"
- including a template with id="book-element" at the top of the HTML. This centralises the optimal book markup, which makes for quicker, easier and less disruptive maintenance. (A template is parsed but not rendered by the browser; it is available solely to be referenced and used by JavaScript)
- including slots within the template, such as <slot name="title">
- putting a style block within the template. These styles target the book component only, and include container query-driven responsiveness
- targeting the <book-element> wrapper element in CSS via the :host selector, and applying contain to set it as a container query context
- targeting a slot in the component CSS using (for example) ::slotted(img)
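Pieced together from that list, the skeleton looks roughly like this (a sketch; I’ve used today’s container-type syntax where Max’s demo used the earlier contain values):

```html
<template id="book-element">
  <style>
    :host { display: block; container-type: inline-size; }
    ::slotted(img) { width: 100%; }
    @container (min-width: 380px) {
      /* styles for books placed in roomier sections */
    }
  </style>
  <slot name="img"></slot>
  <slot name="title"></slot>
  <slot name="author"></slot>
</template>

<book-element>
  <img slot="img" src="cover.jpg" alt="">
  <span slot="title">Book title</span>
  <span slot="author">Author name</span>
</book-element>

<script>
  customElements.define('book-element', class extends HTMLElement {
    connectedCallback() {
      const template = document.getElementById('book-element');
      this.attachShadow({ mode: 'open' })
          .append(template.content.cloneNode(true));
    }
  });
</script>
```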
Thoughts
Firstly, in the basic HTML/CSS, I might ensure images are display: block and use div rather than span for a better baseline appearance should JavaScript fail.
Secondly, even though this tutorial is really nice, I still find myself asking: why use a web component to render a book rather than a server-side solution, when the latter removes the JS dependency? Part of the reason is no doubt developer convenience – people want to build component libraries in JavaScript if that’s their language of choice. It also requires less backend set-up and leads to a more portable stack. And backend tools for component-based architectures are generally less mature and feature-rich than those for the front end.
One web component-specific benefit is that shadow DOM provides an encapsulation mechanism for styles, scripts and HTML markup. This encapsulation provides a private scope that both prevents the content of the component from being affected by the external document, and keeps its CSS and JS from leaking out… which might be nice for avoiding the namespacing you’d otherwise have to do.
I have a feeling that Web Components might make sense for some components but be neither appropriate nor required for others. Therefore just because you use Web Components doesn’t mean that you suddenly need to feel the need to write or refactor every component that way. It’s worth bearing in mind that client-side JavaScript based functionality comes with a performance cost—the user needs to wait for it to download. So I feel there might be a need to exercise some restraint. I want to think about this a little more.
Ruthlessly eliminating layout shift on netlify.com, by Zach Leatherman
I love hearing about clever front-end solutions which combine technologies and achieve multiple goals. In Zach’s post we hear how Netlify’s website suffered from layout shift when conditionally rendering dismissible promo banners, and how he addressed this by rethinking the problem and shifting responsibilities around the stack.
Here’s my summary of the smart ideas covered in the post (sketched in code after the list):
- decide on the appropriate server-rendered content… in this case showing rather than hiding the banner, making the most common use case faster to load
- have the banner “dismiss” button’s event handling script store the banner’s href in the user’s localStorage as an identifier accessible on return visits
- process lightweight but critical JavaScript logic early in the <head>… in this case a check for this banner’s identifier existing in localStorage
- under certain conditions – in this case when the banner was previously seen and dismissed – set a “state” class (banner--hide) on the <html> element, leading to the component being hidden seamlessly by CSS
- build the banner as a web component, the first layer of which is a custom element <announcement-banner> and the second a JavaScript class to enhance it
- delegate responsibility for presenting the banner’s “dismiss” button to the same script responsible for the component’s enhancements, meaning that a broken button won’t be presented if that script were to break.
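Stitched together, the critical-path part of that amounts to something like this (my sketch with an illustrative storage key, not Zach’s exact code):

```html
<!-- Early in the <head>, before anything renders -->
<script>
  if (localStorage.getItem('dismissed-banner-href')) {
    document.documentElement.classList.add('banner--hide');
  }
</script>
<style>
  .banner--hide announcement-banner {
    display: none;
  }
</style>
```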
So much to like in there!
Here are some further thoughts the article provoked.
Web components FTW
It feels like creating a component such as this one as a web component leads to a real convergence of benefits:
- tool-free, async loading of the component JS as an ES module
- fast, native element discovery (no need for a document.querySelector)
- enforces using a nice, idiomatic class providing encapsulation and high-performing native callbacks
- resilience and progressive enhancement by putting all your JS-dependent stuff into the JS class and having that enhance your basic custom element. If that JS breaks, you still have the basic element and won’t present any broken elements.
Even better, you end up with a framework-independent, standards-based component that you could share with others for reuse elsewhere, just like Zach did.
Multiple banners
I could see there being a case where there are multiple banners during the same time period. I guess in that situation the localStorage banner value could be a stringified object rather than a simple, single-URL string.
Setting context on the root
It’s really handy to have a way to exert just-in-time control over the display of a server-rendered element in a way that avoids flashes of content… and adding a class to the <html> element offers that. In this approach, we run the small amount of JavaScript required to test a local condition (e.g. checking for a value in localStorage) really early. That lets us process our conditional logic before the element is rendered… although it also means the element is not yet available in the DOM for direct manipulation. But adding a class to the <html> element means that we can pre-prepare CSS to use that class as a contextual selector for hiding the element.
We’re already familiar with the technique of placing classes on the root element from libraries like Modernizr and from some font-loading approaches, but this article serves as a reminder that we can employ it whenever we need it.
Handling the close button
Zach’s approach to handling the banner’s dismiss button was interesting. He makes sure that it’s not shown unless the web component’s JavaScript runs successfully, which is great, but rather than inject it with JavaScript he includes it in the initial HTML, hidden with CSS – and his method of hiding is opacity.
We use opacity to toggle the close button so that it doesn’t reflow the component when it’s enabled via JavaScript.
I think what Zach’s saying is that the alternatives – inserting the button with JS, or toggling the hidden attribute or its CSS counterpart display: none – would affect geometry, causing the browser to perform layout… whereas modifying opacity does not.
I love that level of diligence! Typically I prefer to delegate responsibility for inserting JS-dependent buttons to JavaScript because in comparison to including a button in the server-rendered HTML then hiding it, it feels more resilient and a more maintainable separation of concerns. However as always the best solution depends on the situation.
If I were going down Zach’s route I think I’d replace opacity with visibility, since the latter hiding method also removes the hidden element from the accessibility tree – which feels more accessible – while still avoiding the reflow that display would trigger.
Side-thoughts
In a server-side scripted application – one using Rails or PHP, for example – you could alternatively handle persisting state with cookies rather than localStorage… allowing you to test for the presence of the cookie on the server then handle conditional rendering of the banner on the server too, rather than needing classes which trigger hiding. I can see an argument for that. Thing is though, not everyone’s working in that environment. Zach has provided a standalone solution.
References
- Zach’s Herald of the dog web component
- CSS Triggers of reflow and repaint
- Minimising layout thrashing
One web component to rule them all? (on Filament Group, Inc.)
Scott Jehl has taken a refreshingly Progressive Enhancement-centric look at Web Components.
this pattern provides a nice hook for adding progressive enhancements to already-meaningful HTML contained in these custom elements, leaving them resilient in the case of script loading failures and allowing the page to start rendering before the JS happens to run.
He goes further and creates a factory for creating Web Components, which allows adding many small behavioural script enhancements – ones that may or may not relate to each other – to a single element.
There are also some great notes on polyfilling and the performance upgrade provided by lifecycle callbacks.
And Scott’s wc-experiments repo contains some interesting demos.
Building a resilient frontend using progressive enhancement (on GOV.UK)
GOV.UK’s guidance on developing using progressive enhancement is pretty great in all departments. It begins with this solid advice:
you should start by making your page work with just HTML, before adding anything else like Cascading Style Sheets (CSS) and JavaScript. This is because HTML is the most resilient layer. If the HTML fails there’s no web page. Should the CSS or JavaScript fail, the HTML will still render correctly.
I particularly like the section where they address the misconception that a resilient baseline is only required in places where the user has explicitly disabled JavaScript and therefore not worth worrying about.
You should not assume the reason for designing a service that works without CSS or JavaScript is because a user chooses to switch these off. There are many situations when extra layers can fail to load or are filtered.
As their subsequent list of scenarios illustrates, a user turning JavaScript off is probably the least likely of a range of reasons why extra layers on top of HTML can fail.
Relatedly, I’ve often found that Everyone has JavaScript, right? serves as a great go-to reference for these sorts of conversations around resilience.
Progressively enhanced JavaScript In Real Life
Over the last couple of days I’ve witnessed a good example of progressive enhancement “In Real Life”. And I think it’s good to log and share these validations of web development best practices when they happen so that their benefits can be seen as real rather than theoretical.
A few days ago I noticed that the search function on my website wasn’t working optimally. As usual, I’d click the navigation link “Search” then some JavaScript would reveal a search input and set keyboard focus to it, prompting me to enter a search term. Normally, the JavaScript would then “look ahead” as I type characters, searching the website for matching content and presenting (directly underneath) a list of search result links to choose from.
The problem was that although the search input was appearing, the search result suggestions were no longer appearing as I typed.
Fortunately, back when I built the feature I had just read Phil Hawksworth’s Adding Search to a Jamstack site, which begins by creating a non-JavaScript baseline using a standard form that submits to Google Search (scoped to your website), passing as the search query the search term you just typed. This is how I built mine, too.
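That baseline boils down to a regular GET form pointed at Google – something like this sketch (the site-scoping parameter shown is one of a few options; Phil’s article has the real details):

```html
<form action="https://www.google.com/search" method="get">
  <label for="search-term">Search this site</label>
  <input type="search" id="search-term" name="q">
  <!-- Scope Google's results to this site -->
  <input type="hidden" name="as_sitesearch" value="fuzzylogic.me">
  <button>Search</button>
</form>
```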
So, just yesterday at work I was reviewing a PR which prompted me to search for a specific article on my website using the term “aria-label”. And although the enhanced search wasn’t working, the baseline search functionality was there to deliver me to a Google search results page (site:https://fuzzylogic.me/ aria-label) with the exact article I needed appearing top of the results. Not a Rolls-Royce experience, but perfectly serviceable!
Why had the enhanced search solution failed? It was because the .json file which serves as the data source for the lookahead search had at some point let in a weird character and become malformed. And although the site’s JS was otherwise fine, this malformed data file was preventing the enhanced search from working.
JavaScript is brittle and fails for many reasons and in many ways, making it different from the rest of the stack. Added to that there’s the “unavailable until loaded” aspect, or as Jake Archibald put it:
all your users are non-JS while they’re downloading your JS.
The best practices that we as web developers have built up for years are not just theoretical. Go watch a screen reader user browse the web if you want proof that providing descriptive link text rather than “click here”, employing headings and good document structure, or describing images properly with alt attributes are worthwhile endeavours. Those users depend on those good practices.
Likewise, JavaScript will fail to be available on occasion, so building a baseline no-JS solution ensures that when it does, the show still goes on.
Browser Support Heuristics
In web development it’s useful when we can say “if the browser supports X, then we know it also supports Y”.
There was a small lightbulb moment at work earlier this year when we worked out that:
if the user’s browser supports CSS Grid, then you know it also supports custom properties.
Knowing this means that if you wrap some CSS in an @supports (display: grid) block, then you can also safely use custom properties within that block.
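So you can safely write, for instance:

```css
@supports (display: grid) {
  .layout {
    display: grid;
    /* safe: any browser that got in here also understands custom properties */
    gap: var(--gutter, 1rem);
  }
}
```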
I love this rule of thumb! It saves you looking up caniuse.com for each feature and comparing the browser support.
This weekend I did some unplanned rabbit-holing on the current state of (and best practices for using) ES modules in the browser, as-is and untranspiled. That revealed another interesting rule of thumb:
any browser that supports <script type="module"> also supports let and const, async/await, the spread operator, etc.
One implication of this is that if you currently build a large JavaScript bundle (due to being transpiled down to ES 3/5 and including lots of polyfills) and ship this to all browsers including the modern ones… you could instead improve performance for the majority of your visitors by configuring your bundler to generate two bundles from your code then doing:
```html
<!-- only one of these will be used -->
<script type="module" src="lean-and-modern.js"></script>
<script nomodule src="bulky-alternative-for-old-browsers.js"></script>
```
I might make a little page or microsite for these rules of thumb. They’re pretty handy!
Three CSS Alternatives to JavaScript Navigation (on CSS-Tricks)
In general this is a decent article on non-JavaScript-based mobile navigation options, but what I found most interesting is the idea of having a separate page for your navigation menu (at the URL /menu, for example).
Who said navigation has to be in the header of every page? If your front end is extremely lightweight or if you have a long list of menu items to display in your navigation, the most practical method might be to create a separate page to list them all.
I also noted that the article describes a method where you can “spoof” a slide-in hamburger menu without JS by using the checkbox hack. I once coded a similar HTML-and-CSS-only hamburger menu, but opted instead to use the :target pseudo-class in combination with the adjacent sibling selector, as described by Chris Coyier back in 2012.
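The gist of the :target approach, as a bare-bones sketch:

```html
<a href="#menu">Menu</a>

<nav id="menu">
  <!-- menu links, plus a close link pointing at another fragment -->
</nav>

<style>
  #menu { display: none; }
  /* Following the "Menu" link updates the URL fragment, so this matches */
  #menu:target { display: block; }
</style>
```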
Striking a Balance Between Native and Custom Select Elements (on CSS-Tricks)
We’re not going to try to replicate everything that the browser does by default with a native select element. We’re going to literally use a select element when any assistive tech is used. But when a mouse is being used, we’ll show the styled version and make it function as a select element.
This custom-styled select solution satisfies those who insist on a custom component but retains all the built-in accessibility we get from native form controls. I also really like the use of a @media (hover: hover) media query to detect an environment with hover (such as a computer with a mouse rather than a mobile browser on a handheld device).
We’ve ruined the Web. Here’s how we fix it. (This is HCD podcast)
During the COVID situation, people have an urgent need to access critical information online. But in 2020, the average webpage is rammed full of large JavaScript files, huge images etc, and as a result is slow to load. This problem is likely to be most keenly felt by those who don’t have the luxury of fast internet – potentially the same people who need access to that critical information the most.
Here’s a brilliant discussion between Gerry McGovern and Jeremy Keith on that problem, suggesting tactics to help fix things such as performance budgets, introducing tactics at the design stage to mimic slow connections and other access constraints, optimising for return visits, progressive enhancement and more.
Loved this!
(via @adactio)
In the same vein as Jeremy Keith’s recent blog post, Hydration, which calls out some of the performance and user experience problems associated with current Server Side Rendering approaches, I think Jake Archibald is absolutely bang on the money here.
Hydration (Adactio: Journal)
The situation we have now is the worst of both worlds: server-side rendering followed by a tsunami of hydration. It has a whiff of progressive enhancement to it (because there’s a cosmetic separation of concerns) but it has none of the user benefits.
Jeremy Keith notes that these days JavaScript frameworks like React can be used in different ways: not solely for creating an SPA or for complex client-side state management, but perhaps for JavaScript that is run on the server. A developer might choose React because they like the way it encourages modularity and componentisation. This could be a good thing if frameworks like Gatsby and Next.js were to use progressive enhancement properly.
In reality, the system of server-side rendering of non-interactive HTML that is reliant on a further payload of JavaScript for hydration leads to an initial loading experience that is “jagged and frustrating”.
Jeremy argues that this represents a worst-of-both-worlds situation and that its alleged “progressive enhancement via improved separation of concerns” is missing the point.
Hope is on the horizon for React in the form of partial hydration. I sincerely hope that it will become the default way of balancing server-side rendering with just-in-time client-side interaction.
(via @adactio)
Building an accessible show/hide disclosure component with vanilla JS (Go Make Things)
A disclosure component is the formal name for the pattern where you click a button to reveal or hide content. This includes things like a “show more/show less” interaction for some descriptive text below a YouTube video, or a hamburger menu that reveals and hides when you click it.
Chris’s article provides an accessible means of showing and hiding content at the press of a button for when the native HTML details element isn’t suitable.
Progressively Enhanced JavaScript with Stimulus
I’m dipping my toes into Stimulus, the JavaScript micro-framework from Basecamp. Here are my initial thoughts.
I immediately like the ethos of Stimulus.
The creators’ take is that in many cases, using one of the popular contemporary JavaScript frameworks is overkill.
We don’t always need a nuclear solution that:
- takes over our whole front end;
- renders entire, otherwise empty pages from JSON data;
- manages state in JavaScript objects or Redux; or
- requires a proprietary templating language.
Instead, Stimulus suggests a more “modest” solution – using an existing server-rendered HTML document as its basis (either from the initial HTTP response or from an Ajax call), and then progressively enhancing.
It advocates readable markup – being able to read a fragment of HTML which includes sprinkles of Stimulus and easily understand what’s going on.
And interestingly, Stimulus proposes storing state in the HTML/DOM.
How it works
Stimulus’ technical purpose is to automatically connect DOM elements to JavaScript objects which are implemented via ES6 classes. The connection is made by data- attributes (rather than id or class attributes).
data-controller values connect and disconnect Stimulus controllers.
The key elements are:
- Controllers
- Actions (essentially event handlers) which trigger controller methods
- Targets (elements which we want to read or write to, mapped to controller properties)
Some nice touches
I like the way you can use the connect() method – a lifecycle callback invoked whenever a given controller is connected to the DOM – as a place to test browser support for a given feature before applying a JS-based enhancement.
Stimulus also readily supports the ability to have multiple instances of a controller on the page.
Furthermore, actions and targets can be added to any type of element without the controller JavaScript needing to know or care about the specific element, promoting loose coupling between HTML and JavaScript.
Managing State in Stimulus
Initial state can be read in from our DOM element via a data- attribute, e.g. data-slideshow-index.
Then in our controller object we have access to a this.data API with has(), get() and set() methods. We can use those methods to set new values back into our DOM attribute, so that state lives entirely in the DOM without the need for a JavaScript state object.
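For the slideshow example, that might look like this (a sketch assuming a controller with the identifier “slideshow”, using the this.data API described above):

```js
import { Controller } from "stimulus"

export default class extends Controller {
  next() {
    // Read state from data-slideshow-index and write the new value
    // straight back to the attribute: the DOM is the source of truth
    const index = parseInt(this.data.get("index"), 10) || 0
    this.data.set("index", index + 1)
  }
}
```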
Possible Limitations
Stimulus feels a little restrictive if dealing with less simple elements – say, for example, a data table with lots of rows and columns, each differing in multiple ways.
And if, as in our data table example, that element has lots of child elements, it feels like there might be more of a performance hit to update each one individually rather than replacing the contents with new innerHTML in one fell swoop.
Summing Up
I love Stimulus’s modest, progressive enhancement-friendly approach. I can see myself adopting it as a means of writing modern, modular JavaScript which fits well in a webpack context, in situations where the interactive elements are relatively simple and not composed of complex, multidimensional data.
Releasing early, releasing often… and avoiding paralysis by analysis
My name is Laurence Hughes and I’m a perfectionist. But I’m working on it.
I’ve suffered from the painful affliction of perfectionism for as long as I can remember (I blame the mother…) and although I’m much better than I used to be, I don’t think it ever fully leaves you.
Good things undoubtedly come from sweating the details, but a perfectionist streak left unchecked can cause creative endeavours to take longer than they otherwise would, sucking out the initial spark and motivation.
A related affliction – and this particularly applies to coding – is spending so long considering the many different ways to tackle a problem that you end up at a temporary standstill… i.e. paralysis by analysis.
At work, developing software for Greenhill, the perfectionist streak is kept in check because we follow the release early, release often philosophy and our development sprints just don’t allow time for it.
Release early, release often works well when we’re building applications for clients because it leads to a tighter feedback loop between the customers, the client and ourselves; and also leads to new features being released more regularly.
On this, my own website, I’m going to take that principle further and hopefully release even earlier and more often.
So – things may look rough and ready or even broken (temporarily) when you visit this site now or in the future. In this case I’m OK with that. This isn’t a client’s site – it’s mine. The Minimum Viable Product (or baseline before progressive enhancement) is that visitors can read and navigate the content. Right now, at re-launch, it’s as simple as that. There’s no stylesheet as yet. It was more important for me to get the ball rolling than anything else. Hopefully as you read this, if the site is in a state of flux (read: looks like a riot) you’ll still be able to enjoy the content! I have plenty of nice features in the pipeline, but they’ll happen bit by bit, one step at a time.
This site is a work in progress – a bit like myself!