Lightning Fast Web Performance course by Scott Jehl
I purchased Scott’s course back in 2021 and immediately liked it, but hadn’t found the space to complete it until now. Anyway, I’m glad I have as it’s well structured and full of insights and practical tips. I’ll use this post to summarise my main takeaways.
Having completed the course I have much more rounded knowledge in the following areas:
- Why performance matters to users and business
- Performance-related metrics and “moments”: defining fast and slow
- Identifying performance problems: the tools and how to use them
- Making things faster, via various good practices and fixes
I’ll update this post soon to add some key bullet-points for each of the above headings.
How would you build Wordle with just HTML & CSS? by Scott Jehl
Scott proposes an interview question relating to web standards and intelligent use of JavaScript.
How would you attempt to build Wordle (...or some other complex app) if you could only use HTML and CSS? Which features of that app would make more sense to build with JavaScript than with other technologies? And, can you imagine a change or addition to the HTML or CSS standard that could make any of those features more straight-forward to build?
Discussing any approaches to this challenge will reveal the candidate's broad knowledge of web standards–including new and emerging HTML and CSS features–and as a huge benefit, it would help select for the type of folks who are best suited to lead us out of the JavaScript over-reliance problems that are holding back the web today.
I hate interviews (and the mere thought of interviews), but I could handle being asked a question like this.
Use the dialog element (reasonably), by Scott O’Hara
Here’s an important update on native modal dialogues. TL;DR – it’s now OK to use `dialog`.
Last year I posted that Safari now supported the HTML `dialog` element, meaning that we were within touching distance of being able to adopt it with confidence. My caveat was:
However first I think we’d need to make an informed decision regarding our satisfaction with support, based on the updated advice in Scott O’Hara’s article Having an Open Dialog.
(Accessibility expert Scott O’Hara has been diligently testing the `dialog` element for years.)
However the happy day has arrived. The other day Scott posted Use the dialog element (reasonably). It includes this advice:
I personally think it’s time to move away from using custom dialogs, and to use the `dialog` element instead.
That’s an important green-light!
And this of course means that we can stop DIYing modal dialogues from `div`s plus super-complicated scripting and custom ARIA, and instead let a native HTML element do most of the heavy lifting for us.
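To make that concrete, here’s a minimal sketch of the native approach (the IDs and copy are mine, for illustration):

```html
<button type="button" id="open-settings">Open settings</button>

<dialog id="settings-dialog">
  <h2>Settings</h2>
  <button type="button" id="close-settings">Close</button>
</dialog>

<script>
  const dialog = document.querySelector('#settings-dialog');

  document.querySelector('#open-settings').addEventListener('click', () => {
    // showModal() gives us focus trapping, Escape-to-close and a
    // stylable ::backdrop, with no custom ARIA or scripting required
    dialog.showModal();
  });

  document.querySelector('#close-settings').addEventListener('click', () => {
    dialog.close();
  });
</script>
```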
From a Design System perspective, I’d previously suggested to my team that when we revisit our Modal component we should err toward a good custom dialogue library first, however I’m now likely to recommend we go for native `dialog` instead. Which is great!
Web Components Guide
This new resource on Web Components from Keith Cirkel and Kristján Oddsson of GitHub (and friends) is looking great so far.
As I recently tweeted, I love that it’s demoing “vanilla” Web Components first rather than using a library for the demos.
So far I’ve found that the various web component development frameworks (Lit etc) are cool, but generally I’d like to see more demos that create components without using abstractions. The frameworks include dependencies, opinions and proprietary syntax that (for me at least) make an already tricky learning curve more steep so they’re not (yet) my preferred approach. Right now I want to properly understand what’s going on at the web standards level.
Aside from tutorials this guide also includes a great Learn section which digs into JavaScript topics such as Classes and Events and why these are important for Web Components.
I hope that in future the guide will cover testing Web Components too.
Lastly, I like the Embed Mastodon Toot web component tutorial, and to help it sink in (and save the code for posterity) I’ve chucked the code into a pen.
Lean “plugin subscription form” by Chris Ferdinandi
I enjoyed this two-part tutorial from Chris arising from his critique of a subscription form plugin which includes the entire React library to achieve what could be done with a lightweight HTML and vanilla JavaScript solution. Chris advocates a progressively-enhanced approach. Instead of rendering the form with JavaScript he renders it in HTML and argues that not only is there no need for the former approach – because forms natively work without JavaScript – but also it only introduces fragility where we could provide resilience.
Part one: Using a wrecking ball when a hammer would suffice
Part two: More HTML, less JavaScript
I also recreated the front-end parts in a codepen to help this sink in.
Lastly, as someone who always favours resilience over fragility I wanted to take a moment to consider the part that Chris didn’t cover – why JavaScript is required at all. I guess it’s because, being a plugin, this is intended for portability and you as author have decided you want the error and success messaging to be “white-labelled” on the consuming website rather than for the user to be taken to a separate website. You don’t know the context’s stack so to provide a universally-supported solution you use JavaScript to handle the messaging, which then means you need to use JS to orchestrate everything else – preventing default submission, validating, fetch-based submission and API response handling.
Full disclosure
Whether I’m thinking about inclusive hiding, hamburger menus or web components one UI pattern I keep revisiting is the disclosure widget. Perhaps it’s because you can use this small pattern to bring together so many other wider aspects of good web development. So for future reference, here’s a braindump of my knowledge and resources on the subject.
A disclosure widget is for collapsing and expanding something. You might alternatively describe that as hiding and showing something. The reason we collapse content is to save space. The thinking goes that users have a finite amount of screen real estate (and attention) so we might want to reduce the space taken up by secondary content, or finer details, or repeated content so as to push the page’s key messages to the fore and save the user some scrolling. With a disclosure widget we collapse detailed content into a smaller snippet that acts as a button the user can activate to expand the full details (and collapse them again).
Adrian Roselli’s article Disclosure Widgets is a great primer on the available native and custom ARIA options, how to implement them and where each might be appropriate. Adrian’s article helpfully offers that a disclosure widget (the custom ARIA flavour) can be used as a base in order to achieve some other common UI requirements so long as you’re aware there are extra considerations and handle those carefully. Examples include:
- link and disclosure widget navigation
- table with expando rows
- accordion
- hamburger navigation
- highly custom `select` alternatives when `listbox` is inappropriate because it needs to include items that do not have the `option` role
- a toggle-tip
Something Adrian addresses (and I’ve previously written about) is the question of which collapse/expand use cases can safely use the native `details` element. There’s a lot to mention but since I’d prefer to present a simple heuristic let’s go meta here and use a `details`:

Use `details` for basic narrative content and panels but otherwise use a DIY disclosure
It’s either a bad idea or at the very least “challenging” to use a native `details` for:
- a hamburger menu
- an accordion
In styling terms it’s tricky to use a `details` for:
- a custom appearance
- animation
The above styling issues are perhaps not insurmountable. It depends on what level of customisation you need.
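For reference, the basic native pattern needs no JavaScript at all – this is the sort of narrative-content case the heuristic above points at:

```html
<details>
  <summary>Delivery information</summary>
  <p>Orders are usually dispatched within two working days.</p>
</details>

<!-- add the `open` attribute to render a panel expanded by default -->
<details open>
  <summary>Returns</summary>
  <p>Items can be returned within 30 days.</p>
</details>
```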
Note to self: add more detail and links to this section when I get the chance.
I’ve also noticed that Adrian has a handy pen combining code for numerous disclosure widget variations.
Heydon Pickering’s Collapsible sections on Inclusive Components is excellent, and includes consideration of progressive enhancement and an excellent web component version. It’s also oriented toward multiple adjacent sections (an accordion although it doesn’t use that term) and includes fantastic advice regarding:
- appropriate markup including screen reader considerations
- how best to programmatically switch state (such as open/closed) within a web component
- how to make that state accessible via an HTML attribute on the web component (e.g. `<toggle-section open=true>`)
- how that attribute is then accessible outside the component, for example to a button and script that collapses and expands all sections simultaneously
There’s my DIY Disclosure widget demo on Codepen. I first created it to use as an example in a talk on Hiding elements on the web, but since then its implementation has taken a few twists and turns. In its latest incarnation I’ve taken some inspiration from the way Manuel Matuzovic’s navigation tutorial uses a `template` in the markup to prepare the “hamburger toggle” button.
I’ve also been reflecting on how the `hidden` attribute’s boolean nature is ideal for a toggle button in theory – it’s semantic and therefore programmatically conveys state – but how hiding with CSS can be more flexible, chiefly because `hidden` (like CSS’s `display: none`) is not animatable. If you hide with CSS, you could opt to use `visibility: hidden` (perhaps augmented with `position` so as to avoid taking up space while hidden) which similarly hides the content from everyone in terms of accessibility.
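As a sketch of that CSS-based approach (the class name is my own):

```html
<button type="button" aria-expanded="false">Show details</button>
<div class="is-hidden">Detailed content…</div>

<style>
  /* Like the `hidden` attribute, visibility: hidden hides the content
     from everyone (including assistive technology), but unlike
     display: none it participates in transitions */
  .is-hidden {
    visibility: hidden;
    position: absolute; /* avoid taking up space while hidden */
  }
</style>
```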
As it happens, the first web component I created was a disclosure widget. It could definitely be improved by some tweaks and additions along the lines of Heydon Pickering’s web component mentioned above. I’ll try to do that soon.
Troubleshooting
For some disclosure widget use cases (such as a custom link menu often called a Dropdown) there are a few events that typically should collapse the expanded widget. One is the escape key. Another is when the user moves focus outside the widget. One possible scenario is that the user might activate the trigger button, assess the expanded options and subsequently decide none are suitable and move elsewhere. The act of clicking/tapping elsewhere should collapse the widget. However there’s a challenge. In order for the widget to fire a blur (or `focusout`) event so that an event listener can act upon it, it would have to be focused in the first place. And in Safari – unlike other browsers – buttons do not automatically receive focus when activated. (I think Firefox used to be the same but was updated.) The workaround is to set focus manually via `focus()` in your click event listener for the trigger button.
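A sketch of that workaround (the element IDs are hypothetical), using `focusout` as the “focus moved outside” event:

```html
<script>
  const trigger = document.querySelector('#menu-trigger');
  const widget = document.querySelector('#menu-widget');

  trigger.addEventListener('click', () => {
    // Safari doesn't focus buttons on activation, so do it manually –
    // otherwise the focusout listener below may never fire
    trigger.focus();
    widget.hidden = !widget.hidden;
  });

  // Collapse when focus leaves the widget and its trigger
  widget.parentElement.addEventListener('focusout', (event) => {
    if (!widget.parentElement.contains(event.relatedTarget)) {
      widget.hidden = true;
    }
  });

  // Collapse on Escape
  document.addEventListener('keydown', (event) => {
    if (event.key === 'Escape') widget.hidden = true;
  });
</script>
```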
WebC
WebC, the latest addition to the Eleventy suite of technologies, is focused on making Web Components easier to use. I have to admit, it took me a while to work out the idea behind this one, but I see it now and it looks interesting.
Here are a few of the selling points for WebC, as I see them.
With WebC, web components are the consumer developer’s authoring interface. So for example you might add a badge component into your page with `<my-badge text='Lorem ipsum'></my-badge>`.
Using web components is great – especially for Design Systems – because unlike with proprietary component frameworks, components are not coupled to a single technology stack but rather are platform and stack-agnostic, meaning they could be reused across products.
From the component creator perspective: whereas web components normally (and frustratingly) require writing accompanying JavaScript, and depend on JavaScript availability even for “JavaScript-free” components, with WebC this is not the case. You can create “HTML-only” components and rely on them being rendered to screen. This is because WebC takes your `.webc` component file and compiles it to the simplest output HTML required.
WebC is progressive enhancement friendly. As mentioned above, your no-JS HTML foundation will render. But going further, you can also colocate your foundational baseline beside the scripts and CSS that enhance it within the same file.
This ability to write components within a single file (rather than separate template and script files) is pretty nice in general.
There are lots of other goodies too such as the ability to scope your styles and JavaScript to your component, and to set styles and JS to be aggregated in such a way that your layout file can optionally load only the styles and scripts required for the components in use on the current page.
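Here’s my loose understanding of what an HTML-only WebC component might look like (the filename and styles are my own; `webc:scoped` is WebC’s style-scoping attribute):

```html
<!-- components/my-badge.webc -->
<span class="badge"><slot></slot></span>

<style webc:scoped>
  .badge {
    padding: 0.25em 0.5em;
    border-radius: 0.25em;
  }
</style>
```

Consumed as `<my-badge>Lorem ipsum</my-badge>` – and since there’s no behaviour to ship, WebC can compile this to plain HTML with no client-side JavaScript.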
Useful resources:
The ARIA presentation role
I’ve never properly understood when you would need to use the ARIA `presentation` role. This is perhaps in part because it is often used inappropriately, for example in situations where `aria-hidden` would be more appropriate. However I think the penny has finally dropped.
It’s fairly nuanced stuff so I’ll forgive myself this time!
You might use `role=presentation` in JavaScript that progressively enhances a basic JS-independent HTML foundation into a more advanced JS-dependent experience. In such cases you might want to reuse the baseline HTML but remove semantics which are no longer appropriate.
As an example, Inclusive Components’ Tabbed Interfaces starts with a Table of Contents marked up as an unordered list of links. However in enhanced mode the links take on a new role as tabs in a `tablist`, so `role=presentation` is applied to their containing `<li>` elements so that the tab list is announced appropriately and not as a plain list.
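Sketching that enhanced state (the fragment IDs are mine):

```html
<ul role="tablist">
  <!-- role=presentation neutralises each li's listitem semantics, so
       assistive technology announces a tab list, not a bulleted list -->
  <li role="presentation">
    <a role="tab" href="#section-one" aria-selected="true">Section one</a>
  </li>
  <li role="presentation">
    <a role="tab" href="#section-two">Section two</a>
  </li>
</ul>
```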
Thoughts on HTML over the wire solutions
Max Böck just tweeted his excitement about htmx:
htmx (and similar "HTML over the wire" approaches) could someday replace Javascript SPAs. Cool to see a real-world case study on that, and with promising results
There’s similar excitement at my place of work (and among the Rails community in general) about Turbo. It promises:
the speed of an SPA without having to write any JS
These new approaches are attractive because they let us create user interfaces that update the current page sans reload – usually communicating with the server to get swap-in content or update the database – but by writing HTML rather than JavaScript. Developers have long wished for HTML alone to handle common interactive patterns so a set of simple, declarative conventions really appeals. Writing less JavaScript feels good for performance and lightening maintenance burden. Furthermore the Single Page App (SPA) approach via JS frameworks like React and Vue is heavier and more complicated than most situations call for.
However I have some concerns.
I’m concerned about the “no javascript” language being used, for example in articles titled Hotwire: reactive Rails with no JavaScript. Let’s be clear about what Turbo and htmx are in simple material terms. As Reddit user nnuri puts it in do Hotwire and htmx have a commitment to accessibility? the approach is based on:
a JavaScript library in the client's browser that manipulates the DOM.
Your UI that uses htmx or Turbo is dependent on that JS library. And JS is the most brittle part of the stack. So you need to think about resilience and access. The htmx docs have a section on progressive enhancement but I’m not convinced it’s part of the design.
Secondly, if you have client-side JS that changes content and state, that brings added accessibility responsibilities. When the content or state of a page is changed, you need to convey this programmatically. We normally need to handle this in JavaScript. Do these solutions cover the requirements of accessible JS components, or even let you customise them to add the necessary state changes yourself? For example when replacing HTML you need to add `aria-live` (see also Léonie Watson on accessible forms with ARIA live regions).
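Here’s the kind of thing I mean (`hx-post` and `hx-target` are real htmx attributes; the `aria-live` wrapper is my own addition, which I believe you’d need so the swapped-in message is announced):

```html
<form hx-post="/subscribe" hx-target="#form-status">
  <label for="email">Email</label>
  <input type="email" id="email" name="email" required>
  <button>Subscribe</button>
</form>

<!-- htmx swaps the server's HTML fragment in here; aria-live="polite"
     asks screen readers to announce the change -->
<div id="form-status" aria-live="polite"></div>
```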
Another concern relates to user expectations. Just because you can do something doesn’t mean you should. For example, links should not be used to do the job of a `button`. If you do, they need `role=button`, however this is inadvisable because you then need to recreate (and will likely miss) the other features of a button, and will also likely confuse people due to mismatches between perceived affordance and actual functionality. Additionally, as Jeremy Keith has written, links should not delete things.
In general I feel the message of the new HTML over the wire solutions is very weighted toward developer experience but doesn’t make user experience considerations and implications clear. Due to unanswered questions regarding accessibility I worry that firstly they’re not natively mature in their understanding and approach on that front, and secondly that their framing of benefits is likely to make accessibility ignored due to engineers thinking that they can totally rely on the library.
I’d be really pleased if my concerns could be allayed because in general I like the approach.
Update 30/1/22
I decided to revisit a book I read back in 2007 – Jeremy Keith’s Bulletproof Ajax. I had forgotten this, but it actually contains a section titled “Ajax and accessibility”. It acknowledged that reconciling the two is challenging and despite listing ideas for mitigating issues, it admitted that the situation was not great. However since 2007 – specifically since around 2014 – WAI-ARIA has been a completed W3C recommendation and provides a means of making web pages more accessible, particularly when dealing with dynamic content.
I don’t often have cause to use more than a few go-to ARIA attributes, however here’s my understanding of how you might approach making Ajax-driven content changes accessible by using ARIA.
To do: write this section.
References:
Tabs: truth, fiction and practical measures
My colleague Anda and I just had a good conversation about tabs, and specifically the company’s tabs component. I’ve mentioned before that our tabs are unconventional and potentially confusing, and Anda was interested to hear more.
What’s the purpose of a tabbed interface?
A tabbed interface is a space-saving tool for collapsing parallel content into panels, with one panel visible at a time but all accessible on-demand. While switching between tab panels the user is kept within the same wider context i.e. the same page, rather than being moved around.
Conventional tabbed interfaces
Here are some great examples of tabs components.
- Inclusive Components – Tabbed Interfaces
- Tabs component in GOV.UK Design System
- ARIA Tabs by Adrian Roselli
Tabs are a device intended to improve content density. They should deliver a same-page experience. Activating a tab reveals its corresponding tab panel. Ideally the approach employs progressive enhancement, starting as a basic Table of Contents. There’s quite a lot of advanced semantics, state and interactivity under the hood.
Faux tabs
But in our Design System at work, ours are currently just the “tabs” with no tab panels, and each “tab” generally points to another page rather than somewhere on the same page. In other words it’s a navigation menu made to look like a tabbed interface.
I’m not happy with this from an affordance point of view. Naming and presenting something as one thing but then having it function differently leads to usability problems and communication breakdowns. As the Inclusive Components Tabbed Interfaces page says:
making the set of links in site navigation appear like a set of tabs is deceptive: A user should expect the keyboard behaviors of a tabbed interface, as well as focus remaining on a tab in the current page. A link pointing to a different page will load that page and move focus to its document (body) element.
Confused language causes problems
One real-life problem with our tabs is that they have been engineered as if they are conventional tabs, however since the actual use case is often navigation the semantics are inappropriate.
We currently give each “tab” the ARIA `tab` role, defined as follows:

The ARIA `tab` role indicates an interactive element inside a `tablist` that, when activated, displays its associated `tabpanel`.
But our tabs have no corresponding `tabpanel`; they don’t use JavaScript for a single-page experience balancing semantics, interactivity and state as is conventional. They’re just navigation links. And this mismatch of tabs-oriented ARIA within a non-tabs use case will do more harm than good. It’s an accessibility fail.
A stop-gap solution
If content for one or more `tabpanel`s is provided, apply the complicated ARIA attributes for proper tabs. If not, don’t. This means we allow component consumers to either create i) a real tabbed interface, or ii) “a nav menu that looks like tabs” (but without any inappropriate ARIA attributes). I don’t agree with the latter as a design approach, but that’s a conversation for another day!
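So the “nav menu that looks like tabs” variant would be plain links with navigation semantics only (the class name is mine):

```html
<nav aria-label="Account">
  <ul class="looks-like-tabs">
    <li><a href="/profile" aria-current="page">Profile</a></li>
    <li><a href="/billing">Billing</a></li>
    <li><a href="/settings">Settings</a></li>
  </ul>
</nav>
```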
Tabs in the future
Some clever people involved with Open UI are using web components to explore how a useful `tabs` element could work if it were an HTML element. Check out the Tabvengers’ spicy-sections component. Again, this is based on the conventional expectation of tabs as a same-page experience for arranging content, rather than as a navigation menu. And I think it’d make sense to stay on the same path as the rest of the web.
Editable table cells
Yesterday the Design System team received a tentative enquiry regarding making table cells editable. I’m not yet sure whether or not this is a good idea – experience and spidey sense tell me it’s not – but regardless I decided to start exploring so as to base my answer on facts and avoid being overly cautious.
In my mind’s eye, there are two ways to achieve this:
- on clicking the cell, the cell content is presented in an (editable) form input; or
- apply the `contenteditable` attribute
In both cases you get into slightly gnarlier territory when you start considering the need for a submit button and how to position it.
I don’t have anything further to add at the moment other than to say that if I had to spike this out, I’d probably start by following Scott O’Hara’s article Using JavaScript & contenteditable.
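A very rough sketch of Option 2, just to show the moving parts (saving on blur is my assumption, not advice from Scott’s article):

```html
<table>
  <tr>
    <th scope="row">Price</th>
    <td contenteditable="true" id="price-cell">42.00</td>
  </tr>
</table>

<script>
  const cell = document.querySelector('#price-cell');

  // Persist on blur rather than on every keystroke
  cell.addEventListener('blur', () => {
    console.log('Would save:', cell.textContent);
  });
</script>
```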
I’d probably also tweet Scott and ask if he can say anything more on his closing statement which was:
I have more thoughts on the accessibility of contenteditable elements, but that will also have to be a topic for another day…
Update 27-09-22: I’ve also remembered that (if I were to pursue Option 1: input within cell) Adrian Roselli has an article on Accessibly including inputs in tables.
Sites which don’t work without JavaScript enabled still benefit from progressive enhancement
At work I and our team just had an interesting realisation about a recent conversation. We had been discussing progressive enhancement for custom toggles and a colleague mentioned that the web app in question breaks at a fundamental level if the user has disabled JavaScript, displaying a message telling them to change their settings in order to continue. He used this to suggest that any efforts to provide a no-JavaScript experience would be pointless. And this fairly absolute (and on-the-surface, sensible) statement caught me off-guard and sent me and the others down a blind alley.
I remember replying “yes, but even still we should try to improve the code by introducing good practices” and that feeling a little box-ticky.
However in retrospect I realise that we had temporarily made the mistake of conflating “JavaScript enabled” with “JavaScript available” – which are separate possibilities.
When considering resilience around JavaScript, we can consider the “factors required for JavaScript to work” as layers:
- is JavaScript enabled in the user’s browser?
- is the JavaScript getting through firewalls? (it recently didn’t for one of our customers on the NHS’s network)
- has the JavaScript finished loading?
- does the user’s browser support the JavaScript features the developers have used (i.e. does the browser “cut the mustard”?)
- is the JavaScript error-free? It’s easy for some malformed JSON to creep in and break it…
And the point my colleague made relates to Layer 1 only. And that layer – JavaScript being disabled by the user – is actually the least likely explanation for a JavaScript-dependent feature not working.
So it's really important to remember that when we build things with progressive enhancement we are not just addressing Layer 1, but Layers 2—5 too (as well as other layers I’ve probably forgotten!)
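As a sketch, a classic “cuts the mustard” feature check addresses Layer 4, while the catch handles failures from Layers 2, 3 and 5 (the module filename is hypothetical):

```html
<script>
  // Only attempt the enhanced experience if the browser supports
  // the features we rely on ("cutting the mustard")
  if ('fetch' in window && 'customElements' in window) {
    import('/js/enhancements.js')
      .then(() => document.documentElement.classList.add('enhanced'))
      .catch(() => {
        // Script blocked, failed to load or threw an error:
        // the basic HTML experience still works
      });
  }
</script>
```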
How we think about browsers, on GitHub’s blog
Keith Cirkel of Github has written about how they think about browsers and it’s interesting. In summary Github achieve:
- improved performance;
- exploiting new native technologies; and
- universal user access/inclusion
…via a progressive enhancement strategy that ensures a basic experience for all but delivers an enhanced experience to most. Their tooling gets a bit deep/exotic in places but I think the basic premise is:
- decide on what our basic experience is, then use native HTML combined with a bare minimum of other stuff to help old browsers deliver that; and
- exploit new JS features in our enhanced experience (the one most people will get) to make it super lean and fast
Pretty cool.
Refactoring a modal dialogue in 2022
My team will soon be refactoring our modal dialogue component. Ours has a few deficiencies, needs better developer experience and documentation, is not built to our Design System component standards, and could use a resilience boost from some progressive enhancement.
For a long time the best – meaning accessible, framework-agnostic, feature-packed – modal implementations were custom. Specifically:
However with recent browser advances (especially from Safari), there’s an argument that the time has now come when we no longer need custom solutions and can go native. So we might reach for the native `<dialog>` HTML element.
However first I think we’d need to make an informed decision regarding our satisfaction with support, based on the updated advice in Scott O’Hara’s article Having an Open Dialog.
Additionally we should definitely be keeping one eye on proposals around the exciting new `togglepopup` and `popup` attributes, which promise the holy grail of entirely HTML-powered modal dialogues with no JavaScript dependency.
Web Components with Declarative Shadow DOM via Lit and Eleventy
Here’s a new development in the Web Components story, and one that may have positive implications for resilience, performance and progressive enhancement.
Declarative Shadow DOM is a new way to implement and use Shadow DOM directly in HTML rather than by constructing a shadow root in JavaScript.
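In markup, that looks something like this (the element name is hypothetical; note the attribute initially shipped as `shadowroot` before being standardised as `shadowrootmode`):

```html
<my-disclosure>
  <template shadowrootmode="open">
    <button part="trigger">Toggle</button>
    <div>
      <slot></slot>
    </div>
  </template>
  <p>Light DOM content, server-rendered and visible without JavaScript.</p>
</my-disclosure>
```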
But some people (including Chris Coyier and Brad Frost) reckon that writing that looks horrible. Brad said:
Declarative Shadow DOM always looked gross to me and I felt it almost defeats the purpose of web components.
And Chris added:
the server-side rendering story for Web Components, Declarative Shadow DOM, doesn’t feel very nice to me if you have to do it manually.
However Lit, a library which makes working with Web Components easier, are now providing ways to make this easier when Lit is used with Eleventy.
With tools like this (especially this @lit-labs/ssr project), we can have our cake and eat it too: use web components in a dev-friendly way, and then have the machines do the heavy lifting to convert that into a grosser-yet-progressive-enhancement-enabled syntax that ships to the user.
using JavaScript frameworks in an entirely-client-side rendered way isn’t nearly as good for anything (users, SEO, performance, accessibility, etc) as server-side rendering (the effects of hydration are still debatable, but I view as largely worth it) … [but] the server-side rendering story for Web Components, Declarative Shadow DOM, doesn’t feel very nice to me if you have to do it manually. So… don’t do it manually! Let Eleventy do it!
As an additional footnote, perhaps we can make frameworks other than Eleventy (such as Rails) create server-rendered custom elements with Declarative Shadow DOM in a similar way. One to explore.
My talk, “Hiding elements on the web” for FreeAgent’s tech blog
I recorded a talk on “Hiding elements on the web” for @freeagent’s tech blog. It’s a tricky #FrontEnd & #a11y topic so I try to cover some good practices and responsible choices. Hope it helps someone. (Also it’s my first video. Lots of room to improve!)
What open-source design systems are built with web components?
Alex Page, a Design System engineer at Spotify, has just asked:
What open-source design systems are built with web components? Anyone exploring this space? Curious to learn what is working and what is challenging. #designsystems #webcomponents
And there are lots of interesting examples in the replies.
I plan to read up on some of the stories behind these systems.
I really like Web Components but given that I don’t take a “JavaScript all the things” approach to development and design system components, I’ve been reluctant to consider that web components should be used for every component in a system. They would certainly offer a lovely, HTML-based interface for component consumers and offer interoperability benefits such as Figma integration. But if we shift all the business logic that we currently manage on the server to client-side JavaScript then:
- the user pays the price of downloading that additional code;
- you’re writing client-side JavaScript even for those of your components that aren’t interactive; and
- you’re making everything a custom element (which, as Jim Nielsen has previously written, brings HTML semantics and accessibility challenges).
However maybe we can keep the JavaScript for our Web Component-based components really lightweight? I don’t know. For now I’m interested to just watch and learn.
My first Web Component: a disclosure widget
After a couple of years of reading about web components (and a lot of head-scratching), I’ve finally got around to properly creating one… or at least a rough first draft!
Check out disclosure-widget on codepen.
See also my pen which imports and consumes the component.
Caveats and to-dos:
- I haven’t yet tried writing tests for a web component
- I should find out how to refer to the custom element name in JavaScript without repeating it
- I should look into whether `observedAttributes` and `attributeChangedCallback` are more appropriate than the more typical event listeners I used
References
I found Eric Bidelman’s article Custom Elements v1: Reusable Web Components pretty handy. In particular it taught me how to create a `<template>` including a `<slot>` to automatically ringfence the Light DOM content, and then to attach that template to the Shadow DOM to achieve my enhanced component.
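The core of that pattern, as I understand it from Eric’s article (the names here are my own):

```html
<template id="disclosure-template">
  <button aria-expanded="false">Toggle</button>
  <div hidden>
    <!-- the element's Light DOM children are ringfenced here -->
    <slot></slot>
  </div>
</template>

<script>
  class DisclosureWidget extends HTMLElement {
    connectedCallback() {
      const template = document.querySelector('#disclosure-template');
      this.attachShadow({ mode: 'open' })
        .append(template.content.cloneNode(true));

      const button = this.shadowRoot.querySelector('button');
      const panel = this.shadowRoot.querySelector('div');

      button.addEventListener('click', () => {
        const expanded = button.getAttribute('aria-expanded') === 'true';
        button.setAttribute('aria-expanded', String(!expanded));
        panel.hidden = expanded;
      });
    }
  }

  customElements.define('disclosure-widget', DisclosureWidget);
</script>
```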
Building a toast component (by Adam Argyle)
Great tutorial (with accompanying video) from Adam Argyle which starts with a useful definition of what a toast is and is not:
Toasts are non-interactive, passive, and asynchronous short messages for users. Generally they are used as an interface feedback pattern for informing the user about the results of an action. Toasts are unlike notifications, alerts and prompts because they're not interactive; they're not meant to be dismissed or persist. Notifications are for more important information, synchronous messaging that requires interaction, or system level messages (as opposed to page level). Toasts are more passive than other notice strategies.
There are some important distinctions between toasts and notifications in that definition: toasts are for less important information and are non-interactive. I remember in a previous work planning exercise regarding a toast component a few of us got temporarily bogged down in working out the best JavaScript templating solution for SVG icon-based “Dismiss” buttons… however we were probably barking up the wrong tree with the idea that toasts should be manually dismissable.
There are lots of interesting ideas and considerations in Adam’s tutorial, such as:
- using the <output> element for each toast
- some crafty use of CSS Grid and logical properties for layout
- combining hsl and percentages in custom properties to proportionately modify rather than redefine colours for dark mode
- animation using keyframes and animation
- native JavaScript modules
- inserting an element before the <body> element (TIL that this is a viable option)
Thanks for this, Adam!
(via Adam’s tweet)
There’s some nice code in here but the demo page minifies and obfuscates everything. However the toast component source is available on GitHub.
Related links
- A toast to accessible toasts by Scott O’Hara
Web animation tips
Warning: this entry is a work-in-progress and incomplete. That said, it's still a useful reference to me which is why I've published it. I’ll flesh it out soon!
There are lots of different strands of web development. You try your best to be good at all of them, but there’s only so much time in the day! Animation is an area where I know a little but would love to know more, and from a practical perspective I’d certainly benefit from having some road-ready solutions to common challenges. As ever I want to favour web standards over libraries where possible, and take an approach that’s lean, accessible, progressively-enhanced and performance-optimised.
Here’s my attempt to break down web animation into bite-sized chunks for occasional users like myself.
Defining animation
Animation lets us make something visually move between different states over a given period of time.
Benefits of animation
Animation is a good way of providing visual feedback, teaching users how to use a part of the interface, or adding life to a website and making it feel more “real”.
Simple animation with transition properties
CSS transition is great for simple animations triggered by an event.
We start by defining two different states for an element—for example opacity:1 and opacity:0—and then transition between those states.
The first state would be in the element’s starting styles (either defined explicitly or existing implicitly based on property defaults) and the other in either its :hover or :focus styles or in a class applied by JavaScript following an event.
Without the transition the state change would still happen but would be instantaneous.
You’re not limited to only one property being animated and might, for example, transition between different opacity and transform states simultaneously.
Here’s an example “rise on hover” effect, adapted from Stephanie Eckles’s Smol CSS.
<div class="u-animate u-animate--rise">
<span>rise</span>
</div>
.u-animate > * {
--transition-property: transform;
--transition-duration: 180ms;
transition: var(--transition-property) var(--transition-duration) ease-in-out;
}
.u-animate--rise:hover > * {
transform: translateY(-25%);
}
Note that:
- using custom properties makes it really easy to transition a different property than transform without writing repetitious CSS.
- we have a parent and child (<div> and <span> respectively in this example), allowing us to avoid the accidental flicker which can occur when the mouse is close to an animatable element’s border: the child is the element which animates, and it does so when the trigger (the parent) is hovered.
Complex animations with animation properties
If an element needs to animate automatically (perhaps on page load or when added to the DOM), or is more complex than a simple A to B state change, then a CSS animation may be more appropriate than a transition. Using this approach, animations can:
- run automatically (you don’t need an event to trigger a state change)
- go from an initial state through multiple intermediate steps to a final state rather than just from state A to state B
- run forwards, in reverse, or alternate directions
- loop infinitely
The required approach is:
- use @keyframes to define a reusable “template” set of animation states (or frames); then
- apply animation properties to an element we want to animate, including one or more @keyframes to be used.
Here’s how you do it:
@keyframes flash {
0% { opacity: 0; }
20% { opacity: 1; }
80% { opacity: 0; }
100% { opacity: 1; }
}
.animate-me {
animation: flash 5s infinite;
}
Note that you can also opt to include just one state in your @keyframes rule, usually the initial state (written as either from or 0%) or final state (written as either to or 100%). You’d tend to do that for a two-state animation where the other “state” is in the element’s default styles, and you’d either be starting from the default styles (if your single @keyframes state is to) or finishing on them (if your single @keyframes state is from).
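For instance, a delayed fade-out where the element’s default styles provide the visible state might look like this (the class and keyframe names are illustrative):

```css
/* The element's default styles are the implicit starting state (opacity: 1) */
.fade-out-on-load {
  animation: fade-away 300ms ease-in 2s forwards;
}

/* A single "to" state; forwards keeps the element at opacity: 0 afterwards */
@keyframes fade-away {
  to {
    opacity: 0;
  }
}
```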
Should I use transition or animation?
As far as I can tell there’s no major performance benefit of one over the other, so that’s not an issue.
When the animation will be triggered by pseudo-class-based events like :hover or :focus and is simple, i.e. based on just two states, transition feels like the right choice.
Beyond that, the choice gets a bit less binary and seems to come down to developer preference. But here are a couple of notes that might help in making a decision.
For elements that need to “animate in” on page load such as an alert, or when newly added to the DOM such as items in a to-do list, an animation with keyframes feels the better choice. This is because transition requires the presence of two CSS rules, leading to dedicated JavaScript to grab the element and apply a class, whereas animation requires only one and can move between initial and final states automatically, including inserting a delay before starting.
For animations that involve many frames, control over the number of iterations, or looping, use @keyframes and animation.
For utility classes and classes that get added by JS to existing, visible elements following an event, either approach could be used. Arguably transition is the slightly simpler and more elegant CSS to write if it covers your needs. Then again, you might want to reuse the animations applied by those classes for both existing, visible elements and new, animated-in elements, in which case you might feel that using @keyframes and animation covers more situations.
Performance
A smooth animation should run at 60fps (frames per second). Animations that are too computationally expensive result in frames being dropped, i.e. a reduced fps rate, making the animation appear janky.
Cheap and slick properties
The CSS properties transform and opacity are very cheap to animate, and browsers often optimise these types of animation using hardware acceleration. To hint to the browser that a property is expected to change (and to ensure the animation is handled by the GPU from the start rather than being passed from CPU to GPU mid-flight, causing a noticeable glitch) we can use the CSS will-change property. It’s best reserved for elements that genuinely need it, since overusing it carries its own cost.
.my-element {
will-change: transform;
}
Expensive properties
CSS properties which affect layout, such as height, are very expensive to animate. Animating height causes a chain reaction where sibling elements have to move too. Use transform over layout-affecting properties such as width or left if you can.
Some other CSS properties are less expensive but still not ideal, for example background-color. It doesn’t affect layout but requires a repaint per frame.
Test your animations on a popular low-end device.
Timing functions
- linear goes at the same rate from start to finish. It’s not like most motion in the real world.
- ease-out starts fast then gets really slow. Good for things that come in from off-screen, like a modal dialogue.
- ease-in starts slow then gets really fast. Good for moving something off-screen.
- ease-in-out is the combination of the previous two. It’s symmetrical, having an equal amount of acceleration and deceleration. Good for things that happen in a loop, such as an element fading in and out.
- ease is the default value and features a brief ramp-up, then a lot of deceleration. It’s a good option for most general case motion that doesn’t enter or exit the viewport.
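Mapping those recommendations onto code, a hypothetical set of rules (my own class names) might look like:

```css
/* Entering from off-screen: fast start, gentle landing */
.modal-enter {
  transition: transform 300ms ease-out;
}

/* Exiting off-screen: gentle start, fast finish */
.modal-exit {
  transition: transform 200ms ease-in;
}

/* Looping motion: symmetrical acceleration and deceleration */
.status-pulse {
  animation: pulse 2s ease-in-out infinite alternate;
}
```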
Practical examples
You can find lots of animation inspiration in libraries such as animate.css (and be sure to check animate.css on GitHub, where you can search their source for specific @keyframes animation styles).
But here are a few specific examples of animations I or teams I’ve worked on have had to implement.
Skip to content
The anchor’s State A sees its position fixed—i.e. positioned relative to the viewport—but moved out of sight above it via transform: translateY(-10em). However its :focus styles define a State B where the initial translate has been undone so that the link is visible (transform: translateY(0em)). If we transition the transform property then we can animate the change of state over a chosen duration, and with our preferred timing function for the acceleration curve.
HTML:
<div class="u-visually-hidden-until-focused">
<a
href="#skip-link-target"
class="u-visually-hidden-until-focused__item"
>Skip to main content</a>
</div>
<nav>
<ul>
<li><a href="/">Home</a></li>
<li><a href="/">News</a></li>
<li><a href="/">About</a></li>
<!-- …lots more nav links… -->
<li><a href="/">Contact</a></li>
</ul>
</nav>
<main id="skip-link-target">
<h1>This is the Main content</h1>
<p>Lorem ipsum <a href="/news/">dolor sit amet</a> consectetur adipisicing elit.</p>
<p>Lorem ipsum dolor sit amet consectetur adipisicing elit.</p>
</main>
CSS (SCSS syntax, for the nesting):
.u-visually-hidden-until-focused {
left: -100vw;
position: absolute;
&__item {
position: fixed;
top: 0;
left: 0;
transform: translateY(-10em);
transition: transform 0.2s ease-in-out;
&:focus {
transform: translateY(0em);
}
}
}
To see this in action, visit my pen Hiding: visually hidden until focused and press the tab key.
Animating in an existing element
For this requirement we want an element to animate from invisible to visible on page load. I can imagine doing this with an image or an alert, for example. This is pretty straightforward with CSS alone using @keyframes, opacity and animation.
Check out my fade in and out on page load with CSS codepen.
Animating in a newly added element
Stephanie Eckles shared a great CSS-only solution for animating in a newly added element, which handily includes a Codepen demo. She mentions “CSS-only” because it’s common for developers to achieve the fancy animation via transition, but that means needing to “make a fake event” via a JavaScript setTimeout() so that you can transition from the newly-added, invisible and class-free element state to adding a CSS class (perhaps called show) that contains the opacity:1, fancy transforms and a transition. However Stephanie’s alternative approach combines i) hiding the element in its default styles; with ii) an automatically-running animation that includes the necessary delay and finishes in the keyframe’s single 100% state… to get the same effect minus the JavaScript.
Avoiding reliance on JS and finding a solution lower down the stack is always good.
HTML:
<button>Add List Item</button>
<ul>
<li>Lorem ipsum dolor sit amet consectetur adipisicing elit. Nostrum facilis perspiciatis dignissimos, et dolores pariatur.</li>
</ul>
CSS:
li {
  animation: show 600ms 100ms cubic-bezier(0.38, 0.97, 0.56, 0.76) forwards;
  /* pre-state */
  opacity: 0;
  /* remove transform for just a fade-in */
  transform: rotateX(-90deg);
  transform-origin: top center;
}
@keyframes show {
100% {
opacity: 1;
transform: none;
}
}
Jhey Tompkins shared another CSS-only technique for adding elements to the DOM with snazzy entrance animations. He also uses just a single @keyframes state, but in his case the from state, which he uses to set the element’s initial opacity:0; then in his animation he uses an animation-fill-mode of both (rather than forwards as Stephanie used).
I can’t profess to fully understand both, however if you change Jhey’s example to use forwards instead, the element being animated in will temporarily appear before the animation starts (which ain’t good) rather than being initially invisible. Changing it to backwards gets us back on track, so I guess the necessary value relates to whether you’re going for from/0% or to/100%… and both just covers you for both cases. I’d probably try to use the appropriate one rather than both, just in case there’s a performance implication.
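To illustrate, here’s a minimal sketch (my own names, not Jhey’s actual code) of the from-state approach, where backwards applies the keyframe’s starting styles during the delay so the element never flashes up early:

```css
.snazzy-in {
  /* backwards applies the "from" state during the 250ms delay,
     keeping the element invisible until the animation begins */
  animation: fade-slide-in 500ms ease-out 250ms backwards;
}

@keyframes fade-slide-in {
  from {
    opacity: 0;
    transform: translateY(1rem);
  }
}
```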
Animated disclosure
Here’s an interesting conundrum.
For disclosure (i.e. collapse and expand) widgets, I tend to either use the native HTML <details> element if possible, or else a simple, accessible DIY disclosure in which activating a trigger toggles a nearby content element’s visibility. In both cases there’s no animation; the change from hidden to revealed and back again is immediate.
To my mind it’s generally preferable to keep it simple and avoid animating a disclosure widget. For a start, it’s tricky! The <details> element can’t be (easily) animated. And if using a DIY widget it’ll likely involve animating one of the expensive properties. Animating height or max-height is also gnarly when working with variable (auto) length content and often requires developers to go beyond CSS and reach for JavaScript to calculate computed element heights. Lastly, forgetting the technical challenges, there’s often no real need to animate disclosure; it might only hinder rather than help the user experience.
But let’s just say you have to do it, perhaps because the design spec requires it (like in BBC Sounds’ expanding and collapsing tracklists when viewed on narrow screens).
Options:
- Animate the <details> element. This is a nice, standards-oriented approach, but it might only be viable when you don’t need to mess with the <details> appearance too much. We’d struggle to apply very custom styles, or to handle a “show the first few list items but not all” requirement like in the BBC Sounds example.
- Animate CSS Grid. This is a nice idea but for now the animation only works in Firefox*. It’d be great to consider it a progressive enhancement, so it just depends on whether the animation is deemed core to the experience.
- Animate from a max-height of 0 to “something sufficient” (my pen is inspired by Scott O’Hara’s disclosure example). This is workable but not ideal; you kinda need to set a max-height sweet spot otherwise your animation will be delayed and too long. You could of course add some JavaScript to get the exact necessary height and then set it. BBC use max-height for their tracklist animation and those tracklists likely vary in length, so I expect they use some JavaScript for height calculation.
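A sketch of that JavaScript-assisted max-height idea (a hypothetical helper of my own, not BBC’s actual code):

```javascript
// Set an exact max-height before expanding so the transition duration matches
// the real content length, then zero it again to collapse.
// Assumes CSS along the lines of:
//   .disclosure__content { overflow: hidden; max-height: 0;
//                          transition: max-height 300ms ease-in-out; }
function expand(content) {
  // scrollHeight reports the full content height even while collapsed
  content.style.maxHeight = `${content.scrollHeight}px`;
}

function collapse(content) {
  content.style.maxHeight = '0';
}
```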
* Update 20/2/23: the “animate CSS Grid” option now has wide browser support and is probably my preferred approach. I made a codepen that demonstrates a disclosure widget with animation of grid-template-rows.
Ringing bell icon
To be written.
Pulsing “radar” effect
To be written.
Accessibility
Accessibility and animation can co-exist, as Cassie Evans explains in her CSS-Tricks article Empathetic Animation. We should consider which parts of our website are suited to animation (for example perhaps not on serious, time-sensitive tasks) and we can also respect reduced motion preferences at a global level or in a more finer-grained way per component.
Notes
transition-delay can be useful for avoiding common annoyances, such as when a dropdown menu that appears on hover disappears when you try to move the cursor to it.
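For example, a hover menu might keep its dropdown visible for a grace period after the cursor leaves (the selectors here are illustrative):

```css
.menu-item .dropdown {
  visibility: hidden;
  opacity: 0;
  /* On mouse-out, the 300ms visibility delay keeps the dropdown reachable */
  transition: opacity 150ms ease-out, visibility 0s linear 300ms;
}

.menu-item:hover .dropdown,
.menu-item:focus-within .dropdown {
  visibility: visible;
  opacity: 1;
  /* Appear immediately on hover */
  transition-delay: 0s;
}
```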
References
- Inspiration: the animate.css library
- animate.css on github (good for searching for keyframe CSS)
- CSS transitions and transforms on Thoughtbot
- CSS Transitions by Josh Comeau
- Keyframe Animations by Josh Comeau
- Transition vs animation on CSS Animation
- Keyframe animation syntax on CSS-Tricks
- CSS animation for beginners on Thoughtbot
- Using CSS Transions on auto dimensions on CSS-Tricks
- Jhey Tompkins’s Image fade with interest codepen
Front-end architecture for a new website (in 2021)
Just taking a moment for some musings on which way the front-end wind is blowing (from my perspective at least) and how that might practically impact my approach on the next small-ish website that I code.
I might lean into HTTP/2
Breaking CSS into small modules then concatenating everything into a single file has traditionally been one of the key reasons for using Sass, but in the HTTP/2 era, where multiple requests are less of a performance issue, it might be acceptable to simply include a number of modular CSS files in the <head>, as follows:
<link href="/css/base.css" rel="stylesheet">
<link href="/css/component_1.css" rel="stylesheet">
<link href="/css/component_2.css" rel="stylesheet">
<link href="/css/component_3.css" rel="stylesheet">
The same goes for browser-native JavaScript modules.
This isn’t something I’ve tried yet and it’d feel like a pretty radical departure from the conventions of recent years… but it‘s an option!
I’ll combine ES modules and classes
It’s great that JavaScript modules are natively supported in modern browsers. They allow me to remove build tools, work with web standards, and they perform well. They can also serve as a mustard cut that allows me to use other syntax and features such as async/await, arrow functions, template literals, the spread operator etc with confidence and without transpilation or polyfilling.
In the <head>:
<script type="module" src="/js/main.js"></script>
In main.js:
import { Modal } from '/components/modal.js';
const modal = new Modal();
modal.init();
In modal.js:
export class Modal {
init() {
// modal functionality here
}
}
I’ll create Web Components
I’ve done a lot of preparatory reading and learning about web components in the last year. I’ll admit that I’ve found the concepts (including Shadow DOM) occasionally tough to wrap my head around, and I’ve also found it confusing that everyone seems to implement web components in different ways. However Dave Rupert’s HTML with Superpowers presentation really helped make things click.
I’m now keen to create my own custom elements for JavaScript-enhanced UI elements; to give LitElement a spin; to progressively enhance a Light DOM baseline into Shadow DOM fanciness; and to check out how well the lifecycle callbacks perform.
I’ll go deeper with custom properties
I’ve been using custom properties for a few years now, but at first it was just as a native replacement for Sass variables, which isn’t really exploiting their full potential. However at work we’ve recently been using them as the special sauce powering component variations (--gap, --mode etc).
In our server-rendered components we’ve been using inline style attributes to apply variations via those properties, and this brings the advantage of no longer needing to create a CSS class per variation (e.g. one CSS class for each padding variation based on a spacing scale), which in turn keeps code and specificity simpler. However as I start using web components, custom properties will prove really handy there too. Not only can they be updated by JavaScript, but they also provide a bridge between your global CSS and your web component because they can “pierce the Shadow Boundary”, making it easier to style the Shadow DOM HTML in custom elements.
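As a sketch of the JavaScript side (the property name and spacing scale here are my own examples, not our actual component API):

```javascript
// Update a component variation by setting a custom property on the element.
// The component's internal styles (even inside its Shadow DOM) can consume
// var(--gap), since custom properties pierce the shadow boundary.
function setGap(element, scaleStep) {
  element.style.setProperty('--gap', `${scaleStep * 0.5}rem`);
}
```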
I’ll use BEM, but loosely
Naming and structuring CSS can be hard, and is a topic which really divides opinion. Historically I liked to keep it simple using the cascade, element and contextual selectors, plus a handful of custom classes. I avoided “object-oriented” CSS methodologies because I found them verbose and, if I’m honest, slightly “anti-CSS”. However it’s fair to say that in larger applications and on projects with many developers, this approach lacked a degree of structure, modularisation and predictability, so I gravitated toward BEM.
BEM’s approach is a pretty sensible one and, compared to the likes of SUIT, provides flexibility and good documentation. And while I’ve been keeping a watchful eye on new methodologies like CUBE CSS and can see that they’re choc-full of ideas, my feeling is that BEM remains the more robust choice.
It’s also important to me that BEM has the concept of a mix because this allows you to place multiple block classes on the same element so as to (for example) apply an abstract layout in combination with a more implementation-specific component class.
<div class="l-stack c-news-feed">
Where I’ll happily deviate from BEM is to favour using certain ARIA attributes as selectors (for example [aria-current=page] or [aria-expanded=true]) because this enforces good accessibility practice and helps create equivalence between the visual and non-visual experience. I’m also happy to use the universal selector (*), which is great for owl selectors, and I’m fine with adjacent sibling (and related) selectors.
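For example (class names illustrative):

```css
/* Highlight the current page link via ARIA state, not a modifier class */
.c-nav__link[aria-current="page"] {
  font-weight: 700;
}

/* Rotate the disclosure icon when its trigger reports expanded */
.c-disclosure__trigger[aria-expanded="true"] .c-disclosure__icon {
  transform: rotate(180deg);
}
```

The nice side effect is that if the styling breaks, it’s usually because the accessibility state is missing, which surfaces the real bug.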
Essentially I’m glad of the structure and maintainability that BEM provides but I don’t want a straitjacket that stops me from using my brain and applying CSS properly.
Resources for learning front-end web development
A designer colleague recently asked me what course or resources I would recommend for learning front-end web development. She mentioned React at the beginning but I suggested that it’d be better to start by learning HTML, CSS, and JavaScript. As for React: it’s a JavaScript library, so it makes sense to understand vanilla JS first.
For future reference, here are my tips.
Everything in one place
Google’s web.dev training resource has also been adding some excellent guides.
Another great one-stop shop is MDN Web Docs. Not only is MDN an amazing general quick reference for all HTML elements, CSS properties, JavaScript APIs etc but for more immersive learning there are also MDN’s guides.
Pay attention to HTML
One general piece of advice: whatever courses people choose (and whether or not they’re new to web development), they should make sure to learn HTML. People tend to underestimate how complicated, fast-moving and important HTML is.
Also, everything else – accessibility, CSS, JavaScript, performance, resilience – requires a foundation of good HTML. Think HTML first!
Learning CSS, specifically
CSS is as much about concepts and features – e.g. the cascade and specificity, layout, responsive design, typography, custom properties – as it is about syntax. In fact probably more so.
Most tutorials will focus on the concepts but not necessarily so much on practicalities like writing-style or file organisation.
Google’s Learn CSS course should be pretty good for the modern concepts.
Google also have Learn Responsive Design.
If you’re coming from a kinda non-CSS-oriented perspective, Josh W Comeau’s CSS for JavaScript Developers (paid course) could be worth a look.
If you prefer videos, you could check out Steve Griffith’s video series Learning CSS. Steve’s videos are comprehensive and well-paced, and the series covers a whole range of topics (over 100!), starting from basics like the CSS Box Model.
In terms of HTML and CSS writing style (BEM etc) and file organisation (ITCSS etc), here’s a (version of a) “style guide” that my team came up with for one of our documentation websites. I think it’s pretty good!
CSS and HTML Style Guide (to do: add link here)
For more on ITCSS and Harry Roberts’s thoughts on CSS best practices, see:
- Manage large projects with ITCSS
- Harry’s Skillshare course on ITCSS
- Harry’s CSS Guidelines rulebook
- Harry’s Discovr demo project
Learning JavaScript
I recommended choosing a course or courses from CSS-Tricks’ post Beginner JavaScript notes, especially as it includes Wes Bos’s Beginner JavaScript Notes + Reference.
If you like learning by video, check out Steve Griffith’s JavaScript playlist.
Once you start using JS in anger, I definitely recommend bookmarking Chris Ferdinandi’s Methods and APIs reference guide.
If you’re then looking for a lightweight library for applying sprinkles of JavaScript, you could try Stimulus.
Learning Responsive Design
I recommend Jeremy Keith’s Learn Responsive Design course on web.dev.
Lists of courses
You might choose a course or courses from CSS-Tricks’ post Where do you learn HTML and CSS in 2020?
Recommended books
- Resilient Web Design by Jeremy Keith. A fantastic wide-screen perspective on what we’re doing, who we’re doing it for, and how to go about it. Read online or listen as an audiobook.
- Inclusive Components by Heydon Pickering. A unique, accessible approach to building interactive components, from someone who’s done this for BBC, Bulb, Spotify.
- Every Layout by Heydon Pickering & Andy Bell. Introducing layout primitives, for handling responsive design in Design Systems at scale (plus so many insights about the front-end)
- Atomic Design by Brad Frost. A classic primer on Design Systems and component-composition oriented thinking.
- Practical SVG by Chris Coyier. Learn why and how to use SVG to make websites more aesthetically sharp, performant, accessible and flexible.
- Web Typography by Richard Rutter. Elevate the web by applying the principles of typography via modern web typography techniques.
Collapsible sections, on Inclusive Components
It’s a few years old now, but this tutorial from Heydon Pickering on how to create an accessible, progressively enhanced user interface comprised of multiple collapsible and expandable sections is fantastic. It covers using the appropriate HTML elements (buttons) and ARIA attributes, how best to handle icons (minimal inline SVG), turning it into a web component and plenty more besides.
Buttons and links: definitions, differences and tips
On the web buttons and links are fundamentally different materials. However some design and development practices have led to them becoming conceptually “bundled together” and misunderstood. Practitioners can fall into the trap of seeing the surface-level commonality that “you click the thing, then something happens” and mistakenly thinking the two elements are interchangeable. Some might even consider them as a single “button component” without considering the distinctions underneath. However this mentality causes our users problems and is harmful for effective web development. In this post I’ll address why buttons and links are different and exist separately, and when to use each.
Problematic patterns
Modern website designs commonly apply the appearance of a button to a link. For isolated calls to action this can make sense however as a design pattern it is often overused and under-cooked, which can cause confusion to developers implementing the designs.
Relatedly, it’s now common for Design Systems to have a Button component which includes button-styled links that are referred to simply as buttons. Unless documented carefully this can lead to internal language and comprehension issues.
Meanwhile developers have historically used faux links (<a href="#">) or, worse, a DIY clickable div as a trigger for JavaScript-powered functionality, where they should instead use native buttons.
These patterns in combination have given rise to a collective muddle over buttons and links. We need to get back to basics and talk about foundational HTML.
Buttons and anchors in HTML
There are two HTML elements of interest here.
Hyperlinks are created using the HTML anchor element (<a>). Buttons (by which I mean real buttons rather than links styled to appear as buttons) are implemented with the HTML button element (<button>).
Although a slight oversimplification, I think David MacDonald’s heuristic works well:
If it GOES someWHERE use a link
If it DOES someTHING use a button
A link…
- goes somewhere (i.e. navigates to another place)
- normally links to another document (i.e. page) on the current website or on another website
- can alternatively link to a different section of the same page
- historically and by default appears underlined
- when hovered or focused offers visual feedback from the browser’s status bar
- uses the “pointing hand” mouse pointer
- results in the browser making an HTTP GET request by default. It’s intended to get a page or resource rather than to change something
- offers specific right-click options to mouse users (open in new tab, copy URL, etc)
- typically results in an address which can be bookmarked
- can be activated by pressing the return key
- is announced by screen readers as “Link”
- is available to screen reader users within an overall Links list
A button…
- does something (i.e. performs an action, such as “Add”, “Update” or "Show")
- can be used as <button type=submit> within a form to submit the form. This is a modern replacement for <input type=submit /> and much better as it’s easier to style, allows nested HTML and supports CSS pseudo-elements
- can be used as <button type=button> to trigger JavaScript. This type of button is different to the one used for submitting a <form>. It can be used for any type of functionality that happens in-place rather than taking the user somewhere, such as expanding and collapsing content, or performing a calculation
- historically and by default appears in a pill or rounded rectangle
- uses the normal mouse pointer arrow
- can be activated by pressing return or space.
- implicitly gets the ARIA button role
- can be extended with further ARIA button-related states like aria-pressed
- is announced by screen readers as “Button”
- unlike a link is not available to screen reader users within a dedicated list
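Putting the distinction into markup, a minimal sketch (the URLs and labels are invented for illustration):

```html
<!-- Goes somewhere: a link, even if styled as a call to action -->
<a href="/pricing/">View pricing</a>

<!-- Does something within a form: a real submit button -->
<form action="/search" method="get">
  <label for="q">Search</label>
  <input type="search" id="q" name="q">
  <button type="submit">Search</button>
</form>

<!-- Does something in place: type=button so it never submits a form -->
<button type="button" aria-expanded="false">Show filters</button>
```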
Our responsibilities
It’s our job as designers and developers to use the appropriate purpose-built element for each situation, to present it in a way that respects conventions so that users know what it is, and to then meet their expectations of it.
Tips
- Visually distinguish button-styled call-to-action links from regular buttons, perhaps with a more pill-like appearance and a right-pointing arrow
- Avoid a proliferation of call-to-action links by linking content itself (for example a news teaser’s headline). Not only does this reduce “link or button?” confusion but it also saves space, and provides more accessible link text.
- Consider having separate Design System components for Button and ButtonLink to reinforce important differences.
- For triggering JavaScript-powered interactions I’ll typically use a button. However in disclosure patterns where the trigger and target element are far apart in the DOM, it can make sense to use a link as the trigger.
- For buttons which are reliant on JavaScript, it’s best to take a progressive enhancement approach and render them with client-side JavaScript rather than on the server. That way, if the client-side JavaScript is unsupported or fails, the user won’t be presented with a broken button.
Update: 23 November 2024
Perhaps a better heuristic than David MacDonald’s mentioned above, is:
Links are for a simple connection to a resource; buttons are for actions.
What I prefer about “connection to a resource” is that the “goes somewhere” definition of a link breaks down for anchors that instruct the linked resource to download (via the download attribute) rather than render in the browser, whereas this one holds up. I also like the inclusion of “simple”, because some buttons (like the submit button of a search form) might finish by taking you to a resource (the search results page), but that’s a complex action rather than a simple connection: you’re searching a database using your choice of search query.
References
- Get safe, by Jeremy Keith
- Buttons vs. Links, by Eric Eggert
- The Button Cheat Sheet, by Manuel Matuzović
- A complete guide to links and buttons on CSS-Tricks
- The Links vs Buttons Showdown, by Marcy Sutton
HTML with Superpowers (from Dave Rupert)
Here’s a great new presentation by Dave Rupert (of the Shop Talk show) in which he makes a compelling case for adopting Web Components. Not only do they provide the same benefits of encapsulation and reusability as components in proprietary JavaScript frameworks, but they also bring the reliability and portability of web standards, work without build tools, are suited to progressive enhancement, and may pave the way for a better web.
Dave begins by explaining that Web Components are based on not just a set of technologies but a set of standards, namely:
- Custom Elements (for example `<custom-alert>`)
- Shadow DOM
- ES Modules
- the HTML `<template>` element
Standards have the benefit that we can rely on them to endure and work into the future in comparison to proprietary technologies in JavaScript frameworks. That’s good news for people who like to avoid the burnout-inducing churn of learning and relearning abstractions. Of course the pace of technology change with web standards tends to be slower, however that’s arguably a price worth paying for cross-platform stability and accessibility.
Some of Web Components’ historical marketing problems are now behind them, since they are supported by all major browsers and reaching maturity. Furthermore, web components have two superpowers not found in other JavaScript component approaches:
Firstly, the Shadow DOM (which is both powerful and frustrating). The Shadow DOM provides encapsulation, and in progressive enhancement terms it enables the final, enhanced component output to serve as an upgrade from the baseline Light DOM HTML we provided in our custom element instance. It can be a little tricky or confusing to style, however, although there are ways.
Secondly, you can use web components standalone, i.e. natively, without any frameworks, build tools, or package managers. All that’s required to use a “standalone” component is to load its `<script type=module …>` element and then use the relevant custom element HTML on your page. This gets us closer to just writing HTML rather than wrestling with tools.
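As a sketch (component name and file path hypothetical), using a standalone web component can be as simple as:

```html
<script type="module" src="/js/generic-alert.js"></script>

<generic-alert>Your changes have been saved.</generic-alert>
```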
Dave highlights an education gap where developers focused on HTML, CSS, and Design Systems don’t tend to use Web Components. He suggests that this is likely as a result of most web component tutorials focusing on JavaScript APIs for JavaScript developers. However we can instead frame Web Component authoring as involving a layered approach that starts with HTML, adds some CSS, then ends by applying JavaScript.
Web Components are perfectly suited to progressive enhancement. And that progressive enhancement might for example apply lots of complicated ARIA-related accessibility considerations. I really like the Tabs example where one would create a `<generic-tabs>` instance which starts off with simple, semantic, resilient HTML that renders headings and paragraphs…
```html
<generic-tabs>
  <h2>About</h2>
  <div>
    <p>About content goes here. Lorem ipsum dolor sit amet…</p>
  </div>
  <h2>Contact</h2>
  <div>
    <p>Contact content goes here. Lorem ipsum dolor sit amet…</p>
  </div>
</generic-tabs>
```
…but the Web Component’s JavaScript would include a `template` and use this to upgrade the Light DOM markup into the final interactive tab markup…
```html
<generic-tabs>
  <h2 slot="tab" aria-selected="true" tabindex="0" role="tab" id="generic-tab-3-0" aria-controls="generic-tab-3-0" selected="">About</h2>
  <div role="tabpanel" aria-labelledby="generic-tab-3-0" slot="panel">
    <p>About content goes here. Lorem ipsum dolor sit amet…</p>
  </div>
  <h2 slot="tab" aria-selected="false" tabindex="-1" role="tab" id="generic-tab-3-1" aria-controls="generic-tab-3-1">Contact</h2>
  <div role="tabpanel" aria-labelledby="generic-tab-3-1" slot="panel" hidden>
    <p>Contact content goes here. Lorem ipsum dolor sit amet…</p>
  </div>
</generic-tabs>
```
The idea is that the component’s JS would handle all the complex interactivity and accessibility requirements of Tabs under the hood. I think if I were implementing something like Inclusive Components’ Tabs component these days I’d seriously consider doing this as a Web Component.
Later, Dave discusses the JavaScript required to author a Custom Element. He advises that in order to avoid repeatedly writing the same lengthy, boilerplate code on each component we might use a lightweight library such as his favourite, LitElement.
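For illustration, here’s a taste of the boilerplate involved in authoring a custom element natively, with no library (tag name hypothetical); it’s exactly this kind of repetition that LitElement exists to reduce:

```html
<script type="module">
  // A minimal native custom element: no library, no build step
  class CustomAlert extends HTMLElement {
    connectedCallback() {
      // Enhance the Light DOM markup once the element is attached
      this.setAttribute("role", "alert");
    }
  }
  customElements.define("custom-alert", CustomAlert);
</script>

<custom-alert>Changes saved.</custom-alert>
```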
Lastly, Dave argues that by creating and using web components we are working with web standards rather than building for a proprietary library. We are creating compatible components which pave the cowpaths for these becoming future HTML standards (e.g. a `<tabs>` element!). And why is advancing the web important? Because an easier web lowers barriers: less complexity, less tooling and setup, less gatekeeping—a web for everyone.
How to debug event listeners with your browser’s developer tools (on Go Make Things)
On the page, right-click the element you want to debug event listeners for, then click Inspect Element. In Chromium-based browsers like MS Edge and Google Chrome, click the Event Listeners tab in Developer Tools. There you’ll see a list of all of the events being listened to on that element. If you expand an event, you can see which element the listener is attached to and click a link to open the actual event listener in the JavaScript.
Motion One: The Web Animations API for everyone
A new animation library, built on the Web Animations API for the smallest filesize and the fastest performance.
This JavaScript-based animation library—which can be installed via npm—leans on an existing web API to keep its file size low and uses hardware accelerated animations where possible to achieve impressively smooth results.
For fairly basic animations, this might provide an attractive alternative to the heavier Greensock. The Motion docs do however flag the limitation that it can only animate “CSS styles”. They also say “SVG styles work fine”. I hope by this they mean SVG presentation attributes rather than inline CSS on an SVG, although it’s hard to tell. However their examples look promising.
The docs website also contains some really great background information regarding animation performance.
Testing ES modules with Jest
Here are a few troubleshooting tips to enable Jest, the JavaScript testing framework, to be able to work with ES modules without needing Babel in the mix for transpilation. Let’s get going with a basic set-up.
package.json
```json
…,
"scripts": {
  "test": "NODE_ENV=test NODE_OPTIONS=--experimental-vm-modules jest"
},
"type": "module",
"devDependencies": {
  "jest": "^27.2.2"
}
```
Note: take note of the crucial `"type": "module"` part as it’s the least-documented bit and your most likely omission!
After that set-up, you’re free to `import` and `export` to your heart’s content.
javascript/sum.js
```javascript
export const sum = (a, b) => {
  return a + b;
};
```
spec/sum.test.js
```javascript
import { sum } from "../javascript/sum.js";

test('adds 1 + 2 to equal 3', () => {
  expect(sum(1, 2)).toBe(3);
});
```
Hopefully that’ll save you (and future me) some head-scratching.
(Reference: Jest’s EcmaScript Modules docs page)
Harry Roberts says “Get Your Head Straight”
Harry Roberts (who created ITCSS for organising CSS at scale but these days focuses on performance) has just given a presentation about the importance of getting the content, order and optimisation of the `<head>` element right, including lots of measurement data to back up his claims. Check out the slides: Get your Head Straight
While some of the information about asset loading best practices is not new, the stuff about ordering of `head` elements is pretty interesting. I’ll be keeping my eyes out for a video recording of the presentation though, as it’s tricky to piece together his line of argument from the slides alone.
However one really cool thing he’s made available is a bookmarklet for evaluating any website’s `<head>`:
— ct.css
Practical front-end performance tips
I’ve been really interested in the subject of Web Performance since I read Steve Souders’ book High Performance Websites back in 2007. Although some of the principles in that book are still relevant, it’s also fair to say that a lot has changed since then so I decided to pull together some current tips. Disclaimer: This is a living document which I’ll expand over time. Also: I’m a performance enthusiast but not an expert. If I have anything wrong, please let me know.
Inlining CSS and/or JavaScript
The first thing to know is that both CSS and JavaScript are (by default) render-blocking, meaning that when a browser encounters a standard `.css` or `.js` file in the HTML, it waits until that file has finished downloading before rendering anything else.
The second thing to know is that there is a “magic file size” when it comes to HTTP requests: because of TCP slow start, the first roundtrip of a new connection can only deliver around 14 kb of data. So if a file is larger than 14 kb, it requires multiple roundtrips.
If you have a lean page and minimal CSS and/or JavaScript, such that the page in combination with its CSS/JS content would weigh 14 kb or less after minifying and gzipping, you can achieve better performance by inlining your CSS and/or JavaScript into the HTML. This is because there’d be only one request, thereby allowing the browser to get everything it needs to start rendering the page from that single request. So your page is gonna be fast.
If your page including CSS/JS is over 14 kb after minifying and gzipping then you’d be better off not inlining those assets. It’d be better for performance to link to external assets and let them be cached rather than having a bloated HTML file that requires multiple roundtrips and doesn’t get the benefit of static asset caching.
Avoid CSS @import
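An `@import` inside a stylesheet isn’t discovered until that stylesheet has downloaded and been parsed, creating a sequential, render-blocking request chain; multiple `<link>` elements can download in parallel instead. A sketch of the pattern to avoid (file name hypothetical):

```css
/* Avoid: the request for theme.css can't even start until the
   stylesheet containing this line has downloaded and been parsed */
@import url("theme.css");
```

Prefer a second `<link rel="stylesheet" href="theme.css">` in the HTML alongside your main stylesheet.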
JavaScript modules in the head
Native JavaScript modules are included on a page using the following:
<script type="module" src="main.js"></script>
Unlike standard `<script>` elements, module scripts are deferred (non render-blocking) by default. Rather than placing them before the closing `</body>` tag I place them in the `<head>` so as to allow the script to be downloaded early and in parallel with the DOM being processed. That way, the JavaScript is already available as soon as the DOM is ready.
Background images
Sometimes developers implement an image as a CSS background image rather than a “content image”, either because they feel it’ll be easier to manipulate that way—a typical example being a responsive hero banner with overlaid text—or simply because it’s decorative rather than meaningful. However it’s worth being aware of how that impacts the way that image loads.
Outgoing requests for images defined in CSS rather than HTML won’t start until the browser has created the Render Tree. The browser must first download and parse the CSS then construct the CSSOM before it knows that “Element X” should be visible and has a background image specified, in order to then decide to download that image. For important images, that might feel too late.
As Harry Roberts explains it’s worth considering whether the need might be served as well or better by a content image, since by comparison that allows the browser to discover and request the image nice and early.
By moving the images to <img> elements… the browser can discover them far sooner—as they become exposed to the browser’s preload scanner—and dispatch their requests before (or in parallel to) CSSOM completion
However if it still makes sense to use a background image and performance is important, Harry recommends including an accompanying hidden image inline or preloading it in the `<head>` via `link rel=preload`.
Preload
From MDN’s preload docs, preload allows:
specifying resources that your page will need very soon, which you want to start loading early in the page lifecycle, before browsers' main rendering machinery kicks in. This ensures they are available earlier and are less likely to block the page's render, improving performance.
The benefits are most clearly seen on large and late-discovered resources. For example:
- Resources that are pointed to from inside CSS, like fonts or images.
- Resources that JavaScript can request such as JSON and imported scripts.
- Larger images and videos.
I’ve recently used the following to assist performance of a large CSS background image:
```html
<link rel="preload" href="bg-illustration.svg" as="image" media="(min-width: 60em)">
```
Self-host your assets
Using third-party hosting services for fonts or other assets no longer offers the previously-touted benefit of the asset potentially already being in the user’s browser cache. Cross-domain caching has been disabled in all major browsers via cache partitioning.
You can still take advantage of the benefits of CDNs for reducing network latency, but preferably as part of your own infrastructure.
Miscellaneous
Critical CSS is often a wasted effort due to CSS not being a bottleneck, so is generally not worth doing.
References
- Inlining literally everything on Go Make Things
- MDN’s guide to native JavaScript modules
- How and when browsers download images by Harry Roberts
Progressively enhanced burger menu tutorial by Andy Bell
Here’s a smart and comprehensive tutorial from Andy Bell on how to create a progressively enhanced narrow-screen navigation solution using a custom element. Andy also uses `Proxy` for “enabled” and “open” state management, `ResizeObserver` on the custom element’s containing `header` for a Container Query-like solution, and puts some serious effort into accessible focus management.
One thing I found really interesting was that Andy was able to style child elements of the custom element (as opposed to just elements which were present in the original unenhanced markup) from his global CSS. My understanding is that you can’t get styles other than inheritable properties through the Shadow Boundary so this had me scratching my head. I think the explanation is that Andy is not attaching the elements he creates in JavaScript to the Shadow DOM but rather rewriting and re-rendering the element’s `innerHTML`. This is an interesting approach and solution for getting around web component styling issues. I see elsewhere online that the `innerHTML`-based approach is frowned upon, however Andy doesn’t “throw out” the original markup but instead augments it.
Adapting Stimulus usage for better Progressive Enhancement
A while back, Jake Archibald tweeted:
Don't render buttons on the server that require JS to work.
The idea is that user interface elements which depend on JavaScript (such as buttons) should be rendered on the client-side, i.e. with JavaScript.
In the context of a progressive enhancement mindset, this makes perfect sense. Given the fragility of a JavaScript-dependent approach, our minimum viable experience should work without JavaScript and so should not include script-triggering buttons which might not work. The JavaScript which applies the enhancements should not only listen for and act upon button events, but should also be responsible for actually rendering the button.
This is how I used to build JavaScript interactions as standard, however sadly due to time constraints and framework conventions I don’t always follow this best practice on all projects.
At work, we use Stimulus. Stimulus has a pretty appealing philosophy:
Stimulus is designed to enhance static or server-rendered HTML—the “HTML you already have”
However in their examples they always render buttons on the server; they always assume the JavaScript-powered experience is the baseline experience. I’ve been pondering whether that could easily be adapted toward better progressive enhancement and it seems it can.
My hunch was that I should use the `connect()` lifecycle method to render a `button` into the component (and introduce any other script-dependent markup adjustments) at the earliest opportunity. I wasn’t sure whether creating new DOM elements at this point and fitting them with Stimulus-related attributes such as `action` and `target` would make them available via the standard Stimulus APIs like server-rendered elements, but was keen to try. I started by checking if anyone was doing anything similar and found a thread where Stimulus contributor Javan suggested that DIY target creation is fine.
I then gave that a try and it worked! Check out my pen Stimulus with true progressive enhancement. It’s a pretty trivial example for now, but proves the concept.
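A sketch of the idea (controller identifier, target name and CDN URL are assumptions, not taken from my pen): the `connect()` method injects the button, complete with its `data-action` attribute, so no script-dependent element is ever server-rendered:

```html
<div data-controller="toggle">
  <div data-toggle-target="panel" hidden>Details go here…</div>
</div>

<script type="module">
  import { Application, Controller } from "https://unpkg.com/@hotwired/stimulus/dist/stimulus.js";

  const application = Application.start();

  application.register("toggle", class extends Controller {
    static targets = ["panel"];

    connect() {
      // Only now that the JavaScript has run do we render the
      // script-dependent trigger button
      const button = document.createElement("button");
      button.type = "button";
      button.textContent = "Show details";
      button.setAttribute("data-action", "click->toggle#toggle");
      this.element.prepend(button);
    }

    toggle() {
      this.panelTarget.hidden = !this.panelTarget.hidden;
    }
  });
</script>
```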
Astro
Astro looks very interesting. It’s in part a static site builder (a bit like Eleventy) but it also comes with a modern (revolutionary?) developer experience which lets you author components as web components or in a JS framework of your choice but then renders those to static HTML for optimal performance. Oh, and as far as I can tell there’s no build pipeline!
Astro lets you use any framework you want (or none at all). And if most sites only have islands of interactivity, shouldn’t our tools optimize for that?
People have been posting some great thoughts and insights on Astro already, for example:
- Chris Coyier’s review
- Review in CSS-Tricks Newsletter #255 including links to Chris’s Astro demo site
- The web is too damn complex, by Robin Rendle
- Astro’s introductory blog post
(via @css)
clipboard.js - Copy to clipboard without Flash
Here’s a handy JS package for “copy to clipboard” functionality that’s lightweight and installable from npm.
It also appears to have good legacy browser support plus a means for checking/confirming support too, which should assist if your approach is to only add a “copy” button to the DOM as a progressive enhancement.
(via @chriscoyier)
Inspire.js
Lean, hackable, extensible slide deck framework
I’ve been on the lookout for a lightweight, web standards based slide deck solution for a while and this one from Lea Verou could well be perfect.
It has keyboard navigation, video and code demo support, an index page and much more… and it’s all powered by straightforward HTML, CSS and JavaScript. I need to give this a spin!
Dragula - Browser drag-and-drop so simple it hurts
Here’s a nice, lightweight and framework-free drag and drop UI solution, that’s sure to come in handy.
Drag and drop so simple it hurts
(via @mxbck)
Container Queries in Web Components | Max Böck
Max’s demo is really clever and features lots of interesting web component related techniques.
I came up with this demo of a book store. Each of the books is draggable and can be moved to one of three sections, with varying available space. Depending on where it is placed, different styles will be applied to the book.
Some of the techniques I found interesting included:
- starting with basic HTML for each book and its image, title, and author elements rather than an empty custom element, thereby providing a resilient baseline
- wrapping each book in a custom `book-element` tag (which the browser would simply treat like a `div` in the worst case scenario)
- applying the `slot` attribute to each of the nested elements, for example `slot="title"`
- including a `template` with `id="book-element"` at the top of the HTML. This centralises the optimal book markup, which makes for quicker, easier, and less disruptive maintenance. (A `template` is parsed but not rendered by the browser. It is available solely to be referenced and used by JavaScript)
- including slots within the `template`, such as `<slot name="title">`
- putting a `style` block within the `template`. These styles target the book component only, and include container query driven responsiveness
- targeting the `<book-element>` wrapper element in CSS via the `:host` selector, and applying `contain` to set it as a container query context
- targeting a `slot` in the component CSS using (for example) `::slotted(img)`
Thoughts
Firstly, in the basic HTML/CSS, I might ensure images are `display: block` and use `div` rather than `span` for a better baseline appearance should JavaScript fail.
Secondly, even though this tutorial is really nice, I still find myself asking: why use a Web Component to render a book rather than a server-side solution when the latter removes the JS dependency? Part of the reason is no doubt developer convenience—people want to build component libraries in JavaScript if that’s their language of choice. Also, it requires less backend set-up and leads to a more portable stack. And back-end tools for component-based architectures are generally less mature and feature-rich than those for the front-end.
One Web Component specific benefit is that Shadow DOM provides an encapsulation mechanism to style, script, and HTML markup. This encapsulation provides private scope that both prevents the content of the component from being affected by the external document, and keeps its CSS and JS from leaking out… which might be nice for avoiding the namespacing you’d otherwise have to do.
I have a feeling that Web Components might make sense for some components but be neither appropriate nor required for others. Therefore just because you use Web Components doesn’t mean that you suddenly need to feel the need to write or refactor every component that way. It’s worth bearing in mind that client-side JavaScript based functionality comes with a performance cost—the user needs to wait for it to download. So I feel there might be a need to exercise some restraint. I want to think about this a little more.
Other references
Ruthlessly eliminating layout shift on netlify.com, by Zach Leatherman
I love hearing about clever front-end solutions which combine technologies and achieve multiple goals. In Zach’s post we hear how Netlify’s website suffered from layout shift when conditionally rendering dismissible promo banners, and how he addressed this by rethinking the problem and shifting responsibilities around the stack.
Here’s my summary of the smart ideas covered in the post:
- decide on the appropriate server-rendered content… in this case showing rather than hiding the banner, making the most common use case faster to load
- have the banner “dismiss” button’s event handling script store the banner’s `href` in the user’s localStorage as an identifier accessible on return visits
- process lightweight but critical JavaScript logic early in the `<head>`… in this case a check for this banner’s identifier existing in localStorage
- under certain conditions – in this case when the banner was previously seen and dismissed – set a “state” class (`banner--hide`) on the `<html>` element, leading to the component being hidden seamlessly by CSS
- build the banner as a web component, the first layer of which being a custom element `<announcement-banner>` and the second a JavaScript class to enhance it
- delegate responsibility for presenting the banner’s “dismiss” button to the same script responsible for the component’s enhancements, meaning that a broken button won’t be presented if that script were to break.
So much to like in there!
Here are some further thoughts the article provoked.
Web components FTW
It feels like creating a component such as this one as a web component leads to a real convergence of benefits:
- tool-free, async loading of the component JS as an ES module
- fast, native element discovery (no need for a `document.querySelector`)
- enforces using a nice, idiomatic class providing encapsulation and high-performing native callbacks
- resilience and progressive enhancement by putting all your JS-dependent stuff into the JS class and having that enhance your basic custom element. If that JS breaks, you still have the basic element and won’t present any broken elements.
Even better, you end up with a framework-independent, standards-based component that you could share with others for reuse elsewhere, just like Zach did.
Multiple banners
I could see there being a case where there are multiple banners during the same time period. I guess in that situation the localStorage `banner` value could be a stringified object rather than a simple, single-URL string.
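Sketching that idea (storage key and function names are hypothetical), the value could hold an array of dismissed banner URLs serialised as JSON:

```javascript
// Hypothetical sketch: tracking multiple dismissed banners in one
// localStorage entry by storing an array of hrefs as JSON.
const STORAGE_KEY = "dismissed-banners";

function getDismissed(storage) {
  // Guard against a missing or corrupted entry
  try {
    const parsed = JSON.parse(storage.getItem(STORAGE_KEY));
    return Array.isArray(parsed) ? parsed : [];
  } catch {
    return [];
  }
}

function dismissBanner(storage, href) {
  const seen = getDismissed(storage);
  if (!seen.includes(href)) {
    seen.push(href);
    storage.setItem(STORAGE_KEY, JSON.stringify(seen));
  }
}

function isDismissed(storage, href) {
  return getDismissed(storage).includes(href);
}
```

In the browser you’d pass `window.localStorage` as `storage`; injecting it also makes the logic easy to unit test with a stub.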
Setting context on the root
It’s really handy to have a way to exert just-in-time control over the display of a server-rendered element in a way that avoids flashes of content… and adding a class to the `<html>` element offers that. In this approach, we run the small amount of JavaScript required to test a local condition (e.g. checking for a value in localStorage) really early. That lets us process our conditional logic before the element is rendered… although this also means that it’s not yet available in the DOM for direct manipulation. But adding a class to the HTML element means that we can pre-prepare CSS to use that class as a contextual selector for hiding the element.
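Here’s a minimal sketch of the technique (class name and storage key hypothetical), placed early in the `<head>` before the banner markup is parsed:

```html
<script>
  // Runs before first render: if this banner was dismissed on a
  // previous visit, flag it on the root element straight away
  if (localStorage.getItem("dismissed-banner")) {
    document.documentElement.classList.add("banner--hide");
  }
</script>
<style>
  /* Pre-prepared contextual selector: no flash of content */
  .banner--hide announcement-banner {
    display: none;
  }
</style>
```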
We’re already familiar with the technique of placing classes on the root element from libraries like modernizr and some font-loading approaches, but this article serves as a reminder that we can employ it whenever we need it.
Handling the close button
Zach’s approach to handling the banner’s dismiss button was interesting. He makes sure that it’s not shown unless the web component’s JavaScript runs successfully which is great, but rather than inject it with JavaScript he includes it in the initial HTML but hidden with CSS, and his method of hiding is `opacity`.
We use opacity to toggle the close button so that it doesn’t reflow the component when it’s enabled via JavaScript.
I think what Zach’s saying is that the alternatives – inserting the button with JS, or toggling the `hidden` attribute or its CSS counterpart `display: none` – would affect geometry causing the browser to perform layout… whereas modifying opacity does not.
I love that level of diligence! Typically I prefer to delegate responsibility for inserting JS-dependent buttons to JavaScript because in comparison to including a button in the server-rendered HTML then hiding it, it feels more resilient and a more maintainable separation of concerns. However as always the best solution depends on the situation.
If I were going down Zach’s route I think I’d replace `opacity` with `visibility`, since the latter hiding method removes the hidden element from the accessibility tree – which feels more accessible – while still avoiding triggering the reflow that `display` would.
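In CSS terms (class names hypothetical) that would look like:

```css
/* Hidden by default, and removed from the accessibility tree;
   the box keeps its space, so toggling causes no reflow */
.banner__close {
  visibility: hidden;
}

/* The component's JavaScript adds this class once it has run */
.banner--enhanced .banner__close {
  visibility: visible;
}
```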
Side-thoughts
In a server-side scripted application – one using Rails or PHP, for example – you could alternatively handle persisting state with cookies rather than localStorage… allowing you to test for the presence of the cookie on the server then handle conditional rendering of the banner on the server too, rather than needing classes which trigger hiding. I can see an argument for that. Thing is though, not everyone’s working in that environment. Zach has provided a standalone solution.
References
- Zach’s Herald of the dog web component
- CSS Triggers of reflow and repaint
- Minimising layout thrashing
Observer APIs in a nutshell
I’ve played with the various browser Observer APIs (`IntersectionObserver`, `ResizeObserver` and `MutationObserver`) a little over the last few years—for example using `ResizeObserver` in a container query solution for responsive grids. But in all honesty their roles, abilities and differences haven’t yet fully stuck in my brain. So I’ve put together a brief explainer for future reference.
Intersection Observer
Lets you watch for when an element of your choice intersects with a root element of your choice—typically the viewport—and then take action in response.
So you might watch for a `div` that’s way down the page entering the viewport as a result of the user scrolling, then act upon that by applying a class which animates that `div`’s opacity from `0` to `1` to make it fade in.
Here’s how it works:
- Instantiate a new `IntersectionObserver` object, passing in firstly a callback function and secondly an options object which specifies your root element (usually the viewport, or a specific subsection of it).
- Call `observe` on your instance, passing in the element you want to watch. If you have multiple elements to watch, you could call `observe` repeatedly in a loop through the relevant `NodeList`.
- In the callback function, add the stuff you want to happen in response to “intersecting” and “no longer intersecting” events.
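Those steps might look like this in practice (class names hypothetical):

```html
<script>
  // Fade elements in as they enter the viewport
  const observer = new IntersectionObserver(
    (entries) => {
      for (const entry of entries) {
        entry.target.classList.toggle("is-visible", entry.isIntersecting);
      }
    },
    { root: null, threshold: 0.25 } // root: null means the viewport
  );

  document.querySelectorAll(".fade-in").forEach((el) => observer.observe(el));
</script>
```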
Mutation Observer
Lets you watch for changes to the attributes or content of DOM elements then take action in response.
You might use this if you have code that you want to run if and when an element changes because of another script.
Here’s how it works:
- Your typical starting point is that you already have one or more event listeners which modify the DOM in response to an event.
- Instantiate a new `MutationObserver` object, passing in a callback function.
- The callback function will be called every time the DOM is changed.
- Call `observe` on your instance, passing in as first argument the element to watch and as second argument a config object specifying what types of changes you’re interested in – for example, you might only care about changes to specific attributes.
- Your callback function receives an array of `MutationRecord` objects – one for each change that has just taken place – which you can loop through and act upon.
Resize Observer
Lets you watch for changes to an element’s size – for example it crossing a given threshold – then take action in response.
For example you might add a class of `wide` to a given container only when it is wider than `60em` so that new styles are applied. This is a way of providing container query capability while we wait for that to land in CSS.
Or you might load additional, heavier-weight media in response to a certain width threshold because you feel you can assume a device type that indicates the user is on wifi. Adding functionality rather than applying styles is something we could not achieve with CSS alone.
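A sketch of the container-query-style use case (class name and threshold hypothetical):

```html
<script>
  // Add a class when the observed container crosses a width threshold
  const observer = new ResizeObserver((entries) => {
    for (const entry of entries) {
      // 60em ≈ 960px at the default 16px root font size
      entry.target.classList.toggle("wide", entry.contentRect.width >= 960);
    }
  });

  observer.observe(document.querySelector(".teaser-grid"));
</script>
```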