
Tagged “javascript”

Lightning Fast Web Performance course by Scott Jehl

I purchased Scott’s course back in 2021 and immediately liked it, but hadn’t found the space to complete it until now. Anyway, I’m glad I have as it’s well structured and full of insights and practical tips. I’ll use this post to summarise my main takeaways.

Having completed the course I have much more rounded knowledge in the following areas:

  • Why performance matters to users and business
  • Performance-related metrics and “moments”: defining fast and slow
  • Identifying performance problems: the tools and how to use them
  • Making things faster, via various good practices and fixes

I’ll update this post soon to add some key bullet-points for each of the above headings.

How would you build Wordle with just HTML & CSS? by Scott Jehl

Scott proposes an interview question relating to web standards and intelligent use of JavaScript.

How would you attempt to build Wordle (...or some other complex app) if you could only use HTML and CSS? Which features of that app would make more sense to build with JavaScript than with other technologies? And, can you imagine a change or addition to the HTML or CSS standard that could make any of those features more straight-forward to build?

Discussing any approaches to this challenge will reveal the candidate's broad knowledge of web standards–including new and emerging HTML and CSS features–and as a huge benefit, it would help select for the type of folks who are best suited to lead us out of the JavaScript over-reliance problems that are holding back the web today.

I hate interviews (and the mere thought of interviews), but I could handle being asked a question like this.

Use the dialog element (reasonably), by Scott O’Hara

Here’s an important update on native modal dialogues. TL;DR – it’s now OK to use dialog.

Last year I posted that Safari now supported the HTML dialog element meaning that we were within touching distance of being able to adopt it with confidence. My caveat was:

However first I think we’d need to make an informed decision regarding our satisfaction with support, based on the updated advice in Scott O’Hara’s article Having an Open Dialog.

(Accessibility expert Scott O’Hara has been diligently testing the dialog element for years.)

However the happy day has arrived. The other day Scott posted Use the dialog element (reasonably). It includes this advice:

I personally think it’s time to move away from using custom dialogs, and to use the dialog element instead.

That’s an important green-light!

And this of course means that we can stop DIYing modal dialogues from divs plus super-complicated scripting and custom ARIA, and instead let a native HTML element do most of the heavy lifting for us.

From a Design System perspective, I’d previously suggested to my team that when we revisit our Modal component we should err toward a good custom dialogue library first, however I’m now likely to recommend we go for native dialog instead. Which is great!

Web Components Guide

This new resource on Web Components from Keith Cirkel and Kristján Oddsson of GitHub (and friends) is looking great so far.

As I recently tweeted, I love that it demos “vanilla” Web Components first rather than reaching for a library.

So far I’ve found that the various web component development frameworks (Lit etc) are cool, but generally I’d like to see more demos that create components without using abstractions. The frameworks include dependencies, opinions and proprietary syntax that (for me at least) make an already tricky learning curve more steep so they’re not (yet) my preferred approach. Right now I want to properly understand what’s going on at the web standards level.

Aside from tutorials this guide also includes a great Learn section which digs into JavaScript topics such as Classes and Events and why these are important for Web Components.

I hope that in future the guide will cover testing Web Components too.

Lastly, I like the Embed Mastodon Toot web component tutorial, and to help it sink in (and save the code for posterity) I’ve chucked the code into a pen.

Lean “plugin subscription form” by Chris Ferdinandi

I enjoyed this two-part tutorial from Chris arising from his critique of a subscription form plugin which includes the entire React library to achieve what could be done with a lightweight HTML and vanilla JavaScript solution. Chris advocates a progressively-enhanced approach. Instead of rendering the form with JavaScript he renders it in HTML and argues that not only is there no need for the former approach – because forms natively work without JavaScript – but also it only introduces fragility where we could provide resilience.

Part one: Using a wrecking ball when a hammer would suffice

Part two: More HTML, less JavaScript

I also recreated the front-end parts in a codepen to help this sink in.

Lastly, as someone who always favours resilience over fragility I wanted to take a moment to consider the part that Chris didn’t cover – why JavaScript is required at all. I guess it’s because, being a plugin, this is intended for portability and you as author have decided you want the error and success messaging to be “white-labelled” on the consuming website rather than for the user to be taken to a separate website. You don’t know the context’s stack so to provide a universally-supported solution you use JavaScript to handle the messaging, which then means you need to use JS to orchestrate everything else – preventing default submission, validating, fetch-based submission and API response handling.

Full disclosure

Whether I’m thinking about inclusive hiding, hamburger menus or web components one UI pattern I keep revisiting is the disclosure widget. Perhaps it’s because you can use this small pattern to bring together so many other wider aspects of good web development. So for future reference, here’s a braindump of my knowledge and resources on the subject.

A disclosure widget is for collapsing and expanding something. You might alternatively describe that as hiding and showing something. The reason we collapse content is to save space. The thinking goes that users have a finite amount of screen real estate (and attention) so we might want to reduce the space taken up by secondary content, or finer details, or repeated content so as to push the page’s key messages to the fore and save the user some scrolling. With a disclosure widget we collapse detailed content into a smaller snippet that acts as a button the user can activate to expand the full details (and collapse them again).

Adrian Roselli’s article Disclosure Widgets is a great primer on the available native and custom ARIA options, how to implement them and where each might be appropriate. Adrian’s article helpfully offers that a disclosure widget (the custom ARIA flavour) can be used as a base in order to achieve some other common UI requirements so long as you’re aware there are extra considerations and handle those carefully. Examples include:

  • link and disclosure widget navigation
  • table with expando rows
  • accordion
  • hamburger navigation
  • highly custom select alternatives when listbox is inappropriate because it needs to include items that do not have the option role
  • a toggle-tip

Something Adrian addresses (and I’ve previously written about) is the question of which collapse/expand use cases can safely use the native details element. There’s a lot to mention, but since I’d prefer to present a simple heuristic let’s go meta here and use a details:

Use details for basic narrative content and panels but otherwise use a DIY disclosure

It’s either a bad idea or at the very least “challenging” to use a native `details` for:

  • a hamburger menu
  • an accordion

In styling terms it’s tricky to use a `details` for:

  • a custom appearance
  • animation

The above styling issues are perhaps not insurmountable. It depends on what level of customisation you need.

Note to self: add more detail and links to this section when I get the chance.

I’ve also noticed that Adrian has a handy pen combining code for numerous disclosure widget variations.

Heydon Pickering’s Collapsible sections on Inclusive Components is excellent, and includes consideration of progressive enhancement and an excellent web component version. It’s also oriented toward multiple adjacent sections (an accordion although it doesn’t use that term) and includes fantastic advice regarding:

  • appropriate markup including screen reader considerations
  • how best to programmatically switch state (such as open/closed) within a web component
  • how to make that state accessible via an HTML attribute on the web component (e.g. <toggle-section open=true>)
  • how that attribute is then accessible outside the component, for example to a button and script that collapses and expands all sections simultaneously

There’s my DIY Disclosure widget demo on Codepen. I first created it to use as an example in a talk on Hiding elements on the web, but since then its implementation has taken a few twists and turns. In its latest incarnation I’ve taken some inspiration from the way Manuel Matuzovic’s navigation tutorial uses a template in the markup to prepare the “hamburger toggle” button.

I’ve also been reflecting on how the hidden attribute’s boolean nature is ideal for a toggle button in theory – it’s semantic and therefore programmatically conveys state – but how hiding with CSS can be more flexible, chiefly because hidden (like CSS’s display) is not animatable. If you hide with CSS, you could opt to use visibility: hidden (perhaps augmented with position so as to avoid taking up space while hidden) which similarly hides from everyone in terms of accessibility.

As it happens, the first web component I created was a disclosure widget. It could definitely be improved by some tweaks and additions along the lines of Heydon Pickering’s web component mentioned above. I’ll try to do that soon.

Troubleshooting

For some disclosure widget use cases (such as a custom link menu often called a Dropdown) there are a few events that typically should collapse the expanded widget. One is the escape key. Another is when the user moves focus outside the widget. One possible scenario is that the user might activate the trigger button, assess the expanded options and subsequently decide none are suitable and move elsewhere. The act of clicking/tapping elsewhere should collapse the widget. However there’s a challenge. In order for the widget to fire a blur (or focusout) event that an event listener can act upon, it must have received focus in the first place. And in Safari – unlike other browsers – buttons do not automatically receive focus when activated. (I think Firefox used to be the same but was updated.) The workaround is to set focus manually via focus() in your click event listener for the trigger button.
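Here’s a minimal sketch of that workaround, using hypothetical class names and the aria-expanded pattern:

const widget = document.querySelector('.menu');
const trigger = widget.querySelector('.menu__trigger');
const panel = widget.querySelector('.menu__panel');

trigger.addEventListener('click', () => {
  const expanded = trigger.getAttribute('aria-expanded') === 'true';
  trigger.setAttribute('aria-expanded', String(!expanded));
  panel.hidden = expanded;
  // Safari doesn't focus buttons on click, so focus manually to
  // guarantee a focusout fires later when focus moves elsewhere
  trigger.focus();
});

widget.addEventListener('focusout', (event) => {
  // relatedTarget is the element gaining focus (null if none);
  // collapse only when focus has left the widget entirely
  if (!widget.contains(event.relatedTarget)) {
    trigger.setAttribute('aria-expanded', 'false');
    panel.hidden = true;
  }
});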

WebC

WebC, the latest addition to the Eleventy suite of technologies, is focused on making Web Components easier to use. I have to admit, it took me a while to work out the idea behind this one, but I see it now and it looks interesting.

Here are a few of the selling points for WebC, as I see them.

With WebC, web components are the consumer developer’s authoring interface. So for example you might add a badge component into your page with <my-badge text='Lorem ipsum'></my-badge>.

Using web components is great – especially for Design Systems – because unlike with proprietary component frameworks, components are not coupled to a single technology stack but rather are platform and stack-agnostic, meaning they could be reused across products.

From the component creator perspective: web components normally (and frustratingly) require you to write accompanying JavaScript and to depend on JavaScript availability, even for components that are essentially “JavaScript-free”. With WebC this is not the case: you can create “HTML-only” components and rely on them being rendered to screen, because WebC takes your .webc component file and compiles it to the simplest output HTML required.

WebC is progressive enhancement friendly. As mentioned above, your no-JS HTML foundation will render. But going further, you can also colocate your foundational baseline beside the scripts and CSS that enhance it within the same file.

This ability to write components within a single file (rather than separate template and script files) is pretty nice in general.

There are lots of other goodies too such as the ability to scope your styles and JavaScript to your component, and to set styles and JS to be aggregated in such a way that your layout file can optionally load only the styles and scripts required for the components in use on the current page.


The ARIA presentation role

I’ve never properly understood when you would need to use the ARIA presentation role. This is perhaps in part because it is often used inappropriately, for example in situations where aria-hidden would be more appropriate. However I think the penny has finally dropped.

It’s fairly nuanced stuff so I’ll forgive myself this time!

You might use role=presentation in JavaScript which progressively enhances a basic JS-independent HTML foundation into a more advanced JS-dependent experience. In such cases you might want to reuse the baseline HTML but remove semantics which are no longer appropriate.

As an example, Inclusive Components Tabbed Interfaces starts with a Table of Contents marked up as an unordered list of links. However in enhanced mode the links take on a new role as tabs in a tablist so role=presentation is applied to their containing <li> elements so that the tab list is announced appropriately and not as a plain list.
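Here’s a simplified sketch of that enhanced state (the ids and aria-controls plumbing are omitted for brevity, and this is my illustration rather than Heydon’s exact code):

<ul role="tablist">
  <!-- role=presentation removes the list item semantics, which no longer apply -->
  <li role="presentation">
    <a role="tab" href="#section-one" aria-selected="true">Section one</a>
  </li>
  <li role="presentation">
    <a role="tab" href="#section-two">Section two</a>
  </li>
</ul>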

Thoughts on HTML over the wire solutions

Max Böck just tweeted his excitement about htmx:

htmx (and similar "HTML over the wire" approaches) could someday replace Javascript SPAs. Cool to see a real-world case study on that, and with promising results

There’s similar excitement at my place of work (and among the Rails community in general) about Turbo. It promises:

the speed of an SPA without having to write any JS

These new approaches are attractive because they let us create user interfaces that update the current page sans reload – usually communicating with the server to get swap-in content or update the database – but by writing HTML rather than JavaScript. Developers have long wished for HTML alone to handle common interactive patterns so a set of simple, declarative conventions really appeals. Writing less JavaScript feels good for performance and lightening maintenance burden. Furthermore the Single Page App (SPA) approach via JS frameworks like React and Vue is heavier and more complicated than most situations call for.

However I have some concerns.

I’m concerned about the “no javascript” language being used, for example in articles titled Hotwire: reactive Rails with no JavaScript. Let’s be clear about what Turbo and htmx are in simple material terms. As Reddit user nnuri puts it in do Hotwire and htmx have a commitment to accessibility? the approach is based on:

a JavaScript library in the client's browser that manipulates the DOM.

Your UI that uses htmx or Turbo is dependent on that JS library. And JS is the most brittle part of the stack, so you need to think about resilience and access. The htmx docs have a section on progressive enhancement but I’m not convinced it’s part of the design.

Secondly if you have client-side JS that changes content and state, that brings added accessibility responsibilities. When the content or state of a page is changed, you need to convey this programmatically, and we normally need to handle this in JavaScript. Do these solutions cover the requirements of accessible JS components, or even let you customise them to add the necessary state changes yourself? For example when replacing HTML you need to add aria-live (see also Léonie Watson on accessible forms with ARIA live regions).
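As a minimal sketch of the kind of thing I mean (the status element and message are hypothetical):

<div aria-live="polite" class="status"></div>

// After the library swaps in new content, announce it to screen readers
document.querySelector('.status').textContent = 'Results updated.';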

Another concern relates to user expectations. Just because you can do something doesn’t mean you should. For example, links should not be used to do the job of a button. If you do, they need role=button however this is inadvisable because you then need to recreate (and will likely miss) the other features of a button, and will also likely confuse people due to mismatches between perceived affordance and actual functionality. Additionally, as Jeremy Keith has written, links should not delete things.

In general I feel the message of the new HTML over the wire solutions is heavily weighted toward developer experience and doesn’t make the user experience considerations and implications clear. Due to unanswered questions regarding accessibility I worry that, firstly, they’re not mature in their understanding and approach on that front, and secondly, that their framing of the benefits makes it likely accessibility will be ignored, with engineers assuming they can rely entirely on the library.

I’d be really pleased if my concerns could be allayed because in general I like the approach.

Update 30/1/22

I decided to revisit a book I read back in 2007 – Jeremy Keith’s Bulletproof Ajax. I had forgotten this, but it actually contains a section titled “Ajax and accessibility”. It acknowledged that reconciling the two is challenging and despite listing ideas for mitigating issues, it admitted that the situation was not great. However since 2007 – specifically since around 2014 – WAI-ARIA has been a completed W3C recommendation and provides a means of making web pages more accessible, particularly when dealing with dynamic content.

I don’t often have cause to use more than a few go-to ARIA attributes, however here’s my understanding of how you might approach making Ajax-driven content changes accessible by using ARIA.

To do: write this section.


Tabs: truth, fiction and practical measures

My colleague Anda and I just had a good conversation about tabs, and specifically the company’s tabs component. I’ve mentioned before that our tabs are unconventional and potentially confusing, and Anda was interested to hear more.

What’s the purpose of a tabbed interface?

A tabbed interface is a space-saving tool for collapsing parallel content into panels, with one panel visible at a time but all accessible on-demand. While switching between tab panels the user is kept within the same wider context i.e. the same page, rather than being moved around.

Conventional tabbed interfaces


Tabs are a device intended to improve content density. They should deliver a same-page experience. Activating a tab reveals its corresponding tab panel. Ideally the approach employs progressive enhancement, starting as a basic Table of Contents. There’s quite a lot of advanced semantics, state and interactivity under the hood.

Faux tabs

But in our Design System at work, ours are currently just the “tabs” with no tab panels, and each “tab” generally points to another page rather than somewhere on the same page. In other words it’s a navigation menu made to look like a tabbed interface.

I’m not happy with this from an affordance point of view. Naming and presenting something as one thing but then having it function differently leads to usability problems and communication breakdowns. As the Inclusive Components Tabbed Interfaces page says:

making the set of links in site navigation appear like a set of tabs is deceptive: A user should expect the keyboard behaviors of a tabbed interface, as well as focus remaining on a tab in the current page. A link pointing to a different page will load that page and move focus to its document (body) element.

Confused language causes problems

One real-life problem with our tabs is that they have been engineered as if they are conventional tabs, however since the actual use case is often navigation the semantics are inappropriate.

We currently give each “tab” the ARIA tab role, defined as follows:

The ARIA tab role indicates an interactive element inside a tablist that, when activated, displays its associated tabpanel.

But our tabs have no corresponding tabpanel; they don’t use JavaScript for a single-page experience balancing semantics, interactivity and state as is conventional. They’re just navigation links. And this mismatch of tabs-oriented ARIA within a non-tabs use case will do more harm than good. It’s an accessibility fail.

A stop-gap solution

If content for one or more tabpanel is provided, apply the complicated ARIA attributes for proper tabs. If not, don’t. This means we allow component consumers to either create i) a real tabbed interface, or ii) “a nav menu that looks like tabs” (but without any inappropriate ARIA attributes). I don’t agree with the latter as a design approach, but that’s a conversation for another day!
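For the second case, here’s a hedged sketch of the markup I’d expect – a navigation landmark with aria-current rather than any tab roles (the class name is hypothetical):

<nav aria-label="Account sections">
  <ul class="looks-like-tabs">
    <li><a href="/profile" aria-current="page">Profile</a></li>
    <li><a href="/settings">Settings</a></li>
    <li><a href="/billing">Billing</a></li>
  </ul>
</nav>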

Tabs in the future

Some clever people involved with Open UI are using web components to explore how a useful tabs element could work if it were an HTML element. Check out the Tabvengers’ spicy-sections component. Again, this is based on the conventional expectation of tabs as a same-page experience for arranging content, rather than as a navigation menu. And I think it’d make sense to stay on the same path as the rest of the web.

Editable table cells

Yesterday the Design System team received a tentative enquiry regarding making table cells editable. I’m not yet sure whether or not this is a good idea – experience and spidey sense tell me it’s not – but regardless I decided to start exploring so as to base my answer on facts and avoid being overly cautious.

In my mind’s eye, there are two ways to achieve this:

  1. on clicking the cell, the cell content is presented in an (editable) form input; or
  2. apply the contenteditable attribute

In both cases you get into slightly gnarlier territory when you start considering the need for a submit button and how to position it.

I don’t have anything further to add at the moment other than to say that if I had to spike this out, I’d probably start by following Scott O’Hara’s article Using JavaScript & contenteditable.
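A spike of option 2 might start out roughly like this (the persistence hook is hypothetical and glosses over the validation and accessibility questions):

<td contenteditable="true">4.50</td>

// Persist the new value when editing finishes, i.e. when focus leaves the cell
document.querySelectorAll('td[contenteditable]').forEach((cell) => {
  cell.addEventListener('blur', () => {
    saveCellValue(cell); // hypothetical function that sends the value to the server
  });
});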

I’d probably also tweet Scott and ask if he can say anything more on his closing statement which was:

I have more thoughts on the accessibility of contenteditable elements, but that will also have to be a topic for another day…

Update 27-09-22: I’ve also remembered that (if I were to pursue Option 1: input within cell) Adrian Roselli has an article on Accessibly including inputs in tables.

Sites which don’t work without JavaScript enabled still benefit from progressive enhancement

At work, my team and I just had an interesting realisation about a recent conversation. We had been discussing progressive enhancement for custom toggles and a colleague mentioned that the web app in question breaks at a fundamental level if the user has disabled JavaScript, displaying a message telling them to change their settings in order to continue. He used this to suggest that any efforts to provide a no-JavaScript experience would be pointless. And this fairly absolute (and on-the-surface, sensible) statement caught me off-guard and sent me and the others down a blind alley.

I remember replying “yes, but even still we should try to improve the code by introducing good practices” and that feeling a little box-ticky.

However in retrospect I realise that we had temporarily made the mistake of conflating “JavaScript enabled” with “JavaScript available” – which are separate possibilities.

When considering resilience around JavaScript, we can consider the “factors required for JavaScript to work” as layers:

  1. is JavaScript enabled in the user’s browser?
  2. is the JavaScript getting through firewalls? (it recently didn’t for one of our customers on the NHS’s network)
  3. has the JavaScript finished loading?
  4. does the user’s browser support the JavaScript features the developers have used (i.e. does the browser “cut the mustard”?)
  5. is the JavaScript error-free? It’s easy for some malformed JSON to creep in and break it…

The point my colleague made relates to Layer 1 only. And that layer – JavaScript being disabled by the user – is actually the least likely explanation for a JavaScript-dependent feature not working.

So it’s really important to remember that when we build things with progressive enhancement we are not just addressing Layer 1, but Layers 2–5 too (as well as other layers I’ve probably forgotten!).
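As a minimal sketch of addressing Layer 4, here’s the classic “cuts the mustard” gate – enhanceToggles() is a hypothetical enhancement entry point, and the HTML baseline keeps working either way:

// Only run the enhancement if the features we depend on exist
if ('fetch' in window && 'customElements' in window) {
  enhanceToggles();
}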

How we think about browsers, on GitHub’s blog

Keith Cirkel of GitHub has written about how they think about browsers and it’s interesting. In summary, GitHub achieves:

  • improved performance;
  • exploiting new native technologies; and
  • universal user access/inclusion

…via a progressive enhancement strategy that ensures a basic experience for all but delivers an enhanced experience to most. Their tooling gets a bit deep/exotic in places but I think the basic premise is:

  1. decide on what our basic experience is, then use native HTML combined with a bare minimum of other stuff to help old browsers deliver that; and
  2. exploit new JS features in our enhanced experience (the one most people will get) to make it super lean and fast

Pretty cool.

Refactoring a modal dialogue in 2022

My team will soon be refactoring our modal dialogue component. Ours has a few deficiencies, needs better developer experience and documentation, is not built to our Design System component standards, and could use a resilience boost from some progressive enhancement.

For a long time the best – meaning accessible, framework-agnostic, feature-packed – modal implementations were custom.

However with recent browser advances (especially from Safari), there’s an argument that the time has now come that we no longer need custom solutions and can go native. So we might reach for the native <dialog> HTML element.

However first I think we’d need to make an informed decision regarding our satisfaction with support, based on the updated advice in Scott O’Hara’s article Having an Open Dialog.

Additionally we should definitely be keeping one eye on proposals around the exciting new togglepopup and popup attributes which promise the holy grail of entirely HTML-powered modal dialogues with no JavaScript dependency.

Web Components with Declarative Shadow DOM via Lit and Eleventy

Here’s a new development in the Web Components story, and one that may have positive implications for resilience, performance and progressive enhancement.

Declarative Shadow DOM is a new way to implement and use Shadow DOM directly in HTML rather than by constructing a shadow root in JavaScript.
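For illustration, here’s a minimal sketch of the declarative syntax (early Chromium experiments used a shadowroot attribute; the standardised name is shadowrootmode):

<my-badge>
  <template shadowrootmode="open">
    <style>
      span { border: 1px solid currentColor; padding: 0.25em; }
    </style>
    <span><slot></slot></span>
  </template>
  Lorem ipsum
</my-badge>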

But some people (including Chris Coyier and Brad Frost) reckon that writing that looks horrible. Brad said:

Declarative Shadow DOM always looked gross to me and I felt it almost defeats the purpose of web components.

And Chris added:

the server-side rendering story for Web Components, Declarative Shadow DOM, doesn’t feel very nice to me if you have to do it manually.

However the team behind Lit, a library which makes working with Web Components easier, are now providing ways to streamline this when Lit is used with Eleventy.

Brad likes this:

With tools like this (especially this @lit-labs/ssr project), we can have our cake and eat it too: use web components in a dev-friendly way, and then have the machines do the heavy lifting to convert that into a grosser-yet-progressive-enhancement-enabled syntax that ships to the user.

As does Chris:

using JavaScript frameworks in an entirely-client-side rendered way isn’t nearly as good for anything (users, SEO, performance, accessibility, etc) as server-side rendering (the effects of hydration are still debatable, but I view as largely worth it) … [but] the server-side rendering story for Web Components, Declarative Shadow DOM, doesn’t feel very nice to me if you have to do it manually. So… don’t do it manually! Let Eleventy do it!

As an additional footnote, perhaps we can make frameworks other than Eleventy (such as Rails) create server-rendered custom elements with Declarative Shadow DOM in a similar way. One to explore.

What open-source design systems are built with web components?

Alex Page, a Design System engineer at Spotify, has just asked:

What open-source design systems are built with web components? Anyone exploring this space? Curious to learn what is working and what is challenging. #designsystems #webcomponents

And there are lots of interesting examples in the replies.

I plan to read up on some of the stories behind these systems.

I really like Web Components but given that I don’t take a “JavaScript all the things” approach to development and design system components, I’ve been reluctant to consider that web components should be used for every component in a system. They would certainly offer a lovely, HTML-based interface for component consumers and offer interoperability benefits such as Figma integration. But if we shift all the business logic that we currently manage on the server to client-side JavaScript then:

  • the user pays the price of downloading that additional code;
  • you’re writing client-side JavaScript even for those of your components that aren’t interactive; and
  • you’re making everything a custom element (which as Jim Nielsen has previously written brings HTML semantics and accessibility challenges).

However maybe we can keep the JavaScript for our Web Component-based components really lightweight? I don’t know. For now I’m interested to just watch and learn.

My first Web Component: a disclosure widget

After a couple of years of reading about web components (and a lot of head-scratching), I’ve finally got around to properly creating one… or at least a rough first draft!

Check out disclosure-widget on codepen.

See also my pen which imports and consumes the component.

Caveats and to-dos:

  • I haven’t yet tried writing tests for a web component
  • I should find out how to refer to the custom element name in JavaScript without repeating it
  • I should look into whether observedAttributes and attributeChangedCallback are more appropriate than the more typical event listeners I used

References

I found Eric Bidelman’s article Custom Elements v1: Reusable Web Components pretty handy. In particular it taught me how to create a <template> including a <slot> to automatically ringfence the Light DOM content, and then to attach that template to the Shadow DOM to achieve my enhanced component.
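Here’s a stripped-back sketch of that template-plus-slot pattern (the element and template names are mine, not Eric’s):

<template id="my-disclosure-template">
  <button type="button" aria-expanded="false">Toggle</button>
  <div hidden>
    <!-- the element's Light DOM content is ringfenced into this slot -->
    <slot></slot>
  </div>
</template>

<script type="module">
  class MyDisclosure extends HTMLElement {
    constructor() {
      super();
      const template = document.querySelector('#my-disclosure-template');
      // Clone the template into the Shadow DOM to create the enhanced component
      this.attachShadow({ mode: 'open' })
        .append(template.content.cloneNode(true));
    }
  }
  customElements.define('my-disclosure', MyDisclosure);
</script>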

Building a toast component (by Adam Argyle)

Great tutorial (with accompanying video) from Adam Argyle which starts with a useful definition of what a Toast is and is not:

Toasts are non-interactive, passive, and asynchronous short messages for users. Generally they are used as an interface feedback pattern for informing the user about the results of an action. Toasts are unlike notifications, alerts and prompts because they're not interactive; they're not meant to be dismissed or persist. Notifications are for more important information, synchronous messaging that requires interaction, or system level messages (as opposed to page level). Toasts are more passive than other notice strategies.

There are some important distinctions between toasts and notifications in that definition: toasts are for less important information and are non-interactive. I remember in a previous work planning exercise regarding a toast component a few of us got temporarily bogged down in working out the best JavaScript templating solution for SVG icon-based “Dismiss” buttons… however we were probably barking up the wrong tree with the idea that toasts should be manually dismissable.

There are lots of interesting ideas and considerations in Adam’s tutorial, such as:

  • using the <output> element for each toast
  • some crafty use of CSS Grid and logical properties for layout
  • combining hsl and percentages in custom properties to proportionately modify rather than redefine colours for dark mode
  • animation using keyframes and animation
  • native JavaScript modules
  • inserting an element before the <body> element (TIL that this is a viable option)

Thanks for this, Adam!

(via Adam’s tweet)

There’s some nice code in here but the demo page minifies and obfuscates everything. However the toast component source is available on GitHub.

Web animation tips

Warning: this entry is a work-in-progress and incomplete. That said, it's still a useful reference to me which is why I've published it. I’ll flesh it out soon!

There are lots of different strands of web development. You try your best to be good at all of them, but there’s only so much time in the day! Animation is an area where I know a little but would love to know more, and from a practical perspective I’d certainly benefit from having some road-ready solutions to common challenges. As ever I want to favour web standards over libraries where possible, and take an approach that’s lean, accessible, progressively-enhanced and performance-optimised.

Here’s my attempt to break down web animation into bite-sized chunks for occasional users like myself.

Defining animation

Animation lets us make something visually move between different states over a given period of time.

Benefits of animation

Animation is a good way of providing visual feedback, teaching users how to use a part of the interface, or adding life to a website and making it feel more “real”.

Simple animation with transition properties

CSS transition is great for simple animations triggered by an event.

We start by defining two different states for an element—for example opacity:1 and opacity:0—and then transition between those states.

The first state would be in the element’s starting styles (either defined explicitly or existing implicitly based on property defaults) and the other in either its :hover or :focus styles or in a class applied by JavaScript following an event.

Without the transition the state change would still happen but would be instantaneous.

You’re not limited to only one property being animated and might, for example, transition between different opacity and transform states simultaneously.

Here’s an example “rise on hover” effect, adapted from Stephanie Eckles’s Smol CSS.

<div class="u-animate u-animate--rise">
  <span>rise</span>
</div>
.u-animate > * {
  --transition-property: transform;
  --transition-duration: 180ms;
  transition: var(--transition-property) var(--transition-duration) ease-in-out;
}

.u-animate--rise:hover > * {
  transform: translateY(-25%);
}

Note that:

  1. using custom properties makes it really easy to transition a property other than transform without writing repetitious CSS.
  2. we use a parent and child (<div> and <span> respectively in this example): the parent acts as the hover trigger while the child is the element which animates. This avoids the accidental flicker that can occur when the mouse sits close to an animatable element’s border.

Complex animations with animation properties

If an element needs to animate automatically (perhaps on page load or when added to the DOM), or is more complex than a simple A to B state change, then a CSS animation may be more appropriate than transition. Using this approach, animations can:

  • run automatically (you don’t need an event to trigger a state change)
  • go from an initial state through multiple intermediate steps to a final state rather than just from state A to state B
  • run forwards, in reverse, or alternate directions
  • loop infinitely

The required approach is:

  1. use @keyframes to define a reusable “template” set of animation states (or frames); then
  2. apply animation properties to an element we want to animate, including one or more @keyframes to be used.

Here’s how you do it:

@keyframes flash {
  0%    { opacity: 0; }
  20%   { opacity: 1; }
  80%   { opacity: 0; }
  100%  { opacity: 1; }
}

.animate-me {
  animation: flash 5s infinite;
}

Note that you can also opt to include just one state in your @keyframes rule, usually the initial state (written as either from or 0%) or final state (written as either to or 100%). You’d tend to do that for a two-state animation where the other “state” is in the element’s default styles, and you’d either be starting from the default styles (if your single @keyframes state is to) or finishing on them (if your single @keyframes state is from).
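For example, here’s a fade-in where the starting state lives in the element’s default styles and the @keyframes rule defines only the to state:

.fade-in {
  opacity: 0;
  /* forwards retains the final keyframe state once the animation ends */
  animation: fade-in 600ms ease-in forwards;
}

@keyframes fade-in {
  to {
    opacity: 1;
  }
}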

Should I use transition or animation?

As far as I can tell there’s no major performance benefit of one over the other, so that’s not an issue.

When the animation will be triggered by pseudo-class-based events like :hover or :focus and is simple i.e. based on just two states, transition feels like the right choice.

Beyond that, the choice gets a bit less binary and seems to come down to developer preference. But here are a couple of notes that might help in making a decision.

For elements that need to “animate in” on page load such as an alert, or when newly added to the DOM such as items in a to-do list, an animation with keyframes feels the better choice. This is because transition requires the presence of two CSS rules, leading to dedicated JavaScript to grab the element and apply a class, whereas animation requires only one and can move between initial and final states automatically including inserting a delay before starting.

For animations that involve many frames; control over the number of iterations; or looping… use @keyframes and animation.

For utility classes and classes that get added by JS to existing, visible elements following an event, either approach could be used. Arguably transition is the slightly simpler and more elegant CSS to write if it covers your needs. Then again, you might want to reuse the animations applied by those classes for both existing, visible elements and new, animated-in elements, in which case you might feel that instead using @keyframes and animation covers more situations.

Performance

A smooth animation should run at 60fps (frames per second). Animations that are too computationally expensive result in frames being dropped, i.e. a reduced fps rate, making the animation appear janky.

Cheap and slick properties

The CSS properties transform and opacity are very cheap to animate. Also, browsers often optimise these types of animation using hardware acceleration. To hint to the browser that it should optimise an animation property (and to ensure it is handled by the GPU rather than passed from CPU to GPU causing a noticeable glitch) we should use the CSS will-change property.

.my-element {
  will-change: transform;
}

Expensive properties

CSS properties which affect layout such as height are very expensive to animate. Animating height causes a chain reaction where sibling elements have to move too. Use transform over layout-affecting properties such as width or left if you can.

Some other CSS properties are less expensive but still not ideal, for example background-color. It doesn't affect layout but requires a repaint per frame.

Test your animations on a popular low-end device.

Timing functions

  • linear goes at the same rate from start to finish. It’s not like most motion in the real world.
  • ease-out starts fast then gets really slow. Good for things that come in from off-screen, like a modal dialogue.
  • ease-in starts slow then gets really fast. Good for moving something off-screen.
  • ease-in-out is the combination of the previous two. It’s symmetrical, having an equal amount of acceleration and deceleration. Good for things that happen in a loop such as an element fading in and out.
  • ease is the default value and features a brief ramp-up, then a lot of deceleration. It’s a good option for most general case motion that doesn’t enter or exit the viewport.

Practical examples

You can find lots of animation inspiration in libraries such as animate.css (and be sure to check animate.css on github where you can search their source for specific @keyframe animation styles).

But here are a few specific examples of animations I or teams I’ve worked on have had to implement.

Skip to content

The anchor’s State A sees its position fixed – i.e. positioned relative to the viewport – but then moved out of sight above it via transform: translateY(-10em). However its :focus styles define a State B where the initial translate has been undone so that the link is visible (transform: translateY(0em)). If we transition the transform property then we can animate the change of state over a chosen duration, and with our preferred timing function for the acceleration curve.

HTML:

<div class="u-visually-hidden-until-focused">
  <a
    href="#skip-link-target"
    class="u-visually-hidden-until-focused__item"
  >Skip to main content</a>
</div>

<nav>
  <ul>
    <li><a href="/">Home</a></li>
    <li><a href="/">News</a></li>
    <li><a href="/">About</a></li>
    <!-- …lots more nav links… -->
    <li><a href="/">Contact</a></li>
  </ul>
</nav>

<main id="skip-link-target">
  <h1>This is the Main content</h1>
  <p>Lorem ipsum <a href="/news/">dolor sit amet</a> consectetur adipisicing elit.</p>
  <p>Lorem ipsum dolor sit amet consectetur adipisicing elit.</p>
</main>

CSS:

.u-visually-hidden-until-focused {
  left: -100vw;
  position: absolute;

  &__item {
    position: fixed;
    top: 0;
    left: 0;
    transform: translateY(-10em);
    transition: transform 0.2s ease-in-out;

    &:focus {
      transform: translateY(0em);
    }
  }
}

To see this in action, visit my pen Hiding: visually hidden until focused and press the tab key.

Animating in an existing element

For this requirement we want an element to animate from invisible to visible on page load. I can imagine doing this with an image or an alert, for example. This is pretty straightforward with CSS only using @keyframes, opacity and animation.

Check out my fade in and out on page load with CSS codepen.

Animating in a newly added element

Stephanie Eckles shared a great CSS-only solution for animating in a newly added element, which handily includes a Codepen demo. She mentions “CSS-only” because it’s common for developers to achieve the fancy animation via transition, but that means needing to “make a fake event” via a JavaScript setTimeout() so that you can transition from the newly-added, invisible and class-free element state to a CSS class (perhaps called show) containing the opacity:1, fancy transforms and a transition. Stephanie’s alternative approach instead combines i) hiding the element in its default styles; with ii) an automatically-running animation that includes the necessary delay and finishes in the keyframe’s single 100% state – achieving the same effect minus the JavaScript.

Avoiding reliance on JS and finding a solution lower down the stack is always good.

HTML:

<button>Add List Item</button>
<ul>
  <li>Lorem ipsum dolor sit amet consectetur adipisicing elit. Nostrum facilis perspiciatis dignissimos, et dolores pariatur.</li>
</ul>

CSS:

li {
  animation: show 600ms 100ms cubic-bezier(0.38, 0.97, 0.56, 0.76) forwards;

  /* Prestate */
  opacity: 0;
  /* remove transform for just a fade-in */
  transform: rotateX(-90deg);
  transform-origin: top center;
}

@keyframes show {
  100% {
    opacity: 1;
    transform: none;
  }
}

Jhey Tompkins shared another CSS-only technique for adding elements to the DOM with snazzy entrance animations. He also uses just a single @keyframes state but in his case the from state which he uses to set the element’s initial opacity:0, then in his animation he uses an animation-fill-mode of both (rather than forwards as Stephanie used).

I can’t profess to fully understand the fill modes. However if you change Jhey’s example to use forwards instead, the element being animated in will temporarily appear before the animation starts (which ain’t good) rather than being initially invisible. Changing it to backwards gets us back on track, so I guess the necessary value relates to whether you’re going for from/0% or to/100%… and both just covers you for both cases. I’d probably try to use the appropriate one rather than both, just in case there’s a performance implication.
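Here’s my rough sketch of the from plus backwards combination (the selector and values are illustrative):

li {
  /* backwards applies the `from` frame during the 100ms delay, so the
     newly added element stays invisible until the animation starts */
  animation: appear 600ms 100ms ease-out backwards;
}

@keyframes appear {
  from {
    opacity: 0;
    transform: translateY(0.5rem);
  }
}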

Animated disclosure

Here’s an interesting conundrum.

For disclosure (i.e. collapse and expand) widgets, I tend to either use the native HTML <details> element if possible or else a simple, accessible DIY disclosure in which activating a trigger toggles a nearby content element’s visibility. In both cases, there’s no animation; the change from hidden to revealed and back again is immediate.

To my mind it’s generally preferable to keep it simple and avoid animating a disclosure widget. For a start, it’s tricky! The <details> element can’t be (easily) animated. And if using a DIY widget it’ll likely involve animating one of the expensive properties. Animating height or max-height is also gnarly when working with variable (auto) length content and often requires developers to go beyond CSS and reach for JavaScript to calculate computed element heights. Lastly, forgetting the technical challenges, there’s often no real need to animate disclosure; it might only hinder rather than help the user experience.

But let’s just say you have to do it, perhaps because the design spec requires it (like in BBC Sounds’ expanding and collapsing tracklists when viewed on narrow screens).

Options:

  • Animate the <details> element. This is a nice, standards-oriented approach. But it might only be viable for when you don’t need to mess with <details> appearance too much. We’d struggle to apply very custom styles, or to handle a “show the first few list items but not all” requirement like in the BBC Sounds example;
  • Animate CSS Grid. This is a nice idea but for now the animation only works in Firefox*. It’d be great to consider it a progressive enhancement, in which case it depends on whether the animation is deemed core to the experience;
  • Animate from a max-height of 0 to “something sufficient” (my pen is inspired by Scott O’Hara’s disclosure example). This is workable but not ideal; you kind of need to set a max-height sweet spot otherwise your animation will be delayed and too long. You could of course add some JavaScript to get the exact necessary height then set it. BBC use max-height for their tracklist animation and those tracklists likely vary in length, so I expect they use some JavaScript for height calculation.

* Update 20/2/23: the “animate CSS Grid” option now has wide browser support and is probably my preferred approach. I made a codepen that demonstrates a disclosure widget with animation of grid-template-rows.
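My sketch of that technique looks roughly like this (the class name and attribute hook are hypothetical):

.disclosure__panel {
  display: grid;
  /* 0fr collapses the row; 1fr lets it grow to the content's auto height */
  grid-template-rows: 0fr;
  transition: grid-template-rows 250ms ease-in-out;
}

/* overflow: hidden on the direct child clips the content while collapsed
   and lets the grid item shrink below its natural size */
.disclosure__panel > div {
  overflow: hidden;
}

.disclosure__panel[data-expanded='true'] {
  grid-template-rows: 1fr;
}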

Ringing bell icon

To be written.

Pulsing “radar” effect

To be written.

Accessibility

Accessibility and animation can co-exist, as Cassie Evans explains in her CSS-Tricks article Empathetic Animation. We should consider which parts of our website are suited to animation (for example perhaps not on serious, time-sensitive tasks) and we can also respect reduced motion preferences at a global level or in a more finer-grained way per component.
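At the global level, one common approach (and only a starting point) is to shrink all durations to near-zero when the user prefers reduced motion:

@media (prefers-reduced-motion: reduce) {
  *,
  *::before,
  *::after {
    animation-duration: 0.01ms !important;
    animation-iteration-count: 1 !important;
    transition-duration: 0.01ms !important;
  }
}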

Notes

  • transition-delay can be useful for avoiding common annoyances, such as when a dropdown menu that appears on hover disappears when you try to move the cursor to it (see the sketch below).
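Here’s a minimal sketch of that idea, assuming a hover-revealed submenu (the class names are hypothetical):

.menu__submenu {
  visibility: hidden;
  opacity: 0;
  transition: opacity 150ms, visibility 150ms;
  /* wait 300ms before hiding, giving the cursor time to travel to the menu */
  transition-delay: 300ms;
}

.menu__item:hover .menu__submenu,
.menu__item:focus-within .menu__submenu {
  visibility: visible;
  opacity: 1;
  /* reveal immediately */
  transition-delay: 0ms;
}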


Front-end architecture for a new website (in 2021)

Just taking a moment for some musings on which way the front-end wind is blowing (from my perspective at least) and how that might practically impact my approach on the next small-ish website that I code.

I might lean into HTTP2

Breaking CSS into small modules then concatenating everything into a single file has traditionally been one of the key reasons for using Sass, but in the HTTP2 era where multiple requests are less of a performance issue it might be acceptable to simply include a number of modular CSS files in the <head>, as follows:

<link href="/css/base.css" rel="stylesheet">
<link href="/css/component_1.css" rel="stylesheet">
<link href="/css/component_2.css" rel="stylesheet">
<link href="/css/component_3.css" rel="stylesheet">

The same goes for browser-native JavaScript modules.

This isn’t something I’ve tried yet and it’d feel like a pretty radical departure from the conventions of recent years… but it’s an option!

I’ll combine ES modules and classes

It’s great that JavaScript modules are natively supported in modern browsers. They allow me to remove build tools, work with web standards, and they perform well. They can also serve as a mustard cut that allows me to use other syntax and features such as async/await, arrow functions, template literals, the spread operator etc with confidence and without transpilation or polyfilling.

In the <head>:

<script type="module" src="/js/main.js"></script>

In main.js:

import { Modal } from '/components/modal.js';

const modal = new Modal();
modal.init();

In modal.js:

export class Modal {
  init() {
    // modal functionality here
  }
}

I’ll create Web Components

I’ve done a lot of preparatory reading and learning about web components in the last year. I’ll admit that I’ve found the concepts (including Shadow DOM) occasionally tough to wrap my head around, and I’ve also found it confusing that everyone seems to implement web components in different ways. However Dave Rupert’s HTML with Superpowers presentation really helped make things click.

I’m now keen to create my own custom elements for JavaScript-enhanced UI elements; to give LitElement a spin; to progressively enhance a Light DOM baseline into Shadow DOM fanciness; and to check out how well the lifecycle callbacks perform.

I’ll go deeper with custom properties

I’ve been using custom properties for a few years now, but at first it was just as a native replacement for Sass variables, which isn’t really exploiting their full potential. However at work we’ve recently been using them as the special sauce powering component variations (--gap, --mode etc).

In our server-rendered components we’ve been using inline style attributes to apply variations via those properties, and this brings the advantage of no longer needing to create a CSS class per variation (e.g. one CSS class for each padding variation based on a spacing scale), which in turn keeps code and specificity simpler. However as I start using web components, custom properties will prove really handy here too. Not only can they be updated by JavaScript, but they also provide a bridge between your global CSS and your web component because they can “pierce the Shadow Boundary”, making it easier to style Shadow DOM HTML in custom elements.
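A small sketch of what I mean, reusing a hypothetical my-badge element:

/* Global stylesheet: the custom property crosses the shadow boundary */
my-badge {
  --badge-gap: 0.5rem;
}

/* Inside the component's Shadow DOM styles */
:host {
  display: inline-flex;
  gap: var(--badge-gap, 0.25rem); /* fallback if the property isn't set */
}

JavaScript can also update the property at runtime, for example via badgeElement.style.setProperty('--badge-gap', '1rem').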

I’ll use BEM, but loosely

Naming and structuring CSS can be hard, and is a topic which really divides opinion. Historically I liked to keep it simple using the cascade, element and contextual selectors, plus a handful of custom classes. I avoided “object-oriented” CSS methodologies because I found them verbose and, if I’m honest, slightly “anti-CSS”. However it’s fair to say that in larger applications and on projects with many developers, this approach lacked a degree of structure, modularisation and predictability, so I gravitated toward BEM.

BEM’s approach is a pretty sensible one and, compared to the likes of SUIT, provides flexibility and good documentation. And while I’ve been keeping a watchful eye on new methodologies like CUBE CSS and can see that they’re chock-full of ideas, my feeling is that BEM remains the more robust choice.

It’s also important to me that BEM has the concept of a mix because this allows you to place multiple block classes on the same element so as to (for example) apply an abstract layout in combination with a more implementation-specific component class.

<div class="l-stack c-news-feed">

Where I’ll happily deviate from BEM is to favour the use of certain ARIA attributes as selectors (for example [aria-current=page] or [aria-expanded=true]) because this enforces good accessibility practice and helps create equivalence between the visual and non-visual experience. I’m also happy to use the universal selector (*) which is great for owl selectors, and I’m fine with adjacent sibling (and related) selectors.

Essentially I’m glad of the structure and maintainability that BEM provides but I don’t want a straitjacket that stops me from using my brain and applying CSS properly.

Resources for learning front-end web development

A designer colleague recently asked me what course or resources I would recommend for learning front-end web development. She mentioned React at the beginning but I suggested that it’d be better to start by learning HTML, CSS, and JavaScript. As for React: it’s a JavaScript library, so it makes sense to understand vanilla JS first.

For future reference, here are my tips.

Everything in one place

Google’s web.dev training resource has been adding some excellent guides.

Another great one-stop shop is MDN Web Docs. Not only is MDN an amazing general quick reference for all HTML elements, CSS properties, JavaScript APIs etc but for more immersive learning there are also MDN’s guides.

Pay attention to HTML

One general piece of advice: whatever courses you choose, make sure to learn HTML. People tend to underestimate how complicated, fast-moving and important HTML is.

Also, everything else – accessibility, CSS, JavaScript, performance, resilience – requires a foundation of good HTML. Think HTML first!

Learning CSS, specifically

CSS is as much about concepts and features – e.g. the cascade and specificity, layout, responsive design, typography, custom properties – as it is about syntax. In fact probably more so.

Most tutorials will focus on the concepts but not necessarily so much on practicalities like writing-style or file organisation.

Google’s Learn CSS course should be pretty good for the modern concepts.

Google also have Learn Responsive Design.

If you’re coming from a kinda non-CSS-oriented perspective, Josh W Comeau’s CSS for JavaScript Developers (paid course) could be worth a look.

If you prefer videos, you could check out Steve Griffith’s video series Learning CSS. Steve’s videos are comprehensive and well-paced, covering a whole range of topics (over 100!) and starting from basics like the CSS Box Model.

In terms of HTML and CSS writing style (BEM etc) and file organisation (ITCSS etc), here’s a (version of a) “style guide” that my team came up with for one of our documentation websites. I think it’s pretty good!

CSS and HTML Style Guide (to do: add link here)

Harry Roberts has also written extensively about ITCSS and CSS best practices; his work is worth seeking out.

Learning JavaScript

I recommended choosing a course or courses from CSS-Tricks’ post Beginner JavaScript notes, especially as it includes Wes Bos’s Beginner JavaScript Notes + Reference.

If you like learning by video, check out Steve Griffith’s JavaScript playlist.

Once you start using JS in anger, I definitely recommend bookmarking Chris Ferdinandi’s Methods and APIs reference guide.

If you’re then looking for a lightweight library for applying sprinkles of JavaScript, you could try Stimulus.

Learning Responsive Design

I recommend Jeremy Keith’s Learn Responsive Design course on web.dev.

Lists of courses

You might choose a course or courses from CSS-Tricks’ post Where do you learn HTML and CSS in 2020?

  • Resilient Web Design by Jeremy Keith. A fantastic wide-screen perspective on what we’re doing, who we’re doing it for, and how to go about it. Read online or listen as an audiobook.
  • Inclusive Components by Heydon Pickering. A unique, accessible approach to building interactive components, from someone who’s done this for BBC, Bulb, Spotify.
  • Every Layout by Heydon Pickering & Andy Bell. Introducing layout primitives, for handling responsive design in Design Systems at scale (plus so many insights about the front-end)
  • Atomic Design by Brad Frost. A classic primer on Design Systems and component-composition oriented thinking.
  • Practical SVG by Chris Coyier. Learn why and how to use SVG to make websites more aesthetically sharp, performant, accessible and flexible.
  • Web Typography by Richard Rutter. Elevate the web by applying the principles of typography via modern web typography techniques.

Collapsible sections, on Inclusive Components

It’s a few years old now, but this tutorial from Heydon Pickering on how to create an accessible, progressively enhanced user interface comprised of multiple collapsible and expandable sections is fantastic. It covers using the appropriate HTML elements (buttons) and ARIA attributes, how best to handle icons (minimal inline SVG), turning it into a web component and plenty more besides.

Buttons and links: definitions, differences and tips

On the web buttons and links are fundamentally different materials. However some design and development practices have led to them becoming conceptually “bundled together” and misunderstood. Practitioners can fall into the trap of seeing the surface-level commonality that “you click the thing, then something happens” and mistakenly thinking the two elements are interchangeable. Some might even consider them as a single “button component” without considering the distinctions underneath. However this mentality causes our users problems and is harmful to effective web development. In this post I’ll address why buttons and links are different and exist separately, and when to use each.

Problematic patterns

Modern website designs commonly apply the appearance of a button to a link. For isolated calls to action this can make sense, however as a design pattern it is often overused and under-cooked, which can cause confusion for developers implementing the designs.

Relatedly, it’s now common for Design Systems to have a Button component which includes button-styled links that are referred to simply as buttons. Unless documented carefully this can lead to internal language and comprehension issues.

Meanwhile developers have historically used faux links (<a href="#">) or worse, a DIY clickable div, as a trigger for JavaScript-powered functionality where they should instead use native buttons.

These patterns in combination have given rise to a collective muddle over buttons and links. We need to get back to basics and talk about foundational HTML.

Buttons and anchors in HTML

There are two HTML elements of interest here.

Hyperlinks are created using the HTML anchor element (<a>). Buttons (by which I mean real buttons rather than links styled to appear as buttons) are implemented with the HTML button element (<button>).

Although a slight oversimplification, I think David MacDonald’s heuristic works well:

If it GOES someWHERE use a link

If it DOES someTHING use a button

A link…

  • goes somewhere (i.e. navigates to another place)
  • normally links to another document (i.e. page) on the current website or on another website
  • can alternatively link to a different section of the same page
  • historically and by default appears underlined
  • when hovered or focused offers visual feedback from the browser’s status bar
  • uses the “pointing hand” mouse pointer
  • results in browser making an HTTP GET request by default. It’s intended to get a page or resource rather than to change something
  • offers specific right-click options to mouse users (open in new tab, copy URL, etc)
  • typically results in an address which can be bookmarked
  • can be activated by pressing the return key
  • is announced by screen readers as “Link”
  • is available to screen reader users within an overall Links list

A button…

  • does something (i.e. performs an action, such as “Add”, “Update” or "Show")
  • can be used as <button type=submit> within a form to submit the form. This is a modern replacement for <input type=submit /> and much better as it’s easier to style, allows nested HTML and supports CSS pseudo-elements
  • can be used as <button type=button> to trigger JavaScript. This type of button is different to the one used for submitting a <form>. It can be used for any type of functionality that happens in-place rather than taking the user somewhere, such as expanding and collapsing content, or performing a calculation.
  • historically and by default appears in a pill or rounded rectangle
  • uses the normal mouse pointer arrow
  • can be activated by pressing return or space.
  • implicitly gets the ARIA button role.
  • can be extended with further ARIA button-related states like aria-pressed
  • is announced by screen readers as “Button”
  • unlike a link is not available to screen reader users within a dedicated list

Our responsibilities

It’s our job as designers and developers to use the appropriate purpose-built element for each situation, to present it in a way that respects conventions so that users know what it is, and to then meet their expectations of it.

Tips

  • Visually distinguish button-styled call-to-action links from regular buttons, perhaps with a more pill-like appearance and a right-pointing arrow
  • Avoid a proliferation of call-to-action links by linking content itself (for example a news teaser’s headline). Not only does this reduce “link or button?” confusion but it also saves space, and provides more accessible link text.
  • Consider having separate Design System components for Button and ButtonLink to reinforce important differences.
  • For triggering JavaScript-powered interactions I’ll typically use a button. However in disclosure patterns where the trigger and target element are far apart in the DOM it can make sense to use a link as the trigger.
  • For buttons which are reliant on JavaScript, it’s best, as part of a progressive enhancement strategy, to render them with client-side JavaScript rather than on the server. That way, if the client-side JavaScript is unsupported or fails, the user won’t be presented with a broken button. (See the sketch below.)
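
To make that last tip concrete, here’s a minimal sketch of the approach in vanilla JavaScript (the element ID and button text are hypothetical):

const panel = document.querySelector("#more-content");

if (panel) {
  const button = document.createElement("button");
  button.type = "button";
  button.textContent = "Show more";
  button.addEventListener("click", () => {
    panel.hidden = !panel.hidden;
  });
  // Only hide the content and present the trigger once we know JS is running
  panel.hidden = true;
  panel.before(button);
}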

Update: 23 November 2024

Perhaps a better heuristic than David MacDonald’s mentioned above, is:

Links are for a simple connection to a resource; buttons are for actions.

What I prefer about including a resource is that the “goes somewhere” definition of a link breaks down for anchors that instruct the linked resource to download (via the download attribute) rather than render in the browser, but this doesn’t. I also like the inclusion of simple because some buttons (like the submit button of a search form) might finish by taking you to a resource (the search results page) but that’s a complex action not a simple connection; you’re searching a database using your choice of search query.

References

HTML with Superpowers (from Dave Rupert)

Here’s a great new presentation by Dave Rupert (of the Shop Talk show) in which he makes a compelling case for adopting Web Components. Not only do they provide the same benefits of encapsulation and reusability as components in proprietary JavaScript frameworks, but they also bring the reliability and portability of web standards, work without build tools, are suited to progressive enhancement, and may pave the way for a better web.

Dave begins by explaining that Web Components are based on not just a set of technologies but a set of standards, namely:

  • Custom Elements (for example <custom-alert>)
  • Shadow DOM
  • ES Modules
  • the HTML <template> element

Standards have the benefit that we can rely on them to endure and work into the future in comparison to proprietary technologies in JavaScript frameworks. That’s good news for people who like to avoid the burnout-inducing churn of learning and relearning abstractions. Of course the pace of technology change with web standards tends to be slower, however that’s arguably a price worth paying for cross-platform stability and accessibility.

Some of Web Components’ historical marketing problems are now behind them, since they are supported by all major browsers and reaching maturity. Furthermore, web components have two superpowers not found in other JavaScript component approaches:

Firstly, the Shadow DOM (which is both powerful and frustrating). The Shadow DOM provides encapsulation but furthermore in progressive enhancement terms it enables the final, enhanced component output which serves as an upgrade from the baseline Light DOM HTML we provided in our custom element instance. It can, however, be a little tricky or confusing to style, although there are ways.

Secondly, you can use web components standalone, i.e. natively, without any frameworks, build tools, or package managers. All that’s required to use a component “standalone” is to load its <script type=module …> element and then use the relevant custom element HTML on your page. This gets us closer to just writing HTML rather than wrestling with tools.

Dave highlights an education gap where developers focused on HTML, CSS, and Design Systems don’t tend to use Web Components. He suggests that this is likely as a result of most web component tutorials focusing on JavaScript APIs for JavaScript developers. However we can instead frame Web Component authoring as involving a layered approach that starts with HTML, adds some CSS, then ends by applying JavaScript.

Web Components are perfectly suited to progressive enhancement. And that progressive enhancement might for example apply lots of complicated ARIA-related accessibility considerations. I really like the Tabs example where one would create a <generic-tabs> instance which starts off with simple, semantic, resilient HTML that renders headings and paragraphs…

<generic-tabs>
  <h2>About</h2>
  <div>
    <p>About content goes here. Lorem ipsum dolor sit amet…</p>
  </div>

  <h2>Contact</h2>
  <div>
    <p>Contact content goes here. Lorem ipsum dolor sit amet…</p>
  </div> 
</generic-tabs>

…but the Web Component’s JavaScript would include a template and use this to upgrade the Light DOM markup into the final interactive tab markup…

<generic-tabs>
  <h2 slot="tab" aria-selected="true" tabindex="0" role="tab" id="generic-tab-3-0" aria-controls="generic-tab-3-0" selected="">About</h2>
  <div role="tabpanel" aria-labelledby="generic-tab-3-0" slot="panel">
    <p>About content goes here. Lorem ipsum dolor sit amet…</p>
  </div>
  <h2 slot="tab" aria-selected="false" tabindex="-1" role="tab" id="generic-tab-3-1" aria-controls="generic-tab-3-1">Contact</h2>
  <div role="tabpanel" aria-labelledby="generic-tab-3-1" slot="panel" hidden>
    <p>Contact content goes here. Lorem ipsum dolor sit amet…</p>
  </div>
</generic-tabs>

The idea is that the component’s JS would handle all the complex interactivity and accessibility requirements of Tabs under the hood. I think if I were implementing something like Inclusive Components’ Tabs component these days I’d seriously consider doing this as a Web Component.

Later, Dave discusses the JavaScript required to author a Custom Element. He advises that in order to avoid repeatedly writing the same lengthy, boilerplate code on each component we might use a lightweight library such as his favourite, LitElement.
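
For context, the boilerplate for a bare-bones vanilla custom element looks roughly like this (a sketch, echoing Dave’s <custom-alert> example name):

class CustomAlert extends HTMLElement {
  connectedCallback() {
    // Runs each time the element is added to the document
    this.setAttribute("role", "alert");
  }
}

customElements.define("custom-alert", CustomAlert);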

Lastly, Dave argues that by creating and using web components we are working with web standards rather than building for a proprietary library. We are creating compatible components which pave the cowpaths for these becoming future HTML standards (e.g. a <tabs> element!). And why is advancing the web important? Because an easier web lowers barriers: less complexity, less tooling and setup, less gatekeeping—a web for everyone.

How to debug event listeners with your browser’s developer tools (on Go Make Things)

On the page, right-click the element you want to debug event listeners for, then click Inspect Element. In chromium-based browsers like MS Edge and Google Chrome, click the Event Listeners tab in Developer Tools. There, you’ll see a list of all of the events being listened to on that element. If you expand the event, you can see what element they’re attached to and click a link to open up the actual event listener itself in the JavaScript.
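
As far as I’m aware, Chromium’s Console also offers a getEventListeners() utility (a Console-only helper, not available to page scripts) which does the same job:

// $0 refers to the element currently selected in the Elements panel
getEventListeners($0);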


Motion One: The Web Animations API for everyone

A new animation library, built on the Web Animations API for the smallest filesize and the fastest performance.

This JavaScript-based animation library—which can be installed via npm—leans on an existing web API to keep its file size low and uses hardware accelerated animations where possible to achieve impressively smooth results.
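
For a flavour of the API, basic usage looks something like this (a sketch based on the docs; the selector and values are my own):

import { animate } from "motion";

// Animate opacity and transform, which can be hardware accelerated
animate(".box", { opacity: 1, transform: "translateX(100px)" }, { duration: 0.5 });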

For fairly basic animations, this might provide an attractive alternative to the heavier Greensock. The Motion docs do however flag the limitation that it can only animate “CSS styles”. They also say “SVG styles work fine”. I hope by this they mean SVG presentation attributes rather than inline CSS on an SVG, although it’s hard to tell. However their examples look promising.

The docs website also contains some really great background information regarding animation performance.

Testing ES modules with Jest

Here are a few troubleshooting tips to enable Jest, the JavaScript testing framework, to work with ES modules without needing Babel in the mix for transpilation. Let’s get going with a basic set-up.

package.json

…,
"scripts": {
  "test": "NODE_ENV=test NODE_OPTIONS=--experimental-vm-modules jest"
},
"type": "module",
"devDependencies": {
  "jest": "^27.2.2"
}

Note: don’t overlook the crucial "type": "module" part; it’s the least-documented bit and your most likely omission!

After that set-up, you’re free to import and export to your heart’s content.

javascript/sum.js

export const sum = (a, b) => {
  return a + b;
}

spec/sum.test.js

import { sum } from "../javascript/sum.js";
    
test('adds 1 + 2 to equal 3', () => {
  expect(sum(1, 2)).toBe(3);
});

Hopefully that’ll save you (and future me) some head-scratching.

(Reference: Jest’s EcmaScript Modules docs page)

Harry Roberts says “Get Your Head Straight”

Harry Roberts (who created ITCSS for organising CSS at scale but these days focuses on performance) has just given a presentation about the importance of getting the content, order and optimisation of the <head> element right, including lots of measurement data to back up his claims. Check out the slides: Get your Head Straight

While some of the information about asset loading best practices is not new, the stuff about ordering of head elements is pretty interesting. I’ll be keeping my eyes out for a video recording of the presentation though, as it’s tricky to piece together his line of argument from the slides alone.

However one really cool thing he’s made available is a bookmarklet for evaluating any website’s <head>:
ct.css

Practical front-end performance tips

I’ve been really interested in the subject of Web Performance since I read Steve Souders’ book High Performance Websites back in 2007. Although some of the principles in that book are still relevant, it’s also fair to say that a lot has changed since then so I decided to pull together some current tips. Disclaimer: This is a living document which I’ll expand over time. Also: I’m a performance enthusiast but not an expert. If I have anything wrong, please let me know.

Inlining CSS and/or JavaScript

The first thing to know is that both CSS and JavaScript are (by default) render-blocking, meaning that when a browser encounters a standard .css or .js file in the HTML, it waits until that file has finished downloading before rendering anything else.

The second thing to know is that there is a “magic file size” when it comes to HTTP requests: because of TCP slow start, the first round trip of a new connection can only deliver roughly 14 kb of file data. So if a file is larger than 14 kb, it requires multiple roundtrips.

If you have a lean page and minimal CSS and/or JavaScript, to the extent that the page in combination with the CSS/JS content would weigh 14 kb or less after minifying and gzipping, you can achieve better performance by inlining your CSS and/or JavaScript into the HTML. This is because there’d be only one request, thereby allowing the browser to get everything it needs to start rendering the page from that single request. So your page is gonna be fast.

If your page including its CSS/JS is over 14 kb after minifying and gzipping then you’d be better off not inlining those assets. It’d be better for performance to link to external assets and let them be cached, rather than having a bloated HTML file that requires multiple roundtrips and doesn’t get the benefit of static asset caching.

Avoid CSS @import

CSS @import is really slow! The browser can only discover an @import after it has downloaded and parsed the stylesheet containing it, so each import chains another render-blocking request onto the first.

JavaScript modules in the head

Native JavaScript modules are included on a page using the following:

<script type="module" src="main.js"></script>

Unlike standard <script> elements, module scripts are deferred (non render-blocking) by default. Rather than placing them before the closing </body> tag I place them in the <head> so as to allow the script to be downloaded early and in parallel with the DOM being processed. That way, the JavaScript is already available as soon as the DOM is ready.

Background images

Sometimes developers implement an image as a CSS background image rather than a “content image”, either because they feel it’ll be easier to manipulate that way—a typical example being a responsive hero banner with overlaid text—or simply because it’s decorative rather than meaningful. However it’s worth being aware of how that impacts the way that image loads.

Outgoing requests for images defined in CSS rather than HTML won’t start until the browser has created the Render Tree. The browser must first download and parse the CSS then construct the CSSOM before it knows that “Element X” should be visible and has a background image specified, in order to then decide to download that image. For important images, that might feel too late.

As Harry Roberts explains it’s worth considering whether the need might be served as well or better by a content image, since by comparison that allows the browser to discover and request the image nice and early.

By moving the images to <img> elements… the browser can discover them far sooner—as they become exposed to the browser’s preload scanner—and dispatch their requests before (or in parallel to) CSSOM completion

However if it still makes sense to use a background image and performance is important, Harry recommends including an accompanying hidden image inline, or preloading it in the <head> via link rel=preload.

Preload

From MDN’s preload docs, preload allows:

specifying resources that your page will need very soon, which you want to start loading early in the page lifecycle, before browsers' main rendering machinery kicks in. This ensures they are available earlier and are less likely to block the page's render, improving performance.

The benefits are most clearly seen on large and late-discovered resources. For example:

  • Resources that are pointed to from inside CSS, like fonts or images.
  • Resources that JavaScript can request such as JSON and imported scripts.
  • Larger images and videos.

I’ve recently used the following to assist performance of a large CSS background image:

<link rel="preload" href="bg-illustration.svg" as="image" media="(min-width: 60em)">

Self-host your assets

Using third-party hosting services for fonts or other assets no longer offers the previously-touted benefit of the asset potentially already being in the user’s browser cache. All major browsers now partition their caches by site, which effectively disables cross-domain caching.

You can still take advantage of the benefits of CDNs for reducing network latency, but preferably as part of your own infrastructure.

Miscellaneous

Critical CSS is often a wasted effort due to CSS not being a bottleneck, so is generally not worth doing.

References

Progressively enhanced burger menu tutorial by Andy Bell

Here’s a smart and comprehensive tutorial from Andy Bell on how to create a progressively enhanced narrow-screen navigation solution using a custom element. Andy also uses Proxy for “enabled” and “open” state management, ResizeObserver on the custom element’s containing header for a Container Query like solution, and puts some serious effort into accessible focus management.

One thing I found really interesting was that Andy was able to style child elements of the custom element (as opposed to just elements which were present in the original unenhanced markup) from his global CSS. My understanding is that you can’t get styles other than inheritable properties through the Shadow Boundary, so this had me scratching my head. I think the explanation is that Andy is not attaching the elements he creates in JavaScript to the Shadow DOM but rather rewriting and re-rendering the element’s innerHTML. This is an interesting approach and solution for getting around web component styling issues. I see elsewhere online that the innerHTML-based approach is frowned upon, however Andy doesn’t “throw out” the original markup but instead augments it.

Adapting Stimulus usage for better Progressive Enhancement

A while back, Jake Archibald tweeted:

Don't render buttons on the server that require JS to work.

The idea is that user interface elements which depend on JavaScript (such as buttons) should be rendered on the client-side, i.e. with JavaScript.

In the context of a progressive enhancement mindset, this makes perfect sense. Our minimum viable experience should work without JavaScript due to the fragility of a JavaScript-dependent approach so should not include script-triggering buttons which might not work. The JavaScript which applies the enhancements should not only listen for and act upon button events, but should also be responsible for actually rendering the button.

This is how I used to build JavaScript interactions as standard, however sadly due to time constraints and framework conventions I don’t always follow this best practice on all projects.

At work, we use Stimulus. Stimulus has a pretty appealing philosophy:

Stimulus is designed to enhance static or server-rendered HTML—the “HTML you already have”

However in their examples they always render buttons on the server; they always assume the JavaScript-powered experience is the baseline experience. I’ve been pondering whether that could easily be adapted toward better progressive enhancement and it seems it can.

My hunch was that I should use the connect() lifecycle method to render a button into the component (and introduce any other script-dependent markup adjustments) at the earliest opportunity. I wasn’t sure whether creating new DOM elements at this point and fitting them with Stimulus-related attributes such as action and target would make them available via the standard Stimulus APIs like server-rendered elements but was keen to try. I started by checking if anyone was doing anything similar and found a thread where Stimulus contributor Javan suggested that DIY target creation is fine.

I then gave that a try and it worked! Check out my pen Stimulus with true progressive enhancement. It’s a pretty trivial example for now, but proves the concept.
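
For reference, the controller looks roughly like this (a simplified sketch of my pen; the identifier, markup and behaviour are illustrative only):

import { Controller } from "stimulus";

export default class extends Controller {
  connect() {
    // Render the JS-dependent button at the earliest opportunity,
    // so non-JS users are never shown a broken control
    const button = document.createElement("button");
    button.type = "button";
    button.textContent = "Toggle";
    button.setAttribute("data-action", "toggle#toggle");
    this.element.prepend(button);
  }

  toggle() {
    this.element.classList.toggle("is-open");
  }
}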

Astro

Astro looks very interesting. It’s in part a static site builder (a bit like Eleventy) but it also comes with a modern (revolutionary?) developer experience which lets you author components as web components or in a JS framework of your choice but then renders those to static HTML for optimal performance. Oh, and as far as I can tell there’s no build pipeline!

Astro lets you use any framework you want (or none at all). And if most sites only have islands of interactivity, shouldn’t our tools optimize for that?

People have been posting some great thoughts and insights on Astro already, for example:

(via @css)

clipboard.js - Copy to clipboard without Flash

Here’s a handy JS package for “copy to clipboard” functionality that’s lightweight and installable from npm.

It also appears to have good legacy browser support plus a means for checking/confirming support too, which should assist if your approach is to only add a “copy” button to the DOM as a progressive enhancement.
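
Combining those ideas might look something like this (a sketch; the target element is hypothetical):

import ClipboardJS from "clipboard";

const target = document.querySelector("#code-sample");

// Only add the "copy" button as an enhancement when support is confirmed
if (target && ClipboardJS.isSupported()) {
  const button = document.createElement("button");
  button.type = "button";
  button.textContent = "Copy";
  button.setAttribute("data-clipboard-target", "#code-sample");
  target.before(button);
  new ClipboardJS(button);
}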

(via @chriscoyier)

Inspire.js

Lean, hackable, extensible slide deck framework

I’ve been on the lookout for a lightweight, web standards based slide deck solution for a while and this one from Lea Verou could well be perfect.

It has keyboard navigation, video and code demo support, an index page and much more… and it’s all powered by straightforward HTML, CSS and JavaScript. I need to give this a spin!

Container Queries in Web Components | Max Böck

Max’s demo is really clever and features lots of interesting web component related techniques.

I came up with this demo of a book store. Each of the books is draggable and can be moved to one of three sections, with varying available space. Depending on where it is placed, different styles will be applied to the book.

Some of the techniques I found interesting included:

  • starting with basic HTML for each book and its image, title, and author elements rather than an empty custom element, thereby providing a resilient baseline
  • wrapping each book in a custom book-element tag (which the browser would simply treat like a div in the worst case scenario)
  • applying the slot attribute to each of the nested elements, for example slot="title"
  • including a template with id="book-element" at the top of the HTML. This centralises the optimal book markup, which makes for quicker, easier, and less disruptive maintenance. (A template is parsed but not rendered by the browser. It is available solely to be referenced and used by JavaScript)
  • including slots within the template, such as <slot name="title">
  • putting a style block within the template. These styles target the book component only, and include container query driven responsiveness
  • targeting the <book-element> wrapper element in CSS via the :host selector, and applying contain to set it as a container query context
  • targeting a slot in the component CSS using (for example) ::slotted(img)
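
Pulling a few of those techniques together, the JavaScript side of the upgrade might look roughly like this (a sketch reusing the article’s names):

const template = document.getElementById("book-element");

class BookElement extends HTMLElement {
  constructor() {
    super();
    // Attach a shadow root and stamp in the centralised template markup;
    // the light DOM children are then distributed into the template’s slots
    this.attachShadow({ mode: "open" });
    this.shadowRoot.appendChild(template.content.cloneNode(true));
  }
}

customElements.define("book-element", BookElement);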

Thoughts

Firstly, in the basic HTML/CSS, I might ensure images are display: block and use div rather than span for a better baseline appearance should JavaScript fail.

Secondly, even though this tutorial is really nice, I still find myself asking: why use a Web Component to render a book rather than a server-side solution when the latter removes the JS dependency? Part of the reason is no doubt developer convenience—people want to build component libraries in JavaScript if that’s their language of choice. Also, it requires less backend set-up and leads to a more portable stack. And back-end tools for component-based architectures are generally less mature and feature-rich than those for the front-end.

One Web Component specific benefit is that Shadow DOM provides an encapsulation mechanism for styles, scripts, and HTML markup. This encapsulation provides private scope that both prevents the content of the component from being affected by the external document, and keeps its CSS and JS from leaking out… which might be nice for avoiding the namespacing you’d otherwise have to do.

I have a feeling that Web Components might make sense for some components but be neither appropriate nor required for others. So just because you use Web Components doesn’t mean you suddenly need to write or refactor every component that way. It’s worth bearing in mind that client-side JavaScript based functionality comes with a performance cost: the user needs to wait for it to download. So I feel there might be a need to exercise some restraint. I want to think about this a little more.

Other references

Ruthlessly eliminating layout shift on netlify.com, by Zach Leatherman

I love hearing about clever front-end solutions which combine technologies and achieve multiple goals. In Zach’s post we hear how Netlify’s website suffered from layout shift when conditionally rendering dismissible promo banners, and how he addressed this by rethinking the problem and shifting responsibilities around the stack.

Here’s my summary of the smart ideas covered in the post:

  • decide on the appropriate server-rendered content… in this case showing rather than hiding the banner, making the most common use case faster to load
  • have the banner “dismiss” button’s event handling script store the banner’s href in the user’s localStorage as an identifier accessible on return visits
  • process lightweight but critical JavaScript logic early in the <head>… in this case a check for this banner’s identifier existing in localStorage
  • under certain conditions – in this case when the banner was previously seen and dismissed – set a “state” class (banner--hide) on the <html> element, leading to the component being hidden seamlessly by CSS
  • build the banner as a web component, the first layer of which being a custom element <announcement-banner> and the second a JavaScript class to enhance it
  • delegate responsibility for presenting the banner’s “dismiss” button to the same script responsible for the component’s enhancements, meaning that a broken button won’t be presented if that script were to break.

So much to like in there!
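
To illustrate the early <head> logic, here’s a minimal sketch (the storage key, banner URL and class name follow the article’s approach but the exact values are my own):

// Runs inline, early in the <head>, before the banner is rendered
const dismissedHref = localStorage.getItem("dismissed-banner");

if (dismissedHref === "/current-promo") {
  document.documentElement.classList.add("banner--hide");
}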

Here are some further thoughts the article provoked.

Web components FTW

It feels like creating a component such as this one as a web component leads to a real convergence of benefits:

  • tool-free, async loading of the component JS as an ES module
  • fast, native element discovery (no need for a document.querySelector)
  • enforces using a nice, idiomatic class providing encapsulation and high-performing native callbacks
  • resilience and progressive enhancement by putting all your JS-dependent stuff into the JS class and having that enhance your basic custom element. If that JS breaks, you still have the basic element and won’t present any broken elements.

Even better, you end up with a framework-independent, standards-based component that you could share with others for reuse elsewhere, just like Zach did.

Multiple banners

I could see there being a case where there are multiple banners during the same time period. I guess in that situation the localStorage banner value could be a stringified object rather than a simple, single-URL string.
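
For example (a sketch; the key and URLs are hypothetical):

const key = "dismissed-banners";
const dismissed = JSON.parse(localStorage.getItem(key) || "{}");

// Record each dismissed banner against its URL
dismissed["/current-promo"] = true;
localStorage.setItem(key, JSON.stringify(dismissed));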

Setting context on the root

It’s really handy to have a way to exert just-in-time control over the display of a server-rendered element in a way that avoids flashes of content… and adding a class to the <html> element offers that. In this approach, we run the small amount of JavaScript required to test a local condition (e.g. checking for a value in localStorage) really early. That lets us process our conditional logic before the element is rendered… although this also means that it’s not yet available in the DOM for direct manipulation. But adding a class to the HTML element means that we can pre-prepare CSS to use that class as a contextual selector for hiding the element.

We’re already familiar with the technique of placing classes on the root element from libraries like modernizr and some font-loading approaches, but this article serves as a reminder that we can employ it whenever we need it.

Handling the close button

Zach’s approach to handling the banner’s dismiss button was interesting. He makes sure that it’s not shown unless the web component’s JavaScript runs successfully which is great, but rather than inject it with JavaScript he includes it in the initial HTML but hidden with CSS, and his method of hiding is opacity.

We use opacity to toggle the close button so that it doesn’t reflow the component when it’s enabled via JavaScript.

I think what Zach’s saying is that the alternatives – inserting the button with JS, or toggling the hidden attribute or its CSS counterpart display:none – would affect geometry causing the browser to perform layout… whereas modifying opacity does not.

I love that level of diligence! Typically I prefer to delegate responsibility for inserting JS-dependent buttons to JavaScript because in comparison to including a button in the server-rendered HTML then hiding it, it feels more resilient and a more maintainable separation of concerns. However as always the best solution depends on the situation.

If I were going down Zach’s route I think I’d replace opacity with visibility, since the latter hiding method removes the hidden element from the accessibility tree (which feels more accessible) while still avoiding the reflow that display would trigger.

Side-thoughts

In a server-side scripted application – one using Rails or PHP, for example – you could alternatively handle persisting state with cookies rather than localStorage… allowing you to test for the presence of the cookie on the server then handle conditional rendering of the banner on the server too, rather than needing classes which trigger hiding. I can see an argument for that. Thing is though, not everyone’s working in that environment. Zach has provided a standalone solution.

References

Observer APIs in a nutshell

I’ve played with the various HTML5 Observer APIs (IntersectionObserver, ResizeObserver and MutationObserver) a little over the last few years—for example using ResizeObserver in a container query solution for responsive grids. But in all honesty their roles, abilities and differences haven’t yet fully stuck in my brain. So I’ve put together a brief explainer for future reference.

Intersection Observer

Lets you watch for when an element of your choice intersects with a root element of your choice—typically the viewport—and then take action in response.

So you might watch for a div that’s way down the page entering the viewport as a result of the user scrolling, then act upon that by applying a class which animates that div’s opacity from 0 to 1 to make it fade in.

Here’s how it works:

  • Instantiate a new IntersectionObserver object, passing in firstly a callback function and secondly an options object which specifies your root element (usually the viewport, or a specific subsection of it).
  • Call observe on your instance, passing in the element you want to watch. If you have multiple elements to watch, you could call observe repeatedly in a loop through the relevant NodeList.
  • In the callback function add the stuff you want to happen in response to “intersecting” and “no longer intersecting” events.
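
A minimal sketch of the fade-in example described above (class names are illustrative):

const observer = new IntersectionObserver(
  (entries) => {
    for (const entry of entries) {
      if (entry.isIntersecting) {
        entry.target.classList.add("is-visible"); // triggers the CSS fade-in
        observer.unobserve(entry.target); // only animate once
      }
    }
  },
  { root: null, threshold: 0 } // a null root means the viewport
);

document.querySelectorAll(".fade-in").forEach((el) => observer.observe(el));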

Mutation Observer

Lets you watch for changes to the attributes or content of DOM elements then take action in response.

You might use this if you have code that you want to run if and when an element changes because of another script.

Here’s how it works:

  • Your typical starting point is that you already have one or more event listeners which modify the DOM in response to an event.
  • Instantiate a new MutationObserver object, passing in a callback function.
  • The callback function will be called every time the DOM is changed.
  • Call observe on your instance, passing in as first argument the element to watch and as second argument a config object specifying what type of changes you’re interested in, for example you might only care about changes to specific attributes.
  • Your callback function provides an array of MutationRecord objects—one for each change that has just taken place—which you can loop through and act upon.
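
A minimal sketch, watching a hypothetical element for changes to its class attribute:

const target = document.querySelector("#status");

const observer = new MutationObserver((mutationRecords) => {
  for (const record of mutationRecords) {
    console.log(`${record.attributeName} changed on`, record.target);
  }
});

// Only report attribute changes, and only for the class attribute
observer.observe(target, { attributes: true, attributeFilter: ["class"] });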

Resize Observer

Lets you watch for an element meeting a given size threshold then take action in response.

For example you might add a class of wide to a given container only when it is wider than 60em so that new styles are applied. This is a way of providing container query capability while we wait for that to land in CSS.

Or you might load additional, heavier-weight media in response to a certain width threshold because you feel you can assume a device type that indicates the user is on wifi. Adding functionality rather than applying styles is something we could not achieve with CSS alone.
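
A sketch of the container query example (assuming a 16px root font size, so 60em is 960px):

const observer = new ResizeObserver((entries) => {
  for (const entry of entries) {
    // Toggle the class based on the observed element's own width, not the viewport's
    entry.target.classList.toggle("wide", entry.contentRect.width >= 960);
  }
});

observer.observe(document.querySelector(".container"));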

Given that Container Query support is coming in CSS and that we can usually get by without it in the meantime, I don’t think it’s something I need so desperately that I’ll keep reaching for JavaScript to achieve it. However that’s not the only use for ResizeObserver.

It’s also worth remembering that it’s not all about width: ResizeObserver can also be used to detect and respond to changes to an element’s height. An example might be watching for changes to a chat window’s height—something that’s liable to happen as new messages appear—and then ensuring the focus stays at the bottom, on the latest message.

References

Steve Griffith has some great video tutorials on these topics.

Encapsulated Eleventy/Nunjucks components with macros (by Trys Mudford)

Trys shows us how to use the Nunjucks macro to create encapsulated components. This works out less leaky and more predictable than an include preceded by variables assigned with set.

Trys’s solution allows us to render components like so:

{{ component('button', {
  text: 'Press me'
}) }}

{# Output #}
<button type="button">Press me</button>

Update, 8th Aug 2021: when I tried implementing this I found that it results in errors when attempting to render components anywhere other than on the base template where Trys recommended including the import line. The workaround—as Paul Salaets points out—is to include the import at the top of every page-level template (index.njk, archive.njk etc) that uses the component macro.

Front-of-the-front-end and back-of-the-front-end web development (by Brad Frost)

The Great Divide between so-called front-end developers is real! Here, Brad Frost proposes some modern role definitions.

A front-of-the-front-end developer is a web developer who specializes in writing HTML, CSS, and presentational JavaScript code.

A back-of-the-front-end developer is a web developer who specializes in writing JavaScript code necessary to make a web application function properly.

Brad also offers:

A succinct way I’ve framed the split is that a front-of-the-front-end developer determines the look and feel of a button, while a back-of-the-front-end developer determines what happens when that button is clicked.

I’m not sure I completely agree with his definitions—I see a bit more nuance in it. Then again, maybe I’m biased by my own career experience. I’m sort-of a FOTFE developer, but one who has also always done both BOTFE and “actual” back-end work (building Laravel applications, or working in Ruby on Rails etc).

I like the fact that we are having this discussion, though. The expectations on developers are too great and employers and other tech people need to realise that.

Vanilla JS List

Here’s Chris Ferdinandi’s curated list of organisations which use vanilla JS to build websites and web apps.

You don’t need a heavyweight JavaScript framework, and vanilla JS does scale.

At the time of writing the list includes Marks & Spencer, Selfridges, Basecamp and GitHub.

(via @ChrisFerdinandi)

Accessible interactions (on Adactio)

Jeremy Keith takes us through his thought process regarding the choice of link or button when planning accessible interactive disclosure elements.

A button is generally a solid choice as it’s built for general interactivity and carries the expectation that when activated, something somewhere happens. However in some cases a link might be appropriate, for example when the trigger and target content are relatively far apart in the DOM and we feel the need to move the user to the target and give it focus.

For a typical disclosure pattern where some content is shown/hidden by an adjacent trigger, a button suits perfectly. The DOM elements are right next to each other and flow into each other so there’s no need to move or focus anything.

However in the case of a log-in link in a navigation menu which—when enhanced by JavaScript—opens a log-in form inside a modal dialogue, a link might be better. In this case you might use an anchor with a fragment identifier (<a href="#login-modal">Log in</a>) pointing to a login-form far away at the bottom of the page. This simple baseline will work if JavaScript is unavailable or fails, however when JavaScript is available we can intercept the link’s default behaviour and enhance things. Furthermore because the expectation with links is that you’ll go somewhere and modal dialogues are kinda like faux pages, the link feels appropriate.

While not explicit in the article, another thing I take from this is that by structuring your no-JavaScript experience well, this will help you make appropriate decisions when considering the with-JavaScript experience. There’s a kind of virtuous circle there.

Progressively enhanced JavaScript In Real Life

Over the last couple of days I’ve witnessed a good example of progressive enhancement “In Real Life”. And I think it’s good to log and share these validations of web development best practices when they happen so that their benefits can be seen as real rather than theoretical.

A few days ago I noticed that the search function on my website wasn’t working optimally. As usual, I’d click the navigation link “Search” then some JavaScript would reveal a search input and set keyboard focus to it, prompting me to enter a search term. Normally, the JavaScript would then “look ahead” as I type characters, searching the website for matching content and presenting (directly underneath) a list of search result links to choose from.

The problem was that although the search input was appearing, the search result suggestions were no longer appearing as I typed.

Fortunately, back when I built the feature I had just read Phil Hawksworth’s Adding Search to a Jamstack site which begins by creating a non-JavaScript baseline using a standard form which submits to Google Search (scoped to your website), passing as search query the search term you just typed. This is how I built mine, too.

So, just yesterday at work I was reviewing a PR which prompted me to search for a specific article on my website by using the term “aria-label”. And although the enhanced search wasn’t working, the baseline search functionality was there to deliver me to a Google search result page (site:https://fuzzylogic.me/ aria-label) with the exact article I needed appearing top of the search results. Not a Rolls-Royce experience, but perfectly serviceable!

Why had the enhanced search solution failed? It was because the .json file which is the data source for the lookahead search had at some point allowed in a weird character and become malformed. And although the site’s JS was otherwise fine, this malformed data file was preventing the enhanced search from working.

JavaScript is brittle and fails for many reasons and in many ways, making it different from the rest of the stack. Added to that there’s the “unavailable until loaded” aspect, or as Jake Archibald put it:

all your users are non-JS while they’re downloading your JS.

The best practices that we as web developers have built up for years are not just theoretical. Go watch a screen reader user browse the web if you want proof that providing descriptive link text rather than “click here”, or employing headings and good document structure, or describing images properly with alt attributes are worthwhile endeavours. Those users depend on those good practices.

Likewise, JavaScript will fail to be available on occasion, so building a baseline no-JS solution will ensure that when it does, the show still goes on.

Newsletters, by Robin Rendle

A fantastic so-called “Scroll Story” from Robin Rendle. In his own words it’s “an elaborate blog post where I rant about a thing” however given the beautiful typography, layout and illustrations on show I think he’s selling it a little short!

The content of this “story” is pretty interesting – Robin laments the fact that web authors often need newsletters, or to “spam social media” in order to publicise articles on their websites, because most people don’t use RSS (awareness is too low and barriers to entry too great).

However it’s the story’s design and technical implementation which really caught my eye.

Robin does some really cool stuff with the CSS scroll-snap-type property and also explains some steps he had to take to tame differing implementations of height:100vh across browsers.

Browser Support Heuristics

In web development it’s useful when we can say “if the browser supports X, then we know it also supports Y”.

There was a small lightbulb moment at work earlier this year when we worked out that:

if the user’s browser supports CSS Grid, then you know it also supports custom properties.

Knowing this means that if you wrap some CSS in an @supports(display:grid) then you can also safely use custom properties within that block.

I love this rule of thumb! It saves you looking up caniuse.com for each feature and comparing the browser support.

This weekend I did some unplanned rabbit-holing on the current state of (and best practices for using) ES modules in the browser, as-is and untranspiled. That revealed another interesting rule of thumb:

any browser that supports <script type="module"> also supports let and const, async/await, the spread operator, etc.

One implication of this is that if you currently build a large JavaScript bundle (due to being transpiled down to ES 3/5 and including lots of polyfills) and ship this to all browsers including the modern ones… you could instead improve performance for the majority of your visitors by configuring your bundler to generate two bundles from your code then doing:

<!-- only one of these will be used -->
<script type="module" src="lean-and-modern.js"></script>
<script nomodule src="bulky-alternative-for-old-browsers.js"></script>

I might make a little page or microsite for these rules of thumb. They’re pretty handy!

Creating websites with prefers-reduced-data (on polypane.app)

Even though more and more people get access to the internet every day, not all of them have fast gigabit connections or unlimited data. Using the media query prefers-reduced-data we can keep our sites accessible to everyone.

I’ve long wondered if there could be a way to tailor content to a user based on the speed of their internet connection, especially when considering features like responsive images. This looks like it might be the way to do that (although browser support is currently non-existent). It also allows us to respect the user’s wishes on how much data / battery life etc they’re willing to spare for your website.
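
From JavaScript you could check the preference like this (a sketch; given the lack of browser support, the query simply won’t match today):

const prefersReducedData = window.matchMedia("(prefers-reduced-data: reduce)").matches;

// e.g. choose a poster image rather than autoplaying a background video
console.log(prefersReducedData ? "serve the lightweight experience" : "serve the full experience");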

(via @adactio)

Minimalist Container Queries

Scott Jehl’s experimental take on a container/element query aimed at letting us set responsive styles for our elements based on their immediate context rather than that of the viewport.

I made a quick and minimal take on approximating Container/Element Queries using a web component and basic CSS selectors.

The idea is that for any given instance of the <c-q> custom element / web component you would define a scoped custom property which sets the pixel min-widths you’re interested in, like so:

c-q {
  --breakpoints: "400 600 800";
  background: black;
}

Zero to many of those numeric min-width values will appear in the element’s data-min-width HTML attribute based on which (if any) of them the element’s width is equal to or greater than.

You can style the element based on their presence using the ~= attribute selector, like this:

c-q[data-min-width~="400"] { 
  background: green; 
}
c-q[data-min-width~="600"] {
  background: blue;
}

See also Scott’s tweet announcing this which contains some interesting contributions including Heydon’s watched-box.

All of the various JavaScript approaches/experiments are summarised in CSS-Tricks’s article Minimal Takes on Faking Container Queries.

(via @scottjehl)

Sets in JavaScript

I don’t often store things in a Set in JavaScript, but maybe I should. The fact it will only store unique values makes it pretty handy.

One place I do currently use a Set is for the TagList in my 11ty-based personal website. I start by defining TagList as a new, empty Set. I then need to assemble all possible tags, so I iterate the blog posts and for each one add its associated tags to TagList. This means I can be sure that all duplicates will be removed automatically.
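
In code, the idea is roughly this (a sketch; the posts data is illustrative):

const posts = [
  { title: "Post one", tags: ["css", "html"] },
  { title: "Post two", tags: ["css", "javascript"] },
];

const tagList = new Set();

for (const post of posts) {
  // Adding a tag that's already in the Set is silently ignored
  post.tags.forEach((tag) => tagList.add(tag));
}

console.log([...tagList]); // ["css", "html", "javascript"]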

As you would imagine, Set has built-in methods for adding, deleting and iterating items. But if you need to do something beyond that, you can easily turn it into an array, for example by:

const myArray = [...mySet];

Also, browser support is pretty good. So although to date I’ve been safely confining my use to Node scripts for building my static site, I could probably be using it client-side, too.

Cheating Entropy with Native Web Technologies (on Jim Nielsen’s Weblog)

This is why, over years of building for the web, I have learned that I can significantly cut down on the entropy my future self will have to face by authoring web projects in vanilla HTML, CSS, and JS. I like to ask myself questions like:

  • Could this be done with native ES modules instead of using a bundler?
  • Could I do this with DOM scripting instead of using a JS framework?
  • Could I author this in CSS instead of choosing a preprocessor?

Fantastic post from Jim Nielsen about how your future self will thank you if you keep your technology stack simple now.

(via @adactio)

Introducing Rome

We’re excited to announce the first beta release and general availability of the Rome linter for JavaScript and TypeScript. This is the beginning of an entire suite of tools. Rome is not only a linter, but also a compiler, bundler, test runner, and more, for JavaScript, TypeScript, HTML, JSON, Markdown, and CSS. We aim to unify the entire frontend development toolchain.

A very ambitious project from the creator of Babel.

Thoughts on inline JavaScript event handlers in the <head>

I’ve been thinking about Scott Jehl’s “simplest way to load external CSS asynchronously” technique. I’m interested in its use of an inline (onload) event handler for running JavaScript-based enhancements in the <head>, in the context of some broader ruminations on how best to progressively enhance UI elements with JavaScript (for example adding toggle show/hide) without causing layout jank.

One really interesting aspect of using inline event handlers to apply enhancements was highlighted by Chris Ferdinandi today: as JavaScript goes, it’s pretty resilient.

Because we’re dealing with an HTML element directly in the document, and because the relevant JS is inline on that element and not dependent on any external files, the only case where the JS won’t run is if someone has JS completely turned off – a sub-1% proportion. The other typical JavaScript resilience pitfalls – such as network connections timing out, CDN failure and JS errors elsewhere blocking your code from running – simply don’t apply here.

Three CSS Alternatives to JavaScript Navigation (on CSS-Tricks)

In general this is a decent article on non-JavaScript-based mobile navigation options, but what I found most interesting is the idea of having a separate page for your navigation menu (at the URL /menu, for example).

Who said navigation has to be in the header of every page? If your front end is extremely lightweight or if you have a long list of menu items to display in your navigation, the most practical method might be to create a separate page to list them all.

I also noted that the article describes a method where you can “spoof” a slide-in hamburger menu without JS by using the checkbox hack. I once coded a similar HTML-and-CSS-only hamburger menu, but opted instead to use the :target pseudo-class in combination with the adjacent sibling selector, as described by Chris Coyier back in 2012.

The Simplest Way to Load CSS Asynchronously (Filament Group)

Scott Jehl of Filament Group demonstrates a one-liner technique for loading external CSS files without them delaying page rendering.

While this isn’t really necessary in situations where your (minified and compressed) CSS is small, say 14k or below, it could be useful when you’re working with large CSS files and want to deliver critical CSS separately and the rest asynchronously.

Today, armed with a little knowledge of how the browser handles various link element attributes, we can achieve the effect of loading CSS asynchronously with a short line of HTML. Here it is, the simplest way to load a stylesheet asynchronously:

<link rel="stylesheet" href="my.css" media="print" onload="this.media='all'">

Note that if JavaScript is disabled or otherwise not available your stylesheet will only load for print and not for screen, so you’ll want to follow up with a “normal” (non-print-specific) stylesheet within <noscript> tags.

Color Theme Switcher (on mxb.dev)

Max shows us how to build a colour theme switcher to let users customise your website. He uses a combination of Eleventy, JSON, Nunjucks with macros, a data attribute on the html element, CSS custom properties and a JavaScript based switcher.

Thanks, Max!

Debouncing vs. throttling with vanilla JS (on Go Make Things)

Chris explains how debouncing and throttling are two related but different techniques for improving performance and user experience when working with frequently invoked JavaScript event handlers.

With throttling, you run a function immediately, then wait a specified amount of time before running it again. Any additional attempts to run it before that time period is over are ignored.

With debouncing, after the relevant event fires a specified time period must pass uninterrupted in order for your function to run. When the time period has passed uninterrupted, that last attempt to run the function is the one that runs, with any previous attempts ignored.

You might debounce code in event handlers for scroll events to run when the user is completely done scrolling so as not to negatively affect browser performance and user experience.

For interactions that update the UI, throttling might make more sense, so that the updates run at predictable intervals.
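
For illustration, here are minimal sketches of each technique (simplified versions; see Trey Huffine’s tutorial mentioned below for something fuller):

// Debounce: run the function only after `delay` ms have passed with no further calls
function debounce(fn, delay = 200) {
  let timer;
  return (...args) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), delay);
  };
}

// Throttle: run immediately, then ignore further calls for the next `interval` ms
function throttle(fn, interval = 200) {
  let ready = true;
  return (...args) => {
    if (!ready) return;
    ready = false;
    fn(...args);
    setTimeout(() => { ready = true; }, interval);
  };
}

window.addEventListener("scroll", debounce(() => {
  console.log("User finished scrolling");
}));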

NB I’ve previously found Trey Huffine’s debounce tutorial and example function really useful, too.

Striking a Balance Between Native and Custom Select Elements (on CSS-Tricks)

We’re not going to try to replicate everything that the browser does by default with a native select element. We’re going to literally use a select element when any assistive tech is used. But when a mouse is being used, we’ll show the styled version and make it function as a select element.

This custom-styled select solution satisfies those who insist on a custom component but retains all the built-in accessibility we get from native form controls. I also really like the use of a @media (hover: hover) media query to detect an environment with hover (such as a computer with a mouse rather than a mobile browser on a handheld device).

JavaScript Arrow Functions

JavaScript arrow functions are one of those bits of syntax about which I occasionally have a brain freeze. Here’s a quick refresher for those moments.

Differences between arrow functions and traditional functions

Arrow functions are shorter than traditional function syntax.

They don’t bind their own this value. Instead, the this value of the scope in which the function was defined is accessible. That makes them poor candidates for methods since this won’t be a reference to the object the method is defined on. However it makes them good candidates for everything else, including use within methods, where—unlike standard functions—they can refer to (for example) this.name just like their parent method because the arrow function has no overriding this binding of its own.
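
For example (a small sketch):

const counter = {
  count: 0,
  start() {
    // The arrow function has no `this` binding of its own, so `this.count`
    // refers to the counter object, just as it does in the parent method
    setInterval(() => {
      this.count += 1;
      console.log(this.count);
    }, 1000);
  },
};

counter.start();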

TL;DR: typical usage

const doStuff = (foo) => {
  // stuff that spans multiple lines
};

// short functions
const add = (num1, num2) => num1 + num2;

Explainer

// Traditional Function
function (a) {
  return a + 100;
}

// Arrow Function Breakdown

// 1. Remove "function", place => between argument and opening curly
(a) => {
  return a + 100;
}

// 2. Remove braces and word "return". The return is implied.
(a) => a + 100;

// 3. Remove the argument parentheses
a => a + 100;

References

We’ve ruined the Web. Here’s how we fix it. (This is HCD podcast)

During the COVID situation, people have an urgent need to access critical information online. But in 2020, the average webpage is rammed full of large JavaScript files, huge images etc, and as a result is slow to load. This problem is likely to be most keenly felt by those who don’t have the luxury of fast internet – potentially the same people who need access to that critical information the most.

Here’s a brilliant discussion between Gerry McGovern and Jeremy Keith on that problem, suggesting tactics to help fix things such as performance budgets, introducing tactics at the design stage to mimic slow connections and other access constraints, optimising for return visits, progressive enhancement and more.

Loved this!

(via @adactio)

I have to reluctantly agree on this one. I’ve interviewed quite a few candidates for “front-end developer” (or similarly named) positions over recent years and the recurring pattern is that they are strong on JavaScript (though not necessarily on when to use it) and weak on HTML, CSS and the “bigger picture”.

Multiplayer Crosswords (chriszetter.com)

I wanted there to be an easy way to complete crosswords together that didn’t need people to pass a phone back and forth or for a copy of the crossword to be made in a shared Google Spreadsheet.

Several People are Solving is a lovely React app which started as a fork of The Guardian’s Frontend repo in order to further develop their crossword component.

See author Chris Zetter’s article on how he then introduced WebSockets (within a Rails application) to facilitate real-time collaboration.

The crosswords are updated regularly so what are you waiting for? Get solving!

In the same vein as Jeremy Keith’s recent blog post, Hydration, which calls out some of the performance and user experience problems associated with current Server Side Rendering approaches, I think Jake Archibald is absolutely bang on the money here.

BBC GEL Inclusive Components Technical Guide

The BBC Global Experience Language (GEL) Technical Guides are a series of framework-agnostic, code-centric recommendations and examples for building GEL design patterns in websites. They illustrate how to create websites that comply with all BBC guidelines and industry best practice, giving special emphasis to accessibility.

Heydon Pickering (author of Inclusive Components and co-creator of Every Layout) has implemented and documented over 25 of the BBC's design patterns. These are sure to be loaded with best practices and clever tricks for making resilient, accessible modern UI components. (via @heydonworks)

You Don't Need

A nice list of tips and tools on how to use simpler browser standards and APIs to avoid the added weight of unnecessary JavaScript and libraries.

Lodash, Moment and other similar libraries are expensive and we don’t always need them. This GitHub repo contains a host of nice tips, snippets and code-analysing tools.

One cautionary note regarding the idea of replacing JS with CSS: although using CSS rather than JavaScript for components like tabs and modals seems nice at first, it overlooks the fact that we often need JS for accessibility reasons, in order to apply the correct ARIA attributes when the state of a UI component changes.

Via Will Matthewson at work (FreeAgent) during our group conversation on JavaScript strategy.

Hydration (Adactio: Journal)

The situation we have now is the worst of both worlds: server-side rendering followed by a tsunami of hydration. It has a whiff of progressive enhancement to it (because there’s a cosmetic separation of concerns) but it has none of the user benefits.

Jeremy Keith notes that these days JavaScript frameworks like React can be used in different ways: not solely for creating an SPA or for complex client-side state management, but perhaps for JavaScript that is run on the server. A developer might choose React because they like the way it encourages modularity and componentisation. This could be a good thing if frameworks like Gatsby and Next.js were to use progressive enhancement properly.

In reality, the system of server-side rendering of non-interactive HTML that is reliant on a further payload of JavaScript for hydration leads to an initial loading experience that is “jagged and frustrating”.

Jeremy argues that this represents a worst-of-both-worlds situation and that its alleged “progressive enhancement via improved separation of concerns” is missing the point.

Hope is on the horizon for React in the form of partial hydration. I sincerely hope that it will become the default way of balancing server-side rendering with just-in-time client-side interaction.

(via @adactio)

Testing Stimulus Controllers

Stimulus JS is great but doesn’t provide any documentation for testing controllers, so here’s some of my own that I’ve picked up.

Required 3rd-party libraries

The examples below assume Jest as the test runner, plus @testing-library/jest-dom, which provides the toHaveTextContent matcher.

Basic Test

// hello_controller.test.js
import { Application as StimulusApp } from "stimulus";
import HelloController from "path/to/js/hello_controller";
// Provides the toHaveTextContent matcher (often imported once
// in a shared Jest setup file instead).
import "@testing-library/jest-dom";

describe("HelloController", () => {
  beforeEach(() => {
    // Insert the HTML and register the controller
    document.body.innerHTML = `
      <div data-controller="hello">
        <input data-target="hello.name" type="text">
        <button data-action="click->hello#greet">
          Greet
        </button>
        <span data-target="hello.output">
        </span>
      </div>
    `;
    StimulusApp.start().register('hello', HelloController);
  })

  it("inserts a greeting using the name given", () => {
    const helloOutput = document.querySelector("[data-target='hello.output']");
    const nameInput = document.querySelector("[data-target='hello.name']");
    const greetButton = document.querySelector("button");
    // Change the input value and click the greet button
    nameInput.value = "Laurence";
    greetButton.click();
    // Check we have the correct greeting
    expect(helloOutput).toHaveTextContent("Hello, Laurence!");
  })
})

A new technique for making responsive, JavaScript-free charts (DEV Community)

I wanted to see if it was possible to create SVG charts that would work without JS. Well, it is. I've also created an experimental Svelte component library called Pancake to make these techniques easier to use.

A lovely modern, progressively-enhanced approach to data visualisation that uses primarily SVG, HTML and CSS but can be enhanced with JavaScript for Node-based generation or client-side interactivity, if required. (via @jamesmockett)

When should you add the defer attribute to the script element? (on Go Make Things)

For many years I’ve placed script elements just before the closing body tag rather than in the <head>. Since a standard <script> element is render-blocking, the theory is that by putting it at the end of the document – after the main content of the page has loaded – it’s no longer blocking anything, and there’s no need to wrap it in a DOMContentLoaded event listener.

It turns out that my time-honoured default is OK, but there is a better approach.

Chris has done the research for us and ascertained that placing the <script> in the <head> and adding the defer attribute has the same effect as putting that <script> just before the closing body tag but offers improved performance.

This treads fairly complex territory but my general understanding is this:

Using defer on a <script> in the <head> allows the browser to download the script earlier, in parallel, so that it is ready to be used as soon as the DOM is ready rather than having to be downloaded and parsed at that point.
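As a rough illustration (the file name and selector are my own), a script loaded with defer from the <head> can safely query the DOM without any event-listener wrapper:

// main.js — referenced from the <head> as:
//   <script src="/js/main.js" defer></script>
// Because of defer, this executes after the document has been
// parsed, so no DOMContentLoaded wrapper is needed.
const nav = document.querySelector(".site-nav");
if (nav) {
  nav.classList.add("js-enhanced");
}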

Some additional points worth noting:

  • Only use the defer attribute when the src attribute is present. Don’t use it on inline scripts because it will have no effect.
  • The defer attribute has no effect on module scripts (<script type="module">); they defer by default.

Async and Await

My notes and reminders for handling promises with async and await In Real Life.

As I see it, the idea is to switch to using await when working with promise-returning, asynchronous operations (such as fetch) because it lends itself to more flexible and readable code.

async functions

The async keyword, when used before a function declaration (like so: async function f()):

  • defines an asynchronous function i.e. a function whose processes run after the main call stack and doesn’t block the main thread.
  • always returns a promise. (Its return value is implicitly wrapped in a resolved promise; see the example after this list.)
  • allows us to use await.
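A quick sketch of that implicit wrapping:

async function f() {
  return 1; // implicitly wrapped: behaves like `return Promise.resolve(1)`
}

f().then((value) => console.log(value)); // logs 1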

The await operator

  • Use the await keyword within async functions to wait for a promise.
  • Example usage: const users = await fetch('/users');
  • It makes the async function pause until that promise settles, then resume with its result.
  • It may only be used inside async functions, which scopes the “waiting” behaviour to that dedicated context.
  • It’s a more elegant syntax for getting a promise’s result than promise.then.
  • If the promise resolves successfully, await returns the result.
  • If the promise rejects, await throws the error, just as if there were a throw statement at that line.
  • That throw causes execution of the current function to stop (so the next statements won't be executed), with control passed to the first catch block in the call stack. If no catch block exists among caller functions, the program will terminate.
  • Given this “continue or throw” behaviour, wrapping an await in a try...catch is a really nice and well-suited pattern for including error handling, providing flexibility and aiding readability.

Here’s a try...catch-based example. (NB let’s assume that we have a list of blog articles and a “Load more articles” button which triggers the loadMore() function.)

export default class ArticleLoader {

  async loadMore() {
    const fetchURL = "https://mysite.com/blog/";
    try {
      const newItems = await this.fetchArticles(fetchURL);
      // If we’re here, we know our promise fulfilled.
      // We might add some additional `await`, or just…
      // Render our new HTML items into the DOM.
      this.renderItems(newItems);
    } catch (err) {
      this.displayError(err);
    }
  }
  
  async fetchArticles(url) {
    const response = await fetch(url, { method: "GET" });
    if (response.ok) {
      return response.text();
    }
    throw new Error("Sorry, there was a problem fetching additional articles.");
  }

  displayError(err) {
    const errorMsgContainer = document.querySelector("[data-target='error-msg']");
    errorMsgContainer.innerHTML = `<span class="error">${err}</span>`;
  }
}

Here’s another example. Let’s say that we needed to wait for multiple promises to resolve:

const allUsers = async () => {
  try {
    const results = await Promise.all([
      fetch(userUrl1),
      fetch(userUrl2),
      fetch(userUrl3)
    ]);
    // We only get here if the promise returned by Promise.all
    // fulfilled, i.e. all three fetches succeeded.
    // We can now do something with `results`, e.g. output a
    // success message.
    // ...
  } catch (err) {
    // NB: we’re not inside a class method here, so plain
    // console.error rather than this.displayError.
    console.error(err);
  }
};

Using await within a try...catch is my favourite approach, but sometimes it’s not an option because we’re at the topmost level of the code and therefore not inside an async function. In these cases it’s good to remember that we can call an async function and work with its returned value like any promise, i.e. using then and catch.

For example:

async function loadUser(url) {
  const response = await fetch(url);
  if (response.status === 200) {
    const json = await response.json();
    return json;
  }
  throw new Error(response.status);
}

loadUser('no-user-here.json')
  .then((json) => {
    // Fulfilled promise, so do something with the json
    // ...
  })
  .catch((err) => {
    // Rejected promise, so handle the error.
    // (catch() also returns a promise, so chaining can continue.)
    document.body.innerHTML = `Error: ${err.message}`;
  });

Modest JS Works

Pascal Laliberté has written a short, free, web-based book which advocates a modest and layered approach to using JavaScript.

I make the case for The JS Gradient, a principle whereby your app can have multiple coexisting modern JS approaches, starting from the global sprinkles to spot view-models to, yes, an SPA if that’s really necessary. At each point in the gradient, you’ll see when it’s a good idea to go a step further toward heavier JavaScript, or not.

Pascal’s philosophy starts with the following ideals:

  • prefer server-generated HTML over JavaScript-generated HTML. If we need to add more complex JavaScript layers we may deviate from that ideal, but this should be the starting point;
  • we should be able to swap and replace the HTML on a page on a whim. We can then support techniques like pjax (replacing the whole body of a page with new HTML, such as with Turbolinks) and ahah (asynchronous HTML over HTTP: replacing parts of a page with new HTML), making our app feel really fast while still favouring server-generated HTML;
  • favour native Browser APIs over proprietary libraries. Use the tools the browser gives us (History API, Custom Event handlers, native form elements, CSS and the cascade) and polyfill older browsers.

He argues that a single application can combine the options along the JS Gradient, but also that we need only move to a new level if and when we reach the current level’s threshold.

He defines the levels as follows:

  • Global Sprinkles: general app-level enhancements that occur on most pages, achieved by adding event listeners at document level to catch user interactions and respond with small updates. Such updates might include dropdowns, fetching and inserting HTML fragments, and Ajax form submission. This might be achieved via a single, DIY script (or something like Trimmings) that is available globally and provides reusable utilities via data- attributes;
  • Component Sprinkles: specific page component behaviour defined in individual .js files, where event listeners are still ideally set on the document;
  • Stimulus components: where each component’s HTML holds its state and defines its behaviour, with a companion controller .js file which wires up event handlers to elements;
  • Spot View-Models: using a framework such as Vue or React only in specific spots, for situations where our needs are more complex and generating the HTML on the server would be impractical. Rather than taking over the whole page, this just augments a specific page section with a data-reactive view-model;
  • A single-page application (SPA): typically an all-JavaScript affair, where whole pages are handled by reactive view-models like Vue and React, and the browser’s handling of clicks and the back button is overridden to serve different JavaScript-generated views to the user. This is the least modest approach, but there are times when it is necessary.

One point to which Pascal regularly returns is that it’s better to add event listeners to the document (with a check to ensure the event occurred on the relevant element) than to the element itself. I already knew that event delegation is better for browser performance, but Pascal’s point relates to swapping and replacing HTML on a whim: if event listeners are attached directly to an element and that element is replaced (or a duplicate is added), we have to keep attaching new event listeners. That’s not necessary when the single event listener lives on the document.
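A minimal sketch of that document-level pattern (the selector and handler are my own illustration):

// One listener on the document survives any amount of HTML swapping.
document.addEventListener("click", (event) => {
  // Only respond if the click happened on (or inside) a matching element.
  const button = event.target.closest("[data-behavior='load-more']");
  if (!button) return;
  // ...handle the interaction, e.g. fetch and insert an HTML fragment.
});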

Note: Stimulus applies event handlers to elements rather than the document, however one of its USPs is that it’s set up so that as elements appear or disappear from the DOM, event handlers are automatically added and removed. This lets you swap and replace HTML as you need without having to manually define and redefine event handlers. He calls this Automated Behaviour Orchestration and notes that while adding event listeners to the document is the ideal approach, the Stimulus approach is the next best thing.

Also of particular interest to me was his Stimulus-based Shopping Cart page demo where he employs some nice techniques including:

  • multiple controllers within the same block of HTML;
  • multiple Stimulus actions on a single element;
  • controller action methods which use document.dispatchEvent to dispatch Custom Events as a means of communicating changes up to other components (see the sketch after this list);
  • an element with an action which listens for the above custom event occurring on the document (as opposed to an event on the element itself).
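Here’s a rough sketch of that custom-event plumbing in plain JavaScript (the event name and detail payload are my own invention, not from Pascal’s demo):

// Listening: one component reacts to changes at document level.
document.addEventListener("cart:updated", (event) => {
  console.log(`Cart now has ${event.detail.itemCount} items`);
});

// Dispatching: another component broadcasts a change.
document.dispatchEvent(
  new CustomEvent("cart:updated", { detail: { itemCount: 3 } })
);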

I’ve written about Stimulus before and noted a few potential cons when considering complex interfaces, however Pascal’s demo has opened my eyes to additional possibilities.

Loading and Templating JSON Responses in Stimulus.js (by John Beatty)

Just because Stimulus.js is designed to work primarily with existing HTML doesn’t mean it can’t use JSON APIs when the need arises.

Here, John pimps up his Stimulus controller to get and use JSON from a remote API endpoint.

He triggers a fetch() from the connect() lifecycle callback, then iterates the returned JSON creating each HTML list item using a JavaScript template literal before finally rendering them into an existing <ul> via a Stimulus target.
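A rough sketch of that pattern (the endpoint, JSON shape and target name are my assumptions rather than John’s exact code):

import { Controller } from "stimulus";

export default class extends Controller {
  static targets = ["list"];

  connect() {
    this.load();
  }

  async load() {
    const response = await fetch("https://example.com/api/items");
    const items = await response.json();
    // Build each <li> with a template literal, then render them
    // into the <ul> target in one go.
    this.listTarget.innerHTML = items
      .map((item) => `<li>${item.name}</li>`)
      .join("");
  }
}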

Easy when you know how!

Building an accessible show/hide disclosure component with vanilla JS (Go Make Things)

A disclosure component is the formal name for the pattern where you click a button to reveal or hide content. This includes things like a “show more/show less” interaction for some descriptive text below a YouTube video, or a hamburger menu that reveals and hides when you click it.

Chris’s article provides an accessible means of showing and hiding content at the press of a button when the native HTML details element isn’t suitable.
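The core of such a component is a button whose aria-expanded state is kept in sync with the visibility of the content it controls. A sketch of the general pattern (not Chris’s exact code; the markup hooks are my own):

const toggle = document.querySelector("[data-disclosure-toggle]");
const content = document.querySelector("[data-disclosure-content]");

// Start collapsed: hide the content and reflect that on the button.
content.hidden = true;
toggle.setAttribute("aria-expanded", "false");

toggle.addEventListener("click", () => {
  const expanded = toggle.getAttribute("aria-expanded") === "true";
  toggle.setAttribute("aria-expanded", String(!expanded));
  content.hidden = expanded;
});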

Progressively Enhanced JavaScript with Stimulus

I’m dipping my toes into Stimulus, the JavaScript micro-framework from Basecamp. Here are my initial thoughts.

I immediately like the ethos of Stimulus.

The creators’ take is that in many cases, using one of the popular contemporary JavaScript frameworks is overkill.

We don’t always need a nuclear solution that:

  • takes over our whole front end;
  • renders entire, otherwise empty pages from JSON data;
  • manages state in JavaScript objects or Redux; or
  • requires a proprietary templating language.

Instead, Stimulus suggests a more “modest” solution – using an existing server-rendered HTML document as its basis (either from the initial HTTP response or from an AJAX call), and then progressively enhancing.

It advocates readable markup – being able to read a fragment of HTML which includes sprinkles of Stimulus and easily understand what’s going on.

And interestingly, Stimulus proposes storing state in the HTML/DOM.

How it works

Stimulus’ technical purpose is to automatically connect DOM elements to JavaScript objects which are implemented via ES6 classes. The connection is made by data- attributes (rather than id or class attributes).

data-controller values connect and disconnect Stimulus controllers.

The key elements are:

  • Controllers
  • Actions (essentially event handlers) which trigger controller methods
  • Targets (elements which we want to read or write to, mapped to controller properties)

Some nice touches

I like the way you can use the connect() method – a lifecycle callback invoked whenever a given controller is connected to the DOM – as a place to test browser support for a given feature before applying a JS-based enhancement.
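For example, a minimal sketch (the feature being detected is my own illustrative choice):

import { Controller } from "stimulus";

export default class extends Controller {
  connect() {
    // Only enhance when the browser supports what we need.
    if ("IntersectionObserver" in window) {
      this.element.classList.add("enhanced");
      // ...apply the JS-based enhancement here.
    }
  }
}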

Stimulus also readily supports the ability to have multiple instances of a controller on the page.

Furthermore, actions and targets can be added to any type of element without the controller JavaScript needing to know or care about the specific element, promoting loose coupling between HTML and JavaScript.

Managing State in Stimulus

Initial state can be read in from our DOM element via a data- attribute, e.g. data-slideshow-index.

Then in our controller object we have access to a this.data API with has(), get(), and set() methods. We can use those methods to set new values back into our DOM attribute, so that state lives entirely in the DOM without the need for a JavaScript state object.
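A quick sketch, following the data-slideshow-index example above:

import { Controller } from "stimulus";

// Assumes data-controller="slideshow" with a data-slideshow-index attribute.
export default class extends Controller {
  next() {
    // Read the current state straight from the DOM attribute…
    const index = this.data.has("index")
      ? parseInt(this.data.get("index"), 10)
      : 0;
    // …and write the new state back to the same attribute.
    this.data.set("index", index + 1);
  }
}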

Possible Limitations

Stimulus feels a little restrictive if dealing with less simple elements – say, for example, a data table with lots of rows and columns, each differing in multiple ways.

And if, like in our data table example, that element has lots of child elements, it feels like there might be more of a performance hit to update each one individually rather than replace the contents with new innerHTML in one fell swoop.

Summing Up

I love Stimulus’s modest, progressive-enhancement-friendly approach. I can see myself adopting it as a means of writing modern, modular JavaScript which fits well in a webpack context, in situations where the interactive elements are relatively simple and not composed of complex, multidimensional data.

How to manage JavaScript dependencies

Managing JavaScript dependencies is about as much fun as a poke in the eye. However even if—like me—you prefer to keep things lean and dependency-free as far as possible, it’s something you’re going to need to do either in large work projects or as your personal side-project grows. In this post I tackle it head-on to reduce the problem to some simple concepts and practical techniques.

In modern JavaScript applications, we can add tried-and-tested open source libraries and utilities by installing packages from the NPM registry. This can aid development by letting you concentrate on your application’s unique features rather than reinventing the wheel for already-solved common tasks.

A typical example might be to add axios or node-fetch to a Node.js project to provide a means of making API calls.

We can use a package manager such as yarn or npm to install packages. When our package manager installs a package it logs it as a project dependency which is to say that the project depends upon its presence to function properly.

It then follows that anyone who wants to run the application should first install its dependencies.

And it’s the responsibility of the project owner (you and your team) to manage the project’s dependencies over time. This involves:

  • updating packages when they release security patches;
  • maintaining compatibility by staying on package upgrade paths; and
  • removing installed packages when they are no longer necessary for your project.

While it’s important to keep your dependencies updated, in a recent survey by Sonatype 52% of developers said they find dependency management painful. And I have to agree that it’s not something I generally relish. However over the years I’ve gotten used to the process and found some things that work for me.

A simplified process

The whole process might go something like this (NB install yarn if you haven’t already).

# Start installing and managing 3rd-party packages.
# (only required if your project doesn’t already have a package.json)
yarn init  # or npm init

# Install dependencies (in a project which already has a package.json)
yarn  # or npm i

# Add a 3rd-party library to your project
yarn add package_name   # or npm i package_name

# Add package as a devDependency.
# For tools only required in the local dev environment
# e.g. CLIs, hot reload.
yarn add -D package_name  # or npm i package_name --save-dev 

# Add package but specify a particular version or semver range
# https://devhints.io/semver
# It’s often wise to do this to ensure predictable results.
# caret (^) is useful: allows upgrade to minor but not major versions.
# e.g. ^1.2.3 means >=1.2.3 <2.0.0
yarn add package_name@^1.2.3

# Remove a package
# use this rather than manually deleting from package.json.
# Updates yarn.lock, package.json and removes from node_modules.
yarn remove package_name   # or npm r package_name

# Update one package (optionally to a specific version/range)
yarn upgrade package_name
yarn upgrade package_name@^1.3.2

# Review (in a nice UI) all packages with pending updates, 
# with the option to upgrade whichever you choose
yarn upgrade-interactive

# Upgrade to latest versions rather than
# semver ranges you’ve defined in package.json.
yarn upgrade-interactive --latest

Responding to a security vulnerability in a dependency

If you host your source code on GitHub it’s a great idea to enable Dependabot. Essentially Dependabot has your back with regard to any dependencies that need updating. You can set it to send you automated security updates by email so that you know straight away if a vulnerability has been detected in one of your project dependencies and requires action.

Helpfully, if you have multiple GitHub repos and more than one of them includes the vulnerable package, you also get a round-up email with a message something like “A new security advisory on lodash affects 8 of your repositories”, with links to the alert for each repo, letting you manage them all at once.

Dependabot also works for a variety of languages and technologies—not just JavaScript—so, for example, in a Rails project it might email you to suggest bumping a package in your Gemfile.

Automated upgrades

Sometimes the task is straightforward. The Dependabot alert email tells you about a vulnerability in a package you explicitly installed and the diligent maintainer has already made a patch release available.

A simple upgrade to the relevant patch version would do the job, however Dependabot can even take care of that for you! Dependabot can automatically open a new Pull Request which addresses the vulnerability by updating the relevant dependency. It’ll give the PR a title like

Bump lodash from 4.17.11 to 4.17.19

You just need to approve and merge that PR. This is great; it’s really simple and takes care of lots of cases.

Note 1: if you work on a corporate repo that is not set up to “automatically open PRs”, you can often still take advantage of GitHub’s intelligence with just one or two extra manual steps. Just follow the links in your GitHub security alert email.

Note 2: Dependabot can also be set to do automatic version updates even when your installed version does not have a vulnerability. You can enable this by adding a dependabot.yml to your repo. But so far I’ve tended to avoid unpredictability and excess noise by having it manage security updates only.

Manual upgrades

Sometimes Dependabot will alert you to an issue but is unable to fix it for you. Bummer.

This might be because the package owner has not yet addressed the security issue. If your need to fix the situation is not super-urgent, you could raise an issue on the package’s GitHub repo asking the maintainer (nicely) if they’d be willing to address it… or even submit a PR applying the fix for them. If you don’t have the luxury of time, you’ll want to quickly find another package which can do the same job. An example here might be looking for a new CSS minifier package because your current one has a longstanding security issue. Having identified a replacement you’d remove package A, add package B, then update your code which previously used package A to make it work with package B. Hopefully only minimal changes will be required.

Alternatively the package may have a newer version or versions available but Dependabot can’t suggest a fix because:

  1. the closest new version’s version number is beyond the allowed range you specified in package.json for the package; or
  2. Dependabot can’t be sure that upgrading wouldn’t break your application.

If the package maintainer has released newer versions then you need to decide which to upgrade to. Your first priority is to address the vulnerability, so often you’ll want to minimise upgrade risk by identifying the closest non-vulnerable version. You might then run yarn upgrade <package…>@1.3.2. Note also that you may not need to specify a specific version because your package.json might already specify a semver range which includes your target version, and all that’s required is for you to run yarn upgrade or yarn upgrade <package> so that the specific “locked” version (as specified in yarn.lock) gets updated.

On other occasions you’ll read your security advisory email and the affected package will sound completely unfamiliar… likely because it’s not one you explicitly installed but rather a sub-dependency. Y’see, your dependencies have their own package.json and dependencies, too. It seems almost unfair to have to worry about these too, however sometimes you do. The vulnerability might even appear several times as a sub-dependency in your lock file’s dependency tree. You need to check that lock file (it contains much more detail than package.json), work out which of your top-level dependencies are dependent on the sub-dependency, then go check your options.

Update: use yarn why sockjs (replacing sockjs as appropriate) to find out why a module you don’t recognise is installed. It’ll let you know what module depends upon it, to help save some time.

When having to work out the required update to address a security vulnerability in a package that is a subdependency, I like to quickly get to a place where the task is framed in plain English, for example:

To address a vulnerability in xmlhttprequest-ssl we need to upgrade karma to the closest available version above 4.4.1 where its dependency on xmlhttprequest-ssl is >=1.6.2

Case Study 1

I was recently alerted to a “high severity” vulnerability in package xmlhttprequest-ssl.

Dependabot cannot update xmlhttprequest-ssl to a non-vulnerable version. The latest possible version that can be installed is 1.5.5 because of the following conflicting dependency: @11ty/eleventy@0.12.1 requires xmlhttprequest-ssl@~1.5.4 via a transitive dependency on engine.io-client@3.5.1. The earliest fixed version is 1.6.2.

So, breaking that down:

  • xmlhttprequest-ssl versions less than 1.6.2 have a security vulnerability;
  • that’s a problem because my project currently uses version 1.5.5 (via semver range ~1.5.4), which I was able to see from checking package-lock.json;
  • I didn’t explicitly install xmlhttprequest-ssl. It’s at the end of a chain of dependencies which began at the dependencies of the package @11ty/eleventy, which I did explicitly install;
  • To fix things I want to be able to install a version of Eleventy which has updated its own dependencies such that there’s no longer a subdependency on the vulnerable version of xmlhttprequest-ssl;
  • but according to the Dependabot message that’s not possible because even the latest version of Eleventy (0.12.1) is indirectly dependent on a vulnerable version-range of xmlhttprequest-ssl (~1.5.4);
  • based on this knowledge, Dependabot cannot recommend simply upgrading Eleventy as a quick fix.

So I could:

  1. decide it’s safe enough to wait some time for Eleventy to resolve it; or
  2. request Eleventy apply a fix (or submit a PR with the fix myself); or
  3. stop using Eleventy.

Case Study 2

A while ago I received the following security notification about a vulnerability affecting a side-project repo.

dot-prop < 4.2.1: “Prototype pollution vulnerability in dot-prop npm package before versions 4.2.1 and 5.1.1 allows an attacker to add arbitrary properties to JavaScript language constructs such as objects.”

I wasn’t familiar with dot-prop but saw that it’s a library that lets you “Get, set, or delete a property from a nested object using a dot path”. This is not something I explicitly installed but rather a sub-dependency—a lower-level library that my top-level packages (or their dependencies) use.

GitHub was telling me that it couldn’t automatically raise a fix PR, so I had to fix it manually. Here’s what I did.

  1. looked in package.json and found no sign of dot-prop;
  2. started thinking that it must be a sub-dependency of one or more of the packages I had installed, namely express, hbs, request or nodemon;
  3. looked in package-lock.json and via a Cmd-F search for dot-prop I found that it appeared twice;
  4. the first occurrence was as a top-level element of package-lock.json’s top-level dependencies object. This object lists all of the project’s dependencies and sub-dependencies in alphabetical order, providing for each the details of the specific version that is actually installed and “locked”;
  5. I noted that the installed version of dot-prop was 4.2.0, which made sense in the context of the GitHub security message;
  6. the other occurrence of dot-prop was buried deeper within the dependency tree as a dependency of configstore;
  7. I was able to work backwards and see that dot-prop is required by configstore, then Cmd-F search for configstore to find that it was required by update-notifier, which in turn is required by nodemon;
  8. I had worked my way up to a top-level dependency nodemon (installed version 1.19.2) and worked out that I would need to update nodemon to a version that had resolved the dot-prop vulnerability (if such a version existed);
  9. I then googled “nodemon dot-prop” and found some fairly animated GitHub issue threads between Remy, the maintainer of nodemon, and some users of the package, culminating in a fix;
  10. I checked nodemon’s releases and ascertained that my only option, if sticking with nodemon, was to install v2.0.3—a new major version. I wouldn’t ideally install a version which might include breaking changes, but in this case nodemon was just a devDependency (not something which should affect other parts of the application, and a developer convenience at that), so I went for it, safe in the knowledge that I could happily remove this package if necessary;
  11. I opened package.json and within devDependencies manually updated nodemon from ^1.19.4 to ^2.0.4. (If I was in a yarn context I’d probably have done this at the command line). I then ran npm i nodemon to reinstall the package based on its new version range which would also update the lock file. I was then prompted to run npm audit fix which I did, and then I was done;
  12. I pushed the change, checked my Github repo’s security section and noted that the alert (and a few others besides) had disappeared. Job’s a goodun!

Proactively checking for security vulnerabilities

On any important project it’s a good idea not to rely solely on automated alerts, but to proactively check for and address vulnerabilities.

Check for vulnerabilities like so:

yarn audit

# for a specific level only
yarn audit --level critical
yarn audit --level high
  

Files and directories

When managing dependencies, you can expect to see the following files and directories.

  • package.json
  • yarn.lock
  • node_modules (this is the directory into which packages are installed)

Lock files

As well as package.json, you’re likely to also have yarn.lock (or package-lock.json if you use npm) under source control too. As described above, while package.json can be less specific about a package’s version and suggest a semver range, the lock file will lock down the specific version to be installed by the package manager when someone runs yarn or npm install.

You shouldn’t manually change a lock file.

Choosing between dependencies and devDependencies

Whether you save an included package under dependencies (the default) or devDependencies comes down to how the package will be used and the type of website you’re working on.

The important practical consideration here is whether the package is necessary in the production environment. By production environment I don’t just mean the customer-facing website/application but also the environment that builds the application for production.

In a production “build process” environment (i.e. one which likely has the environment variable NODE_ENV set to production) the devDependencies are not installed. devDependencies are packages considered necessary for development only and therefore to keep production build time fast and output lean, they are ignored.

As an example, my personal site is JAMstack-based using the static site generator (SSG) Eleventy and is hosted on Netlify. On Netlify I added a NODE_ENV environment variable and set it to production (to override Netlify’s default setting of development) because I want to take advantage of faster build times where appropriate. To allow Netlify to build the site on each push I have Eleventy under dependencies so that it will be installed and is available to generate my static site.

By contrast, tools such as Netlify’s CLI and linters go under devDependencies. Netlify’s build process does not require them, nor does any client-side JavaScript.

Upgrading best practices

  • Check the package CHANGELOG or releases on Github to see what has changed between versions and if there have been any breaking changes (especially when upgrading to the latest version).
  • Use a dedicated PR (Pull Request) for upgrading packages. Keep the tasks separate from new features and bug fixes.
  • Upgrade to the latest minor version (using yarn upgrade-interactive) and merge that before upgrading to major versions (using yarn upgrade-interactive --latest).
  • Test your work on a staging server (or Netlify preview build) before deploying to production.

Webfont loading strategies

When it comes to webfonts, if you want to serve an accessible and high performance experience across device types it’s not as straightforward as just specifying your fonts in CSS then hoping for the best.

We likely have goals such as the following:

  • avoid Flash of Invisible Text (FOIT). Flash of Unstyled Text (FOUT) is preferable.
  • provide a good experience for users on slow connections.
  • while favouring FOUT over FOIT, take care to minimise jarring reflows.

Browser support is also a factor. For example font-display: swap seems like a great option however as Chris Ferdinandi pointed out in Why use the vanilla JS fontfaceset.load method instead of CSS font-display: swap it doesn’t have comprehensive mobile support. Since mobile users are regularly on slower connections they are likely to be hit hard by load times incurred due to custom fonts, therefore a solution which doesn’t cater to them is less attractive.

I tend to go for FOUT with a class—an approach that is reliable and good enough for most use cases. The idea is to use JavaScript to detect when a font has loaded successfully, and only then add a fonts-loaded class to the HTML element causing CSS scoped to that class to take effect. To date I’ve handled the detection using Bram Stein’s Font Face Observer to benefit from greater browser support (it handles IE) than I’d get using the native FontFaceSet.load() however I expect to gravitate toward the native API solution soon.
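A minimal sketch of the native version of that pattern (the font names are placeholders for your own webfonts):

// “FOUT with a class”, using the native FontFaceSet API.
if ("fonts" in document) {
  Promise.all([
    document.fonts.load("1em 'My Body Font'"),
    document.fonts.load("700 1em 'My Heading Font'")
  ]).then(() => {
    // CSS scoped to .fonts-loaded then applies the custom font-family.
    document.documentElement.classList.add("fonts-loaded");
  });
}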

I’ve also experimented with pre-loading fonts in the <head> in combination with font-display: swap on this, my personal website. This was inspired by the font-loading approach Zach Leatherman took with CSS-Tricks. However due to the fonts I’m using right now I’d currently be preloading too heavy a load for slow connections (thus temporarily blocking initial page render) so I’d need to go further and introduce two-stage loading into that mix.

I’ve also found that your choice of font and font host can really influence (constrain) the flexibility and options you have for font features and loading.

  • font-display: swap can only be applied on a self-hosted font or one hosted on Google Fonts (via a special querystring parameter on the font URL). So this CSS-only approach is not available with (for example) fonts hosted on Adobe Fonts or fonts.com.
  • link rel=preload is only available for self-hosted fonts because you need to be able to rely on the URL not changing, and you can’t rely on that for externally hosted fonts.
  • You might want to use the opentype features of a font but are they available in your font hosting/loading solution? Some of the fonts hosted on Google Fonts have nice OpenType features but by default these features are pruned out.
  • Getting access to the source .ttf file is beneficial as it allows you to subset the font, keeping the special features and characters you need while cutting out any your website does not require. Google Fonts are openly licensed, which means we don’t need to lean on Google’s hosting but can instead download the .ttf files, subset them (I use Glyphhanger) and self-host them.

This is all a work in progress and I’m still working this stuff out.

$$ in the DevTools Console

I learned something new today when developing in the Firefox Dev Tools console (although this applies to Chrome too)—something which was really useful and which I thought I’d share.

Basically, type $$('selector') into the console (replacing selector as desired) and it’ll give you back all matching elements on the page.

So for example, $$('script') or $$('li').

Similarly you can select a single element by instead using one dollar sign ($).

These seem to be console shortcuts for document.querySelector() (in the case of $) and document.querySelectorAll() (in the case of $$).

The other really cool thing is that the result is returned as a true array rather than a NodeList, so you can use array methods directly, e.g. $$('li').map(…) or $$('li').filter(…).

via @rem (Remy Sharp)

Stuffing the front end

Here’s Bridget Stewart, a developer from Ohio, with some thoroughly enjoyable “curmudgeonly” thoughts on why your website doesn’t necessarily need a JavaScript framework.

With websites taking a median time of 5.8 seconds to show primary content (source: HTTPArchive) and Google warning us that 53% of mobile visits leave pages that take longer than three seconds to load, it’s hard to justify stuffing the front-end unnecessarily.

Bridget’s article also contains this quote which I love:

First, solve the problem. Then, write the code.

— John Johnson

Essentially, don’t select the tooling first—a principle I fully endorse.

Accessible modal dialogues in 2019

I previously noted Keith J Grant’s article on the HTML dialog element which promised a native means of handling popups and modal dialogues. I’ve not yet used dialog in production, partly because of spotty browser support (although there is a polyfill) but also partly because—for reasons I couldn’t quite put my finger on after reading the spec—it just didn’t feel like the finished article.

Fast forward to March 2019 and Scott O’Hara has written an excellent piece describing why it’s still not ready. A combination of factors is to blame: lack of browser support, reliance on JavaScript, and multiple accessibility issues.

The good news is: having dug deeper into Scott’s work I see that among the many accessible components he has shared with the world is a custom, accessible modal dialogue. So next time the need arises, I think I’ll start there.

Thanks, Scott!

Promises in JavaScript

A brief explainer (for future-me and anyone else it helps) on what promises are and how to use them. Note: this is not an official definition, but rather one that works for me.

In the normal run of things, JavaScript code is synchronous. When line 1 has run (perhaps defining a new variable) and the operation has completed, line 2 runs (perhaps operating on the variable we just defined). However sometimes in line 1 we need to do something which will take longer – maybe an unknown length of time – for example to call a function which fetches data from an external API. In many cases we don’t mind that taking a while, or even eventually failing, but we want it to execute separately from the main JavaScript thread so as not to hold up the execution of line 2. So instead of waiting for the task to complete and return its value, our API-calling function instead returns a promise to supply that value in the future.

A Promise is an object we use for asynchronous operations. It’s a proxy for the success value or failure reason of the asynchronous operation, which at the time the promise is created is not necessarily yet known.

A promise is in one of three states:

  • pending
  • fulfilled (the operation completed successfully)
  • rejected (the operation failed)

We say a pending promise can go on to be either fulfilled with a value, or rejected with a reason (error).

We also talk about a promise being settled when it is no longer pending.

One memorable analogy was provided in Mariko Kosaka’s The Promise of a Burger Party. In it she describes the scenario of ordering a burger from, say, Burger King. It goes something like this:

  • you place your order for a Whopper;
  • they give you a tray with a buzzer. The tray is a promise that they will provide your burger as soon as it has been cooked, and the buzzer is the promise‘s state;
  • the buzzer is not buzzing to start with: it’s in the pending state;
  • the buzzer buzzes: the promise is now settled;
  • they might inform you that their flame-grill broke down half-way through. The cooking operation failed, and the promise of a burger has been rejected with that reason. You’ll likely want to act on that (by getting a refund);
  • alternatively all goes to plan, you go the counter and they fulfil their promise of a tasty burger placed onto your tray.
  • you decide you want to act on the success of getting your tasty burger by doing another thing, namely buying a coke. In code, you’ll be within the .then() method, which is the success handler for your promise, and in there you can just call buyCoke(). (Note: the buyCoke() operation might be synchronous; the real-life analogy being that pouring a coke is so quick that the assistant does it and serves it on a tray immediately, rather than giving you a tray as a promise for it.) At the end of your then() you might choose to return a meal object which combines your burger and coke;
  • you could then chain a further .then() onto the original .then() to keep working with your snowballing data value. This works because then() always returns a promise (even if the operation within it was synchronous), and promises are thenable (i.e. they have a then() method), hence our ability to keep chaining. See the sketch after this list.
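That chaining might look something like this (orderBurger() and buyCoke() are hypothetical stand-ins for the analogy):

// Hypothetical stand-ins for the burger party analogy.
const orderBurger = () =>
  new Promise((resolve) => setTimeout(() => resolve("burger"), 1000));
const buyCoke = () => "coke"; // synchronous: poured immediately

orderBurger()
  .then((burger) => {
    const coke = buyCoke();
    return { burger, coke }; // becomes the next then()’s input
  })
  .then((meal) => console.log("Enjoy your meal:", meal))
  .catch((reason) => console.log("Getting a refund because:", reason));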

Promise syntax

There are two things we need to be able to do with promises:

  1. create and return them;
  2. work with a returned promise.

Create and return a promise

When you create a promise you pass a function known as the executor to it, which runs instantly. Its arguments resolve and reject are callbacks provided by JavaScript. Our code goes inside the executor.

let examplePromise = new Promise(function (resolve, reject) {
  // do a thing, possibly async, then…
  const everythingTurnedOutFine = true; // stand-in for a real success check

  if (everythingTurnedOutFine) {
    resolve("Stuff worked!");
  } else {
    reject(new Error("It broke"));
  }
});

Work with a returned promise

examplePromise
  .then(function (response) {
    console.log("Success!", response);
  })
  .catch(function (error) {
    console.log("Failed!", error);
  });
