
Journal

Responsive Type and Zoom (by Adrian Roselli)

When people zoom a page, it is typically because they want the text to be bigger. When we anchor the text to the viewport size, even with a (fractional) multiplier, we can take away their ability to do that. It can be as much a barrier as disabling zoom. If a user cannot get the text to 200% of the original size, you may also be looking at a WCAG 1.4.4 Resize text (AA) problem.

I already tend to avoid dynamic, viewport-width-based fluid typography techniques in favour of making just one font-size adjustment – at a desktop breakpoint – based on the typographic theory that suggests we adjust type size according to reading distance. I learned this in Richard Rutter’s excellent book Web Typography.

While the ideas and code behind the fluid typography approach are nice, Adrian’s discovery that it can hinder users who need to zoom text only strengthens my feeling that it’s not the best way to handle responsive type.

The Man in the High Castle by Philip K. Dick

The second Philip K. Dick book I’ve read this year is his alternative-history sci-fi classic.

A photo of the book “The Man in the High Castle” by Philip K. Dick

In this book, Dick portrays an alternate reality in which the Axis powers have won the Second World War and America is under the rule of Imperial Japan and Nazi Germany.

Interestingly, many of the characters use the ancient Chinese text the I Ching for guidance, and there’s also a clever “book within a book” subplot in which the novel The Grasshopper Lies Heavy depicts an alternate history in which the Allied forces prevailed.

Having now read a couple of Philip K. Dick books this year, I’m arriving at the conclusion that while I don’t love his writing style, this is counterbalanced by his creativity and the interesting ideas which stay with you long afterwards.

Design Better Forms (UX Collective)

As Andrew Coyle says, “Life is short. No one wants to fill out a form.” Here, he presents a number of form design tips to make the user experience more bearable and increase completion rates.

Making a Better Custom Select Element (24 ways)

We want a way for someone to choose an item from a list of options, but it’s more complicated than just that. We want autocomplete options. We want to put images in there, not just text. The optgroup element is ugly, hard to style, and not announced by screen readers. I had high hopes for the datalist element, but it’s no good for people with low vision who zoom or use high contrast themes. select inputs are limited in a lot of ways. Let’s work out how to make our own while keeping all the accessibility features of the original.

Julie Grundy argues here that despite us having greater ability to style the standard select in 2019 there are times when that element doesn’t quite meet modern expectations.

This is a lovely, full-featured and fully accessible component. It could perhaps be improved by not showing the down-arrow icon until JavaScript is loaded, but otherwise it’s great.

Julie’s code currently exists solely as a GitHub repo, but for ease I’ve created this editable version on CodePen.

Will I use this in place of the select element? Not if I can help it, because after years of experience working with form elements I still trust native elements to work better cross-platform than custom alternatives. However if a design requires dropdown options to employ custom patterns such as media objects, then I’ll definitely reach for this component.

When should you add the defer attribute to the script element? (on Go Make Things)

For many years I’ve placed script elements just before the closing body tag rather than in the <head>. Since a standard <script> element is render-blocking, the theory is that by putting it at the end of the document – after the main content of the page has loaded – it’s no longer blocking anything, and there’s no need to wrap it in a DOMContentLoaded event listener.

It turns out that my time-honoured default is OK, but there is a better approach.

Chris has done the research for us and ascertained that placing the <script> in the <head> and adding the defer attribute has the same effect as putting that <script> just before the closing body tag but offers improved performance.

This treads fairly complex territory but my general understanding is this:

Using defer on a <script> in the <head> allows the browser to download the script earlier, in parallel, so that it is ready to be used as soon as the DOM is ready rather than having to be downloaded and parsed at that point.

Some additional points worth noting:

  • Only use the defer attribute when the src attribute is present. Don’t use it on inline scripts because it will have no effect.
  • The defer attribute has no effect on module scripts (script type="module"). They defer by default.
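
To make that concrete, here’s a minimal sketch of the deferred approach (the script filename is just an illustration):

```html
<head>
  <!-- Downloads in parallel while the page renders; executes only
       after the document has been parsed, in source order. -->
  <script src="/js/main.js" defer></script>
</head>
```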

Carbon

Create and share beautiful images of your source code. Start typing or drop a file into the text area to get started.

I’ve noticed Andy Bell using a really lovely format when sharing code snippets on Twitter. Turns out he was using this great tool.

(via @hankchizljaw)

Async and Await

My notes and reminders for handling promises with async and await In Real Life.

As I see it, the idea is to switch to using await when working with promise-returning, asynchronous operations (such as fetch) because it lends itself to more flexible and readable code.

async functions

The async keyword, when used before a function declaration (like so: async function f()):

  • defines an asynchronous function, i.e. one whose asynchronous work runs after the current call stack has cleared and doesn’t block the main thread.
  • always returns a promise. (Its return value is implicitly wrapped in a resolved promise.)
  • allows us to use await.
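
A minimal sketch of those points (the function name and return value are arbitrary):

```javascript
// An async function always returns a promise, even when its body
// returns a plain value.
async function f() {
  return 1; // implicitly wrapped, as if we had written Promise.resolve(1)
}

console.log(f() instanceof Promise); // true
f().then((value) => console.log(value)); // logs 1
```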

The await operator

  • use the await keyword within async functions to wait for a Promise.
  • Example usage: const users = await fetch('/users').
  • It makes the async function pause until that promise settles and returns its result.
  • It makes sense that it may only be used inside async functions so as to scope the “waiting” behaviour to that dedicated context.
  • It’s a more elegant syntax for getting a promise’s result than promise.then.
  • If the promise resolves successfully, await returns the result.
  • If the promise rejects, await throws the error, just as if there were a throw statement at that line.
  • That throw causes execution of the current function to stop (so the next statements won't be executed), with control passed to the first catch block in the call stack. If no catch block exists among caller functions, the program will terminate.
  • Given this “continue or throw” behaviour, wrapping an await in a try...catch is a really nice and well-suited pattern for including error handling, providing flexibility and aiding readability.

Here’s a try...catch-based example. (NB: let’s assume we have a list of blog articles and a “Load more articles” button which triggers the loadMore() function.)

export default class ArticleLoader {

  async loadMore() {
    const fetchURL = "https://mysite.com/blog/";
    try {
      const newItems = await this.fetchArticles(fetchURL);
      // If we’re here, we know our promise fulfilled.
      // We might add some additional `await`, or just…
      // Render our new HTML items into the DOM.
      this.renderItems(newItems);
    } catch (err) {
      this.displayError(err);
    }
  }
  
  async fetchArticles(url) {
    const response = await fetch(url, { method: "GET" });
    if (response.ok) {
      return response.text();
    }
    throw new Error("Sorry, there was a problem fetching additional articles.");
  }

  displayError(err) {
    const errorMsgContainer = document.querySelector("[data-target='error-msg']");
    errorMsgContainer.innerHTML = `<span class="error">${err}</span>`;
  }
}

Here’s another example. Let’s say that we needed to wait for multiple promises to resolve:

const allUsers = async () => {
  try {
    let results = await Promise.all([
      fetch(userUrl1),
      fetch(userUrl2),
      fetch(userUrl3)
    ]);
    // we’ll get here if the promise returned by await
    // resolved successfully.
    // We can output a success message.
    // ...
  } catch (err) {
    console.error(err); // no class context here, so just log the error
  }
}

Using await within a try...catch is my favourite approach, but sometimes it’s not an option because we’re at the topmost level of the code and therefore not in an async function. In these cases it’s good to remember that we can call an async function and work with its returned value like any promise, i.e. using then and catch.

For example:

async function loadUser(url) {
  const response = await fetch(url);
  if (response.status === 200) {
    const json = await response.json();
    return json;
  }
  throw new Error(response.status);
}

loadUser('no-user-here.json')
  .then((json) => {
    // resolved promise, so do something with the json
    // ...
  })
  .catch((err) => {
    // then() returns a promise, so catch() is chainable from it.
    // We land here on a rejected promise, so handle the error.
    document.body.innerHTML = `<span class="error">${err}</span>`;
  });


Modest JS Works

Pascal Laliberté has written a short, free, web-based book which advocates a modest and layered approach to using JavaScript.

I make the case for The JS Gradient, a principle whereby your app can have multiple coexisting modern JS approaches, starting from the global sprinkles to spot view-models to, yes, an SPA if that’s really necessary. At each point in the gradient, you’ll see when it’s a good idea to go a step further toward heavier JavaScript, or not.

Pascal’s philosophy starts with the following ideals:

  • prefer server-generated HTML over JavaScript-generated HTML. If we need to add more complex JavaScript layers we may deviate from that ideal, but this should be the starting point;
  • we should be able to swap and replace the HTML on a page on a whim. We can then support techniques like pjax (replacing the whole body of a page with new HTML, such as with Turbolinks) and ahah (asynchronous HTML over HTTP: replacing parts of a page with new HTML), so as to make our app feel really fast while still favouring server-generated HTML;
  • favour native Browser APIs over proprietary libraries. Use the tools the browser gives us (History API, Custom Event handlers, native form elements, CSS and the cascade) and polyfill older browsers.

He argues that a single application can combine the options along the JS Gradient, but also that we need only move to a new level if and when we reach the current level’s threshold.

He defines the levels as follows:

  • Global Sprinkles: general app-level enhancements that occur on most pages, achieved by adding event listeners at document level to catch user interactions and respond with small updates. Such updates might include dropdowns, fetching and inserting HTML fragments, and Ajax form submission. This might be achieved via a single, DIY script (or something like Trimmings) that is available globally and provides reusable utilities via data- attributes;
  • Component Sprinkles: specific page component behaviour defined in individual .js files, where event listeners are still ideally set on the document;
  • Stimulus components: where each component’s HTML holds its state and defines its behaviour, with a companion controller .js file which wires up event handlers to elements;
  • Spot View-Models: using a framework such as Vue or React only in specific spots, for situations where our needs are more complex and generating the HTML on the server would be impractical. Rather than taking over the whole page, this just augments a specific page section with a data-reactive view-model.
  • A single-page application (SPA): typically an all-JavaScript affair, where whole pages are handled by Reactive View-Models like Vue and React and the browser’s handling of clicks and the back button is overridden to serve different JavaScript-generated views to the user. This is the least modest approach but there are times when it is necessary.

One point to which Pascal regularly returns is that it’s better to add event listeners to the document (with a check that the event occurred on the relevant element) than to the element itself. I already knew that event delegation is better for browser performance; Pascal’s point, however, relates to supporting the swapping and replacing of HTML on a whim. If event listeners are attached directly to an element and that element is replaced (or a duplicate is added), we’d need to keep re-adding listeners, whereas this isn’t necessary when the listener is on the document.
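
Here’s a rough sketch of that document-level pattern (the selector, helper name, and handler are purely illustrative, not Pascal’s code):

```javascript
// Build a listener that matches the real target at event time, so it
// keeps working however often the matching elements are swapped out.
function makeDelegatedListener(selector, handler) {
  return (event) => {
    // Walk up from the event target to the nearest matching ancestor (or self)
    const match = event.target.closest(selector);
    if (match) handler(event, match);
  };
}

// Attach once, at the document level (guarded so the sketch also runs outside a browser)
if (typeof document !== "undefined") {
  document.addEventListener(
    "click",
    makeDelegatedListener("[data-action='load-more']", (event, button) => {
      // fetch and insert more items here…
    })
  );
}
```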

Note: Stimulus applies event handlers to elements rather than the document, however one of its USPs is that it’s set up so that as elements appear or disappear from the DOM, event handlers are automatically added and removed. This lets you swap and replace HTML as you need without having to manually define and redefine event handlers. He calls this Automated Behaviour Orchestration and notes that while adding event listeners to the document is the ideal approach, the Stimulus approach is the next best thing.

Also of particular interest to me was his Stimulus-based Shopping Cart page demo where he employs some nice techniques including:

  • multiple controllers within the same block of HTML;
  • multiple Stimulus actions on a single element;
  • controller action methods which use document.dispatchEvent to dispatch Custom Events as a means of communicating changes up to other components;
  • an element with an action which listens for the above custom event occurring on the document (as opposed to an event on the element itself).

I’ve written about Stimulus before and noted a few potential cons when considering complex interfaces, however Pascal’s demo has opened my eyes to additional possibilities.

My first Christmas working at FreeAgent. Strong Christmas Jumper game!

Christmas Jumper Day 2019 at FreeAgent