Tagged “howto”
Teabag on a spoon technique
At work today I mentioned Mr Scruff’s tip for quick tea-making. My teammates laughed and said they felt like they were on an episode of Would I Lie to You? I had to prove I wasn’t talking rubbish and went looking for the tip online. It has all but disappeared from the web, but thanks to the Wayback Machine I was able to find a cache of the page from 2007.
So here is the tea-making tip, listed in full for posteri-tea! (Sorry…)
Mr Scruff starts by saying:
Some people ask me about the teabag on a spoon technique, which I learnt from Peter Parker of Fingathing. Here is the full breakdown of the method for making black tea in a mug with a teabag.
And here are the steps:
- Boil the kettle with fresh water… no reboiling!
- Warm the mug… you can do this by pouring in a little warm water from the kettle while it is boiling, swishing it around & emptying it. This will help keep your brew warm for longer, essential for forgetful types like myself!
- Pour milk into the cup. If this offends you, you can add it later.
- Take a spoon (tablespoons are best, but a teaspoon will do).
- Place the teabag on the spoon, and hold it horizontally over the mug.
- When the kettle has boiled, hold it over the teabag, and pour as slowly as possible from as high as possible, without making the water splash upwards off the teabag. If you are doing this correctly, you will see little bubbles in the teabag, which is a sign of the oxygen in the boiling water doing its job.
- When the cup is full, add the milk if you have not done so, and examine your brew. If your tea is the correct colour (mine is a kind of brick red/malty brown) then you can discard the teabag. If it is not strong enough for your taste, then delicately lower the teabag onto the top of the tea, and slip the spoon out from under it. Leave it there until the brew is strong enough, and gently remove the bag with the spoon. There is no need to stir the bag or squash it in any way… tease the flavour out!
- Add sugar/salt/cheese/pickle to taste.
- Sit down and enjoy your brew!
- Repeat from stage 1.
Cheers!
PS if you don’t already know Mr Scruff I heartily recommend getting acquainted.
How I use github.com as my JAMstack CMS
Here are my quick links and tips for creating a markdown-file-based blog post using only github.com and no CMS. I’ve put these on this site so that they’re on the web and therefore accessible wherever I am.
Why do I create posts this way? Because I’ve tried forestry.io (now Tina) and Netlify CMS and I no longer have the time or inclination to maintain their dependencies, load their JavaScript or make ongoing updates as they evolve. I’ve also found them a little flaky. So instead let’s see how this lo-fi approach works.
This post is mainly for my own reference but who knows, maybe it’ll be useful to someone else too.
Step 1: create a new markdown file in my posts directory.
Step 2: use one of these previous posts as a template (use the “copy raw file” icon-button then paste into my new markdown file):
- Standard blog entry
- Bookmark
- Short note
- Note with photo
- Book I’ve started reading
- Film I just watched
- (Video of) record I bought/love
- Mix I’ve just recorded
Useful references
- grab current datetime in the right format (see the Date/time field under Convert from strings)
- All of my tags
- Images: how I optimise, host, and code for performance and responsiveness
- Set frontmatter date to `Created`
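For illustration, the frontmatter of a finished post might end up looking something like this (a sketch only; the field names are guesses based on typical Eleventy posts, not copied from my actual templates):

---
title: Example post title
date: 2021-05-01T10:30:00Z
tags:
  - howto
---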
Other notes
Reminder to self: create a bookmark for this post on all my devices. Name it “Nu Gh post”.
How to set up your Technics 1200 turntable (by Longbox Media on YouTube)
The best SL-1200/10 set-up video I’ve seen.
I especially like the section on testing for channel imbalance by playing a mono record (such as Nat Birchall’s Upright Living LP) and comparing the output meter for any differences between left and right, then loosening the relevant tonearm screws in order to make adjustments.
Saving CSS changes in DevTools without leaving the browser
Browser devtools have made redesigning a site such a pleasure. I love writing and adjusting a CSS file right in the sources panel and seeing design changes happen as I type, and saving it back to the file. (…) Designing against live HTML allows happy accidents and discoveries to happen that I wouldn't think of in an unconstrained design mockup
I feel very late to the party here. I tend to tinker in the DevTools Element Styles panel rather than save changes. So, inspired by Scott, I’ve just tried this out on my personal website. Here’s what I did.
- started up my 11ty-based site locally, which launches a `localhost` URL for viewing it in the browser;
- opened Chrome’s DevTools at Sources;
- checked the box “Enable local overrides” then followed the prompts to allow access to the folder containing my SCSS files;
- opened an SCSS file in the Sources tab for editing side-by-side with my site in the browser;
- made a change, hit Cmd-S to save and marvelled at the fact that this updated that file, as confirmed by a quick `git status` check;
- switched to the Elements panel, opened its Styles subpanel, made an element style change there too, then confirmed that this alternative approach also saves changes to a file.
This is a really interesting and efficient way of working in the browser and I can see me using it.
There are also a couple of challenges which I’ll probably want to consider. Right now when I make a change to a Sass file, the browser takes a while to reflect that change, which diminishes the benefit of this approach. My site is set up such that Eleventy watches for changes to the sass folder as a trigger for rebuilding the static site. This is because for optimal performance I’m purging the compiled and combined CSS and inlining that into the `<head>` of every file… which unfortunately means that when the CSS is changed, every file needs to be rebuilt. So I need to wait for Eleventy to do its build thing before the page I’m viewing shows my CSS change.
To allow my SCSS changes to be built and reflected faster I might consider no longer inlining CSS, or only inlining a small amount of critical styles… or maybe (best of both worlds) only doing the inlining for production builds but not in development. Yeah, I like that last idea. Food for thought!
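As a rough sketch of that last idea, a base layout could branch on the environment (assuming Nunjucks templates and an `env` global derived from `NODE_ENV`; the variable and data names here are illustrative, not my actual setup):

{% if env == "production" %}
  <style>{{ css | safe }}</style>
{% else %}
  <link rel="stylesheet" href="/css/styles.css">
{% endif %}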
Collected web accessibility guidelines, tips and tests
At work, I’m sometimes asked accessibility questions or to provide guidelines. I’m with Anna Cook in considering myself an accessibility advocate rather than an expert; however, I have picked up lots of tips and knowledge over many years of developing websites. So I thought it’d be useful to gather some general web accessibility tips and tests in one place as a useful reference.
Caveats and notes:
- this is a living document which I’ll expand over time;
- I’m standing on the shoulders of real experts and I list my references at the foot of the article; and
- if I’ve got anything wrong, please let me know!
Table of contents
- If you only had 5 minutes
- Content structure
- Semantic HTML and ARIA
- Favour native over custom components except where they have known issues
- Make custom components convey state accessibly
- Forms
- Links and buttons
- Ensure keyboard support
- Content resizing
- Better link text
- Supporting high contrast mode
- Skip links
- Navigation and menus
- Modal dialogues
If you only had 5 minutes
If someone had a web page and only had 5 minutes to find and tackle the lowest hanging fruit accessibility-wise, I’d probably echo Jeremy Keith’s advice to ensure that the page covers the following:
- uses heading elements sensibly
- uses landmarks (representing roles like `banner`, `navigation`, `main` and `contentinfo`)
- marks up forms sensibly (for example using labels and appropriate buttons)
- provides images with decent text alternatives
(Note: headings and landmarks are used by screen reader users to get a feel for the page then jump to areas of interest.)
Spending just 5 minutes would be bad, of course, and you shouldn’t stop there. The point is that if pushed, the above give good bang for your buck.
Content structure
The page’s content should be well structured as this makes it easier to understand for all, especially people with reading and cognitive disabilities.
It should consist of short sections of content preceded by clear headings. Use the heading level that fits the heading’s place in the page. Don’t pick a heading level just to achieve a given appearance, such as a smaller size; instead use the appropriate heading element, then use CSS to achieve your desired style.
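For example, a heading can sit at the right level for the outline while CSS handles its size (the class name is just for illustration):

<h2 class="heading-small">Related articles</h2>

.heading-small {
  font-size: 1rem;
}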
It should employ lists where appropriate. It should place the most important content at the beginning of the page or section to give it prominence.
Check your page for any long passages of text with no structure. Ensure that sufficient prominence is given to the most important information and calls to action.
Semantic HTML and ARIA
While there are generic HTML elements like `div` and `span`, there are many more HTML elements that perform a specific role and convey that role to browsers and other technologies. Choosing and using semantic HTML elements appropriately is a very good practice.
Also, using semantic HTML elements is preferable to bolting on semantics via attributes since the semantics are conveyed natively avoiding redundancy and duplication. As Bruce Lawson says, “Built-in beats bolt-on, bigly”.
Apply ARIA carefully. No ARIA is better than bad ARIA.
Landmarks
Create a small number of landmarks using the appropriate HTML elements.
For some landmark-generating elements it’s appropriate to bolster them with a label or accessible name. For example with `nav` and `aside`: i) there’s a decent chance there might be multiple on the page; and ii) each instance creates a landmark even when it’s nested within a deeper HTML element. So it’s helpful to distinguish each landmark of the same type by using a sensible accessible name, otherwise you’d get multiple navigation menus all represented by the same “navigation” in the Landmarks menu. In the case of the `section` element, it needs an accessible name in order to act as a `region` landmark. For all of these you can use `aria-labelledby` set to the `id` of an inner heading, or use `aria-label`.
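For example (the ids and label text are placeholders):

<nav aria-labelledby="primary-nav-heading">
  <h2 id="primary-nav-heading">Primary</h2>
  …
</nav>

<nav aria-label="Breadcrumb">…</nav>

<section aria-label="Latest posts">…</section>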
Note that when using multiple `<header>` (or `<footer>`) elements on a page, where only one is a direct child of `body` while the others are used within `article` or similar elements, there’s perhaps less need to add custom accessible names. That’s because only a direct child of `body` will be treated as a landmark and the others won’t, therefore they won’t be butting against each other in a screen reader’s Landmarks menu and needing to be distinguished.
Correct use of aria-label and aria-labelledby
Use the `aria-label` or `aria-labelledby` attributes (only when necessary) on interactive elements – buttons, links, form controls – and on landmark regions. Don’t use them on `<div>`s, `<span>`s, or other elements representing static/non-interactive text-level semantics, such as `<p>`, `<strong>`, `<em>`, and so forth, unless those elements’ roles have been overridden with roles that expect accessible names.
Favour native over custom components except where they have known issues
Native components require very little work, are familiar to users, and are generally accessible by default. Custom components can be built to appear and behave as designers want, but require much more effort to build and are challenging to make accessible.
There are exceptions. Since the native options are flawed across browsers, accessibility experts recommend using custom solutions for:
- form error field messages
- focus indicator styles
Make custom components convey state accessibly
Now that you’re building a custom component you don’t get accessibility out of the box. Whether it’s a Like button or a disclosure widget, you can’t rely on a visual change alone to convey a UI change to all users. You’ll need to use the right element (note – it often starts with a `button`) and then use ARIA to convey states such as pressed or expanded to screen reader users.
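As a minimal sketch of the Like button case, the pressed state is conveyed with `aria-pressed` rather than by styling alone:

<button type="button" aria-pressed="false">Like</button>

const likeButton = document.querySelector('[aria-pressed]');
likeButton.addEventListener('click', () => {
  const pressed = likeButton.getAttribute('aria-pressed') === 'true';
  likeButton.setAttribute('aria-pressed', String(!pressed));
});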
Forms
Because in the industry form fields are often handled with JavaScript and not submitted, people sometimes question whether form fields should live inside a form (`<form>`). My answer is yes, and here’s why.
Using the form element improves usability and accessibility
Using a `<form>` provides additional semantics, allowing additional accessibility. It helps assistive devices like screen readers better understand the content of the page and gives the person using them more meaningful information.
By putting form fields inside a form we also ensure we match user expectations. We support the functionality (such as the different ways of submitting a form) that users expect when presented with form fields.
If you’re thinking “but what about form fields that don’t look like form fields?” then you’ve entered the problem territory of “deceptive user interfaces” – the situation where perceived affordances don’t match actual functionality, which causes confusion for some people. This is to be avoided. We shouldn’t use form fields (nor a `<form>`) when they are not appropriate. A checkbox, radio button, or select menu is meant to gather information. So if your goal is instead to let the user manipulate the current view, use a `button` rather than checkboxes or radio buttons.
References:
- Why use a form element when submitting fields with JavaScript
- Lea Verou and Leonie Watson’s discussion regarding Toggles
- My conversation about forms with accessibility expert Adrian Roselli
Using the form element simplifies your JavaScript for event handling
Using the `form` element can also make it easier for you to meet user expectations in your JS-powered experience. This is because it gives you a single element (`form`) and event combination that allows listening to multiple interactions. With a form element you can add a listener for the `submit` event. This event fires automatically in response to the various ways users expect to submit a form, including pressing enter inside a field.
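A sketch of that single listener (the selector is illustrative):

const form = document.querySelector('form');
form.addEventListener('submit', (event) => {
  // fires for submit-button clicks and enter-in-field alike
  event.preventDefault();
  // …handle the field values with JavaScript here
});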
Anchors and buttons
To let the user navigate to a page or page section, or download a file, use an anchor element.
To let the user trigger an action such as copying to clipboard, launching a modal or submitting a form, use a button element.
Anchors should include an `href` attribute, otherwise the browser will treat the anchor like a non-interactive element. This means the link will not be included in the expected focus order and will not present a pointer cursor to mouse users like it should. These days there is no remaining use case for an anchor without an `href`. We no longer need named anchors to create link-target locations within the page because we can use the `id` attribute (on any element) for that. And if you want an interactive element that does not link somewhere, you should use `button`.
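Side by side, with placeholder destinations and labels:

<a href="/pricing">View pricing</a>

<button type="button">Copy to clipboard</button>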
Do not remove the focus outline from links and buttons in CSS, unless it’s to provide a better version.
Ensure you always give links and buttons an accessible name, even when they use icons rather than text. This might be through visually hidden text or perhaps using an ARIA-related attribute.
Ensure keyboard support
Web pages need to support those who navigate the page by keyboard.
Use the tab key to navigate your page and ensure that you can reach all actionable controls such as links, buttons and form controls. Press the enter key or space bar to activate each control.
If during your test any actionable control is skipped, receives focus in an illogical order, or you cannot see where the focus is at any time, then keyboard support is not properly implemented.
Content resizing
Try zooming your page up to 400%. In Chrome, Zoom is available from the kebab menu at the top-right, or by holding down command with plus or minus.
Content must resize and be available and legible. Everything should reflow.
Relative font settings and responsive design techniques are helpful in effectively handling this requirement.
Relatedly, setting font sizes in `px` should be avoided because although a user can override the “fixed-ness” with zoom, it breaks the user’s ability to choose a larger or smaller default font size (which users often prefer over having to zoom every single page).
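For example (a sketch; the values are illustrative):

/* avoid: overrides the user's chosen default size */
body { font-size: 16px; }

/* prefer: scales with the user's preference */
body { font-size: 1rem; }
h1 { font-size: 2rem; }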
Better link text
Blind and visually impaired users use a screen reader to browse web pages, and screen readers provide user-friendly access to all the links on the page via a Links menu. When links are encountered in that context, link text like “Click here” and “Read more” is useless.
Check your web page to ensure that links clearly describe the content they link to when read out of context.
Better link text also improves the flow and clarity of your content and so improves the experience for everyone.
Supporting high contrast mode
Some people find it easier to read content when it’s in a particular colour against a specific background colour. Operating systems provide options to allow users to configure this to their preference. Websites must support the user’s ability to apply this.
On a Windows computer go to Settings > Ease of access and turn on High contrast mode. On macOS go to System preferences > Accessibility settings > Display and select “Invert colours”.
Having changed the contrast, check that your web page’s content is fully visible and understandable, that images are still visible and that buttons are still discernible.
Skip links
Websites should provide a “Skip to content” link because this provides an important accessibility aid to keyboard users and those who use specialised input devices. For these users, having to step through (typically via the tab key) all of the navigation links on every page would be tiring and frustrating. Providing a skip link allows them to bypass the navigation and skip to the page’s main content.
To test that a website contains a skip link, visit a page then press the tab key and the skip link should appear. Then activate it using the enter key and check that focus moves to the main content area. Press tab again to ensure that focus moves to the first actionable element in the main content.
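A typical implementation looks something like this (the class name and id are placeholders):

<body>
  <a class="skip-link" href="#main">Skip to content</a>
  <!-- …header and navigation… -->
  <main id="main" tabindex="-1">…</main>
</body>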
Navigation and menus
When developing a collapsible menu, place your menu `<button>` within your `<nav>` element and hide the inner list rather than hiding the `<nav>` element itself. That way, we are not obscuring from Assistive Technologies the fact that a navigation still exists. ATs can still access the nav via landmark navigation. This is important because landmark discovery is one of the fundamental ways AT users scan, determine and navigate a site’s structure.
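In outline (the label and links are placeholders):

<nav aria-label="Primary">
  <button type="button" aria-expanded="false">Menu</button>
  <ul hidden>
    <li><a href="/">Home</a></li>
    <li><a href="/posts/">Posts</a></li>
  </ul>
</nav>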
Modal dialogues
You probably don’t want to set the modal’s heading as an `<h1>`. It likely displays content that exists on the page (which already has an `<h1>`) at a lower level of the document hierarchy.
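So the dialogue’s heading might be an `<h2>`, wired up something like this (the ids are placeholders, and this is one common pattern rather than a definitive implementation):

<div role="dialog" aria-modal="true" aria-labelledby="dialogue-title">
  <h2 id="dialogue-title">Confirm your changes</h2>
  <!-- …dialogue content… -->
</div>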
References
- Using HTML landmark roles to improve accessibility (MDN article), plus Adrian Roselli’s suggestions for additions
- Navigation (landmark) role, on MDN
- Tetralogical’s Quick Accessibility Tests YouTube playlist
- Basic accessibility mistakes I often see in audits by Chris Ferdinandi
- Sara Soueidan’s video tutorial Practical tips for building more accessible front-ends
- Adrian Roselli’s Responsive type and zoom
- Heydon Pickering’s tweet about buttons in navs and Scott O’Hara’s follow up article Landmark Discoverability
- Tetralogical’s Foundations: native versus custom components
- Ben Myers on where to use aria labelling attributes
How to Favicon in 2021 (on CSS-Tricks)
Some excellent favicon tips from Chris Coyier, referencing Andrey Sitnik’s recent article of the same name.
I always appreciate someone looking into and re-evaluating the best practices of something that literally every website needs and has a complex set of requirements.
Chris is using:
<link rel="icon" href="/favicon.ico"><!-- 32x32 -->
<link rel="icon" href="/icon.svg" type="image/svg+xml">
<link rel="apple-touch-icon" href="/apple-touch-icon.png"><!-- 180x180 -->
<link rel="manifest" href="/manifest.webmanifest">
And in `manifest.webmanifest`:
{
"icons": [
{ "src": "/192.png", "type": "image/png", "sizes": "192x192" },
{ "src": "/512.png", "type": "image/png", "sizes": "512x512" }
]
}
(via @mxbck)
How I subset web fonts
On my personal website I currently use three web fonts from the Source Sans 3 group: regular, italic and semibold. I self-host my fonts because that’s a good practice. Additionally I use a variety of special characters to add some typographic life to the text.
When self-hosting it’s important from a performance perspective to minimise the weight of the font files your visitors must download. To achieve this I subset my fonts so as to include only the characters my pages use but no more. Here’s how I do it.
Note: to follow these steps, you’ll need to install glyphhanger. The Github page includes installation and usage guidelines however there are a few common installation pitfalls so if you’re on a Mac and run into trouble I recommend checking Sara Soueidan’s How I set up Glyphhanger on macOS to get you back on track.
For the purposes of this walkthrough I’ll assume you have a directory in your application named `fonts`.
Start by deleting any existing custom font files from your application’s `fonts` directory.
Run your site locally in an incognito browser window. For my Eleventy-based site, I run `npm run serve` which serves the site at `http://localhost:8080`.
Visually check your locally-running site to ensure that now you’ve deleted your web fonts it’s no longer serving them and is instead rendering text using system fonts.
Visit the source page for your custom fonts—in my case the GitHub repository for Source Sans 3. Download in `.ttf` format the latest versions of the fonts you need and place them in your `fonts` directory. For me these are:
- Regular,
- Italic; and
- Semibold.
You’ll notice the large file sizes of these `.ttf` files. For example Source Sans 3’s Regular `.ttf` font is 299 kb.
At the command line, `cd` into your `fonts` directory.
Now we’re going to run glyphhanger on one font at a time. This fantastic tool will intelligently crawl your website to check which glyphs are currently in use for the specified weight, then include those in a subset file which it outputs in `.ttf`, `.woff` and `.woff2` formats. I use glyphhanger’s spider option so that it spiders multiple pages (rather than just one) at a time, meaning that it is more likely to catch all the special characters I’m using.
glyphhanger http://localhost:8080/posts/ --subset=SourceSans3-Regular.ttf --spider-limit=0
If all went well you should see output like this:
U+20-23,U+25-2A,U+2C-5B,U+5D,U+5F,U+61-7D,U+A9,U+B7,U+BB,U+D7,U+E9,U+F6,U+200B,U+200E,U+2013,U+2014,U+2018,U+2019,U+201C,U+201D,U+2026,U+2122,U+2190,U+2192,U+2615,U+FE0F
Subsetting SourceSans3-Regular.ttf to SourceSans3-Regular-subset.ttf (was 292.24 KB, now 46.99 KB)
Subsetting SourceSans3-Regular.ttf to SourceSans3-Regular-subset.zopfli.woff (was 292.24 KB, now 22.14 KB)
Subsetting SourceSans3-Regular.ttf to SourceSans3-Regular-subset.woff2 (was 292.24 KB, now 17.77 KB)
The `.woff2` subset file has reduced the file size from 299 kb to 17.77 kb, which is pretty impressive!
Update your CSS to point at the new `woff2` and `woff` subset files for your font. My updated CSS looks like this:
@font-face {
font-family: Source Sans Pro;
src: url(/fonts/sans/SourceSans3-Regular-subset.woff2) format("woff2"),
url(/fonts/sans/SourceSans3-Regular-subset.zopfli.woff) format("woff");
font-weight: 400;
font-display: swap;
}
Check your locally running application to ensure that the relevant text (body copy in this case) is now being served using the web font rather than fallback font, and that special characters are also being served using the web font.
I’ll usually crack open the Fonts panel in Firefox’s DevTools and check that, amongst other things, my pagination links which use the rightward-pointing arrow character (→ or unicode `U+2192`) are rendering it using Source Sans Pro and not sticking out like a sore thumb by using Helvetica due to the glyph not being present in the subset.
Delete the `.ttf` file you started with and any `.ttf` subsets generated, because you won’t serve files in that format to your website visitors.
Repeat the glyphhanger subsetting and CSS updating process for any other weights (italic, semibold) or custom fonts you want to subset.
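For example, the italic pass might look like this (the exact `.ttf` filename depends on what you downloaded):

glyphhanger http://localhost:8080/posts/ --subset=SourceSans3-It.ttf --spider-limit=0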
One last handy tip: if there’s a weight for which I don’t need a fancy character set (for example the Semibold I use for headings), I might just grab default latin charset `woff` and `woff2` files from the Google Webfonts Helper. The files tend to be small and well-optimised and this can save a little time. (This is only possible if the font is available from Google Fonts, which is true in the case of Source Sans 3.)
How to hide elements on a web page
In order to code modern component designs we often need to hide then reveal elements. At other times we want to provide content to one type of user but hide it from another because it’s not relevant to their mode of browsing. In all cases accessibility should be front and centre in our thoughts. Here’s my approach, heavily inspired by Scott O’Hara’s definitive guide Inclusively Hidden.
Firstly, avoid the need to hide stuff. With a bit more thought and by using existing fit-for-purpose HTML tools, we can perhaps create a single user interface and experience that works for all. That approach not only feels like a more equal experience for everyone but also removes margin for error and code maintenance overhead.
With that said, hiding is sometimes necessary and here are the most common categories:
- Hide from everyone
- Hide visually (i.e. from sighted people)
- Hide from Assistive Technologies (such as screen readers)
Hide from everyone
We usually hide an element from everyone because the hidden element forms part of a component’s interface design. Typical examples are tab panels, off-screen navigation, and modal dialogues that are initially hidden until an event occurs which should bring them into view. Initially these elements should be inaccessible to everyone but after the trigger event, they become accessible to everyone.
Implementation involves using JavaScript to toggle an HTML attribute or class on the relevant element.
For basic, non-animated show-and-hide interactions you can either:
- toggle a class which applies `display: none` in CSS; or
- toggle the boolean `hidden` attribute, which has the same effect but is native to HTML5.
Both options work well but for me using the `hidden` attribute feels a little simpler and more purposeful. My approach is to ensure resilience by making the content available in the first instance in case JavaScript should fail. Then, per Inclusive Components’ Tabs example, JavaScript applies both the “first hide” and all subsequent toggling.
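A sketch of that approach (the selectors are illustrative):

const toggle = document.querySelector('.disclosure-toggle');
const panel = document.querySelector('.disclosure-panel');

// JS applies the "first hide", so the content stays available if JS fails
panel.hidden = true;
toggle.setAttribute('aria-expanded', 'false');

// …and all subsequent toggling
toggle.addEventListener('click', () => {
  panel.hidden = !panel.hidden;
  toggle.setAttribute('aria-expanded', String(!panel.hidden));
});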
Here’s some CSS that supports both methods. (The `hidden` attribute doesn’t strictly need this but it’s handy to regard both options as high-specificity, “trump-everything-else” overrides.)
.u-hidden-from-everyone,
[hidden] {
display: none !important;
}
For cases where you are animating or sliding the hidden content into view, toggle the application of CSS `visibility: hidden` because this also removes the element from the accessibility tree but, unlike `display`, can be animated. Note that with `visibility: hidden` the physical space occupied by the element is still retained, therefore it’s best to pair it with `position: absolute` or `max-height: 0px; overflow: hidden` to prevent that “empty space while hidden” effect. For example:
.off-canvas-menu {
visibility: hidden;
position: absolute;
transform: translateX(-8em);
transition: 250ms ease-in;
}
[aria-expanded="true"] + .off-canvas-menu {
visibility: visible;
transform: translateX(0);
transition: visibility 50ms, transform 250ms ease-out;
}
Hide visually (i.e. from sighted people)
We’ll usually want to hide something visually (only) when its purpose is solely to provide extra context to Assistive Technologies. An example would be appending additional, visually-hidden text to a “Read more” link such as “about Joe Biden” since that would be beneficial to screen reader users.
We can achieve this with a `visually-hidden` class in CSS and by applying that class to our element.
.visually-hidden:not(:focus):not(:active) {
clip: rect(0 0 0 0);
clip-path: inset(50%);
height: 1px;
overflow: hidden;
position: absolute;
white-space: nowrap;
width: 1px;
}
Essentially this hides whatever it’s applied to unless it’s a focusable element currently being focused by screen reader controls or the tab key, in which case it is revealed.
Note that if adding to link text to make it more accessible, always append rather than inserting words into the middle of the existing text. That way, you avoid solving an accessibility issue for one group but creating another for a different group (users of Dragon speech recognition software).
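For example, appending the extra context (the URL is a placeholder):

<a href="/articles/biden">
  Read more<span class="visually-hidden"> about Joe Biden</span>
</a>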
Visually hidden until focused
There are other CSS approaches to hiding visually. One approach is to not only add `position: absolute` (removing the element from the document flow) but also position it off-screen with `left: -100vw` or similar. The use case for this approach might be when you want your visually hidden element to support being revealed, and for that reveal to occur via a transition/animation from off-screen into the viewport. See Scott O’Hara’s off screen skip-links example.
Hide from Assistive Technologies (such as screen readers)
We sometimes hide visual elements from Assistive Technologies because they are decorative and have accompanying text, for example a “warning” icon with the text “warning” alongside. If we did not intervene then Assistive Technologies would read out “warning” twice which is redundant.
To achieve this we can apply `aria-hidden="true"` to our element so that screen readers know to ignore it. In the following examples we hide the SVG icons within buttons and links, safe in the knowledge that the included “Search” text is providing each interactive element with its accessible name.
<button>
<svg aria-hidden="true" focusable="false"><!--...--></svg>
Search
</button>
<a href="/search">
<svg aria-hidden="true" focusable="false"><!--...--></svg>
Search
</a>
Reference: Contextually Marking up accessible images and SVGs
Setting an accessibility standard for a UK-based commercial website
When advocating accessible web practices for a commercial website, the question of “what does the law require us to do?” invariably arises.
The appropriate answer to that question should really be that it doesn’t matter. Regardless of the law there is a moral imperative to do the right thing unless you are OK with excluding people, making their web experiences unnecessarily painful, and generally flouting the web’s founding principles.
However as Web Usability’s article What is the law on accessibility? helpfully advises, in the UK the legal situation is as follows:
“The accessibility of a UK web site is covered by the Equality Act 2010” (which states that) “Site owners are required to make ‘reasonable adjustments’ to make their sites accessible to people with disabilities”. While “there is no legal precedent about what would constitute a ‘reasonable adjustment’”, “given that the Government has adopted the WCAG 2.1 level AA as a suitable standard for public sector sites and it is more broadly recognised as a ‘good’ approach, any site which met these guidelines would have a very strong defence against any legal action.”
So, WCAG 2.1 Level AA is the sensible accessibility standard for your commercial UK-based website to aim for.
While not aimed specifically at the UK market, deque.com’s article What to look for in an accessibility audit offers similar advice:
The most common and widely-accepted standard to test against is WCAG, a.k.a. Web Content Accessibility Guidelines. This standard created by the World Wide Web Consortium (W3C) defines technical guidelines for creating accessible web-based content.
WCAG Success Criteria are broken down into different “levels of conformance”: A (basic conformance), AA (intermediate conformance), and AAA (advanced conformance). The current standard for compliance is both WCAG 2.1 Level A and AA.
If you don’t have specific accessibility regulations that apply to your organization but want to avoid legal risk, WCAG 2.1 A and AA compliance is a reasonable standard to adopt.
Best practice techniques for SVG Icons
Here’s how I’d handle various common SVG icon scenarios with accessibility in mind.
Just an icon
So this is an icon that’s not within a link or button and has no adjacent text. This might be, for example, an upward-pointing arrow icon in a `<td>` in a “league table” where the arrow is intended to indicate a trend such as “The figure has increased” or “Moving up the table”.
The point here is that in this scenario the SVG is content rather than decoration.
<svg
role="img"
focusable="false"
aria-labelledby="arrow-title"
>
<title id="arrow-title">Balance has increased</title>
<path …>…</path>
</svg>
Note: Fizz Studio’s article Reliable valid SVG accessibility suggests that the addition of `aria-labelledby` pointing to an id for the `<title>` (as Léonie originally recommended) is no longer necessary. That’s encouraging, but as it does no harm to keep it I think I’ll continue to include it for the moment.
The same article also offers that maybe we should not use the SVG `<title>` element (and use `aria-label` to provide an accessible name instead) due to the fact that it leads to a potentially undesirable tooltip, much like the HTML `title` attribute does. To be honest I’m OK with this and don’t see it as a problem, and as I mention later I’ve heard probably even more problematic things about `aria-label`, so I’ll stick with `<title>`.
Button (or link) with icon plus text
This is easy. Hide the icon from Assistive Technology using `aria-hidden` to avoid unnecessary repetition, and rely on the text as the accessible name for the button or link.
<button>
<svg aria-hidden="true" focusable="false" ><!--...--></svg>
Search
</button>
<a href="/search">
<svg aria-hidden="true" focusable="false"><!--...--></svg>
Search
</a>
Button (or link) with icon alone
In this case the design spec is for a button with no accompanying text, therefore we must add the accessible name for Assistive Technologies ourselves.
<button>
<svg focusable="false" aria-hidden="true"><!--...--></svg>
<span class="visually-hidden">Search</span>
</button>
<a href="/search">
<svg focusable="false" aria-hidden="true"><!--...--></svg>
<span class="visually-hidden">Search</span>
</a>
The reason I use text that’s visually hidden using CSS for the accessible name, rather than adding `aria-label` on the button or link, is because I’ve heard that the former option is more reliable. In greater detail: aria-label is announced inconsistently and not always translated.
References
- Accessible SVG Icons, by Chris Coyier;
- Tips for accessible SVG, by Léonie Watson;
- Reliable, valid SVG accessibility, by Fizz Studio;
- Accessible icon buttons, by Sara Soueidan;
- Every Layout’s Icon component; and
- How to hide elements on a web page, by my bad self.
How to manage JavaScript dependencies
Managing JavaScript dependencies is about as much fun as a poke in the eye. However even if—like me—you prefer to keep things lean and dependency-free as far as possible, it’s something you’re going to need to do either in large work projects or as your personal side-project grows. In this post I tackle it head-on to reduce the problem to some simple concepts and practical techniques.
In modern JavaScript applications, we can add tried-and-tested open source libraries and utilities by installing packages from the NPM registry. This can aid development by letting you concentrate on your application’s unique features rather than reinventing the wheel for already-solved common tasks.
A typical example might be to add axios or node-fetch to a Node.js project to provide a means of making API calls.
We can use a package manager such as yarn or npm to install packages. When our package manager installs a package it logs it as a project dependency which is to say that the project depends upon its presence to function properly.
It then follows that anyone who wants to run the application should first install its dependencies.
And it’s the responsibility of the project owner (you and your team) to manage the project’s dependencies over time. This involves:
- updating packages when they release security patches;
- maintaining compatibility by staying on package upgrade paths; and
- removing installed packages when they are no longer necessary for your project.
While it’s important to keep your dependencies updated, in a recent survey by Sonatype 52% of developers said they find dependency management painful. And I have to agree that it’s not something I generally relish. However over the years I’ve gotten used to the process and found some things that work for me.
A simplified process
The whole process might go something like this (NB install yarn if you haven’t already).
# Start installing and managing 3rd-party packages.
# (only required if your project doesn’t already have a package.json)
yarn init # or npm init
# Install dependencies (in a project which already has a package.json)
yarn # or npm i
# Add a 3rd-party library to your project
yarn add package_name # or npm i package_name
# Add package as a devDependency.
# For tools only required in the local dev environment
# e.g. CLIs, hot reload.
yarn add -D package_name # or npm i package_name --save-dev
# Add package but specify a particular version or semver range
# https://devhints.io/semver
# It’s often wise to do this to ensure predictable results.
# caret (^) is useful: allows upgrade to minor but not major versions.
# e.g. ^1.2.3 is >=1.2.3 <2.0.0
yarn add package_name@^1.2.3
# Remove a package
# use this rather than manually deleting from package.json.
# Updates yarn.lock, package.json and removes from node_modules.
yarn remove package_name # or npm r package_name
# Update one package (optionally to a specific version/range)
yarn upgrade package_name
yarn upgrade package_name@^1.3.2
# Review (in a nice UI) all packages with pending updates,
# with the option to upgrade whichever you choose
yarn upgrade-interactive
# Upgrade to latest versions rather than
# semver ranges you’ve defined in package.json.
yarn upgrade-interactive --latest
Responding to a security vulnerability in a dependency
If you host your source code on GitHub it’s a great idea to enable Dependabot. Essentially Dependabot has your back with regard to any dependencies that need to be updated. You can set it to send you automated security updates by email so that you know straight away if a vulnerability has been detected in one of your project dependencies and requires action.
Helpfully, if you have multiple Github repos and more than one of those include the vulnerable package you also get a round-up email with a message something like “A new security advisory on lodash affects 8 of your repositories” with links to the alert for each repo, letting you manage them all at once.
Dependabot also works for a variety of languages and technologies—not just JavaScript—so for example in a Rails project it might email you to suggest bumping a package in your `Gemfile`.
Automated upgrades
Sometimes the task is straightforward. The Dependabot alert email tells you about a vulnerability in a package you explicitly installed and the diligent maintainer has already made a patch release available.
A simple upgrade to the relevant patch version would do the job; however, Dependabot can even take care of that for you! Dependabot can automatically open a new Pull Request which addresses the vulnerability by updating the relevant dependency. It’ll give the PR a title like:
Bump `lodash` from `4.17.11` to `4.17.19`
You just need to approve and merge that PR. This is great; it’s really simple and takes care of lots of cases.
Note 1: if you work on a corporate repo that is not set up to “automatically open PRs”, often you can still take advantage of Github’s intelligence with just one or two extra manual steps. Just follow the links in your Github security alert email.
Note 2: Dependabot can also be set to do automatic version updates even when your installed version does not have a vulnerability. You can enable this by adding a `dependabot.yml` to your repo. But so far I’ve tended to avoid unpredictability and excess noise by having it manage security updates only.
Manual upgrades
Sometimes Dependabot will alert you to an issue but is unable to fix it for you. Bummer.
This might be because the package owner has not yet addressed the security issue. If your need to fix the situation is not super-urgent, you could raise an issue on the package’s GitHub repo asking the maintainer (nicely) if they’d be willing to address it… or even submit a PR applying the fix for them. If you don’t have the luxury of time, you’ll want to quickly find another package which can do the same job. An example here might be that you look for a new CSS minifier package because your current one has a longstanding security issue. Having identified a replacement you’d then `remove` package A, `add` package B, then update your code which previously used package A to make it work with package B. Hopefully only minimal changes will be required.
Alternatively the package may have a newer version or versions available but Dependabot can’t suggest a fix because:
- the closest new version’s version number is beyond the allowed range you specified in `package.json` for the package; or
- Dependabot can’t be sure that upgrading wouldn’t break your application.
If the package maintainer has released newer versions then you need to decide which to upgrade to. Your first priority is to address the vulnerability, so often you’ll want to minimise upgrade risk by identifying the closest non-vulnerable version. You might then run `yarn upgrade <package>@1.3.2`. Note also that you may not need to specify a particular version, because your `package.json` might already specify a semver range which includes your target version, and all that’s required is for you to run `yarn upgrade` or `yarn upgrade <package>` so that the specific “locked” version (as specified in `yarn.lock`) gets updated.
On other occasions you’ll read your security advisory email and the affected package will sound completely unfamiliar… likely because it’s not one you explicitly installed but rather a sub-dependency. Y’see, your dependencies have their own `package.json` and dependencies, too. It seems almost unfair to have to worry about these too, however sometimes you do. The vulnerability might even appear several times as a sub-dependency in your lock file’s dependency tree. You need to check that lock file (it contains much more detail than `package.json`), work out which of your top-level dependencies are dependent on the sub-dependency, then go check your options.
Update: use `yarn why sockjs` (replacing `sockjs` as appropriate) to find out why a module you don’t recognise is installed. It’ll let you know what module depends upon it, to help save some time.
When having to work out the required update to address a security vulnerability in a package that is a subdependency, I like to quickly get to a place where the task is framed in plain English, for example:
To address a vulnerability in `xmlhttprequest-ssl` we need to upgrade `karma` to the closest available version above `4.4.1` where its dependency on `xmlhttprequest-ssl` is `>=1.6.2`.
Case Study 1
I was recently alerted to a “high severity” vulnerability in the package `xmlhttprequest-ssl`:
Dependabot cannot update `xmlhttprequest-ssl` to a non-vulnerable version. The latest possible version that can be installed is `1.5.5` because of the following conflicting dependency: `@11ty/eleventy@0.12.1` requires `xmlhttprequest-ssl@~1.5.4` via a transitive dependency on `engine.io-client@3.5.1`. The earliest fixed version is `1.6.2`.
So, breaking that down:
- `xmlhttprequest-ssl` versions less than `1.6.2` have a security vulnerability;
- that’s a problem because my project currently uses version `1.5.5` (via semver range `~1.5.4`), which I was able to see from checking `package-lock.json`;
- I didn’t explicitly install `xmlhttprequest-ssl`. It’s at the end of a chain of dependencies which began at the `dependencies` of the package `@11ty/eleventy`, which I did explicitly install;
- to fix things I want to be able to install a version of Eleventy which has updated its own dependencies such that there’s no longer a subdependency on the vulnerable version of `xmlhttprequest-ssl`;
- but according to the Dependabot message that’s not possible because even the latest version of Eleventy (0.12.1) is indirectly dependent on a vulnerable version-range of `xmlhttprequest-ssl` (`~1.5.4`);
- based on this knowledge, Dependabot cannot recommend simply upgrading Eleventy as a quick fix.
So I could:
- decide it’s safe enough to wait some time for Eleventy to resolve it; or
- request Eleventy apply a fix (or submit a PR with the fix myself); or
- stop using Eleventy.
Case Study 2
A while ago I received the following security notification about a vulnerability affecting a side-project repo.
dot-prop < 4.2.1: “Prototype pollution vulnerability in dot-prop npm package before versions 4.2.1 and 5.1.1 allows an attacker to add arbitrary properties to JavaScript language constructs such as objects.”
I wasn’t familiar with `dot-prop` but saw that it’s a library that lets you “get, set, or delete a property from a nested object using a dot path”. This is not something I explicitly installed but rather a sub-dependency—a lower-level library that my top-level packages (or their dependencies) use.
Github was telling me that it couldn’t automatically raise a fix PR, so I had to fix it manually. Here’s what I did.
- looked in `package.json` and found no sign of `dot-prop`;
- started thinking that it must be a sub-dependency of one or more of the packages I had installed, namely `express`, `hbs`, `request` or `nodemon`;
- looked in `package-lock.json` and via a Cmd-F search for `dot-prop` found that it appeared twice;
- the first occurrence was as a top-level element of `package-lock.json`’s top-level `dependencies` object. This object lists all of the project’s dependencies and sub-dependencies in alphabetical order, providing for each the details of the specific version that is actually installed and “locked”;
- I noted that the installed version of `dot-prop` was `4.2.0`, which made sense in the context of the GitHub security message;
- the other occurrence of `dot-prop` was buried deeper within the dependency tree as a dependency of `configstore`;
- I was able to work backwards and see that `dot-prop` is required by `configstore`, then Cmd-F search for `configstore` to find that it was required by `update-notifier`, which in turn is required by `nodemon`;
- I had worked my way up to a top-level dependency, `nodemon` (installed version `1.19.2`), and worked out that I would need to update `nodemon` to a version that had resolved the `dot-prop` vulnerability (if such a version existed);
- I then googled “nodemon dot-prop” and found some fairly animated GitHub issue threads between Remy, the maintainer of `nodemon`, and some users of the package, culminating in a fix;
- I checked nodemon’s releases and ascertained that my only option if sticking with `nodemon` was to install `v2.0.3`—a new major version. I wouldn’t ideally install a version which might include breaking changes, but in this case `nodemon` was just a `devDependency`, not something which should affect other parts of the application, and a developer convenience at that, so I went for it safe in the knowledge that I could happily remove this package if necessary;
- I opened `package.json` and within `devDependencies` manually updated `nodemon` from `^1.19.4` to `^2.0.4`. (If I was in a `yarn` context I’d probably have done this at the command line.) I then ran `npm i nodemon` to reinstall the package based on its new version range, which would also update the lock file. I was then prompted to run `npm audit fix`, which I did, and then I was done;
- I pushed the change, checked my GitHub repo’s security section and noted that the alert (and a few others besides) had disappeared. Job’s a goodun!
Proactively checking for security vulnerabilities
It’s a good idea on any important project to not rely on automated alerts and proactively address vulnerabilities.
Check for vulnerabilities like so:
yarn audit
# for a specific level only
yarn audit --level critical
yarn audit --level high
Files and directories
When managing dependencies, you can expect to see the following files and directories.
- `package.json`
- `yarn.lock`
- `node_modules` (this is the directory into which packages are installed)
Lock files
As well as `package.json`, you’re likely to also have `yarn.lock` (or `package-lock.json`) under source control too. As described above, while `package.json` can be less specific about a package’s version and suggest a semver range, the lock file will lock down the specific version to be installed by the package manager when someone runs `yarn` or `npm install`.
You shouldn’t manually change a lock file.
Choosing between `dependencies` and `devDependencies`
Whether you save an included package under `dependencies` (the default) or `devDependencies` comes down to how the package will be used and the type of website you’re working on.
The important practical consideration here is whether the package is necessary in the production environment. By production environment I don’t just mean the customer-facing website/application but also the environment that builds the application for production.
In a production “build process” environment (i.e. one which likely has the environment variable `NODE_ENV` set to `production`) the `devDependencies` are not installed. `devDependencies` are packages considered necessary for development only, and therefore to keep production build time fast and output lean they are ignored.
As an example, my personal site is JAMstack-based using the static site generator (SSG) Eleventy and is hosted on Netlify. On Netlify I added a `NODE_ENV` environment variable and set it to `production` (to override Netlify’s default setting of `development`) because I want to take advantage of faster build times where appropriate. To allow Netlify to build the site on each push I have Eleventy under `dependencies` so that it will be installed and is available to generate my static site.
By contrast, tools such as Netlify’s CLI and linters go under `devDependencies`. Netlify’s build process does not require them, nor does any client-side JavaScript.
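As a sketch, that split might look like this in `package.json` (the version numbers are illustrative):

{
  "dependencies": {
    "@11ty/eleventy": "^0.12.1"
  },
  "devDependencies": {
    "eslint": "^7.32.0",
    "netlify-cli": "^6.0.0"
  }
}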
Upgrading best practices
- Check the package CHANGELOG or releases on Github to see what has changed between versions and if there have been any breaking changes (especially when upgrading to the latest version).
- Use a dedicated PR (Pull Request) for upgrading packages. Keep the tasks separate from new features and bug fixes.
- Upgrade to the latest minor version (using `yarn upgrade-interactive`) and merge that before upgrading to major versions (using `yarn upgrade-interactive --latest`).
- Test your work on a staging server (or Netlify preview build) before deploying to production.
Making a slider with just HTML and CSS (on CSS-Tricks)
Sliders (or carousels) are a fairly common practical requirement in web projects. Here, Chris Coyier shows us how far we can get in 2019 with HTML and CSS alone.
Fading out siblings on hover in CSS (by Trys Mudford)
Here’s a nice CSS-only hover technique from Trys Mudford incorporating scale transforms, opacity transitions and mouse pointer events.
Don’t set cursor: pointer on buttons
For many years I’ve been applying `cursor: pointer` to buttons because it felt right and would improve the user experience.
As Florens Verschelde explains, that approach is probably best avoided. I was going against the W3C’s spec that `cursor: pointer` should be reserved for links, and was adding to the usability antipattern where “everything resembles a link”.
I’ll leave button cursor behaviour as it is from here on in.
$$ in the DevTools Console
I learned something new today when developing in the Firefox Dev Tools console (although this applies to Chrome too)—something which was really useful and which I thought I’d share.
Basically, type `$$('selector')` into the console (replacing selector as desired) and it’ll give you back all matching elements on the page.
So for example, `$$('script')` or `$$('li')`.
Similarly you can select a single element by instead using one dollar sign (`$`).
These seem to be console shortcuts for `document.querySelector()` (in the case of `$`) and `document.querySelectorAll()` (in the case of `$$`).
The other really cool thing is that the result comes back as an array rather than a static `NodeList`, so you could do e.g. `$$('li').forEach(…)` or similar.
via @rem (Remy Sharp)
Definitive web font @font-face syntax
These days, whenever I’m about to use a web font on a new site I generally find myself running a Google search for the latest “definitive `@font-face` syntax” that covers all modern browser/device needs.
For a long time I headed straight for Paul Irish’s Bulletproof @font-face Syntax but I noted a few years back that he’d stopped updating it.
When buying web fonts from type foundries such as Fontsmith the foundries do tend to provide their own guidelines. However, I’m not convinced that these are sufficiently cross-platform compatible.
Recently I’ve been reading Flexible Typesetting by Tim Brown, and in it he recommends Webfont Handbook by Bram Stein. That’s now next on my reading list, however in the meantime I found an excerpt on A List Apart which specifically covers the best modern `@font-face` syntax.
Based on Stein’s advice, here’s what I’m now using.
@font-face {
font-family: Elena;
src: url(elena.woff2) format("woff2"),
url(elena.woff) format("woff"),
url(elena.otf) format("opentype");
}
Soon, when all browsers support the `woff` format, we’ll be able to reduce this to simply:
@font-face {
font-family: Elena;
src: url(elena.woff2) format("woff2"),
url(elena.woff) format("woff");
}