Web Components Demystified: completed

Website updates
I’ve recently been updating my website over a series of nights and weekends. The changes aren’t very noticeable to the eye but involved some careful modernisation and streamlining of back-end features and technology, plus improvements to accessibility and performance. I’m really happy to have made them.
Lightning Fast Web Performance course by Scott Jehl
I purchased Scott’s course back in 2021 and immediately liked it, but hadn’t found the space to complete it until now. Anyway, I’m glad I have as it’s well structured and full of insights and practical tips. I’ll use this post to summarise my main takeaways.
Having completed the course I have much more rounded knowledge in the following areas:
- Why performance matters to users and business
- Performance-related metrics and “moments”: defining fast and slow
- Identifying performance problems: the tools and how to use them
- Making things faster, via various good practices and fixes
I’ll update this post soon to add some key bullet-points for each of the above headings.
TODS – a typographic and OpenType default stylesheet, by Richard Rutter
I loved books like Tim Brown’s Flexible Typesetting, Jason Santa Maria’s On Web Typography and Richard’s own Web Typography. And I’ve used lots of their tips in my work. But I’ll be honest: they’re esoteric, complicated, hard to remember, changing rapidly with browser support… and the advice varies from one expert to another. So I’m very grateful that Richard has provided this reusable stylesheet of great typographic defaults, making it easier to handle all the complexities of good web typography.
Testing the 11ty Image plugin
I’m testing out the Eleventy Image plugin. Here’s a post with an image which, if all goes well, will be converted by the plugin from source `jpeg` into lightweight `avif` and `webp` formats, and the underlying code transformed from a basic `img` element into comprehensive modern HTML image syntax.

Features of my personal website
I like the metaphor for personal websites of tending to a digital garden.
Like all gardens, they can become a bit unruly and need some weeding. Right now, as I consider updating some software and freshening things up, I realise that I’ve let it overgrow a tad.
So, here’s a post in which I’ll log my website’s current features. This should be useful in and of itself as a stepping stone to writing a proper readme. However it’ll also help me reflect on my website’s health and maintainability so I can decide which features to nourish and which to prune.
Note: this post will take a bit of time and a few sessions, so please regard it as a work in progress.
What I want
Before getting lost in stuff I have, I thought it’d be good to set out my higher-level goals and what I feel I have the time to sustain. I think I’d like:
- to retain URLs and SEO through updates
- excellent accessibility and performance (four hundos on Lighthouse is a good start)
- it to use the best of modern web standards
- simplicity: minimal dependencies, easy to make technical updates
- to maintain some documentation to support ease of updating
- minimal noise: I don’t want a bunch of emails from third parties, nor ongoing dependency update alerts
- sensibly organised content
- a search function
- a way for folks to contact me
- some personality in the design and content
- be able to add and edit content easily (a mobile-friendly CMS rather than via code only)
- be able to insert photos into content easily
And here are a few secondary and lower-level wants:
- code snippets should look good
- image complexities handled behind the scenes
- some indieweb features supporting interactivity with other bloggers and friends
What I actually have
This is gonna be a much lower-level set of features than the above goals, but that’s OK. I can ask myself whether each supports my wider goals and is worth the effort.
Main tech stack
It’s a statically generated site powered by Eleventy.
The code is hosted on GitHub.
I use Netlify for production builds, deployments and hosting.
I’m happy with this stack. The parts play well together, it’s free, and it brings a lot of flexibility and performance benefits.
CMS
I use Decap CMS. It’s free and is working OK; however, the UI is rubbish on a small screen.
I previously tried both Netlify CMS and Forestry for a while then gave up on them. I also sometimes use github.com as my CMS. That works but isn’t ideal.
SEO
I provide an XML sitemap intended for search engines, and a human-readable sitemap.
Key pages
Home
An intro, and a list of latest posts.
About
Some information about me that’s currently split between my interests in the web and music.
Contact
It’s a form, and for its backend I use Netlify Forms. That gives me the server-side handling, database storage and admin management aspects of a form for my otherwise-static site.
Journal Archive
Access to all published posts.
Search
A JS-based form for searching all posts, with an autosuggest function. I use pagefind to power the search.
I don’t like how it’s JavaScript dependent, and in future I should look at trying Zach Leatherman’s web component.
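For context, mounting Pagefind’s default UI is typically just a small script on the search page. Here’s a sketch of that typical setup rather than my exact code (the container id is an assumption):

// Pagefind’s build step emits /pagefind/pagefind-ui.js and a stylesheet;
// once that script is loaded, the default UI mounts onto a container element.
window.addEventListener("DOMContentLoaded", () => {
  new PagefindUI({
    element: "#search", // assumed container id
    showSubResults: true,
  });
});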
404 page
I have a `404.md` file which sets a permalink of (i.e. is built as) `404.html`. Having made that file available, Netlify does the rest… which is nice!
Detailed features
Avatar
I serve an avatar from a conventional location per Jim Neilsen’s idea – see my avatar.
Environment variables
I set `NODE_ENV` to `production` as an environment variable in my Netlify dashboard. This should mean that packages under `devDependencies` are not built in production, which is good because that’d be a waste of time. With the `dotenv` module installed, if the `NODE_ENV` variable is present its value is loaded into Node.js’s `process.env` property. That allows me to check in JavaScript whether or not the current environment is production. With that, I might avoid outputting draft posts to physical files in production, or avoid hitting API resources in local development.
Only JS files can access `process.env`. So, in order to be able to check “is this production?” in other files such as Nunjucks templates I have an Eleventy data file named `app.js` which makes the environment value available via `production`.
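For reference, a minimal sketch of that data file (my real one may differ slightly) looks something like this:

// _data/app.js – an Eleventy global data file, available to all templates
require("dotenv").config(); // load any local .env values in development

module.exports = {
  // In Nunjucks: {% if app.production %} … {% endif %}
  production: process.env.NODE_ENV === "production",
};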
Incidentally I used to set another custom environment variable called `ELEVENTY_ENV` to `production` in the source-controlled `package.json`, within the `build` NPM script. I think this is now redundant given that I can use `NODE_ENV` for the same purpose. I previously used it within a Netlify lambda function that posted to the GitHub API to create new bookmark posts. I don’t do that any more so I can delete this environment variable.
Excerpts
I use gray-matter’s default approach for including, delimiting and parsing excerpts from posts. The excerpt is both part of the post content and accessible separately, which is useful for showing only the excerpt in post lists.
It’s not perfect: it’d be useful to have a class on the excerpt so I could apply different styling to it on demand rather than on every post.
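For reference, enabling gray-matter’s excerpt handling in Eleventy is roughly this one-liner in the config (a sketch assuming the default separator, not necessarily my exact setup):

// eleventy.config.js (sketch)
module.exports = function (eleventyConfig) {
  eleventyConfig.setFrontMatterParsingOptions({
    excerpt: true, // content above the excerpt separator is exposed as the excerpt
  });
};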
Favicons
To do: write a description.
Image plugin
I use Eleventy Image to perform build-time image transformations. It takes images I’ve added in posts and pages and converts and saves them into multiple formats and sizes, and swaps the original markup for modern, responsive, multi-format image markup using `picture` and `source` and pointing to the converted image files.
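Registering the plugin looks roughly like this (a sketch assuming the current transform-plugin API; my actual options may differ):

// eleventy.config.js (sketch)
const { eleventyImageTransformPlugin } = require("@11ty/eleventy-img");

module.exports = function (eleventyConfig) {
  eleventyConfig.addPlugin(eleventyImageTransformPlugin, {
    formats: ["avif", "webp", "auto"], // avif and webp plus the original format
    widths: [320, 640, 960],
  });
};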
Linting and code formatting tools
I use a `.editorconfig` file to set how my editor should handle things like nested line indentation, inserting an end of file newline and so on. I just go pretty much with what the 11ty base blog repo uses, although I haven’t yet switched from spaces to tabs.
I have a `.prettierrc` config file which sets things like a preference for single rather than double quotes. The idea is that you also have a Prettier editor extension enabled (I have one for VS Code) and in your editor settings you set your editor’s default formatter to that Prettier extension. It’ll then format files on save.
Netlify config
I have a `netlify.toml` file in which I specify the build command (`npm run build`) and the directory to publish to. I also use it to set far-future expires headers on custom font `.woff2` files. As far as I’m aware, this is still required. Lastly I have some redirects in there too.
Node.js
Eleventy is written in JavaScript and running it requires Node.js, both locally and in production. The minimum Node.js version is set in a `.nvmrc` configuration file. I do it this way because that’s how it’s done in the Eleventy Base Blog. I’m happy to follow that to avoid confusion when doing future 11ty upgrades, and it also seems sensible to set this value in the project code rather than only in Netlify as the latter could lock me into Netlify and cause confusion in future. Things to remember (from experience) are that this setting in `.nvmrc` overrides any Node version set in Netlify’s Build and deploy settings, and also that I should avoid setting a Node version in `netlify.toml` too otherwise they fight with each other.
Readable time
I created this Eleventy filter to show “time of post” on posts of type `note`. That’s a situation where the `readableDate` filter included with the Eleventy starter blog wasn’t precise enough.
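The filter itself isn’t reproduced here, but a minimal version (assuming Luxon, which the starter blog already uses for dates) might look like this:

// eleventy.config.js (sketch) – a time-of-day companion to readableDate
const { DateTime } = require("luxon");

module.exports = function (eleventyConfig) {
  eleventyConfig.addFilter("readableTime", (dateObj) => {
    // e.g. "21:47"
    return DateTime.fromJSDate(dateObj, { zone: "utc" }).toFormat("HH:mm");
  });
};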
Tags
When I create a post I apply relevant tags to it. The tag `post` is applied automatically to all posts. And when I create a note using my custom Decap note template it also applies tag `note`. (I should do the same for `entry` and `bookmark`). But aside from those special tags, I apply tag names arbitrarily.
Each post page shows its associated tags (as links) at the bottom. Each post shows beside its title the “most notable” tag. (Currently it just grabs the third tag since the first tag should automatically be `post` and in second place should be `note`, `entry` or `bookmark`.)
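Something like the following filter captures that “grab the third tag” logic (a sketch, not my exact implementation):

// eleventy.config.js (sketch)
module.exports = function (eleventyConfig) {
  eleventyConfig.addFilter("notableTag", (tags = []) => {
    // tags[0] should be "post" and tags[1] should be "note", "entry" or
    // "bookmark", so the first arbitrary, human-interesting tag is the third.
    return tags[2];
  });
};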
And I have the following tag-related pages and templates:
- an all tags page
- a template for each tag that generates a page listing all posts tagged with that tag
Actions I’ve realised I can take
- rename `eleventy.js` to `eleventy.config.js` like in the Eleventy base blog
- …
To be continued!
Update 15-08-24
I added Decap CMS.
Update 30-06-24
I’ve addressed the “a means of contacting me” item on my wants list by adding a contact form using Netlify forms.
Update 39-12-24
I recently removed a bunch of features and pages that had gone stale and only served to make my website harder to maintain.
- Photos section, including an 11ty JavaScript data file where I fetched photos from Cloudinary
- Bookshelf
- Inspiration page
- Records for sale page and data file
- Forestry CMS stuff
- Bookmarker script and lambda folder
- DIY search feature
- JavaScript patch for CSS’s `min()` for grids, since that now has wide browser support
January blues-banishing in Edinburgh
I’m starting 2024 as I mean to continue – by seeing and hanging out with friends more often. Yesterday Tom and I had a great day moseying around Edinburgh.
After meeting at Waverley and grabbing a quick coffee and bite, we headed to the Scottish National Gallery on The Mound. Tom was keen to see the Turner watercolours exhibition which is on every January – apparently the ideal time of year to best show off the works. I probably wouldn’t have visited this unprompted but I’m glad I did. My favourites were perhaps The Falls of Clyde and Lake Albano.
From there we made the short walk to Cockburn Street and into an old haunt, Underground Solush’n, for some record shopping. I got a few, with the pick being the album Flying Wig by Devendra Banhart.
Next stop was St Andrew’s Square for a saunter around the menswear floor in Harvey Nichols. Neither of us were in the market for anything in particular but still it was nice to browse their sale including coats by Copenhagen-based brand NN.07 as sported by Jeremy Allen White in The Bear.
We hopped on an Edinburgh tram outside (first time for both of us) headed for Port of Leith. We caught the Leith Saturday market where Tom was beguiled by a purveyor of exotic olive oil (I can't believe I'm typing this) before we stumbled upon a great wee record stall. To my surprise it wasn’t just the usual selection of records no-one wants but instead had plenty of gems. I picked up two LPs from my wishlist – Vangelis's Earth (featuring the glorious Let it happen) and Spacek’s Curvatia.
All that record shopping gives you an appetite so it was off to Teuchter’s Landing for some food and refreshments. With Burns Night just around the corner we honoured the bard by enjoying haggis, neeps and tatties and a dram (Craigellachie 13y).
Our final stop was the Shore Bar where we caught up with a couple of Edinburgh-dwelling pals, Gav and Nick.
After all that I was back on the train home by around 10 pm – mission accomplished.
It's 2023, here is why your web design sucks (by Heather Buchel)
Heather explores why we no longer have “web designers”.
It's been belittled and othered away. It's why we've split that web design role into two; now you're either a UX designer and you can sit at that table over there or you're a front-end developer and you can sit at the table with the people that build websites.
Heather makes lots of good points in this post. But the part that resonates most with me is the observation that we have split design and engineering in a way that is dangerous for building proper websites.
We all lost when the web design role was split in two.
if our design partners are now at a different table, how do we expect them to acquire the deeply technical knowledge they need to know? The people we task with designing websites, I've found, often have huge gaps in their understanding of… the core concepts of web design.
Heather argues that designers don’t need to learn to code but that “design requires a deep understanding of a subject”. Strong agree!
Likewise, the front-end role has arrived and evolved such that i) developers are separate from designers; and ii) within developers there’s a further split where the majority lack “front of the front-end” skills. This has meant that:
We now live in a world where our designers aren't allowed to… acquire the technical design knowledge they need to actually do their job and our engineers never learn about the technical design knowledge that they need to build the thing correctly.
Heather’s post does a great job of articulating the problem. They understandably don’t have the answers, but suggest that firstly education gaps and secondly how companies hire are contributing factors. So changes there could be impactful.
Blog development decisions
Here are some recurring development decisions I make when maintaining my personal website/blog, with some accompanying rationale.
Where should landmark-related HTML elements be in the source?
I set one `header`, one `main` and one `footer` element as direct children of the body element.
<body>
<header>…</header>
<main>…</main>
<footer>…</footer>
</body>
This isn’t arbitrary. A `header` at this level will be treated as a `banner` landmark. A `footer` is regarded as the page’s `contentinfo` landmark. Whereas when they are nested more deeply, such as within a “wrapper” div, they are not automatically given landmark status. You’d have to bolt on ARIA attributes. My understanding is that it’s better to use elements with implicit semantics than to bolt on semantics manually.
How should I centre the main content in a way that’s responsive and supports full-width backgrounds?
My hard-learned approach is to use composition rather than try to do everything with “god” layouts.
I mentally break the page up from top to bottom into slices that correspond to logical groups of content and/or parts that need a dedicated full-width background. I give each slice `padding` on all sides. The lateral padding handily gives you the gutters you need on narrow screens. (You could use a Box layout for these sections. I tend not to consider them to be “true boxes” because usually their lateral padding differs from their vertical padding. So I just apply their styles on a case by case basis.)
Within each section, nest a dedicated Center layout to handle your fluid width-constraining wrappers.
This approach offers the best of all worlds. It doesn’t constrain your markup, which I find useful for achieving appropriate semantics and accessibility. You don’t need to put a “wrapper div” around everything. Instead you can have landmark-related elements as direct children of `body`, applying padding to those and nesting centred wrappers inside them.
By making proper use of `padding`, this approach also avoids problems of “collapsing margins” and other margin weirdness that make life difficult when you have sections with background colours. You don’t want to be using vertical margins in situations where “boxes with padding” would be more appropriate. Relatedly, I find that flow (or stack) layouts generally work best within each of your nested wrappers rather than at the top level.
How should I mark up lists of articles?
Should they be a bunch of sibling `article`s? Should they be in a list element like a `ul`?
Different switched-on developers tackle this differently, so it’s hard to offer a definitive best approach. Some developers even do it differently across different pages on their own site! Here are some examples in the wild:
- Léonie Watson’s homepage employs a list of `article` elements with no wrapping list
- Manuel Matuzovic’s homepage does the same as Léonie’s
- Adrian Roselli’s posts page uses the same structure as above, whereas Adrian’s homepage uses a `ul` with no `article` elements!
- Tetralogical’s blog section employs sibling `section` elements, whereas their news section uses an `ol` but with nothing special nested inside
- Ethan Marcotte’s journal uses nested lists to group by years then months, with each teaser marked up as an `article` inside an `li`
So, clear as mud!
I currently use sibling `article`s with no wrapping list. Using `article` elements feels right because each (per MDN’s definition of article) “represents a self-contained composition… intended to be independently distributable or reusable (e.g., in syndication)”. I could be persuaded to wrap these in a list because that would announce to screen reader users upfront that it’s a list of articles and say how many there are. (It tends to make your CSS a tad gnarlier but that’s not the end of the world.)
Should a blog post page be marked up as an article or just using main?
I mark it up as an `article` for the same reasons as above. That article is nested inside a `main` because all of my pages have one `main` element wrapping around the primary, non-repeated content of the page.
To be continued
I’ll add more to this article over time.
A blog post which uses every HTML element (by Patrick Weaver)
An interesting article which helps the author – and his readers – understand some of the lesser-used and more obscure HTML elements.
Patrick confesses he is still learning certain things, so I won’t regard his implementations as gospel in the way I might an article by someone with greater HTML and accessibility expertise such as Adrian Roselli. Still, I see this as another useful resource to help me decide whether or not an HTML choice is the semantic and/or correct tool for a given situation.
Thanks, Patrick!
Shoelace: a forward-thinking library of web components
I’m interested by Shoelace’s MO as a collection of pre-rolled, customisable web components. The idea is that it lets individuals and teams start building with web components – components that are web-native, framework-agnostic and portable – way more quickly.
I guess it’s a kind of Bootstrap for web components? I’m interested to see how well it’s done, how customisable the components are, and how useful it is in real life. Or if nothing else, I’m interested to see how they built their components!
It’s definitely an interesting idea.
I'll delve into Shoelace in more detail in the future when I have time, but in the meantime I was able to very quickly knock together a codepen that renders a Dropdown instance.
Thanks to Chris Ferdinandi for sharing Shoelace.
Specs and standards
Something Adrian Roselli said recently has stuck with me. The gist was that when developers need definitive guidance they shouldn’t treat MDN as gospel, but rather refer to the proper specifications for web standards.
Note: this post is a work in progress. I’ll refine it over time.
HTML
The Edition for Web Developers version looks handy. It seems to be streamlined and you can also use the forward-slash key to jump straight into a search then type something like “popover” to access that specification quickly.
WCAG Accessibility
How to Meet WCAG (Quick Reference)
Adrian referenced the above in one of his blog articles.
ARIA
Adrian will quote or reference this when talking about roles, landmarks and the like… for example when he tweeted about developers using `section`.
Note: the above resource somewhat confusingly describes itself as a W3C recommendation. But despite that naming it should be regarded as definitive guidance. That description links to an explainer of W3C recommendation confirming that these are specifications which have been endorsed by W3C, that software manufacturers should implement, and that may be cited as W3C standards. My understanding is that if a specification is at an earlier stage it will be described as a W3C proposed recommendation.
Miscellaneous notes
How useful is MDN? I’ve read before that it’s not definitive. But it has recently had some good people work to improve its references to accessibility:
MDN typically references WHATWG HTML, which often gets accessibility… well, not quite right. Part of my efforts included updating the accessibility content to point to the W3C specs wherever appropriate.
Use z-index only when necessary
There’s a great section on Source order and layers in Every Layout’s Imposter layout. It’s a reminder that when needing to layer one element on top of the other you should:
- favour a modern layout approach such as CSS Grid over absolute positioning; and
- not apply `z-index` unless it’s necessary.
which elements appear over which is, by default, a question of source order. That is: if two elements share the same space, the one that appears above the other will be the one that comes last in the source.
`z-index` is only necessary where you want to layer positioned elements irrespective of their source order. It’s another kind of override, and should be avoided wherever possible.
An arms race of escalating z-index values is often cited as one of those irritating but necessary things you have to deal with using CSS. I rarely have z-index problems, because I rarely use positioning, and I’m mindful of source order when I do.
To delete something, use a form rather than a link
In web-based products from e-commerce stores to email clients to accounting software you often find index pages where each item in a list (or row in a table) has a Delete option. This is often coded as a link… but it shouldn’t be.
I liked this comment by Rails developer Dan where he advises a fellow Rails developer that to create his Delete control he should use a form rather than a link, via Rails’s `button_to` method.
Dan mentions that in the past Rails UJS set an undesirable historical precedent by including a pattern of hijacking links for non-GET requests.
But per the HTML standard, links are for navigation:
Hyperlinks… are links to other resources that… cause the user agent to navigate to those resources, e.g. to visit them in a browser or download them.
And as Dan goes on to say that’s why links make a GET request.
A GET request is a visit, it says “show me this” and it’s idempotent. When you make the same request it’ll show the same thing.
If on the other hand you want a control that performs an action (in this case request an entity to be deleted) then the appropriate HTML element is usually a button, and in this case a submit button within a form.
Relatedly, Jeremy Keith previously wrote about how to use request methods properly in his excellent post Get safe.
The fear of keeping up (on gomakethings)
Great post by Chris here on the double-edged-sword of our rapidly-evolving web standards, and how to stay sane. On the one hand the latest additions to the HTML, CSS and JavaScript standards are removing the need for many custom tools which is positive. However:
it can also leave you feeling like it’s impossible to keep up or learn it all. And that’s because you can’t! The field is literally too big to learn everything. “Keeping up” is both impossible and overrated. It’s the path to burnout.
Chris’s suggestion – something I find reassuring and will return to in moments of doubt – is that we focus on:
- a good understanding of the fundamentals,
- staying aware of general trends in the industry (such as important forthcoming native HTML elements, and the different approaches to building a website, etc.),
- problem-solving: being good at “solving problems with tech” rather than just knowing a bunch of tools.
Design Systems should avoid “God components” and Swiss Army Knives
Something we often talk about in our Design System team is that components should not be like Swiss Army Knives. It’s better for them to be laser-focused because by limiting their scope to a single task they are more reusable and support a more extensible system through composition.
Discussions often arise when we consider the flip-side – components which do too much, know too much, or care too much! When they cover too much ground or make assumptions about their context, things go wrong. Here are some examples.
Card
In websites where many elements have a “rounded panel”-like appearance so as to pop off the background, you can run into problems. Because of the somewhat Card-like appearance, people start to regard many semantically distinct things as “Cards” (rather than limiting the meaning of Card to a more conventional definition). Here are some of the problems this can cause:
- If the name covers a million use cases, then how can you describe it sensibly, or define its boundaries?
- When do you stop piling on different things it can mean? How do you stop it growing? How do you avoid bloat?
- Ongoing naming/confusion issues: you’re setting yourself up for continued confusion and code disparity. If something is “semantically” a note, or a comment, or a message etc then you can expect that future staff are gonna describe it as that rather than a Card! They’ll likely (understandably) write code that feels appropriate too. The problem will continue.
I appreciate that often we need pragmatic solutions, so if our designs have lots of similar-looking elements then there is still something we can do. If the repeated thing is more of a “shape” than something with a common purpose, then just call it out as that! That could either be by name – for example Every Layout have a Box layout which could be a starting point – or by categorisation, i.e. by moving the non-ideally named thing into a clearly demarcated Utilities (or similar) category in your Design System.
Flex
It seems that a number of Design Systems have a Flex component. My feeling, though, is that these represent an early reaction to the emergence of CSS’s Flexbox, rather than necessarily being sensible system-friendly or consumer-friendly components. CSS layout covers a lot and I think breaking this down into different smaller tools (Stack, Inline, Grid etc) works better.
Button
I’ve talked before about the “Everything is a button” mindset and how it’s harmful. Buttons and links are fundamentally different HTML elements with totally different purposes, and bundling them together has various ill effects that I see on a regular basis.
References
Displaying tables on narrow screens
Responsive design for tables is tricky. Sure, you can just make the table’s container horizontally scrollable but that’s more a developer convenience than a great user experience. And if you instead try to do something more clever, you can run into challenges as I did in the past. Still, we should strive to design good narrow screen user experiences for tables, alongside feasible technical solutions to achieve them.
In terms of UI design, I was interested to read Erik Kennedy’s recent newsletter on The best way to display tables on mobile. Erik lists three different approaches, which are (in reverse order of his preference):
- Hide the least important columns
- Cards with rows of Label-Value pairs
- More radical “remix” as a “Mobile List”
Another article worth checking is Andrew Coyle’s The Responsive Table. He describes the following approaches:
- Horizontal overflow table (inc. fixed first column)
- Transitional table
- Priority responsive table
For the transitional table, Andrew links to Charlie Cathcart’s Responsive & Accessible Data Table codepen. It looks similar (perhaps better looking but not quite as accessible) to Adrian Roselli’s Responsive Accessible Table.
Native CSS Nesting
I’ve started reading some entries from Manuel Matuzovic’s 100 days of (more or less) modern CSS series, and began with the excellent Day 99: Native Nesting. It clearly explains how to use the now-agreed syntax for various common scenarios.
The syntax is pretty close to what we’re used to doing with Sass, which is great!
Also, I’m now clear that nested selectors must always start with a symbol rather than a letter. Often they would naturally do so anyway, for example when nesting a class, since that already starts with a symbol (a full stop). But in cases where they wouldn’t – essentially only when nesting an “element selector” – we start it with an “&”. So:
main { & article { ... } }
Straightforward enough!
Regarding browser support for CSS nesting, at the time of writing it is available in Chrome and Safari Technology Preview only.
I would therefore only use it for demos and for the most non-essential enhancements. We’ll need to hold off any full-scale switch from Sass nesting to CSS nesting for large and important production websites until this is in Firefox and standard Safari, and until a sufficient percentage of users has the up-to-date versions. So a little while away yet, but given the current rate of browser updates, likely sooner than we might think!
The “how web requests work” interview question
There’s a classic web developer interview question that goes something like this:
What happens when you type in “bbc.co.uk” into a browser? Describe the journey that results in you seeing a page.
You could answer it like this:
Your browser sends an HTTP request which gets routed through a local modem/router then gets sent to a nameserver. That nameserver routes the request to the correct IP address, which will resolve to some sort of web server. That server will serve up either some static files, or run some backend code in order to generate a resource (probably an HTML page). When the HTML page is returned, your browser will parse it, which will likely generate more requests, and the cycle will repeat.
To do:
- add something about HTTPS
- add more about the front-end aspects: DOM, CSSOM, Accessibility tree, render blocking resources etc
References:
- My favorite interview question, by Ben McCormick
- How browsers work, on MDN
Full disclosure
Whether I’m thinking about inclusive hiding, hamburger menus or web components one UI pattern I keep revisiting is the disclosure widget. Perhaps it’s because you can use this small pattern to bring together so many other wider aspects of good web development. So for future reference, here’s a braindump of my knowledge and resources on the subject.
A disclosure widget is for collapsing and expanding something. You might alternatively describe that as hiding and showing something. The reason we collapse content is to save space. The thinking goes that users have a finite amount of screen real estate (and attention) so we might want to reduce the space taken up by secondary content, or finer details, or repeated content so as to push the page’s key messages to the fore and save the user some scrolling. With a disclosure widget we collapse detailed content into a smaller snippet that acts as a button the user can activate to expand the full details (and collapse them again).
Adrian Roselli’s article Disclosure Widgets is a great primer on the available native and custom ARIA options, how to implement them and where each might be appropriate. Adrian’s article helpfully offers that a disclosure widget (the custom ARIA flavour) can be used as a base in order to achieve some other common UI requirements so long as you’re aware there are extra considerations and handle those carefully. Examples include:
- link and disclosure widget navigation
- table with expando rows
- accordion
- hamburger navigation
- highly custom `select` alternatives when `listbox` is inappropriate because it needs to include items that do not have the `option` role
- a toggle-tip
Something Adrian addresses (and I’ve previously written about) is the question of which collapse/expand use cases the native `details` element can safely be used for. There’s a lot to mention, but since I’d prefer to present a simple heuristic let’s go meta here and use a `details`:
Use `details` for basic narrative content and panels but otherwise use a DIY disclosure
It’s either a bad idea or at the very least “challenging” to use a native `details` for:
- a hamburger menu
- an accordion
In styling terms it’s tricky to use a `details` for:
- a custom appearance
- animation
The above styling issues are perhaps not insurmountable. It depends on what level of customisation you need.
Note to self: add more detail and links to this section when I get the chance.
I’ve also noticed that Adrian has a handy pen combining code for numerous disclosure widget variations.
Heydon Pickering’s Collapsible sections on Inclusive Components is excellent, and includes consideration of progressive enhancement and an excellent web component version. It’s also oriented toward multiple adjacent sections (an accordion although it doesn’t use that term) and includes fantastic advice regarding:
- appropriate markup including screen reader considerations
- how best to programmatically switch state (such as open/closed) within a web component
- how to make that state accessible via an HTML attribute on the web component (e.g. `<toggle-section open=true>`)
- how that attribute is then accessible outside the component, for example to a button and script that collapses and expands all sections simultaneously
There’s my DIY Disclosure widget demo on Codepen. I first created it to use as an example in a talk on Hiding elements on the web, but since then its implementation has taken a few twists and turns. In its latest incarnation I’ve taken some inspiration from the way Manuel Matuzovic’s navigation tutorial uses a `template` in the markup to prepare the “hamburger toggle” button.
I’ve also been reflecting on how the `hidden` attribute’s boolean nature is ideal for a toggle button in theory – it’s semantic and therefore programmatically conveys state – but how hiding with CSS can be more flexible, chiefly because `hidden` (like CSS’s `display`) is not animatable. If you hide with CSS, you could opt to use `visibility: hidden` (perhaps augmented with `position` so as to avoid taking up space while hidden) which similarly hides from everyone in terms of accessibility.
As it happens, the first web component I created was a disclosure widget. It could definitely be improved by some tweaks and additions along the lines of Heydon Pickering’s web component mentioned above. I’ll try to do that soon.
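For flavour, here’s a stripped-back sketch of that kind of disclosure web component. The element name and markup hooks are invented for illustration; Heydon’s version linked above is far more complete.

// A minimal disclosure web component (illustrative only)
class MyDisclosure extends HTMLElement {
  connectedCallback() {
    this.button = this.querySelector("button");
    this.panel = this.querySelector("[data-panel]");
    if (!this.button || !this.panel) return;

    // Start collapsed, and expose the state to assistive technology
    this.button.setAttribute("aria-expanded", "false");
    this.panel.hidden = true;

    this.button.addEventListener("click", () => {
      const expanded = this.button.getAttribute("aria-expanded") === "true";
      this.button.setAttribute("aria-expanded", String(!expanded));
      this.panel.hidden = expanded;
    });
  }
}

customElements.define("my-disclosure", MyDisclosure);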
Troubleshooting
For some disclosure widget use cases (such as a custom link menu, often called a Dropdown) there are a few events that typically should collapse the expanded widget. One is the escape key. Another is when the user moves focus outside the widget. One possible scenario is that the user might activate the trigger button, assess the expanded options and subsequently decide none are suitable and move elsewhere. The act of clicking/tapping elsewhere should collapse the widget. However there’s a challenge. In order for the widget to fire a `focusout` event that an event listener can act upon, it would have to be focused in the first place. And in Safari – unlike other browsers – buttons do not automatically receive focus when activated. (I think Firefox used to be the same but was updated.) The workaround is to set focus manually via `focus()` in your click event listener for the trigger button.
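A sketch of that workaround (the selectors and toggle logic are placeholders for illustration):

// Safari doesn’t focus buttons on click, so focus the trigger explicitly
const trigger = document.querySelector(".menu-trigger"); // assumed selector
const widget = document.querySelector(".menu"); // assumed selector

trigger.addEventListener("click", () => {
  trigger.focus(); // ensures focusout will fire later when focus moves away
  // …toggle the expanded/collapsed state here…
});

widget.addEventListener("focusout", (event) => {
  // Collapse only when focus has left the widget entirely
  if (!widget.contains(event.relatedTarget)) {
    // …collapse the widget here…
  }
});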
The organisation of work versus the job itself
Here’s a half-formed thought (the sort of thing personal websites that nobody else reads are perfect for). As a web developer, something I’ve noticed when interviewing candidates and hearing how they do things, or when I myself am being assessed in expectations reviews, is that our industry seems obsessed with discussing the organisation of work. You know – PR review protocol, agile ceremonies, organising a Trello board, automation, linters. All of which is really important, of course. But the amount of airtime it gets leaves me frustrated. What about the actual job?
Are responsive strategy, web typography, layout, interactive JS components, animation (to name but a few very high-level topics) not interesting, complex and impactful enough to warrant a higher percentage of the conversation? I want an insight into other folks’ knowledge of and opinions on how best to build things, and feel it gets relegated behind organisational topics.
Or do people just see the nitty-gritty stuff as the domain of enthusiasts on Twitter and personal blogs, or as “implementation details” that are secondary to the organisation of something – anything – that will “get the job done”?
As I say, a half-formed thought and probably just reveals my leanings! But writing it helps me gather my thoughts even if I eventually decide I’m in the wrong.
Safari is getting Web Push! (on the Webventures blog)
Roderick E.J.H. Gadellaa, author of the Webventures blog, writes that at their June 2022 Worldwide Developers Conference (WWDC) Apple announced that it will bring Web Push (web-based push notifications) to Safari, including iOS Safari.
macOS is going to get it first and iOS will receive it in a later iOS 16.x update, sometime in 2023.
This could be a big deal, because…
The lack of the web being able to do push notifications on iOS is probably the biggest reason why web developers see a potential project end up being built as a native app instead of a web app
…and…
Web Push on iOS will change the “we need to build a native app” decision.
I don’t like the idea that native mobile apps are superior to mobile web experiences, nor the notion that by having a native app you can ignore your small-screen web experience. PWAs and native apps can co-exist in harmony and address different use cases. But also web APIs are becoming more powerful all the time, and this announcement by Apple provides fuel for the argument that “you might not need a native app for that!”
A front-end developer’s job
Recently I’ve been reflecting on what we front-end developers do in the modern era. Working on a design system in 2022, I feel now more than ever that my job represents a convergence of a range of interesting disciplines, goals, skills and experiences. These include UX knowledge and usability testing, a degree of design savvy, systems and atomic thinking, accessibility knowledge and strong skills with the core web standards. That’s my understanding of front-end development.
Yet not long ago a colleague recalled the time a teammate teased him that front-end developers “put the froth on the cappuccino”. While this gave us all a laugh, I imagine it also reflects one common misunderstanding and undervaluing of our role.
Meanwhile there’s another image of front-end development that’s very engineering rather than user experience oriented. This focuses on JavaScript and tooling and arose in the era of NPM and JavaScript frameworks. In this definition, front-end developers spend their time wrangling JavaScript, configuring build tools and manipulating API data.
I’m conscious of the great divide and while my career has straddled that divide, I’ll freely admit that at heart I’m a front of the front-ender.
Here’s a description of a “Design System engineer” that I recently compiled while my team were recruiting for a software engineer:
- Very strong knowledge and understanding of the core web standards: HTML, CSS and JavaScript
- Strong appreciation of the need for appropriate HTML semantics to achieve resilience and accessibility
- Understands component-based architecture and delivery including concepts like atomic design, composition, variants and versioning
- Advanced understanding of web accessibility including how to create accessible interactive components
- Excellent attention to detail in implementing designs in code
- Strong appreciation of responsive / multi-device considerations
- Comfortable in modern CSS including BEM-like methodologies, ITCSS architecture, and modern approaches such as Flexbox, CSS Grid, custom properties
- Some experience in testing JavaScript and/or server-side components
- Comfortable with Git
- Committed to constantly learning and improving technical knowledge and skills
Something else I remember noting down was that:
User interfaces should be user-centric, purpose-driven, appropriate, accessible and consistent (not arbitrary).
I guess my point there was that good front-end developers build user interfaces in a very considered manner.
Areas of interest
Here are some of the key areas I find myself thinking about or working on regularly.
Accessibility
I list this first not due to alphabetical order, but because it’s arguably the cornerstone of our job. That’s for two reasons. Firstly, because the web was designed to be accessible to all therefore it’s incumbent on us to uphold that. (Of course it’s not just our job, however we tend to be both the primary evangelists and last line of defence). Secondly, so many other aspects of front-end development can only be done well when you start from an accessible foundation. This is something that becomes clearer and clearer the longer you do this job.
Resilience
I might add more about this later.
Performance
I might add more about this later.
UX
I might add more about this later.
Documentation
I might add more about this later.
Adaptability
I might add more about this later.
Design System considerations
- Componentization
- Creating component APIs
- Composition
- Documentation
Scalability
At scale, you can’t just write new code for everything. You have to focus on creating reusable things.
Maintainability and sustainability
I might add more about this later.
Integration into the company’s language framework
This can be challenging. I might add more about this later.
Adding interactivity
I might add more about this later.
Adherence to designs, and making things look good.
I might add more about this later.
What open-source design systems are built with web components?
Alex Page, a Design System engineer at Spotify, has just asked:
What open-source design systems are built with web components? Anyone exploring this space? Curious to learn what is working and what is challenging. #designsystems #webcomponents
And there are lots of interesting examples in the replies.
I plan to read up on some of the stories behind these systems.
I really like Web Components but given that I don’t take a “JavaScript all the things” approach to development and design system components, I’ve been reluctant to consider that web components should be used for every component in a system. They would certainly offer a lovely, HTML-based interface for component consumers and offer interoperability benefits such as Figma integration. But if we shift all the business logic that we currently manage on the server to client-side JavaScript then:
- the user pays the price of downloading that additional code;
- you’re writing client-side JavaScript even for those of your components that aren’t interactive; and
- you’re making everything a custom element (which as Jim Neilsen has previously written brings HTML semantics and accessibility challenges).
However maybe we can keep the JavaScript for our Web Component-based components really lightweight? I don’t know. For now I’m interested to just watch and learn.
Saving CSS changes in DevTools without leaving the browser
Browser devtools have made redesigning a site such a pleasure. I love writing and adjusting a CSS file right in the sources panel and seeing design changes happen as I type, and saving it back to the file. (…) Designing against live HTML allows happy accidents and discoveries to happen that I wouldn't think of in an unconstrained design mockup
I feel very late to the party here. I tend to tinker in the DevTools Element Styles panel rather than save changes. So, inspired by Scott, I’ve just tried this out on my personal website. Here’s what I did.
- started up my 11ty-based site locally, which launches a `localhost` URL for viewing it in the browser;
- opened Chrome’s DevTools at Sources;
- checked the box “Enable local overrides” then followed the prompts to allow access to the folder containing my SCSS files;
- opened an SCSS file in the Sources tab for editing side-by-side with my site in the browser;
- made a change, hit Cmd-S to save and marvelled at the fact that this updated that file, as confirmed by a quick `git status` check;
- switched to the Elements panel, opened its Styles subpanel, made an element style change there too, then confirmed that this alternative approach also saves changes to a file.
This is a really interesting and efficient way of working in the browser and I can see me using it.
There are also a couple of challenges which I’ll probably want to consider. Right now when I make a change to a Sass file, the browser takes a while to reflect that change, which diminishes the benefit of this approach. My site is set up such that Eleventy watches for changes to the sass folder as a trigger for rebuilding the static site. This is because for optimal performance I’m purging the compiled and combined CSS and inlining that into the `<head>` of every file… which unfortunately means that when the CSS is changed, every file needs rebuilt. So I need to wait for Eleventy to do its build thing until the page I’m viewing shows my CSS change.
To allow my SCSS changes to be built and reflected faster I might consider no longer inlining CSS, or only inlining a small amount of critical stuff… or maybe (as best of all worlds) only do the inlining for production builds but not in development. Yeah, I like that latter idea. Food for thought!
Partnering with Google on web.dev (on adactio.com)
At work in our Design System team, we’ve been doing a lot of content and documentation writing for a new reference website. So it was really timely to read Jeremy Keith of Clearleft’s new post on the process of writing Learn Responsive Design for Google’s web.dev resource. The course is great, very digestible, and I highly recommend it to all. But I also love this new post’s insight into how Google provided assistance, supplied a Content Handbook as “house style” for writing on web.dev, and managed the process from docs and spreadsheets to GitHub. I’m sure there will be things my team can learn from that Content Handbook as we go forward with our technical writing.
Building a toast component (by Adam Argyle)
Great tutorial (with accompanying video) from Adam Argyle which starts with a useful definition of what a Toast is and is not:
Toasts are non-interactive, passive, and asynchronous short messages for users. Generally they are used as an interface feedback pattern for informing the user about the results of an action. Toasts are unlike notifications, alerts and prompts because they're not interactive; they're not meant to be dismissed or persist. Notifications are for more important information, synchronous messaging that requires interaction, or system level messages (as opposed to page level). Toasts are more passive than other notice strategies.
There are some important distinctions between toasts and notifications in that definition: toasts are for less important information and are non-interactive. I remember in a previous work planning exercise regarding a toast component a few of us got temporarily bogged down in working out the best JavaScript templating solution for SVG icon-based “Dismiss” buttons… however we were probably barking up the wrong tree with the idea that toasts should be manually dismissable.
There are lots of interesting ideas and considerations in Adam’s tutorial, such as:
- using the `<output>` element for each toast
- some crafty use of CSS Grid and logical properties for layout
- combining `hsl` and percentages in custom properties to proportionately modify rather than redefine colours for dark mode
- animation using `keyframes` and `animation`
- native JavaScript modules
- inserting an element before the `<body>` element (TIL that this is a viable option)
Thanks for this, Adam!
(via Adam’s tweet)
There’s some nice code in here but the demo page minifies and obfuscates everything. However the toast component source is available on GitHub.
Related links
- A toast to accessible toasts by Scott O’Hara
Web animation tips
Warning: this entry is a work-in-progress and incomplete. That said, it's still a useful reference to me which is why I've published it. I’ll flesh it out soon!
There are lots of different strands of web development. You try your best to be good at all of them, but there’s only so much time in the day! Animation is an area where I know a little but would love to know more, and from a practical perspective I’d certainly benefit from having some road-ready solutions to common challenges. As ever I want to favour web standards over libraries where possible, and take an approach that’s lean, accessible, progressively-enhanced and performance-optimised.
Here’s my attempt to break down web animation into bite-sized chunks for occasional users like myself.
Defining animation
Animation lets us make something visually move between different states over a given period of time.
Benefits of animation
Animation is a good way of providing visual feedback, teaching users how to use a part of the interface, or adding life to a website and making it feel more “real”.
Simple animation with `transition` properties
CSS `transition` is great for simple animations triggered by an event.
We start by defining two different states for an element – for example `opacity:1` and `opacity:0` – and then `transition` between those states.
The first state would be in the element’s starting styles (either defined explicitly or existing implicitly based on property defaults) and the other in either its `:hover` or `:focus` styles or in a class applied by JavaScript following an event.
Without the `transition` the state change would still happen but would be instantaneous.
You’re not limited to only one property being animated and might, for example, transition between different `opacity` and `transform` states simultaneously.
Here’s an example “rise on hover” effect, adapted from Stephanie Eckles’s Smol CSS.
<div class="u-animate u-animate--rise">
<span>rise</span>
</div>
.u-animate > * {
--transition-property: transform;
--transition-duration: 180ms;
transition: var(--transition-property) var(--transition-duration) ease-in-out;
}
.u-animate--rise:hover > * {
transform: translateY(-25%);
}
Note that:
- using custom properties makes it really easy to transition a different property than `transform` without writing repetitious CSS.
- we have a parent and child (`<div>` and `<span>` respectively in this example) allowing us to avoid the accidental flicker which can occur when the mouse is close to an animatable element’s border, by having the child be the effect which animates when the trigger (the parent) is hovered.
Complex animations with `animation` properties
If an element needs to animate automatically (perhaps on page load or when added to the DOM), or is more complex than a simple A to B state change, then a CSS `animation` may be more appropriate than `transition`. Using this approach, animations can:
- run automatically (you don’t need an event to trigger a state change)
- go from an initial state through multiple intermediate steps to a final state rather than just from state A to state B
- run forwards, in reverse, or alternate directions
- loop infinitely
The required approach is:
- use `@keyframes` to define a reusable “template” set of animation states (or frames); then
- apply `animation` properties to an element we want to animate, including one or more `@keyframes` to be used.
Here’s how you do it:
@keyframes flash {
0% { opacity: 0; }
20% { opacity: 1; }
80% { opacity: 0; }
100% { opacity: 1; }
}
.animate-me {
animation: flash 5s infinite;
}
Note that you can also opt to include just one state in your `@keyframes` rule, usually the initial state (written as either `from` or `0%`) or final state (written as either `to` or `100%`). You’d tend to do that for a two-state animation where the other “state” is in the element’s default styles, and you’d either be starting from the default styles (if your single `@keyframes` state is `to`) or finishing on them (if your single `@keyframes` state is `from`).
Should I use `transition` or `animation`?
As far as I can tell there’s no major performance benefit of one over the other, so that’s not an issue.
When the animation will be triggered by pseudo-class-based events like `:hover` or `:focus` and is simple, i.e. based on just two states, `transition` feels like the right choice.
Beyond that, the choice gets a bit less binary and seems to come down to developer preference. But here are a couple of notes that might help in making a decision.
For elements that need to “animate in” on page load such as an alert, or when newly added to the DOM such as items in a to-do list, an `animation` with `keyframes` feels the better choice. This is because `transition` requires the presence of two CSS rules, leading to dedicated JavaScript to grab the element and apply a class, whereas `animation` requires only one and can move between initial and final states automatically, including inserting a delay before starting.
For animations that involve many frames, control over the number of iterations, or looping… use `@keyframes` and `animation`.
For utility classes and classes that get added by JS to existing, visible elements following an event, either approach could be used. Arguably `transition` is the slightly simpler and more elegant CSS to write if it covers your needs. Then again, you might want to reuse the animations applied by those classes for both existing, visible elements and new, animated-in elements, in which case you might feel that instead using `@keyframes` and `animation` covers more situations.
Performance
A smooth animation should run at 60fps (frames per second). Animations that are too computationally expensive result in frames being dropped, i.e. a reduced fps rate, making the animation appear janky.
Cheap and slick properties
The CSS properties `transform` and `opacity` are very cheap to animate. Also, browsers often optimise these types of animation using hardware acceleration. To hint to the browser that it should optimise an animation property (and to ensure it is handled by the GPU rather than passed from CPU to GPU causing a noticeable glitch) we should use the CSS `will-change` property.
.my-element {
will-change: transform;
}
Expensive properties
CSS properties which affect layout, such as `height`, are very expensive to animate. Animating height causes a chain reaction where sibling elements have to move too. Use `transform` over layout-affecting properties such as `width` or `left` if you can.
Some other CSS properties are less expensive but still not ideal, for example `background-color`. It doesn’t affect layout but requires a repaint per frame.
Test your animations on a popular low-end device.
Timing functions
- linear goes at the same rate from start to finish. It’s not like most motion in the real world.
- ease-out starts fast then gets really slow. Good for things that come in from off-screen, like a modal dialogue.
- ease-in starts slow then gets really fast. Good for moving something off-screen.
- ease-in-out is the combination of the previous two. It’s symmetrical, having an equal amount of acceleration and deceleration. Good for things that happen in a loop such as element fading in and out.
- ease is the default value and features a brief ramp-up, then a lot of deceleration. It’s a good option for most general case motion that doesn’t enter or exit the viewport.
Practical examples
You can find lots of animation inspiration in libraries such as animate.css (and be sure to check animate.css on GitHub where you can search their source for specific `@keyframes` animation styles).
But here are a few specific examples of animations I or teams I’ve worked on have had to implement.
Skip to content
The anchor’s State A sees its position fixed—i.e. positioned relative to the viewport—but then moved out of sight above it via transform: translateY(-10em). However its :focus styles define a State B where the initial translate has been undone so that the link is visible (transform: translateY(0em)). If we transition the transform property then we can animate the change of state over a chosen duration, and with our preferred timing function for the acceleration curve.
HTML:
<div class="u-visually-hidden-until-focused">
<a
href="#skip-link-target"
class="u-visually-hidden-until-focused__item"
>Skip to main content</a>
</div>
<nav>
<ul>
<li><a href="/">Home</a></li>
<li><a href="/">News</a></li>
<li><a href="/">About</a></li>
<!-- …lots more nav links… -->
<li><a href="/">Contact</a></li>
</ul>
</nav>
<main id="skip-link-target">
<h1>This is the Main content</h1>
<p>Lorem ipsum <a href="/news/">dolor sit amet</a> consectetur adipisicing elit.</p>
<p>Lorem ipsum dolor sit amet consectetur adipisicing elit.</p>
</main>
CSS:
.u-visually-hidden-until-focused {
left: -100vw;
position: absolute;
&__item {
position: fixed;
top: 0;
left: 0;
transform: translateY(-10em);
transition: transform 0.2s ease-in-out;
&:focus {
transform: translateY(0em);
}
}
}
To see this in action, visit my pen Hiding: visually hidden until focused and press the tab key.
Animating in an existing element
For this requirement we want an element to animate from invisible to visible on page load. I can imagine doing this with an image or an alert, for example. This is pretty straightforward with CSS only, using @keyframes, opacity and animation.
Check out my fade in and out on page load with CSS codepen.
Animating in a newly added element
Stephanie Eckles shared a great CSS-only solution for animating in a newly added element which handily includes a Codepen demo. She mentions “CSS-only” because it’s common for developers to achieve the fancy animation via transition, but that means needing to “make a fake event” via a JavaScript setTimeout() so that you can transition from the newly-added, invisible and class-free element state to adding a CSS class (perhaps called show) that contains the opacity: 1, fancy transforms and a transition. However Stephanie’s alternative approach combines i) hiding the element in its default styles; with ii) an automatically-running animation that includes the necessary delay and also finishes in the keyframe’s single 100% state… to get the same effect minus the JavaScript.
Avoiding reliance on JS and finding a solution lower down the stack is always good.
HTML:
<button>Add List Item</button>
<ul>
<li>Lorem ipsum dolor sit amet consectetur adipisicing elit. Nostrum facilis perspiciatis dignissimos, et dolores pariatur.</li>
</ul>
CSS:
li {
animation: show 600ms 100ms cubic-bezier(0.38, 0.97, 0.56, 0.76) forwards;
/* Pre-state */
opacity: 0;
/* remove transform for just a fade-in */
transform: rotateX(-90deg);
transform-origin: top center;
}
@keyframes show {
100% {
opacity: 1;
transform: none;
}
}
Jhey Tompkins shared another CSS-only technique for adding elements to the DOM with snazzy entrance animations. He also uses just a single @keyframes state, but in his case the from state, which he uses to set the element’s initial opacity: 0; then in his animation he uses an animation-fill-mode of both (rather than forwards as Stephanie used).
I can’t profess to fully understand both, however if you change Jhey’s example to use forwards instead, then the element being animated in will temporarily appear before the animation starts (which ain’t good) rather than being initially invisible. Changing it to backwards gets us back on track, so I guess the necessary value relates to whether you’re going for from/0% or to/100%… and both just covers you for both cases. I’d probably try to use the appropriate one rather than both, just in case there’s a performance implication.
Animated disclosure
Here’s an interesting conundrum.
For disclosure (i.e. collapse and expand) widgets, I tend to either use the native HTML <details> element if possible, or else a simple, accessible DIY disclosure in which activating a trigger toggles a nearby content element’s visibility. In both cases there’s no animation; the change from hidden to revealed and back again is immediate.
To my mind it’s generally preferable to keep it simple and avoid animating a disclosure widget. For a start, it’s tricky! The <details> element can’t be (easily) animated. And if using a DIY widget it’ll likely involve animating one of the expensive properties. Animating height or max-height is also gnarly when working with variable (auto) length content and often requires developers to go beyond CSS and reach for JavaScript to calculate computed element heights. Lastly, forgetting the technical challenges, there’s often no real need to animate disclosure; it might only hinder rather than help the user experience.
But let’s just say you have to do it, perhaps because the design spec requires it (like in BBC Sounds’ expanding and collapsing tracklists when viewed on narrow screens).
Options:
- Animate the <details> element. This is a nice, standards-oriented approach. But it might only be viable for when you don’t need to mess with <details> appearance too much. We’d struggle to apply very custom styles, or to handle a “show the first few list items but not all” requirement like in the BBC Sounds example;
- Animate CSS Grid. This is a nice idea but for now the animation only works in Firefox*. It’d be great to just consider it a progressive enhancement so it just depends on whether the animation is deemed core to the experience;
- Animate from a max-height of 0 to “something sufficient” (my pen is inspired by Scott O’Hara’s disclosure example). This is workable but not ideal; you kinda need to set a max-height sweet spot otherwise your animation will be delayed and too long. You could of course add some JavaScript to get the exact necessary height then set it. BBC use max-height for their tracklist animation and those tracklists likely vary in length so I expect they use some JavaScript for height calculation.
* Update 20/2/23: the “animate CSS Grid” option now has wide browser support and is probably my preferred approach. I made a codepen that demonstrates a disclosure widget with animation of grid-template-rows.
Ringing bell icon
To be written.
Pulsing “radar” effect
To be written.
Accessibility
Accessibility and animation can co-exist, as Cassie Evans explains in her CSS-Tricks article Empathetic Animation. We should consider which parts of our website are suited to animation (for example perhaps not on serious, time-sensitive tasks) and we can also respect reduced motion preferences, either at a global level or in a finer-grained way per component.
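As a sketch of the global approach (the near-zero durations are a common convention rather than a requirement), a prefers-reduced-motion media query can effectively switch animation off for users who have asked for less motion:
CSS:
@media (prefers-reduced-motion: reduce) {
  *,
  *::before,
  *::after {
    /* Effectively disable animations and transitions */
    animation-duration: 0.01ms !important;
    animation-iteration-count: 1 !important;
    transition-duration: 0.01ms !important;
  }
}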
Notes
transition-delay can be useful for avoiding common annoyances, such as when a dropdown menu that appears on hover disappears when you try to move the cursor to it.
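A minimal sketch of that fix (the class names are invented for illustration): delay only the hiding, so the pointer has time to travel from the menu item to its dropdown.
CSS:
.menu__dropdown {
  opacity: 0;
  visibility: hidden;
  /* The 300ms delay keeps the dropdown around briefly after the pointer leaves */
  transition: opacity 200ms ease, visibility 0s linear 300ms;
}

.menu__item:hover .menu__dropdown,
.menu__item:focus-within .menu__dropdown {
  opacity: 1;
  visibility: visible;
  transition-delay: 0s;
}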
References
- Inspiration: the animate.css library
- animate.css on github (good for searching for keyframe CSS)
- CSS transitions and transforms on Thoughtbot
- CSS Transitions by Josh Comeau
- Keyframe Animations by Josh Comeau
- Transition vs animation on CSS Animation
- Keyframe animation syntax on CSS-Tricks
- CSS animation for beginners on Thoughtbot
- Using CSS Transitions on auto dimensions on CSS-Tricks
- Jhey Tompkins’s Image fade with interest codepen
GOV.UK introduce an experimental block link component
Here’s an interesting development in the block link saga: GOV.UK have introduced one (named .chevron-card) on their Homepage, citing how it’ll improve accessibility by increasing mobile touch targets. It’s not yet been added to their Design System while they’re monitoring it to see if it is successful. They’ve chosen the approach which starts with a standard, single, non-wrapping anchor then “stretches” it across the whole card via some pseudo elements and absolute positioning magic. I’m slightly surprised at this choice because it breaks the user’s ability to select text within the link. Really interested to see how it pans out!
Lovely write-up, & great rationale re. larger mobile tap targets! I’ve wrestled with “block links” & found that each approach has issues so it’s v. interesting that you chose the route that impacts text selection. Is that the lesser of the evils? Keen to hear how it pans out!
— Laurence Hughes (@fuzzylogicx) December 13, 2021
Front-end architecture for a new website (in 2021)
Just taking a moment for some musings on which way the front-end wind is blowing (from my perspective at least) and how that might practically impact my approach on the next small-ish website that I code.
I might lean into HTTP2
Breaking CSS into small modules then concatenating everything into a single file has traditionally been one of the key reasons for using Sass, but in the HTTP2 era where multiple requests are less of a performance issue it might be acceptable to simply include a number of modular CSS files in the <head>, as follows:
<link href="/css/base.css" rel="stylesheet">
<link href="/css/component_1.css" rel="stylesheet">
<link href="/css/component_2.css" rel="stylesheet">
<link href="/css/component_3.css" rel="stylesheet">
The same goes for browser-native JavaScript modules.
This isn’t something I’ve tried yet and it’d feel like a pretty radical departure from the conventions of recent years… but it’s an option!
I’ll combine ES modules and classes
It’s great that JavaScript modules are natively supported in modern browsers. They allow me to remove build tools, work with web standards, and they perform well. They can also serve as a mustard cut that allows me to use other syntax and features such as async/await, arrow functions, template literals, the spread operator etc with confidence and without transpilation or polyfilling.
In the <head>:
<script type="module" src="/js/main.js"></script>
In main.js
import { Modal } from '/components/modal.js';
const modal = new Modal();
modal.init();
In modal.js
export class Modal {
init() {
// modal functionality here
}
}
I’ll create Web Components
I’ve done a lot of preparatory reading and learning about web components in the last year. I’ll admit that I’ve found the concepts (including Shadow DOM) occasionally tough to wrap my head around, and I’ve also found it confusing that everyone seems to implement web components in different ways. However Dave Rupert’s HTML with Superpowers presentation really helped make things click.
I’m now keen to create my own custom elements for javascript-enhanced UI elements; to give LitElement a spin; to progressively enhance a Light DOM baseline into Shadow DOM fanciness; and to check out how well the lifecycle callbacks perform.
I’ll go deeper with custom properties
I’ve been using custom properties for a few years now, but at first it was just as a native replacement for Sass variables, which isn’t really exploiting their full potential. However at work we’ve recently been using them as the special sauce powering component variations (--gap, --mode etc).
In our server-rendered components we’ve been using inline style attributes to apply variations via those properties, and this brings the advantage of no longer needing to create a CSS class per variation (e.g. one CSS class for each padding variation based on a spacing scale), which in turn keeps code and specificity simpler. However as I start using web components, custom properties will prove really handy there too. Not only can they be updated by JavaScript, but they also provide a bridge between your global CSS and your web component because they can “pierce the Shadow Boundary”, making it easier to style Shadow DOM HTML in custom elements.
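Here’s a rough sketch of that bridge, assuming a hypothetical my-card custom element whose Shadow DOM styles read a --gap property. The global stylesheet sets the property on the host element and it inherits through the shadow boundary:
CSS:
/* Global (light DOM) stylesheet */
my-card {
  --gap: 1.5rem;
}

/* Inside the component’s Shadow DOM stylesheet */
.card__body {
  display: grid;
  gap: var(--gap, 1rem); /* falls back to 1rem if --gap isn’t set */
}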
I’ll use BEM, but loosely
Naming and structuring CSS can be hard, and is a topic which really divides opinion. Historically I liked to keep it simple using the cascade, element and contextual selectors, plus a handful of custom classes. I avoided “object-oriented” CSS methodologies because I found them verbose and, if I’m honest, slightly “anti-CSS”. However it’s fair to say that in larger applications and on projects with many developers, this approach lacked a degree of structure, modularisation and predictability, so I gravitated toward BEM.
BEM’s approach is a pretty sensible one and, compared to the likes of SUIT, provides flexibility and good documentation. And while I’ve been keeping a watchful eye on new methodologies like CUBE CSS and can see that they’re chock-full of ideas, my feeling is that BEM remains the more robust choice.
It’s also important to me that BEM has the concept of a mix because this allows you to place multiple block classes on the same element so as to (for example) apply an abstract layout in combination with a more implementation-specific component class.
<div class="l-stack c-news-feed">
Where I’ll happily deviate from BEM is to favour use of certain ARIA attributes as selectors (for example [aria-current=page] or [aria-expanded=true]) because this enforces good accessibility practice and helps create equivalence between the visual and non-visual experience. I’m also happy to use the universal selector (*), which is great for owl selectors, and I’m fine with adjacent sibling (and related) selectors.
Essentially I’m glad of the structure and maintainability that BEM provides but I don’t want a straitjacket that stops me from using my brain and applying CSS properly.
Enhance! by Jeremy Keith—An Event Apart video (on Vimeo)
A classic talk by Jeremy Keith on progressive enhancement and the nature of the web and its technologies.
Learn Responsive Design (on web.dev)
Jeremy Keith’s new course for Google’s web.dev learning platform is fantastic and covers a variety of aspects of responsive design including layout (macro and micro), images, icons and typography.
Resources for learning front-end web development
A designer colleague recently asked me what course or resources I would recommend for learning front-end web development. She mentioned React at the beginning but I suggested that it’d be better to start by learning HTML, CSS, and JavaScript. As for React: it’s a JavaScript library, so it makes sense to understand vanilla JavaScript first.
For future reference, here are my tips.
Everything in one place
Google’s web.dev training resource has been adding some excellent guides, several of which I link to below.
Another great one-stop shop is MDN Web Docs. Not only is MDN an amazing general quick reference for all HTML elements, CSS properties, JavaScript APIs etc but for more immersive learning there are also MDN’s guides.
Pay attention to HTML
One general piece of advice: whichever courses you choose (and whether or not you’re new to web development), make sure you learn HTML. People tend to underestimate how complicated, fast-moving and important HTML is.
Also, everything else – accessibility, CSS, JavaScript, performance, resilience – requires a foundation of good HTML. Think HTML first!
Learning CSS, specifically
CSS is as much about concepts and features – e.g. the cascade and specificity, layout, responsive design, typography, custom properties – as it is about syntax. In fact probably more so.
Most tutorials will focus on the concepts but not necessarily so much on practicalities like writing-style or file organisation.
Google’s Learn CSS course should be pretty good for the modern concepts.
Google also have Learn Responsive Design.
If you’re coming from a kinda non-CSS-oriented perspective, Josh W Comeau’s CSS for JavaScript Developers (paid course) could be worth a look.
If you prefer videos, you could check out Steve Griffith’s video series Learning CSS. Steve’s videos are comprehensive and well-paced, and the series covers a whole range of topics (over 100!), starting from basics like the CSS box model.
In terms of HTML and CSS writing style (BEM etc) and file organisation (ITCSS etc), here’s a (version of a) “style guide” that my team came up with for one of our documentation websites. I think it’s pretty good!
CSS and HTML Style Guide (to do: add link here)
For more on ITCSS and Harry Roberts’s thoughts on CSS best practices, see:
- Manage large projects with ITCSS
- Harry’s Skillshare course on ITCSS
- Harry’s CSS Guidelines rulebook
- Harry’s Discovr demo project
Learning JavaScript
I recommended choosing a course or courses from CSS-Tricks’ post Beginner JavaScript notes, especially as it includes Wes Bos’s Beginner JavaScript Notes + Reference.
If you like learning by video, check out Steve Griffith’s JavaScript playlist.
Once you start using JS in anger, I definitely recommend bookmarking Chris Ferdinandi’s Methods and APIs reference guide.
If you’re then looking for a lightweight library for applying sprinkles of JavaScript, you could try Stimulus.
Learning Responsive Design
I recommend Jeremy Keith’s Learn Responsive Design course on web.dev.
Lists of courses
You might choose a course or courses from CSS-Tricks’ post Where do you learn HTML and CSS in 2020?
Recommended books
- Resilient Web Design by Jeremy Keith. A fantastic wide-screen perspective on what we’re doing, who we’re doing it for, and how to go about it. Read online or listen as an audiobook.
- Inclusive Components by Heydon Pickering. A unique, accessible approach to building interactive components, from someone who’s done this for BBC, Bulb, Spotify.
- Every Layout by Heydon Pickering & Andy Bell. Introducing layout primitives, for handling responsive design in Design Systems at scale (plus so many insights about the front-end)
- Atomic Design by Brad Frost. A classic primer on Design Systems and component-composition oriented thinking.
- Practical SVG by Chris Coyier. Learn why and how to use SVG to make websites more aesthetically sharp, performant, accessible and flexible.
- Web Typography by Richard Rutter. Elevate the web by applying the principles of typography via modern web typography techniques.
Collected web accessibility guidelines, tips and tests
At work, I’m sometimes asked accessibility questions or to provide guidelines. I’m with Anna Cook in considering myself an accessibility advocate rather than an expert; however, I have picked up lots of tips and knowledge over many years of developing websites. So I thought it’d be useful to gather some general web accessibility tips and tests in one place as a useful reference.
Caveats and notes:
- this is a living document which I’ll expand over time;
- I’m standing on the shoulders of real experts and I list my references at the foot of the article; and
- if I’ve got anything wrong, please let me know!
Table of contents
- If you only had 5 minutes
- Content structure
- Semantic HTML and ARIA
- Favour native over custom components except where they have known issues
- Make custom components convey state accessibly
- Forms
- Links and buttons
- Ensure keyboard support
- Content resizing
- Better link text
- Supporting high contrast mode
- Skip links
- Navigation and menus
- Modal dialogues
If you only had 5 minutes
If someone had a web page and only had 5 minutes to find and tackle the lowest hanging fruit accessibility-wise, I’d probably echo Jeremy Keith’s advice to ensure that the page covers the following:
- uses heading elements sensibly
- uses landmarks (representing roles like banner, navigation, main and contentinfo)
- marks up forms sensibly (for example using labels and appropriate buttons)
- provides images with decent text alternatives
(Note: headings and landmarks are used by screen reader users to get a feel for the page then jump to areas of interest.)
Spending just 5 minutes would be bad, of course, and you shouldn’t do that. The point is that if pushed, the above give good bang-for-your-buck.
Content structure
The page’s content should be well structured as this makes it easier to understand for all, especially people with reading and cognitive disabilities.
It should consist of short sections of content preceded by clear headings. Use the appropriate heading level for the place in the page. Don’t use an inappropriate heading level to achieve a given appearance such as a smaller size. Instead use the appropriate heading element then use CSS to achieve your desired style.
It should employ lists where appropriate. It should place the most important content at the beginning of the page or section to give it prominence.
Check your page for any long passages of text with no structure. Ensure that sufficient prominence is given to the most important information and calls to action.
Semantic HTML and ARIA
While there are generic HTML elements like div and span, there are many more HTML elements that perform a specific role and convey that role to browsers and other technologies. Choosing and using semantic HTML elements appropriately is a very good practice.
Also, using semantic HTML elements is preferable to bolting on semantics via attributes since the semantics are conveyed natively avoiding redundancy and duplication. As Bruce Lawson says, “Built-in beats bolt-on, bigly”.
Apply ARIA carefully. No ARIA is better than bad ARIA.
Landmarks
Create a small number of landmarks using the appropriate HTML elements.
For some landmark-generating elements it’s appropriate to bolster them with a label or accessible name. For example with nav and aside, i) there’s a decent chance there might be multiple on the page; and ii) each instance creates a landmark even when it’s nested within a deeper HTML element. So it’s helpful to distinguish each different landmark of the same type by using sensible accessible names, otherwise you’d get multiple navigation menus all represented by the same “navigation” in the Landmarks menu. In the case of the section element, it needs an accessible name in order for it to act as a region landmark. For all of these you can use aria-labelledby set to the id of an inner heading, or use aria-label.
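For example (the labels and headings here are placeholders), two nav landmarks distinguished by accessible names, plus a section named so that it becomes a region landmark:
HTML:
<nav aria-label="Primary">
  <!-- main site navigation -->
</nav>

<nav aria-label="Breadcrumb">
  <!-- breadcrumb links -->
</nav>

<section aria-labelledby="related-heading">
  <h2 id="related-heading">Related articles</h2>
  <!-- … -->
</section>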
Note that when using multiple <header> (or footer) elements on a page, where one and one only is a direct child of body while the others are used within article or similar elements, there’s perhaps less need to add custom accessible names. That’s because only a direct child of body will be treated as a landmark and the others won’t, therefore they won’t be butting against each other in a screen reader’s Landmarks menu and won’t need to be distinguished.
Correct use of aria-label and aria-labelledby
Use the aria-label or aria-labelledby attributes (only when necessary) on interactive elements – buttons, links, form controls – and on landmark regions. Don’t use them on <div>s, <span>s, or other elements representing static/non-interactive text-level semantics, such as <p>, <strong>, <em> and so forth, unless those elements’ roles have been overridden with roles that expect accessible names.
Favour native over custom components except where they have known issues
Native components require very little work, are familiar to users, and are generally accessible by default. Custom components can be built to appear and behave as designers want, but require much more effort to build and are challenging to make accessible.
There are exceptions. Since the native options are flawed across browsers, accessibility experts recommend using custom solutions for:
- form error field messages
- focus indicator styles
Make custom components convey state accessibly
Now that you’re building a custom component you don’t get accessibility out of the box. Whether it’s a Like button or a disclosure widget, you can’t rely on a visual change alone to convey a UI change to all users. You’ll need to use the right element (note – it often starts with a button) and then use ARIA to convey states such as pressed or expanded to screen reader users.
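As a minimal sketch (the selector is illustrative), a Like button that conveys its toggled state via aria-pressed rather than relying on a visual change alone:
JavaScript:
const likeButton = document.querySelector('.like-button');

likeButton.addEventListener('click', () => {
  // Flip the pressed state and expose it to assistive technologies
  const isPressed = likeButton.getAttribute('aria-pressed') === 'true';
  likeButton.setAttribute('aria-pressed', String(!isPressed));
});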
Forms
Because in the industry form fields are often handled with JavaScript and not submitted, people sometimes question whether form fields should live inside a form (<form>). My answer is yes, and here’s why.
Using the form element improves usability and accessibility
Using a <form> provides additional semantics, enabling better accessibility. It helps assistive devices like screen readers better understand the content of the page and gives the person using them more meaningful information.
By putting form fields inside a form we also ensure we match user expectations. We support the functionality (such as the different ways of submitting a form) that users expect when presented with form fields.
If you’re thinking “but what about form fields that don’t look like form fields?” then you’ve entered the problem territory of “deceptive user interfaces” – the situation where perceived affordances don’t match actual functionality, which causes confusion for some people. This is to be avoided. We shouldn’t use form fields (nor a <form>) when they are not appropriate. A checkbox, radio button, or select menu is meant to gather information. So if your goal is instead to let the user manipulate the current view, use a button rather than checkboxes or radio buttons.
References:
- Why use a form element when submitting fields with JavaScript
- Lea Verou and Leonie Watson’s discussion regarding Toggles
- My conversation about forms with accessibility expert Adrian Roselli
Using the form element simplifies your JavaScript for event handling
Using the form element can also make it easier for you to meet user expectations in your JS-powered experience. This is because it gives you a single element (form) and event combination that allows listening to multiple interactions. With a form element you can add a listener for the submit event. This event fires automatically in response to the various ways users expect to submit a form, including pressing enter inside a field.
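A quick sketch, assuming a single form on the page: one submit listener covers clicking the submit button, pressing enter in a text field, and so on.
JavaScript:
const form = document.querySelector('form');

form.addEventListener('submit', (event) => {
  // Prevent the full-page submission and handle the data with JavaScript instead
  event.preventDefault();
  const data = new FormData(form);
  // …send `data` with fetch(), update the UI, etc.
});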
Anchors and buttons
To let the user navigate to a page or page section, or download a file, use an anchor element.
To let the user trigger an action such as copying to clipboard, launching a modal or submitting a form, use a button element.
Anchors should include an href attribute, otherwise the browser will treat them like non-interactive elements. This means the link will not be included in the expected focus order and will not present a pointer to mouse users like it should. These days there is no remaining use case for an anchor without an href. We no longer need named anchors to create link-target locations within the page because we can use the id attribute (on any element) for that. And if you want an interactive element that does not link somewhere, you should use button.
Do not remove the focus outline from links and buttons in CSS, unless it’s to provide a better version.
Ensure you always give links and buttons an accessible name, even when they use icons rather than text. This might be through visually hidden text or perhaps using an ARIA-related attribute.
Ensure keyboard support
Web pages need to support those who navigate the page by keyboard.
Use the tab key to navigate your page and ensure that you can reach all actionable controls such as links, buttons and form controls. Press the enter key or space bar to activate each control.
If during your test any actionable control is skipped, receives focus in an illogical order, or you cannot see where the focus is at any time, then keyboard support is not properly implemented.
Content resizing
Try zooming your page up to 400%. In Chrome, Zoom is available from the kebab menu at the top-right, or by holding down command with plus or minus.
Content must resize and be available and legible. Everything should reflow.
Relative font settings and responsive design techniques are helpful in effectively handling this requirement.
Relatedly, setting font-sizes in px should be avoided because although a user can override the “fixed-ness” with zoom, it breaks the user’s ability to choose a larger or smaller default font size (which users often prefer over having to zoom every single page).
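A small sketch of the alternative (the specific values are just examples): keep the root at the user’s preferred size and set everything else in rem so it scales with that preference.
CSS:
html {
  font-size: 100%; /* respect the user’s chosen default, typically 16px */
}

body {
  font-size: 1rem;
}

h1 {
  font-size: 2rem; /* scales with the user’s default */
}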
Better link text
Blind and visually impaired users use a screen reader to browse web pages, and screen readers provide user-friendly access to all the links on the page via a Links menu. When links are encountered in that context, link text like “Click here” and “Read more” is useless.
Check your web page to ensure that links clearly describe the content they link to when read out of context.
Better link text also improves the flow and clarity of your content and so improves the experience for everyone.
Supporting high contrast mode
Some people find it easier to read content when it’s in a particular colour against a specific background colour. Operating systems provide options to allow users to configure this to their preference. Websites must support the user’s ability to apply this.
On a Windows computer go to Settings > Ease of access and turn on High contrast mode. On macOS go to System preferences > Accessibility settings > Display and select “Invert colours”.
Having changed the contrast, check that your web page’s content is fully visible and understandable, that images are still visible and that buttons are still discernible.
Skip links
Websites should provide a “Skip to content” link because this provides an important accessibility aid to keyboard users and those who use specialised input devices. For these users, having to step through (typically via the tab key) all of the navigation links on every page would be tiring and frustrating. Providing a skip link allows them to bypass the navigation and skip to the page’s main content.
To test that a website contains a skip link, visit a page then press the tab key and the skip link should appear. Then activate it using the enter key and check that focus moves to the main content area. Press tab again to ensure that focus moves to the first actionable element in the main content.
Navigation and menus
When developing a collapsible menu, place your menu <button> within your <nav> element and hide the inner list rather than hiding the <nav> element itself. That way, we are not obscuring from Assistive Technologies the fact that a navigation still exists. ATs can still access the nav via landmark navigation. This is important because landmark discovery is one of the fundamental ways AT users scan, determine and navigate a site’s structure.
Modal dialogues
You probably don’t want to set the modal’s heading as an <h1>. It likely displays content that exists on the page (which already has an <h1>) at a lower level of the document hierarchy.
References
- Using HTML landmark roles to improve accessibility MDN article. And Adrian R’s suggestions for additions
- Navigation (landmark) role, on MDN
- Tetralogical’s Quick Accessibility Tests YouTube playlist
- Basic accessibility mistakes I often see in audits by Chris Ferdinandi
- Sara Soueidan’s video tutorial Practical tips for building more accessible front-ends
- Adrian Roselli’s Responsive type and zoom
- Heydon Pickering’s tweet about buttons in navs and Scott O’Hara’s follow up article Landmark Discoverability
- Tetralogical’s Foundations: native versus custom components
- Ben Myers on where to use aria labelling attributes
Collapsible sections, on Inclusive Components
It’s a few years old now, but this tutorial from Heydon Pickering on how to create an accessible, progressively enhanced user interface comprised of multiple collapsible and expandable sections is fantastic. It covers using the appropriate HTML elements (buttons) and ARIA attributes, how best to handle icons (minimal inline SVG), turning it into a web component and plenty more besides.
Icon has Cheezburger (a Clearleft dConstruct newsletter)
Jeremy Keith deconstructs the cheeseburger icon and—referencing Luke Wroblewski’s Obvious Always Wins mantra—argues that while icons alone look tasty they risk users failing to understand and engage.
BBC WebCore Design System
A Storybook UI explorer containing the components and layouts for making the front end of a BBC web experience.
From designing interfaces to designing systems (on The history of the web)
A history of Design Systems by Jay Hoffman taking in (amongst other milestones) the notion of Front-end Style Guides, followed by the arrival of Bootstrap, then Brad Frost’s Atomic Design, culminating in the dawn of the Design System movement with Jina Anne’s Clarity Conference.
Buttons and links: definitions, differences and tips
On the web buttons and links are fundamentally different materials. However some design and development practices have led to them becoming conceptually “bundled together” and misunderstood. Practitioners can fall into the trap of seeing the surface-level commonality that “you click the thing, then something happens” and mistakenly thinking the two elements are interchangeable. Some might even consider them as a single “button component” without considering the distinctions underneath. However this mentality causes our users problems and is harmful for effective web development. In this post I’ll address why buttons and links are different and exist separately, and when to use each.
Problematic patterns
Modern website designs commonly apply the appearance of a button to a link. For isolated calls to action this can make sense; however, as a design pattern it is often overused and under-cooked, which can cause confusion for developers implementing the designs.
Relatedly, it’s now common for Design Systems to have a Button component which includes button-styled links that are referred to simply as buttons. Unless documented carefully this can lead to internal language and comprehension issues.
Meanwhile developers have historically used faux links (<a href="#">) or, worse, a DIY clickable div as a trigger for JavaScript-powered functionality where they should instead use native buttons.
These patterns in combination have given rise to a collective muddle over buttons and links. We need to get back to basics and talk about foundational HTML.
Buttons and anchors in HTML
There are two HTML elements of interest here.
Hyperlinks are created using the HTML anchor element (<a>). Buttons (by which I mean real buttons rather than links styled to appear as buttons) are implemented with the HTML button element (<button>).
Although a slight oversimplification, I think David MacDonald’s heuristic works well:
If it GOES someWHERE use a link
If it DOES someTHING use a button
A link…
- goes somewhere (i.e. navigates to another place)
- normally links to another document (i.e. page) on the current website or on another website
- can alternatively link to a different section of the same page
- historically and by default appears underlined
- when hovered or focused offers visual feedback from the browser’s status bar
- uses the “pointing hand” mouse pointer
- results in the browser making an HTTP GET request by default. It’s intended to get a page or resource rather than to change something
- offers specific right-click options to mouse users (open in new tab, copy URL, etc)
- typically results in an address which can be bookmarked
- can be activated by pressing the return key
- is announced by screen readers as “Link”
- is available to screen reader users within an overall Links list
A button…
- does something (i.e. performs an action, such as “Add”, “Update” or "Show")
- can be used as <button type=submit> within a form to submit the form. This is a modern replacement for <input type=submit /> and much better as it’s easier to style, allows nested HTML and supports CSS pseudo-elements
- can be used as <button type=button> to trigger JavaScript. This type of button is different to the one used for submitting a <form>. It can be used for any type of functionality that happens in-place rather than taking the user somewhere, such as expanding and collapsing content, or performing a calculation.
- historically and by default appears in a pill or rounded rectangle
- uses the normal mouse pointer arrow
- can be activated by pressing return or space.
- implicitly gets the ARIA button role.
- can be extended with further ARIA button-related states like aria-pressed
- is announced by screen readers as “Button”
- unlike a link is not available to screen reader users within a dedicated list
Our responsibilities
It’s our job as designers and developers to use the appropriate purpose-built element for each situation, to present it in a way that respects conventions so that users know what it is, and to then meet their expectations of it.
Tips
- Visually distinguish button-styled call-to-action links from regular buttons, perhaps with a more pill-like appearance and a right-pointing arrow
- Avoid a proliferation of call-to-action links by linking content itself (for example a news teaser’s headline). Not only does this reduce “link or button?” confusion but it also saves space, and provides more accessible link text.
- Consider having separate Design System components for Button and ButtonLink to reinforce important differences.
- For triggering JavaScript-powered interactions I’ll typically use a button. However in disclosure patterns where the trigger and target element are far apart in the DOM it can make sense to use a link as the trigger.
- For buttons which are reliant on JavaScript, it’s best to use them within a strategy of progressive enhancement and not render them on the server but rather with client-side JavaScript. That way, if the client-side JavaScript is unsupported or fails, the user won’t be presented with a broken button (see the sketch after this list).
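For that last tip, a minimal sketch (the selector, feature check and button text are illustrative): the button is only rendered once the script that powers it has run and its required API is available.
JavaScript:
const toolbar = document.querySelector('.toolbar');

if (toolbar && navigator.clipboard) {
  // Only render the button when the functionality it relies on is available
  const copyButton = document.createElement('button');
  copyButton.type = 'button';
  copyButton.textContent = 'Copy link';
  copyButton.addEventListener('click', () => {
    navigator.clipboard.writeText(window.location.href);
  });
  toolbar.append(copyButton);
}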
Update: 23 November 2024
Perhaps a better heuristic than David MacDonald’s mentioned above, is:
Links are for a simple connection to a resource; buttons are for actions.
What I prefer about including a resource is that the “goes somewhere” definition of a link breaks down for anchors that instruct the linked resource to download (via the download attribute) rather than render in the browser, but this doesn’t. I also like the inclusion of simple because some buttons (like the submit button of a search form) might finish by taking you to a resource (the search results page) but that’s a complex action, not a simple connection; you’re searching a database using your choice of search query.
References
- Get safe, by Jeremy Keith
- Buttons vs. Links, by Eric Eggert
- The Button Cheat Sheet, by Manuel Matuzović
- A complete guide to links and buttons on CSS-Tricks
- The Links vs Buttons Showdown, by Marcy Sutton
Broken Copy, on a11y-101.com
Here’s an accessibility tip that’s new to me. When the content of a heading, anchor, or other semantic HTML element contains smaller “chunks” of span and em (etc), the VoiceOver screen reader on Mac and iOS annoyingly fails to announce the content as a single phrase and instead repeats the parent element’s role for each inner element. We can fix that by adding an inner “wrapper” element inside our parent and giving it role=text.
Make sure not to add this role directly to your parent element since it will override its original role, causing it to lose its intended semantics.
The text role is not yet in the official ARIA spec but is supported by Safari.
(via @Seraphae and friends on Twitter)
Motion One: The Web Animations API for everyone
A new animation library, built on the Web Animations API for the smallest filesize and the fastest performance.
This JavaScript-based animation library—which can be installed via npm—leans on an existing web API to keep its file size low and uses hardware accelerated animations where possible to achieve impressively smooth results.
For fairly basic animations, this might provide an attractive alternative to the heavier Greensock. The Motion docs do however flag the limitation that it can only animate “CSS styles”. They also say “SVG styles work fine”. I hope by this they mean SVG presentation attributes rather than inline CSS on an SVG, although it’s hard to tell. However their examples look promising.
The docs website also contains some really great background information regarding animation performance.
Testing ES modules with Jest
Here are a few troubleshooting tips to enable Jest, the JavaScript testing framework, to be able to work with ES modules without needing Babel in the mix for transpilation. Let’s get going with a basic set-up.
package.json
…,
"scripts": {
"test": "NODE_ENV=test NODE_OPTIONS=--experimental-vm-modules jest"
},
"type": "module",
"devDependencies": {
"jest": "^27.2.2"
}
Note: take note of the crucial "type": "module" part, as it’s the least-documented bit and your most likely omission!
After that set-up, you’re free to import and export to your heart’s content.
javascript/sum.js
export const sum = (a, b) => {
return a + b;
}
spec/sum.test.js
import { sum } from "../javascript/sum.js";
test('adds 1 + 2 to equal 3', () => {
expect(sum(1, 2)).toBe(3);
});
Hopefully that’ll save you (and future me) some head-scratching.
(Reference: Jest’s EcmaScript Modules docs page)
Harry Roberts says “Get Your Head Straight”
Harry Roberts (who created ITCSS for organising CSS at scale but these days focuses on performance) has just given a presentation about the importance of getting the content, order and optimisation of the <head> element right, including lots of measurement data to back up his claims. Check out the slides: Get your Head Straight.
While some of the information about asset loading best practices is not new, the stuff about ordering of head elements is pretty interesting. I’ll be keeping my eyes out for a video recording of the presentation though, as it’s tricky to piece together his line of argument from the slides alone.
However one really cool thing he’s made available is a bookmarklet for evaluating any website’s <head>:
— ct.css
The accessibility of conditionally revealed questions (on GOV.UK)
Here’s something to keep in mind when designing and developing forms. GOV.UK’s accessibility team found last year that there are some accessibility issues with the “conditional reveal” pattern, i.e. when selecting a particular radio button causes more inputs to be revealed.
The full background story is really interesting but the main headline seems to be: Keep it simple.
- Don’t reveal any more than a single input, otherwise the revealed section should not be in a show-and-hide but rather in its own form in the next step of the process.
- Conditionally show questions only (i.e. another form input such as Email address)—do not show or hide anything that’s not a question.
Doing otherwise causes some users confusion making it difficult for them to complete the form.
See also the Conditionally revealing a related question section on the Radios component on the GDS Design System
W3C Design System
The W3C have just published a new Design System. It was developed by British Digital Agency Studio 24, who are also working (in the open) on the redesign of the W3C website.
My initial impression is that this Design System feels pretty early-stage and work-in-progress. I’m not completely sold on all of the technical details, however it definitely contains a number of emergent best practices and lots of interesting parts.
I particularly liked the very detailed Forms section which assembles lots of good advice from Adam Silver and GOV.UK, and I also found it interesting and useful that they include a Page Templates section rather than just components and layouts.
It’s cool to see an institution like the W3C have a Design System, and I’m looking forward to seeing how it evolves.
Accessibility Testing (on adactio.com)
In this journal entry, Jeremy Keith argues that when it comes to accessibility testing it’s not just about finding issues—it’s about finding the issues at the right time.
Here’s my summary:
- Accessibility Audits performed by experts and real Assistive Technology users are good!
- But try to get the most out of them by having them focus on the things that you can’t easily do yourself.
- We ourselves can handle things like colour contrast. It can be checked at the design stage before a line of code is written.
- Likewise HTML structure, such as ensuring accessible form labels, ensuring images have useful alt values, using landmarks like main and nav, heading structure etc. These are not tricky to find and fix ourselves and they have a big accessibility impact.
- As for custom interactive elements (tabs, carousels, navigation, dropdowns): these are specific to our site and complicated/error-prone by nature, so those are the things we should be aiming to have professional Accessibility Audits focus on in order to get best value for money.
Practical front-end performance tips
I’ve been really interested in the subject of Web Performance since I read Steve Souders’ book High Performance Websites back in 2007. Although some of the principles in that book are still relevant, it’s also fair to say that a lot has changed since then so I decided to pull together some current tips. Disclaimer: This is a living document which I’ll expand over time. Also: I’m a performance enthusiast but not an expert. If I have anything wrong, please let me know.
Inlining CSS and or JavaScript
The first thing to know is that both CSS and JavaScript are (by default) render-blocking, meaning that when a browser encounters a standard .css or .js file in the HTML, it waits until that file has finished downloading before rendering anything else.
The second thing to know is that there is a “magic file size” when it comes to HTTP requests: file data is transferred in chunks of about 14 kB, so if a file is larger than 14 kB it requires multiple round trips.
If you have a lean page and minimal CSS and/or JavaScript – to the extent that the page plus the minified, gzipped CSS/JS would weigh 14 kB or less – you can achieve better performance by inlining your CSS and/or JavaScript into the HTML. With only one request, the browser gets everything it needs to start rendering the page, so your page is gonna be fast.
If your page including CSS/JS is over 14 kB after minifying and gzipping then you’d be better off not inlining those assets. It’d be better for performance to link to external assets and let them be cached, rather than having a bloated HTML file that requires multiple round trips and doesn’t get the benefit of static asset caching.
Avoid CSS @import
A stylesheet pulled in via @import isn’t discovered until the stylesheet that imports it has downloaded and been parsed, chaining render-blocking requests one after another. Link to stylesheets directly (or inline them) instead.
JavaScript modules in the head
Native JavaScript modules are included on a page using the following:
<script type="module" src="main.js"></script>
Unlike standard <script> elements, module scripts are deferred (non render-blocking) by default. Rather than placing them before the closing </body> tag, I place them in the <head> so as to allow the script to be downloaded early and in parallel with the DOM being processed. That way, the JavaScript is already available as soon as the DOM is ready.
Background images
Sometimes developers implement an image as a CSS background image rather than a “content image”, either because they feel it’ll be easier to manipulate that way—a typical example being a responsive hero banner with overlaid text—or simply because it’s decorative rather than meaningful. However it’s worth being aware of how that impacts the way that image loads.
Outgoing requests for images defined in CSS rather than HTML won’t start until the browser has created the Render Tree. The browser must first download and parse the CSS then construct the CSSOM before it knows that “Element X” should be visible and has a background image specified, in order to then decide to download that image. For important images, that might feel too late.
As Harry Roberts explains it’s worth considering whether the need might be served as well or better by a content image, since by comparison that allows the browser to discover and request the image nice and early.
By moving the images to <img> elements… the browser can discover them far sooner—as they become exposed to the browser’s preload scanner—and dispatch their requests before (or in parallel to) CSSOM completion
However if it still makes sense to use a background image and performance is important, Harry recommends including an accompanying hidden image inline, or preloading it in the <head> via link rel=preload.
Preload
From MDN’s preload docs, preload allows:
specifying resources that your page will need very soon, which you want to start loading early in the page lifecycle, before browsers' main rendering machinery kicks in. This ensures they are available earlier and are less likely to block the page's render, improving performance.
The benefits are most clearly seen on large and late-discovered resources. For example:
- Resources that are pointed to from inside CSS, like fonts or images.
- Resources that JavaScript can request such as JSON and imported scripts.
- Larger images and videos.
I’ve recently used the following to assist performance of a large CSS background image:
<link rel="preload" href="bg-illustration.svg" as="image" media="(min-width: 60em)">
Self-host your assets
Using third-party hosting services for fonts or other assets no longer offers the previously-touted benefit of the asset potentially already being in the user’s browser cache. Cross domain caching has been disabled in all major browsers.
You can still take advantage of the benefits of CDNs for reducing network latency, but preferably as part of your own infrastructure.
Miscellaneous
Critical CSS is often a wasted effort due to CSS not being a bottleneck, so is generally not worth doing.
References
- Inlining literally everything on Go Make Things
- MDN’s guide to native JavaScript modules
- How and when browsers download images by Harry Roberts
Doppler: Type scale with dynamic line-height
line-height on the web is a tricky thing, but this tool offers a clever solution.
It’s relatively easy to set a sensible unit-less default ratio for body text (say 1.5), but that tends to need tweaking and testing for headings (where spacious line-height doesn’t quite work, but tight line-height is nice until the heading wraps, etc).
Even for body text it’s not a one-size-fits-all situation where a line-height like 1.5 is appropriate for all fonts.
Then you’ve got different devices to consider. For confined spaces, tighter line-height works better. But this can mean you might want one line-height for narrow viewports and another for wide.
Then, factor in vertical rhythm based on your modular type and spacing scales if you really want to blow your mind.
It can quickly get really complicated!
Doppler is an interesting idea and tool that I saw in CSS-Tricks’ newsletter this morning. It lets you apply line-height using calc() based on one em-relative value (for example 1em) and one rem-relative value (for example 0.25rem).
In effect you’ll get something like:
set line-height to the font-size of the current element plus a quarter of the user’s preferred font-size
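In CSS that idea looks something like this (the exact em and rem values would come from whatever Doppler suggests for your typeface):
CSS:
h2 {
  /* the element’s own font-size plus a quarter of the user’s preferred font-size */
  line-height: calc(1em + 0.25rem);
}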
The examples look pretty promising and seem to work well across different elements. I think I’ll give it a spin.
Accessible Color Generator
There are many colour contrast checking tools but I like this one from Erik Kennedy (of Learn UI Design) a lot. It features an intuitive UI using simple, human language that mirrors the task I’m there to achieve, and it’s great that if your target colour doesn’t have sufficient contrast to meet accessibility guidelines it will intelligently suggest alternatives that do.
I’m sure everyone has their favourite tools; I just find this one really quick to use!
SVG Gobbler
SVG Gobbler is a browser extension that finds the vector content on the page you’re viewing and gives you the option to download, optimize, copy, view the code, or export it as an image.
This is a pretty handy Chrome extension that grabs all the SVGs on a webpage and lets you see them all in a grid.
Progressively enhanced burger menu tutorial by Andy Bell
Here’s a smart and comprehensive tutorial from Andy Bell on how to create a progressively enhanced narrow-screen navigation solution using a custom element. Andy also uses Proxy for “enabled” and “open” state management, ResizeObserver on the custom element’s containing header for a Container Query-like solution, and puts some serious effort into accessible focus management.
One thing I found really interesting was that Andy was able to style child elements of the custom element (as opposed to just elements which were present in the original unenhanced markup) from his global CSS. My understanding is that you can’t get styles other than inheritable properties through the Shadow Boundary, so this had me scratching my head. I think the explanation is that Andy is not attaching the elements he creates in JavaScript to the Shadow DOM but rather rewriting and re-rendering the element’s innerHTML. This is an interesting approach and solution for getting around web component styling issues. I see elsewhere online that the innerHTML-based approach is frowned upon, however Andy doesn’t “throw out” the original markup but instead augments it.
Manage Design Tokens in Eleventy
One interesting aspect of the Duet Design System is that they use Eleventy to not only generate their reference website but also to generate their Design Tokens.
When I think about it, this makes sense. Eleventy is basically a sausage-machine; you put stuff in, tell it how you want it to transform that stuff, and you get something new out the other end. This isn’t just for markdown-to-HTML, but for a variety of formatA-to-formatB transformation needs… including, for example, using JSON to generate CSS.
Now this is definitely a more basic approach than using a design token tool like StyleDictionary. StyleDictionary handles lots of low-level stuff that would otherwise be tricky to implement. So I’m not suggesting that this is a better approach than using StyleDictionary. However it definitely feels pretty straightforward and low maintenance.
As Heydon Pickering explains, it also opens up the opportunity to make the Design Tokens CMS-editable in Netlify CMS without content editors needing to go near the code.
So you’d have a tokens.json file containing your design tokens, but it’d be within the same repo as your reference website. That’s probably not as good as having the tokens in a separate repo and making them available as a package, but of course a separate 11ty repo is an option too if you prefer.
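As a sketch of the idea (the file name and token structure are invented for illustration), an Eleventy JavaScript template can read the JSON and write out a CSS custom properties file at build time:
// tokens.css.11ty.js (hypothetical template; adjust paths and names to suit)
const tokens = require('./tokens.json'); // e.g. { "color-primary": "#0055aa", "space-m": "1rem" }

module.exports = class {
  data() {
    return { permalink: '/css/tokens.css' };
  }

  render() {
    // Turn each token into a CSS custom property on :root
    const props = Object.entries(tokens)
      .map(([name, value]) => `  --${name}: ${value};`)
      .join('\n');
    return `:root {\n${props}\n}\n`;
  }
};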
For a smaller site at least, the “manage design tokens with 11ty” approach is a nice option, and I think I might give it a try on my personal website.
Duet Design System
Here’s a lovely Design System that interestingly uses Eleventy for its reference website and other generated artefacts:
We use Eleventy for both the static documentation and the dynamically generated parts like component playgrounds and design tokens. We don’t currently use a JavaScript framework on the website, except Duet’s own components.
I find Duet interesting both from the Design System perspective (it contains lots of interesting component techniques and options) and in terms of how far 11ty can be pushed.
Favourite Eleventy (11ty) Resources
Here are my current go-to resources when building a new site using Eleventy (11ty).
Build an Eleventy site from scratch by Stephanie Eckles. As the name suggests, this is for starting from a blank canvas. It includes a really simple and effective way of setting up a Sass watch-and-build pipeline that runs alongside that of Eleventy, using only package.json scripts rather than a bundler.
Eleventy Base Blog from 11ty. If rather than a blank canvas you want a boilerplate that includes navigation, a blog, an RSS feed and prism CSS for code block styling (among other things) then this is a great option. Of course, you can also just cherry-pick the relevant code you need, as I often do.
Eleventy Navigation Plugin. This allows you to set a page or post as a navigation item. It handily supports ordering and hierarchical nesting (for subnavigation). You can then render out your navigation from a layout in a one-liner or in a custom manner.
Eleventy Cache Assets Plugin. This is really handy for caching fetched data so as not to exceed API limits or do undue work on every build.
11ty Netlify Jumpstart is another from Stephanie Eckles but this time a “quick-start boilerplate” rather than blank canvas. It includes a minimal Sass framework, generated sitemap, RSS feed and social share preview images. The About page it generates contains lots of useful info on its features.
forestry.io settings for 11ty Base Blog and forestry.io settings for Hylia (Andy Bell’s 11ty starter)
Add Netlify CMS to an 11ty-based website
More to follow…
Astro
Astro looks very interesting. It’s in part a static site builder (a bit like Eleventy) but it also comes with a modern (revolutionary?) developer experience which lets you author components as web components or in a JS framework of your choice, but then renders those to static HTML for optimal performance. Oh, and as far as I can tell there’s no build pipeline!
Astro lets you use any framework you want (or none at all). And if most sites only have islands of interactivity, shouldn’t our tools optimize for that?
People have been posting some great thoughts and insights on Astro already, for example:
- Chris Coyier’s review
- Review in CSS-Tricks Newsletter #255 including links to Chris’s Astro demo site
- The web is too damn complex, by Robin Rendle
- Astro’s introductory blog post
(via @css)
clipboard.js - Copy to clipboard without Flash
Here’s a handy JS package for “copy to clipboard” functionality that’s lightweight and installable from npm.
It also appears to have good legacy browser support plus a means for checking/confirming support too, which should assist if your approach is to only add a “copy” button to the DOM as a progressive enhancement.
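As a sketch of that progressive-enhancement approach (the data-copy-target attribute below is my own invention for the example, not part of the library):
import ClipboardJS from "clipboard";

if (ClipboardJS.isSupported()) {
  // Only add "Copy" buttons to the DOM when the feature will actually work
  document.querySelectorAll("[data-copy-target]").forEach((el) => {
    const button = document.createElement("button");
    button.type = "button";
    button.textContent = "Copy";
    button.setAttribute("data-clipboard-target", el.getAttribute("data-copy-target"));
    el.after(button);
  });

  new ClipboardJS("[data-clipboard-target]");
}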
(via @chriscoyier)
How to Favicon in 2021 (on CSS-Tricks)
Some excellent favicon tips from Chris Coyier, referencing Andrey Sitnik’s recent article of the same name.
I always appreciate someone looking into and re-evaluating the best practices of something that literally every website needs and has a complex set of requirements.
Chris is using:
<link rel="icon" href="/favicon.ico"><!-- 32x32 -->
<link rel="icon" href="/icon.svg" type="image/svg+xml">
<link rel="apple-touch-icon" href="/apple-touch-icon.png"><!-- 180x180 -->
<link rel="manifest" href="/manifest.webmanifest">
And in manifest.webmanifest
:
{
"icons": [
{ "src": "/192.png", "type": "image/png", "sizes": "192x192" },
{ "src": "/512.png", "type": "image/png", "sizes": "512x512" }
]
}
(via @mxbck)
Images on the Web: The Big Picture, Part 1
In modern web development there are myriad ways to present an image on a web page, and it can often feel pretty baffling. In this series I step through the options, moving from basic to flexible images; then from modern responsive images to the new CSS for fitting different sized images into a common shape. By the end I’ll arrive at a flexible, modern boilerplate for images.
Scope
This article is primarily about the HTML img
element (and related markup). I might mention CSS background images at some point, but by and large I’m focusing on images as content rather than decoration.
Similarly I probably won’t mention icons at all. I see them as a separate concern and recommend you use inline SVG rather than any image-based approach.
Terminology
Replaced element
The image element is a replaced element which means that the element is replaced by the resource (file) referenced in its src
attribute.
Aspect Ratio
You get an image’s aspect ratio by dividing its width by its height.
A 160px wide × 90px tall image can be represented as 16:9, or 1.777.
Aspect Ratio is an intrinsic characteristic of an image—i.e. it is “part of the image”—therefore outside of our control as developers. We can apply extrinsic settings which change the dimensions of the rendered image on our web page, however its aspect ratio was determined when the image was originally created and cropped.
Assumptions
CSS-wise, assume that we’ll start with nothing more complex than the following boilerplate:
html {
box-sizing: border-box;
}
*, *:before, *:after {
box-sizing: inherit;
}
img {
border-style: none;
display: block;
}
A basic image
Let’s start by going back to basics. I can include an image on a web page like so:
<img src="/img/250x377.jpg" alt="A Visit… by Jennifer Egan" />
Note that our markup contains no width
or height
attributes, just an alt
for accessibility. With no size-related attributes (and in the absence of any CSS acting on its width or height) the image simply displays at its intrinsic dimensions i.e. the dimensions at which the file was saved, in this case 250 × 377 pixels. The image is output as follows:

Now I know that narrow and static images like this feel pretty old-school. Since the Responsive Web Design movement we’re more accustomed to seeing full-column-width media and complex markup for intelligently selecting one file from many options.
However I still occasionally encounter use cases for displaying a relatively narrow image as-is.
Sticking with the book image example, given its aspect ratio you probably wouldn’t want it to be full-column-width on anything other than the narrowest screens simply because of how tall it could become at the expense of the reading experience. You might also be loading your images from a third party bookshop with which you have an affiliate scheme, and therefore have little control over file size and other factors influencing performance and responsive behaviour. As such you might do well to keep things simple and just load a sensibly-sized thumbnail.
See also this figure illustrating a simple database schema on the Ruby on Rails Tutorial website. On wide viewports, the author’s preference is to simply display the image at its natural, small size and centre it rather than blowing it up unnecessarily.
In summary, there remain times when you might need a narrow, fixed-size image so I want to keep that option open.
Include size attributes
When we know the dimensions of our image in advance, we can improve upon our previous markup by explicitly adding the width
and height
attributes.
<img src="/img/250x377.jpg" width="250" height="377" alt="…" />
Don’t expect fireworks; this renders the image exactly as before.

However, this addition allows the browser to reserve the appropriate space in the page for the image before it has loaded. If we don’t do this, we risk a situation where text immediately after our image renders higher up the page than it should while the image is loading, only to shift jarringly after the image loads.
We’re not overriding width or height with CSS (remember CSS rules would trump HTML attributes because attributes have no specificity) so the browser will render the image at whatever size attribute values we provide, regardless of the image’s intrinsic dimensions. As such, to avoid squashing and distortion we would ensure that our size attribute values matched the image’s real dimensions.
Flexible Images
At the dawn of the mobile web, many authors sought to handle mobile devices by creating a separate, dedicated mobile website. However this meant duplication of cost, code, content and maintenance. Responsive Web Design provided a counterpoint with a “create once, publish everywhere” philosophy which embraced the fluidity of the web and suggested we could serve desktop and mobile devices alike from a single codebase.
RWD proposed making layout and content adaptable. This included the idea of flexible images—that for any given image you need just a single, wide version plus some clever CSS to enable it to work not only on desktop but also to adapt its size for narrower contexts and still look good.
By default when a wide image is rendered within a narrower container it will overflow that container, breaking the layout. The key to avoiding this is to set a tolerance for how wide the image is permitted to go. Generally, we’ll tolerate the image being 100% as wide as its container but no wider. We can achieve this using max-width
.
<img src="/img/wide.png" alt="…" />
img {
max-width: 100%;
}
The eagle-eyed will have noticed that the above snippet once again excludes the HTML width
and height
attributes. That’s because when we began working responsively many of us stopped adding those size attributes, feeling that for flexible images the practice was redundant. The image’s dimensions were now a moving target, so the space needing to be reserved by the browser while the image loaded was variable rather than fixed. And we were right: for a long time, browsers were not capable of reserving space for a moving target, so including the size attributes served no real purpose.
Regardless, some content management systems (most notably WordPress) continued to output images with HTML width
and height
attributes as standard. This introduced a challenge. Without the attributes we could rely on the browser to take our simple max-width:100%
declaration and also implicitly apply height:auto
thereby always preserving the image‘s aspect ratio when scaling it down. To achieve the same goal when the HTML height
attribute is present, we needed the following revised CSS:
img {
max-width: 100%;
height: auto;
}
Here’s an example of a flexible image. It’s 2000 pixels wide, but shrinks to fit inside its narrower parent. Magic!
Jank-free responsive images
There’s been a recent development wherein modern browsers can now reserve appropriate space for flexible images while they are loading (rather than only for fixed images).
This means that adding the width
and height
attributes is once again a good idea.
If you know the image’s aspect ratio in advance, you can now use any combination of width
and height
attribute values which represent that ratio, and the browser will dynamically calculate and reserve the appropriate required space in the layout while the image loads, once again avoiding those jarring layout shifts I mentioned before.
However this presents a couple of challenges.
Firstly, having the height
HTML attribute once again means that for any image we want flexibly scaled and safely constrained by CSS max-width
, we’ll also need to override that explicit height attribute value with CSS.
Secondly, having the width
attribute can be problematic when the image is one which we explicitly want to be full-container-width, such as the featured image in a blog post. The problem arises when the width
attribute value is less than the containing element’s current width. If the only CSS you have on your image is max-width:100%
then the image will adopt the value from its width
attribute and consequently be narrower than its parent, ruining the effect. One approach might be to always use a sufficiently high width
value but that feels a tad brittle; I’d rather employ a solution that is more explicit and decisive.
To solve both of the above challenges, we can apply some additional CSS.
/*
Ensure correct aspect ratio is preserved when
max-width: 100% is triggered and image
has the HTML height attribute set,
while doing no harm otherwise.
*/
img[height] {
height: auto;
}
/*
Optional class to make an image 100% container-width.
Overrides the 'width' attribute, avoiding the risk of the image
being too narrow because its width value is narrower than the container.
When using this try to ensure your image’s intrinsic width is at least as
wide as its container’s maximum width because otherwise on wide
viewports the image would stretch and the results might not be great.
*/
.u-full-parent-width {
width: 100%;
}
Pros and cons of the “one large image” approach
I’d like to quickly take stock.
Let’s say we have a source image which is 1200px wide. Let’s also say that it’s the featured image for a blog post and therefore will be placed in our main content column, and that this column is never wider than 600px.
If we make the image flexible using max-width:100%
, it’ll work on wide viewports and narrow viewports (such as a mobile phone) alike.
On the plus-side, we only need to create one image for our blog post and we’re done.
Another positive is that on devices with retina screens—capable of displaying a comparatively greater density of pixel information in the same physical space—our oversized image will appear at higher quality and look great.
On the downside, we are delivering a much larger image and therefore file size than is required for, say, a 320px wide context. This has performance implications since bigger files mean longer download times, and this is exacerbated when the device is not connected to high-speed wifi (as is often the case with a mobile phone).
Another issue which is perhaps less obvious is that your content may not be read on your website with your CSS operating on it. For example if your website has an RSS feed (like mine does) then someone may well be reading your article in another environment (such as a feed reader application or website) and who is to say how that oversized image will look there?
Dealing with these challenges using modern Responsive Images will be the subject of Part #2.
References
Front-of-the-front-end and back-of-the-front-end web development (by Brad Frost)
The Great Divide between so-called front-end developers is real! Here, Brad Frost proposes some modern role definitions.
A front-of-the-front-end developer is a web developer who specializes in writing HTML, CSS, and presentational JavaScript code.
A back-of-the-front-end developer is a web developer who specializes in writing JavaScript code necessary to make a web application function properly.
Brad also offers:
A succinct way I’ve framed the split is that a front-of-the-front-end developer determines the look and feel of a button, while a back-of-the-front-end developer determines what happens when that button is clicked.
I’m not sure I completely agree with his definitions—I see a bit more nuance in it. Then again, maybe I’m biased by my own career experience. I’m sort-of a FOTFE developer, but one who has also always done both BOTFE and “actual” back-end work (building Laravel applications, or working in Ruby on Rails etc).
I like the fact that we are having this discussion, though. The expectations on developers are too great and employers and other tech people need to realise that.
Issues with Source Code Pro in Firefox appear to be fixed
Last time I tried Source Code Pro as my monospaced typeface for code examples in blog posts, it didn’t work out. When viewed in Firefox it would only render in black meaning that I couldn’t display it in white on black for blocks of code. This led to me conceding defeat and using something simpler.
It now looks like I can try Source Code Pro again because the issue has been resolved. This is great news!
So, I should grab the latest release and give it another go. Actually, for optimum subsetting and performance I reckon in this case I can just download the default files from Source Code Pro on Google Webfonts Helper and that’ll give me the lightweight woff2
file I need.
I’d also mentioned the other day that I was planning to give Source Serif another bash so if everything works out, with these two allied to my existing Source Sans Pro I could have a nice complementary set.
Design system components, recipes, and snowflakes (on bradfrost.com)
An excellent article from Brad Frost in which he gives us some vocabulary for separating context-agnostic components intended for maximal use from specific variants and one-offs.
In light of some recent conversations at work, this was in equal measure interesting, reassuring, and thought-provoking.
On the surface, a design system and process can seem generally intuitive but in reality every couple of weeks might throw up practical dilemmas for engineers. For example:
- this new thing should be a component in programming terms but is it a Design System component?
- is everyone aware that component has a different meaning in programming terms (think WebComponent, ViewComponent, React.Component) than in design system terms? Or do we need to talk about that?
- With this difference in meaning, do we maybe need to all be more careful with that word component and perhaps define its meaning in Design Systems terms a bit better, including its boundaries?
- should we enshrine a rule that even though something might be appropriate to be built as a component in programming terms under-the-hood, if it’s not a reusable thing then it doesn’t also need to be a Design System component?
- isn’t it better for components to be really simple because the less opinionated one is, the more reusable it is, therefore the more we can build things by composition?
When I read Brad’s article last night it kind of felt like it was speaking to many of those questions directly!
Some key points he makes:
- If in doubt: everything should be a component
- The key thing is that the only ones you should designate as “Design System Components” are the ones for maximal reuse which are content and context-agnostic.
- After that you have 1) Recipes—specific variants which are composed of existing stuff for a specific purpose rather than being context-agnostic; and 2) Snowflakes (the one-offs).
Then there was this part that actually felt like it could be talking directly to my team given the work we have been doing on the technical implementation details of our Card
recently:
This structure embraces the notion of composition. In our design systems, our Card components are incredibly basic. They are basically boxes that have slots for a CardHeader, CardBody, and CardFooter.
We’ve been paring things back in exactly the same way and it was nice to get this reassurance we are on the right track.
(via @jamesmockett)
A First Look at aspect-ratio (on CSS-Tricks)
Chris Coyier takes the new CSS aspect-ratio
property for a spin and tests how it works in different scenarios.
Note that he’s applying it here to elements which do not have an intrinsic aspect-ratio. So, think a container element (div
or whatever is appropriate) rather than an img
. This is in line with Jen Simmons’s recent replies to me when I asked her whether or not we should apply aspect-ratio
to an img
after she announced support for aspect-ratio
in Safari Technical Preview 118.
A couple of interesting points I took from Chris’s article:
- this simple new means of declaring aspect-ratio should soon hopefully supersede all the previous DIY techniques;
- if you apply a CSS aspect-ratio to an element which has no explicit width set, we still get the effect because the element’s auto (rendered) width is used; by combining that with the CSS aspect-ratio the browser can calculate the required height, then apply that height;
- if the content would break out of the target aspect-ratio box, then the element will expand to accommodate the content (which is nice). If you ever need to override this you can by applying min-height: 0;
- if the element has either a height or a width set, the other of the two is calculated from the aspect ratio;
- if the element has both a height and width set, aspect-ratio is ignored.
Regarding browser support: at the time of writing aspect-ratio
is supported in Chrome and Edge (but not IE), is coming in Firefox and Safari, but as yet there’s no word regarding mobile. So I’d want to use it as a progressive enhancement rather than for something mission critical.
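If you did want to lean on it from JavaScript as part of an enhancement, a simple feature check keeps things safe (the class name and data attribute below are hypothetical):
// Apply a fallback class only where native aspect-ratio isn't supported
if (!CSS.supports("aspect-ratio", "16 / 9")) {
  document.querySelectorAll("[data-ratio-box]").forEach((el) => {
    el.classList.add("u-ratio-fallback"); // e.g. an old-school padding-top hack
  });
}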
Vanilla JS List
Here’s Chris Ferdinandi’s curated list of organisations which use vanilla JS to build websites and web apps.
You don’t need a heavyweight JavaScript framework, and vanilla JS does scale.
At the time of writing the list includes Marks & Spencer, Selfridges, Basecamp and GitHub.
(via @ChrisFerdinandi)
Use CSS Clamp to create a more flexible wrapper utility (on Piccalilli)
Here’s Andy Bell recommending using CSS clamp()
to control your wrapper/container width
because it supports setting a preferred value in vw
to ensure sensible gutters combined with a maximum tolerance in rem
—all in a single line of code.
If we use clamp() to use a viewport unit as the ideal and use what we would previously use as the max-width as the clamp’s maximum value, we get a much more flexible setup.
The code looks like this:
.container {
width: clamp(16rem, 90vw, 70rem);
margin-left: auto;
margin-right: auto;
}
This is pretty cool because I know from experience that coding responsive solutions for wrappers can be tricky and you can end up with a complex arrangement of max-width and media queries whilst still—as Andy highlights—not providing optimal readability for medium-sized viewports.
Using CSS Grid with minmax() is one possible approach to controlling wrappers however this article offers another (potentially better) tool for your kit.
It’s worth noting that Andy could probably have just used width: min(90vw, 70rem)
here (as Christopher suggested) because setting the lower bound provided by clamp()
is only necessary if your element is likely to shrink unexpectedly and a regular block-level element wouldn’t do that. The clamp
approach might be handy for flex items, though.
(via @piccalilli_)
Accessible interactions (on Adactio)
Jeremy Keith takes us through his thought process regarding the choice of link or button
when planning accessible interactive disclosure elements.
A button
is generally a solid choice as it’s built for general interactivity and carries the expectation that when activated, something somewhere happens. However in some cases a link might be appropriate, for example when the trigger and target content are relatively far apart in the DOM and we feel the need to move the user to the target / give it focus.
For a typical disclosure pattern where some content is shown/hidden by an adjacent trigger, a button
suits perfectly. The DOM elements are right next to each other and flow into each other so there’s no need to move or focus anything.
However in the case of a log-in link in a navigation menu which—when enhanced by JavaScript—opens a log-in form inside a modal dialogue, a link might be better. In this case you might use an anchor with a fragment identifier (<a href="#login-modal">Log in</a>
) pointing to a login-form far away at the bottom of the page. This simple baseline will work if JavaScript is unavailable or fails, however when JavaScript is available we can intercept the link’s default behaviour and enhance things. Furthermore because the expectation with links is that you’ll go somewhere and modal dialogues are kinda like faux pages, the link feels appropriate.
While not explicit in the article, another thing I take from this is that by structuring your no-JavaScript experience well, this will help you make appropriate decisions when considering the with-JavaScript experience. There’s a kind of virtuous circle there.
Meta Tags - Preview, Edit and Generate
A handy tool which lets you type in a URL then inspects that page’s meta tags and shows you how it will be presented on popular websites.
This is really useful for testing how an article will look as a Google search result or when shared on Facebook, Slack and Twitter based on different meta tag values.
Comparing Browsers for Responsive Design (on CSS-Tricks)
Chris Coyier checks out Sizzy, Polypane et al and decides which suits him best.
There are a number of these desktop apps where the goal is showing your site at different dimensions all at the same time. So you can, for example, be writing CSS and making sure it’s working across all the viewports in a single glance.
I noticed Andy Bell recommending Sizzy so I’m interested to give it a go. Polypane got Chris’s vote, but is a little more expensive at ~£8 per month versus ~£5, so I should do a little shoot-out of my own.
Progressively enhanced JavaScript In Real Life
Over the last couple of days I’ve witnessed a good example of progressive enhancement “In Real Life”. And I think it’s good to log and share these validations of web development best practices when they happen so that their benefits can be seen as real rather than theoretical.
A few days ago I noticed that the search function on my website wasn’t working optimally. As usual, I’d click the navigation link “Search” then some JavaScript would reveal a search input and set keyboard focus to it, prompting me to enter a search term. Normally, the JavaScript would then “look ahead” as I type characters, searching the website for matching content and presenting (directly underneath) a list of search result links to choose from.
The problem was that although the search input was appearing, the search result suggestions were no longer appearing as I typed.
Fortunately, back when I built the feature I had just read Phil Hawksworth’s Adding Search to a Jamstack site which begins by creating a non-JavaScript baseline using a standard form
which submits to Google Search (scoped to your website), passing as search query the search term you just typed. This is how I built mine, too.
So, just yesterday at work I was reviewing a PR which prompted me to search for a specific article on my website by using the term “aria-label”. And although the enhanced search wasn’t working, the baseline search functionality was there to deliver me to a Google search result page (site:https://fuzzylogic.me/ aria-label
) with the exact article I needed appearing top of the search results. Not a Rolls-Royce experience, but perfectly serviceable!
Why had the enhanced search solution failed? It was because the .json
file which is the data source for the lookahead search had at some point allowed in a weird character and become malformed. And although the site’s JS was otherwise fine, this malformed data file was preventing the enhanced search from working.
JavaScript is brittle and fails for many reasons and in many ways, making it different from the rest of the stack. Added to that there’s the “unavailable until loaded” aspect, or as Jake Archibald put it:
all your users are non-JS while they’re downloading your JS.
The best practices that we as web developers have built up for years are not just theoretical. Go watch a screen reader user browse the web if you want proof that providing descriptive link text rather than “click here”, or employing headings and good document structure, or describing images properly with alt
attributes are worthwhile endeavours. Those users depend on those good practices.
Likewise, JavaScript will fail to be available on occasion, so building a baseline no-JS solution will ensure that when it does, the show still goes on.
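For illustration, here’s a simplified sketch of that kind of enhancement layered over a baseline form (the selectors, the /search.json index and its shape are illustrative rather than my actual implementation):
const form = document.querySelector("[data-search-form]");
const input = form && form.querySelector("input[name='q']");

async function enhanceSearch() {
  const results = document.createElement("ul");
  form.append(results);

  // If this fetch (or the JSON parse) fails we bail out, and the
  // baseline form still submits to Google site search as before
  const response = await fetch("/search.json");
  const index = await response.json(); // e.g. [{ title, url, text }, …]

  input.addEventListener("input", () => {
    const term = input.value.trim().toLowerCase();
    const matches = term
      ? index.filter((item) => item.text.toLowerCase().includes(term)).slice(0, 5)
      : [];
    results.innerHTML = matches
      .map((item) => `<li><a href="${item.url}">${item.title}</a></li>`)
      .join("");
  });
}

if (form && input) {
  enhanceSearch().catch(() => {
    // Do nothing: the no-JS baseline remains in place
  });
}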
VisualSitemaps: Autogenerate Beautiful Sitemaps and Screenshots
A great tool for automatically generating a visual sitemap (visual because it attaches a screenshot to each node) for any given website.
Simply enter a URL and get a thumbnail-based visual architecture of the entire site.
You can even have it crawl a password-protected website.
Choosing between online services
A recent issue of the dConstruct newsletter about choosing more ethical online services really chimed with me at a time when I’ve been reflecting on my online habits.
Clearleft produce an excellent regular technology-based newsletter – dConstruct – to which I heartily recommend subscribing.
A recent issue compared online services in the gig economy – such as Uber, Deliveroo and AirBnB – plus music services Spotify and Bandcamp, and considered the relative ethics of each with respect to the extent they exploit the sellers in their “marketplace”. For example, which services let the seller set the price? AirBnB do, and so do Bandcamp. But not so Uber and Spotify.
The success of services like Bandcamp – which is far more profitable for lesser-known producers than the likes of Spotify – shows that we don’t need to follow the crowd and can make better choices about the online services we use.
I’ve used Bandcamp more than usual in 2020 because I like the way they are actively supporting artists during a difficult period. I also like the convention that when you buy a vinyl release, the digital is also bundled free.
I’m currently typing this post in a Firefox tab and have been making an effort to switch (back) to it from Chrome, for a less invasive browsing experience.
I use DuckDuckGo rather than Google search when I remember, and have recently made it the default “address bar search” tool in Firefox which should help break old habits.
As for Facebook, Twitter and other time-draining, sometimes harmful social media platforms, well, I’m weaning myself off those and recently wrote about how I’m using Feedbin to aggregate news and updates.
I don’t know about you, but I find it helpful to do a periodic health check on how I’m using the internet, and see where I can make better choices.
How-to: Create accessible forms - The A11Y Project
Here are five bite-sized and practical chunks of advice for creating accessible forms.
- Always label your inputs.
- Highlight input elements on focus.
- Break long forms into smaller sections/pages.
- Provide error messages (rather than just colour-based indicators)
- Avoid horizontal layout forms unless necessary.
I already apply some of these principles, but even within those I found some interesting takeaways. For example, the article advises that when labelling your inputs it’s better not to nest the input within a <label>
because some assistive technologies (such as Dragon NaturallySpeaking) don’t support it.
I particularly like the idea of using CSS to make the input which has focus more obvious than it would be by relying solely on the text cursor (or caret).
input:focus {
outline: 2px solid royalblue;
box-shadow: 1px 1px 8px 1px royalblue;
}
(via @adactio)
Cheating Entropy with Native Web Technologies (on Jim Nielsen’s Weblog)
This is why, over years of building for the web, I have learned that I can significantly cut down on the entropy my future self will have to face by authoring web projects in vanilla HTML, CSS, and JS. I like to ask myself questions like:
- Could this be done with native ES modules instead of using a bundler?
- Could I do this with DOM scripting instead of using a JS framework?
- Could I author this in CSS instead of choosing a preprocessor?
Fantastic post from Jim Nielsen about how your future self will thank you if you keep your technology stack simple now.
(via @adactio)
Setting an accessibility standard for a UK-based commercial website
When advocating accessible web practices for a commercial website, the question of “what does the law require us to do?” invariably arises.
The appropriate answer to that question should really be that it doesn’t matter. Regardless of the law there is a moral imperative to do the right thing unless you are OK with excluding people, making their web experiences unnecessarily painful, and generally flouting the web’s founding principles.
However as Web Usability’s article What is the law on accessibility? helpfully advises, in the UK the legal situation is as follows:
“The accessibility of a UK web site is covered by the Equality Act 2010” (which states that) “Site owners are required to make ‘reasonable adjustments’ to make their sites accessible to people with disabilities”. While “there is no legal precedent about what would constitute a ‘reasonable adjustment’”, “given that the Government has adopted the WCAG 2.1 level AA as a suitable standard for public sector sites and it is more broadly recognised as a ‘good’ approach, any site which met these guidelines would have a very strong defence against any legal action.”
So, WCAG 2.1 Level AA is the sensible accessibility standard for your commercial UK-based website to aim for.
While not aimed specifically at the UK market, deque.com’s article What to look for in an accessibility audit offers similar advice:
The most common and widely-accepted standard to test against is WCAG, a.k.a. Web Content Accessibility Guidelines. This standard created by the World Wide Web Consortium (W3C) defines technical guidelines for creating accessible web-based content.
WCAG Success Criteria are broken down into different “levels of conformance”: A (basic conformance), AA (intermediate conformance), and AAA (advanced conformance). The current standard for compliance is both WCAG 2.1 Level A and AA.
If you don’t have specific accessibility regulations that apply to your organization but want to avoid legal risk, WCAG 2.1 A and AA compliance is a reasonable standard to adopt.
Additional references
itty.bitty
Here’s an interesting tool for creating and sharing small-ish web pages without having to build a website or organise hosting.
itty.bitty takes html (or other data), compresses it into a URL fragment, and provides a link that can be shared. When it is opened, it inflates that data on the receiver’s side.
While I find this idea interesting, I’m not yet 100% sure how or when I’ll use it! I’m sure it’ll come in handy at some point, though.
Here’s my first “itty bitty” page, just for fun.
(via @chriscoyier)
When there is no content between headings
Hidde de Vries explains why an HTML heading should never be immediately followed by another.
When you use a heading element, you set the expectation of content.
I have always prided myself on using appropriate, semantic HTML, however it’s recently become clear to me that there’s one thing I occasionally do wrongly. Sometimes I follow a page’s title (usually an h1
element) with a subtitle which I mark up as an h2
. I considered this the right element for the job and my choice had nothing to do with aesthetics.
However a recent article on subtitles by Chris Ferdinandi and now this article by Hidde have made me reconsider.
HTML headings are essentially “names for content sections”. On screen readers they operate like a Table of Contents – one can use them to navigate to content.
Therefore I now reckon I should only use a hx
heading when it will be immediately followed by (non-heading) content – paragraphs and so on – otherwise I should choose a different element.
I should probably mark up my subtitles as paragraphs.
Jack McDade’s personal website
I’m Jack McDade and I’m tired of boring websites.
So many fun touches in the design for Jack’s personal website! It gave me plenty of chuckles while browsing over the weekend.
My favourite bits were the “Email Deposit Box” on the Radical Design Course section (make sure to have sound turned on) and the entire Design Work page (keep scrolling)!
Bustle
Here’s a beautiful, magazine style website design for digital publication Bustle. The typography, use of whitespace, responsive layout, menu pattern, colour palette and imagery are all on point!
SVG Backgrounds – Create Customizable, Hi-Def, and Scalable Backgrounds.
SVGs enable full-screen hi-res visuals with a file-size near 5KB and are well-supported by all modern browsers. What's not to love?
A neat tool for selecting, customising and applying SVG backgrounds.
(via @css)
Accessibility (on adactio.com)
Here’s Jeremy Keith, making the moral case for accessible websites and why we shouldn’t use “you can make more money by not turning people away” as an argument:
I understand how it’s useful to have the stats and numbers to hand should you need to convince a sociopath in your organisation, but when numbers are used as the justification, you’re playing the numbers game from then on. You’ll probably have to field questions like ”Well, how many screen reader users are visiting our site anyway?” (To which the correct answer is “I don’t know and I don’t care” – even if the number is 1, the website should still be accessible because it’s the right thing to do.)
(via @adactio)
Font Match
A font pairing app that helps you match fonts – useful for pairing a webfont with a suitable fallback. You can place the fonts on top of each other, side by side, or in the same line. You can adjust your fallback font’s size and position to get a great match.
Font style matcher
If you’re using a web font, you're bound to see a flash of unstyled text (or FOUT), between the initial render of your websafe font and the webfont that you’ve chosen. This usually results in a jarring shift in layout, due to sizing discrepancies between the two fonts. To minimize this discrepancy, you can try to match the fallback font and the intended webfont’s x-heights and widths. This tool helps you do exactly that.
Cassie Evans’s Blog
I love Cassie Evans’s new website design! It’s so full of personality while loaded with technical goodies too. Amazing work!
(via @stugoo)
How to use npm as a build tool
Keith Cirkel explains how using npm to run the scripts
field of package.json
is a great, simple alternative to more complex build tools. The article is now quite old but because it contains so many goodies, and since I’ve been using the approach more and more (for example to easily compile CSS on my personal website), it’s definitely worth bookmarking and sharing.
npm’s scripts directive can do everything that these build tools can, more succinctly, more elegantly, with less package dependencies and less maintenance overhead.
It’s also worth mentioning that (as far as I can tell so far) Yarn also provides the same facility.
Related references:
JavaScript Arrow Functions
JavaScript arrow functions are one of those bits of syntax about which I occasionally have a brain freeze. Here’s a quick refresher for those moments.
Differences between arrow functions and traditional functions
Arrow functions are shorter than traditional function syntax.
They don’t bind their own this
value. Instead, the this
value of the scope in which the function was defined is accessible. That makes them poor candidates for methods since this
won’t be a reference to the object the method is defined on. However it makes them good candidates for everything else, including use within methods, where—unlike standard functions—they can refer to (for example) this.name
just like their parent method because the arrow function has no overriding this
binding of its own.
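A quick illustrative example:
const person = {
  name: "Laurence",
  hobbies: ["cycling", "guitar"],

  listHobbies() {
    // The arrow function keeps the method's `this`, so this.name is "Laurence".
    // A traditional callback function here would have its own `this` binding.
    return this.hobbies.map((hobby) => `${this.name} likes ${hobby}`);
  },
};

person.listHobbies();
// => ["Laurence likes cycling", "Laurence likes guitar"]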
TL;DR: typical usage
const doStuff = (foo) => {
// stuff that spans multiple lines
};
// short functions
const add = (num1, num2) => num1 + num2;
Explainer
// Traditional Function
function (a) {
return a + 100;
}
// Arrow Function Breakdown
// 1. Remove "function", place => between argument and opening curly
(a) => {
return a + 100;
}
// 2. Remove braces and word "return". The return is implied.
(a) => a + 100;
// 3. Remove the argument parentheses
a => a + 100;
References
How to optimise performance when using Google-hosted fonts (on CSS Wizardry)
A combination of asynchronously loading CSS, asynchronously loading font files, opting into FOFT, fast-fetching asynchronous CSS files, and warming up external domains makes for an experience several seconds faster than the baseline.
Harry Roberts suggests that, while self-hosting your web fonts is likely to be the overall best solution to performance and availability problems, we’re able to design some fairly resilient measures to help mitigate a lot of these issues when using Google Fonts.
Harry then kindly provides a code snippet that we can use in the <head>
of our document to apply these measures.
I have to reluctantly agree on this one. I’ve interviewed quite a few candidates for “front-end developer” (or similarly named) positions over recent years and the recurring pattern is that they are strong on JavaScript (though not necessarily on when to use it) and weak on HTML, CSS and the “bigger picture”.
🧵 It's time for our industry to realize the title "frontend developer" is obsolete. The vast majority of these profiles are actually "JS engineers", and they're usually quite good at it, but they're not as good at all the other things contributing to great frontend experiences.
— Benjamin De Cock (@bdc) April 13, 2020
How to get started with web development (Go Make Things)
Here’s Chris Ferdinandi with a list of resources to help those who are new to web development get started. I’m keeping this one handy so I can share it with any friends who’re thinking of getting into this game.
Testing Stimulus Controllers
Stimulus JS is great but doesn’t provide any documentation for testing controllers, so here are some notes of my own that I’ve picked up.
Required 3rd-party libraries
Basic Test
// hello_controller.test.js
import { Application as StimulusApp } from "stimulus";
import HelloController from "path/to/js/hello_controller";
describe("HelloController", () => {
beforeEach(() => {
// Insert the HTML and register the controller
document.body.innerHTML = `
<div data-controller="hello">
<input data-target="hello.name" type="text">
<button data-action="click->hello#greet">
Greet
</button>
<span data-target="hello.output">
</span>
</div>
`;
StimulusApp.start().register('hello', HelloController);
})
it("inserts a greeting using the name given", () => {
const helloOutput = document.querySelector("[data-target='hello.output']");
const nameInput = document.querySelector("[data-target='hello.name']");
const greetButton = document.querySelector("button");
// Change the input value and click the greet button
nameInput.value = "Laurence";
greetButton.click();
// Check we have the correct greeting
expect(helloOutput).toHaveTextContent("Hello, Laurence!");
})
})
My Ruby and Rails Cheatsheet
I’m no Ruby engineer however even as a front-end developer I’m sometimes called upon to work on Rails applications that require me to know my way around. Here are my notes and reminders.
This is not intended to be an authoritative guide but merely my notes from various lessons. It’s also a work-in-progress and a living, changing document.
Table of contents
- The Rails Console
- Rspec
- Debugging
- Helpers
- blank? vs empty?
- frozen_string_literal
- Class-level methods
- Constants
- Symbols
- Hashes
- ViewComponent
- Instance Variables
- Methods
- Empty-checking arrays
- The Shovel Operator
- Require
- Blocks
- Rendering HTML
- Generating random IDs or strings
- Views
- Policies
- Start local Rails server
- Miscellaneous
- Web fonts location
- Working with the Database
- Routing
- References
The Rails Console
The console
command lets you interact with your Rails application from the command line.
# launch a console (short version)
rails c
# long version
bundle exec rails console
Quickly find where a method is located:
Myobj.method(:methodname).source_location
# Returns a file and line which you can command-click
=> ["/local/path/to/mymodelname/model.rb", 99]
See an object’s methods:
Myobj.methods
# Search for a method using a search string
# this returns all of the object’s methods whose names contain “pay”
Myobj.methods.grep(/pay/)
Rspec
Run it like so:
bin/rspec spec/path/to/foo_spec.rb
# Run a particular line/method
bin/rspec spec/path/to/foo_spec.rb:195
If adding data variables to use in tests, declare them in a let block so as to keep them isolated and avoid them leaking elsewhere.
let(:example_data_obj) {
{
foo: "bar",
baz: "bat",
…
}
}
Note: if you need multiple data variables so as to handle different scenarios, it’s generally more readable to define the data being tested right next to the test.
Debugging
I’ll cover debugging related to more specific file types later but here’s a simple tip. You can check the value of a variable or expression at a given line in a method by:
- add
byebug
on a line of its own at the relevant place in your file, then save file - switch to the browser and reload your page
- in the terminal tab that’s running the Rails server (which should now be stopped at the debugging breakpoint), at the bottom type the variable name of interest. You won’t see any text but just trust that your typing is taking effect. Press return
- you’ll now see the value of that variable as it is at the debugging breakpoint
- When you’re done, remove your
byebug
. You may need to type continue (orc
for short) followed by return at the command prompt to get the server back on track.
Helpers
Helper methods are there to support your views. They’re for extracting small code routines or pieces of logic that don’t belong in a controller but are too complex or reusable to be coded directly into your view. They’re reusable across views because they become available to all your views automatically.
Don’t copy and reuse method names from other helpers. You’ll get conflicts because Helpers are leaky. Instead, start your helper methods with an appropriate namespace.
Unlike object methods (e.g. myobj.do_something
) helper methods (e.g. render_something
) are not available for us to use in the Rails console.
Helper specs
Basic format:
# frozen_string_literal: true
require "rails_helper"
RSpec.describe Foos::BarHelper do
let(:foo) { FactoryBot.create(:foo) }
describe "#foo_bars_sortable_link" do
context "when bat is not true" do
it "does a particular thing" do
expect(helper.foo_bars_sortable_link(foo, bat: "false")).to have_link(
# …
)
end
end
context "when bat is true" do
it "does something else" do
expect(helper.foo_bars_sortable_link(foo, bat: "true")).to have_link(
# …a different link from previous test
)
end
end
end
end
Notes:
- start with
describe
: it’s a good top-level. - describe a helper method using hash (
describe "#project_link" do
) - Helper methods should not directly access controller instance variables because it makes them brittle, less reusable and less maintainable. If you find you’re doing that you might see it as an opportunity to refactor your helper method.
Debugging Helper methods
If you want to debug a helper method by running it and stepping through it at the command line you should lean on a test to get into the method’s context.
# in foo_helper.rb, insert above line of interest
binding.pry # or byebug
# at command line, run helper’s spec (at relevant line/assertion)
bin/rspec spec/path/to/foo_helper_spec.rb:195
# the “debugger” drops you in at the line where you added your breakpoint
# and shows the body of the function being run by the line of the spec we requested.
From: /path/to/app/helpers/foo_helper.rb:26 FooHelper#render_foo:
# you’re now debugging in the context of the running helper method…
# with the arguments passed in by the test available to manipulate.
# this means you can run constituent parts of the method at the debugger prompt…
# for example…
# run this to get back the HTML being rendered.
render_user_profile(user)
blank? versus empty?
If you want to test whether something is “empty” you might use empty?
if you’re testing a string, however it’s not appropriate for testing object properties (such as person.nickname
) because objects can be nil
and the nil
object has no empty?
method. (Run nil.empty?
at the console for proof.) Instead use blank?
e.g. person.nickname.blank?
.
frozen_string_literal: true
I’ll often see this at the top of files, for example Ruby classes. It’s just a good practice: freezing string literals means Ruby doesn’t allocate a new String object every time a given literal is evaluated, which improves performance.
# frozen_string_literal: true
Class-level methods
They’re called class-level methods because they are invoked at the class level (within the class body) rather than on an instance of the class. They are also known as macros.
Examples include attr_reader
and ViewComponent’s renders_one
.
Constants
Here’s an example where we define a new constant and assign an array to it.
ALLOWED_SIZES = [nil, :medium, :large]
Interestingly while the constant cannot be redefined later—i.e. it could not later be set to something other than an array—elements can still be added or removed. We don’t want that here. The following would be better because it locks things down which is likely what we want.
ALLOWED_SIZES = [nil, :medium, :large].freeze
Symbols
They’re not variables. They’re more like strings than variables however Strings are used to work with data whereas Symbols are identifiers.
You should use symbols as names or labels for things (for example methods). They are often used to represent method & instance variable names:
# here, :title is a symbol representing the @title instance variable
attr_reader :title
# refer to the render_foo method using a symbol
Myobj.method(:render_foo).source_location
# you can also use symbols as hash keys
hash = {a: 1, b: 2, c: 3}
From what I can gather, colons identify something as a Symbol: the colon is at the beginning when it’s a method name or instance variable, but at the end when it’s a hash key.
Hashes
A Hash is a dictionary-like collection of unique keys and their values. They’re also called associative arrays. They’re similar to Arrays, but where an Array uses integers as its index, a Hash allows you to use any object type.
Example:
hash = { "one" => 1, "two" => 2 }
The fetch method for Hash
Use the fetch method as a neat one-liner to get the value of a Hash key or return something (such as false) if it doesn’t exist in the hash.
@options.fetch(:flush, false)
ViewComponents
ViewComponents (specifically the my_component.rb
file) are just controllers which do not access the database.
They use constructors like the following:
def initialize(size: nil, full_height: false, data: nil)
super
@size = allowed_value?(ALLOWED_CARD_SIZES, size)
@full_height = full_height
@data = data
end
(Note that you would never include a constructor in a Rails controller or model.)
ViewComponents in the Rails console
view = ActionView::Base.new
view.render(CardComponent.new)
Instance variables
def initialize(foo: nil)
super
@foo = foo
end
In the above example @foo
is an instance variable
. These are available to an instance of the controller and private to the component. (This includes ViewComponents, which are also controllers.)
In a view, you can refer to it using @foo
.
In a subsequent method within the controller, refer to it simply as foo
. There’s no preceding colon (it’s not a symbol; in a conditional a symbol would always evaluate to true
) and no preceding @
.
def classes
classes = ["myThing"]
classes << "myThing-foo" if foo
classes
end
Making instance variables publicly available
The following code makes some instance variables of a ViewComponent publicly available.
attr_reader :size, :full_height, :data
Using attr_reader
like this automatically generates a “getter” for a given instance variable so that you can refer to that instead of the instance variable inside your class methods. My understanding is that doing so is better than accessing the instance variable directly because, among other benefits, it provides better error messages. More about using attr_reader.
The ViewComponent docs also use attr_reader.
Methods
Every method returns a value. You don’t need to explicitly use return
, because without it the method simply returns the last thing it evaluated.
def hello
"hello world”
end
Define private methods
Add private
above the instance methods which are only called from within the class in which they are defined and not from outside. This makes it clear for other developers that they are internal and don’t affect the external interface. This lets them know, for example, that these method names could be changed without breaking things elsewhere.
Also: keep your public interface small.
Naming conventions
The convention I have worked with is that any method that returns a boolean
should end with a question mark. This saves having to add prefixes like “is-” to method names. If a method does not return a boolean, its name should not end with a question mark.
Parameters
The standard configuration of method parameters (no colon and no default value) sets them as required arguments that must be passed in order when you call the method. For example:
def write(file, data, mode)
…
end
write("cats.txt", "cats are cool!", "w")
By setting a parameter to have a default value, it becomes an optional argument when calling the method.
def write(file, data, mode = "w")
…
end
write("shopping_list.txt", "bacon")
Named Parameters
Configuring your method with named parameters makes the method call read a little more clearly (via the inclusion of the keywords in the call) and increases flexibility because the order of arguments is not important. After every parameter, add a colon. Parameters are mandatory unless configured with a default value.
Here’s an example.
def write(file:, data:, mode: "ascii")
…
end
write(data: 123, file: "test.txt")
And here’s how you might do things for a Card
ViewComponent.
def initialize(size: nil, full_height: false, data: nil)
…
end
<%= render(CardComponent.new(size: :small, full_height: true)) do %>
Check if thing is an array and is non-empty
You can streamline this check to a single line:
thing.is_a?(Array) && thing.present?
The shovel operator
The shovel operator (<<
) lets you add elements to an array. Here’s an example where we build up an HTML class
attribute for a BEM-like structure:
def classes
classes = []
classes << "card--#{size}" if size
classes << "card--tall" if full_height
classes.join(" ")
end
Double splat operator
My understanding is that when you pass **foo
as a parameter to a method call then it represents the hash that will be returned from a method def foo
elsewhere. The contents of that hash might be different under different circumstances which is why you’d use the double-splat rather than just specifying literal attributes and values. If there are multiple items in the hash, it’ll spread them out as multiple key-value pairs (e.g. as multiple HTML attribute name and attribute value pairs). This is handy when you don’t know which attributes you need to include at the time of rendering a component and want the logic for determining that to reside in the component internals. Here’s an example, based on a ViewComponent for outputting accessible SVG icons:
In the icon_component.html.erb
template:
<%= tag.svg(
class: svg_class,
fill: "currentColor",
**aria_role
) do %>
…
<% end %>
In IconComponent.rb
:
def aria_role
title ? { role: "img" } : { aria: { hidden: true } }
end
The **aria_role
argument resolves to the hash
output by the aria_role
method, resulting in valid arguments for calling Rails’s tag.svg
.
require
require
allows you to bring other resources into your current context.
Blocks
The do…end
structure in Ruby is called a “block”, and more specifically a multi-line block.
<%= render CardComponent.new do |c| %>
Card stuff in here.
<% end %>
Blocks are essentially anonymous functions.
When writing methods where we want a block passed in, we can capture that block as an explicit parameter. For example:
def do_something(param, &block)
Here, the ampersand (&) captures the passed block and exposes it inside the method as a named parameter, block, which can then be invoked with block.call.
yield
When you have a method with a yield statement, it is usually running the block that has been passed to it.
You can also pass an argument to yield e.g. yield(foo)
and that makes foo
available to be passed into the block.
See the yield keyword for more information.
Single-line block
Sometimes we don’t need to use a multiline block. We can instead employ a single-line block. This uses curly braces rather than do…end
.
For example in a spec we might use:
render_inline(CardComponent.new) { "Content" }
expect(rendered_component).to have_css(".fe-CardV2", text: "Content")
The above two lines really just construct a “string” of the component and let you test for the presence of things in it.
Rendering HTML
We have the content_tag
helper method for rendering HTML elements. However you are arguably just as well coding the actual HTML rather than bothering with it, especially for the likes of div
and span
elements.
link_to
is a little more useful and makes more sense to use.
Multi-line HTML string
Return a multi-line HTML string like so:
output = "<p>As discussed on the phone, the additional work would involve:</p>
<ol>
<li>Item 1</li>
<li>Item 2</li>
<li>Item 3</li>
</ol>
<p>This should get your historic accounts into a good shape.</p>".html_safe
output
Interpolation
Here’s an example where we use interpolation to return a string that has a text label alongside an inline SVG icon, both coming from variables.
"#{link[:text]} #{icon_svg}".html_safe
tag.send()
send()
is not just for use on tag
. It’s a means of calling a method dynamically i.e. using a variable. I’ve used it so as to have a single line create either a th
or a td
dynamically, depending on context.
Only use it when you are in control of the arguments. Never use it with user input or something coming from a database.
Random IDs or strings
object_id
gives you the internal ruby object id for what you’re working on. I used this in the past to append a unique id to an HTML id
attribute value so as to automate an accessibility feature. However don’t use it unintentionally like I did there.
It’s better to use something like rand
, or SecureRandom
or SecureRandom.hex
.
Views
If you have logic you need to use in a view, this would tend to live in a helper method rather than in the controller.
Policies
You might create a method such as allowed_to?
for purposes of authorisation.
Start (local) Rails server
Note: the following is shorthand for bin/rails server -b 0.0.0.0
.
rails s
Miscellaneous
Use Ruby to create a local web server.
# to serve your site at localhost:5000 run this in the project’s document root
ruby -run -e httpd . -p 5000
Web fonts: where to put them in the Rails file structure
See https://gist.github.com/anotheruiguy/7379570.
The Database
Reset/wipe the database.
bundle exec rake db:reset
Routing
Get routes for model from terminal
Let’s say you’re working on the index page for pet_foods
and want to create sort-by-column anchors where each link’s href
points to the current page with some querystring parameters added. You’re first going to need the route for the current page, in the correct format.
To find the existing routes for pet_foods you can run:
rails routes | grep pet_foods
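From there, a sort link might look something like this (the sort and direction params are hypothetical, not a Rails convention):
link_to "Name", pet_foods_path(sort: "name", direction: "asc")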
References
The new dot com bubble is here: it’s called online advertising (The Correspondent)
Is online advertising working? We simply don’t know
This article reveals that despite $273bn being spent on digital ads globally (figures from 2018) the effectiveness of digital advertising is actually borderline impossible to measure. It is highly likely that any favourable figures suggested by marketers owe more to a combination of the “selection effect” and blind faith.
I don’t particularly enjoy being followed around the web by “targeted ads” – especially having discovered that the intrusion is relatively pointless!
(via @clearleft)
Colors – a nicer color palette for the web
Skinning your prototypes just got easier - colors.css is a collection of skin classes to use while prototyping in the browser.
They also provide ninety examples of “A11Y compliant color combos” which is really handy.
Progressively Enhanced JavaScript with Stimulus
I’m dipping my toes into Stimulus, the JavaScript micro-framework from Basecamp. Here are my initial thoughts.
I immediately like the ethos of Stimulus.
The creators’ take is that in many cases, using one of the popular contemporary JavaScript frameworks is overkill.
We don’t always need a nuclear solution that:
- takes over our whole front end;
- renders entire, otherwise empty pages from JSON data;
- manages state in JavaScript objects or Redux; or
- requires a proprietary templating language.
Instead, Stimulus suggests a more “modest” solution – using an existing server-rendered HTML document as its basis (either from the initial HTTP response or from an AJAX call), and then progressively enhancing.
It advocates readable markup – being able to read a fragment of HTML which includes sprinkles of Stimulus and easily understand what’s going on.
And interestingly, Stimulus proposes storing state in the HTML/DOM.
How it works
Stimulus’ technical purpose is to automatically connect DOM elements to JavaScript objects which are implemented via ES6 classes. The connection is made by data- attributes (rather than id or class attributes).
data-controller values connect and disconnect Stimulus controllers.
The key elements are:
- Controllers
- Actions (essentially event handlers) which trigger controller methods
- Targets (elements which we want to read or write to, mapped to controller properties)
Some nice touches
I like the way you can use the connect() method (a lifecycle callback invoked whenever a given controller is connected to the DOM) as a place to test browser support for a given feature before applying a JS-based enhancement.
Stimulus also readily supports the ability to have multiple instances of a controller on the page.
Furthermore, actions and targets can be added to any type of element without the controller JavaScript needing to know or care about the specific element, promoting loose coupling between HTML and JavaScript.
Managing State in Stimulus
Initial state can be read in from our DOM element via a data- attribute, e.g. data-slideshow-index.
Then in our controller object we have access to a this.data API with has(), get(), and set() methods. We can use those methods to set new values back into our DOM attribute, so that state lives entirely in the DOM without the need for a JavaScript state object.
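To make this concrete, here’s a rough sketch using the slideshow index example (the controller, action and target names are illustrative, and this uses the Stimulus 1 syntax that was current at the time):
<div data-controller="slideshow" data-slideshow-index="0">
  <span data-target="slideshow.counter">0</span>
  <button data-action="click->slideshow#next">Next</button>
</div>
// slideshow_controller.js
import { Controller } from "stimulus"
export default class extends Controller {
  static targets = [ "counter" ]
  connect() {
    // a good place to feature-test before enhancing
    this.counterTarget.textContent = this.data.get("index")
  }
  next() {
    const index = Number(this.data.get("index")) + 1
    this.data.set("index", index) // state is written back to the data-slideshow-index attribute
    this.counterTarget.textContent = index
  }
}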
Possible Limitations
Stimulus feels a little restrictive if dealing with less simple elements – say, for example, a data table with lots of rows and columns, each differing in multiple ways.
And if, like in our data table example, that element has lots of child elements, it feels like there might be more of a performance hit to update each one individually rather than replace the contents with new innerHTML in one fell swoop.
Summing Up
I love Stimulus’s modest and progressive enhancement friendly approach. I can see myself adopting it as a means of writing modern, modular JavaScript which fits well in a webpack context, in situations where the interactive elements are relatively simple and not composed of complex, multidimensional data.
Jank-free Responsive Images
Here’s how to improve performance and prevent layout jank when browsers load responsive images.
Since the advent of the Responsive Web Design era many of us, in our rush to make images flexible and adaptive, stopped applying the HTML width and height attributes to our images. Instead we’ve let CSS handle the image, setting a width or max-width of 100% so that our images can grow and shrink but not extend beyond the width of their parent container.
However there was a side-effect in that browsers load text first and images later, and if an image’s dimensions are not specified in the HTML then the browser can’t assign appropriate space to it before it loads. Then, when the image finally loads, this bumps the layout – affecting surrounding elements in a nasty, janky way.
CSS-Tricks has written about this several times, however I’d never found a solid conclusion.
Chrome’s Performance Warning
The other day I was testing this here website in Chrome and noticed that if you don’t provide images with inline width and height attributes, Chrome will show a console warning that this is negatively affecting performance.
Based on that, I made the following updates:
- I added width and height HTML attributes to all images; and
- I changed my CSS from img { max-width: 100%; } to img { width: 100%; height: auto; }.
NB the reason behind #2 was that I found this CSS works better on an image with inline dimensions than max-width does.
Which dimensions should we use?
Since an image’s actual rendered dimensions will depend on the viewport size and we can’t anticipate that viewport size, I plumped for a width of 320 (a narrow mobile width) × a height of 240, which fits with this site’s standard image aspect ratio of 4:3.
I wasn’t sure if this was a good approach. Perhaps I should have picked values which represented the dimensions of the image on desktop.
Jen Simmons to the rescue
Jen Simmons of Mozilla has just posted a video which not only confirmed that my above approach was sound, but also provided lots of other useful context.
Essentially, we should start re-applying HTML width and height attributes to our images, because in soon-to-drop Firefox and Chrome updates the browser will use these dimensions to calculate the image’s aspect ratio and thereby be able to allocate the exact required space.
The actual dimensions we provide don’t matter too much so long as they represent the correct aspect ratio.
Also, if we use the modern srcset and sizes syntax to offer the browser different image options (like I do on this site), so long as the different images are the same aspect ratio then this solution will continue to work well.
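Putting that together, the markup looks something like this (file names and sizes values are illustrative):
<img
  src="/images/example-320.jpg"
  srcset="/images/example-320.jpg 320w, /images/example-640.jpg 640w, /images/example-1280.jpg 1280w"
  sizes="(min-width: 40em) 50vw, 100vw"
  width="320"
  height="240"
  alt="A short description of the image">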
There’s no solution at present for the Art Direction use case – where we want to provide different aspect ratios dependent on viewport size – but hopefully that will come along next.
I just tested this new feature in Firefox Nightly 72, using the Inspector’s Network tab to set “throttling” to 2G to simulate a slow-loading connection, and it worked really well!
Lazy Loading
One thing I’m keen to test is that my newly-added inline width and height attributes play well with loading="lazy". I don’t see why they shouldn’t and in fact they should in theory all support each other well. In tests so far everything seems good, however since loading="lazy" is currently only implemented in Chrome I should re-test images in Chrome once it adds support for the new image aspect ratio calculating feature, around the end of 2019.
Beyond Automatic Testing (matuzo.at)
Six accessibility tests Viennese front-end developer Manuel Matuzović runs on every website he develops, beyond simply running a Lighthouse audit.
Includes “Test in Grayscale Mode” and “Test with no mouse to check tabbing and focus styles”.
U.S. Supreme Court Favors Digital Accessibility in Domino’s Case
Digital products which are a public accommodation must be accessible; otherwise their owners can be sued (and will probably lose).
This US Supreme Court decision on October 7, 2019 represents a pretty favourable win for digital accessibility against a big fish that was trying to shirk its responsibilities.
Replicating Jekyll’s markdownify filter in Nunjucks with Eleventy
Here, Ed provides some handy code to convert a Markdown-formatted string into HTML in Nunjucks via an Eleventy shortcode.
This performs the same role as the markdownify filter in Jekyll.
I’m now using it on this site in listings, using the shortcode to convert blog entry excerpts written in markdown (which might contain code or italics, etc) into the target HTML.
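Not Ed’s exact code, but the general shape is something like this, assuming the markdown-it package and added to .eleventy.js:
const markdownIt = require("markdown-it");
const md = new markdownIt();
// shortcode which renders a markdown string to HTML
eleventyConfig.addShortcode("markdownify", (content) => md.render(content));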
Semantic Commit Messages
A fairly rigid commit format (chore, fix, feat etc) which should lead to your git log being an easy-to-skim changelog.
I’d noticed that my git commit messages could benefit from greater consistency. So I’ve started adopting Sparkbox’s approach.
Here’s how to use the different commit types:
chore: add Oyster build script
docs: explain hat wobble
feat: add beta sequence
fix: remove broken confirmation message
refactor: share logic between 4d3d3d3 and flarhgunnstow
style: convert tabs to spaces
test: ensure Tayne retains clothing
Relearn CSS layout: Every Layout
Every now and then something comes along in the world of web design that represents a substantial shift. The launch of Every Layout, a new project from Heydon Pickering and Andy Bell, feels like one such moment.
In simple terms, we get a bunch of responsive layout utilities: a Box, a Stack, a Sidebar layout and so on. However Every Layout offers so much more—in fact for me it has provided whole new ways of thinking and talking about modern web development. In that sense I’m starting to regard it in terms of classic, game-changing books like Responsive Web Design and Mobile First.
Every Layout’s components, or primitives, are self-governing and free from media queries. This is a great step forward because media queries tie layout changes to the viewport, and that’s suboptimal when our goal is to create modular components for Design Systems which should adapt to variously-sized containers. Every Layout describe their components as existing in a quantum state: simultaneously offering both narrow and wide configurations. Importantly, the way their layouts adapt is also linked to the dynamic available space in the container and the intrinsic width of its contents, which leads to more fluid, organic responsiveness.
Every Layout’s approach feels perfect for the new era of CSS layout where we have CSS Grid and Flexbox at our disposal. They use these tools to suggest rather than dictate, letting the browser make appropriate choices based on its native algorithms.
Native lazy-loading for the web
Now that we have the HTML attribute loading, we can set loading="lazy" on our website’s media, and the loading of non-critical, below-the-fold media will be deferred until the user scrolls to them.
This can really improve performance so I’ve implemented it on images and iframes (youtube video embeds etc) throughout this site.
This is currently only supported in Chrome, but that still makes it well worth doing.
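For example (the file and embed URLs are placeholders):
<img src="/images/photo.jpg" loading="lazy" width="320" height="240" alt="A description of the photo">
<iframe src="https://www.youtube.com/embed/VIDEO_ID" loading="lazy" title="An embedded video"></iframe>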
Flexible tag-like functionality for custom keys in Eleventy
I have an open-source, Eleventy-based project where the posts are restaurants, each of which is located in a particular city, and contributors to the repo can add a new restaurant as a simple markdown file.
I just had to solve a conundrum wherein I wanted a custom front matter key, city, to have similar features to tags, namely:
- it takes arbitrary values (e.g. Glasgow, or London, or Cañon City, or anything a contributor might choose);
- there is a corresponding cityList collection;
- there is a page which lists all cities in the cityList collection as links; and
- there’s a page for each city which lists all restaurants in that city (much like a “Tagged Glasgow” page).
You could be forgiven for asking: why didn’t I just implement the cities as tags? I could have tagged posts with “glasgow”, or “edinburgh” for example. Well, here was my rationale:
- for cities, I need the proper, correctly spelled, spaced and punctuated name so I can display it as a page title. A lowercased, squashed together “tag” version wouldn’t cut it;
- I list “all tags” elsewhere and wouldn’t want the cities amongst them. Although Eleventy allows you to filter given tag values out, in this case that would be a pain to achieve because the city values are not known up front;
- Lastly it just felt right for ease of future data manipulation that city should be a separate entity.
This task was a bit of a head-scratcher and sent me down a few blind alleys at first. Rightly or wrongly, it took me a while to realise that Eleventy only creates “all posts for this value” collections automatically for tags. Any other collections you need, you have to DIY. Once I worked that out, I arrived at a strategy of:
implement all the requisite functionality on tags first, then emulate that functionality for my custom key.
First port of call was the Eleventy Tag Pages tutorial. That showed me how to use the existing “collection for each tag value” to create a page for each tag value – the “All posts tagged with X” convention. (Here’s an example on this site.)
I then referenced the eleventy-base-blog repo which takes things further by creating a page which lists “all tags”. To achieve this you first need to create a custom tagList collection, then you create a page which accesses that new collection using collections.tagList, iterates over it and displays each tag as a link. Each tag link points to its corresponding “All posts tagged with X” page we created in the step above.
So now I had everything working for tags. Next step: emulate that for cities.
Here’s what I ended up doing:
Create new file _11ty/getCityList.js
module.exports = function(collection) {
let citySet = new Set();
collection.getAll().forEach(function(item) {
if( "city" in item.data ) {
let city = item.data.city;
citySet.add(city);
}
});
return [...citySet];
};
Then add the following to .eleventy.js
// Create a collection of cities
eleventyConfig.addCollection("cityList", require("./_11ty/getCityList"));
// Create "restaurants in city" collections keyed by city name
eleventyConfig.addCollection("cityCollections", function(collection) {
let resultArrays = {};
collection.getAll().forEach(function(item) {
if(item.data["title"] && item.data["city"]) {
if( !resultArrays[item.data["city"]] ) {
resultArrays[item.data["city"]] = [];
}
resultArrays[item.data["city"]].push(item);
}
});
return resultArrays;
});
Next, create new file cities-list.njk:
---
permalink: /cities/
layout: layouts/home.njk
---
<h1>All Cities</h1>
<ul>
{%- for city in collections.cityList -%}
{% set cityUrl -%}/cities/{{ city | slug }}/{% endset %}
<li><a href="{{ cityUrl | url }}">{{ city }}</a></li>
{%- endfor -%}
</ul>
Finally, create new file posts-in-city.njk:
---
renderData:
title: Restaurants in “{{ city }}”
pagination:
data: collections.cityList
size: 1
alias: city
permalink: /cities/{{ city | slug }}/
---
<h1>Restaurants in {{ city }}</h1>
{% set postslist = collections.cityCollections[ city ] %}
{% include 'postslist.njk' %}
And that’s a wrap! Eleventy will do the rest when it next runs, creating for each city a page which lists all restaurants in that city.
Footnote: I should acknowledge this 11ty Github issue in which Ed Horsford was trying to do something similar (create a separate tags network) leading to Zach Leatherman pitching in with how he created noteTags for his website’s Notes section. That led me to Zach’s website’s repo on Github, specifically .eleventy.js and tag-pages.njk, without which I wouldn’t have found my way.
Get Waves
I’ve been admiring the wave effect at the foot of banners on Netlify’s website and had noted that they were achieved using SVG. So this tool which helps you “make waves” is pretty timely!
Resources for special typographic characters
A collection of resources for finding that curly quote or em dash character quickly.
Saying bye-bye to autoprefixer
For a while now I’ve been using gulp-autoprefixer as part of my front-end build system. However, I’ve just removed it from my boilerplate. Here’s why.
The npm module gulp-autoprefixer takes your standard CSS then automatically parses the rules and generates any necessary vendor-prefixed versions, such as ::-webkit-input-placeholder to patch support for ::placeholder in older WebKit browsers.
I’ve often felt it excessive—like using a hammer to crack a nut. And I’ve wondered if it might be doing more harm than good, by leading me to believe I have a magical sticking plaster for non-supporting browsers when actually (especially in the case of IE) the specific way in which a browser lacks support might be more nuanced. Furthermore I’ve never liked the noise generated by all those extra rules in my CSS output, especially when using the inspector to debug what would otherwise be just a few lines of CSS.
But I always felt it was a necessary evil.
However, I’ve just removed gulp-autoprefixer from my boilerplate. Why? Because:
- Browsers are no longer shipping any new CSS with prefixes, and as of 2019, they haven’t been for years;
- With the browsers that do require prefixed CSS now old and in the minority, it feels like progressive enhancement rather than “kitchen sink” autoprefixing should take care of them. (Those browsers might not get the enhanced experience but what they’ll get will be fine.)
Jen Simmons’ tweet on this topic was the push I needed.
So I’ve removed one layer of complexity from my set-up, and so far nothing has exploded. Let’s see how it goes.
Real Favicon Generator
Knowing how best to serve, size and format favicons and other icons for the many different device types and operating systems can be a minefield. My current best practice approach is to create a 260px × 260px (or larger) source icon then upload it to Real Favicon Generator.
This is the tool recommended by CSS-Tricks and it takes care of most of the pain by not only generating all the formats and sizes you need but also providing some code to put in your <head> and manifest.webmanifest file.
Intrinsically Responsive CSS Grid with minmax and min
Evan Minto notes that flexible grids created with CSS Grid’s repeat, auto-fill, and minmax are only intrinsically responsive (responsive to their container rather than the viewport) up to a point, because when the container width is narrower than the minimum width specified in minmax, the grid children overflow.
Applying media queries to the grid is not a satisfactory solution because they relate to the viewport (hence why Every Layout often prefer Flexbox to CSS Grid, because it allows them to achieve intrinsic responsiveness).
However we’ll soon be able to suggest grid item width as a percentage of the parent container, avoiding the overflow problem. The new “CSS Math functions” to help us achieve this are min(), max() and clamp(). At the time of writing these are only supported in Safari, however Chrome support is in the pipeline.
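The pattern looks something like this (the 10rem minimum is just an example value):
.grid {
  display: grid;
  grid-gap: 1rem;
  grid-template-columns: repeat(auto-fill, minmax(min(10rem, 100%), 1fr));
}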
GOV.UK Design System
Use this design system to make your service consistent with GOV.UK. Learn from the research and experience of other service teams and avoid repeating work that’s already been done.
Intro to CSS 3D transforms
Excellent tutorials by David DeSandro. I’ve already used the card flip and it worked really well.
Flickity – touch, responsive, flickable carousels
This slider/carousel certainly looks nice, and I like author David DeSandro’s work, having taken inspiration from his 3d Card Flip tutorial for a recent project. I’d just like to dig into it a little further to see how it fares in terms of accessibility before using it in the wild.
Katherine Kato’s personal website
Some simple but inspiring work here from Seattle-based web developer Katherine Kato. I really like the use of space, the typography, the colour palette and the use of CSS grid for layout.
CSS pointer-events to the rescue
Sometimes, for reasons unknown, we find that clicking or tapping an element just isn’t working. Here’s a CSS-based approach that might help.
I’ve recently encountered the scenario – usually in reasonably complex user interfaces – where I have an anchor (or occasionally, a button) on which clicks or taps just aren’t working, i.e. they don’t trigger the event I was expecting.
On further investigation I found that this is often due to having an absolutely positioned element which is to some extent overlaying (or otherwise interfering with) our target clickable element. Alternatively, it may be because we needed a child/nested element inside our anchor or button and it is this element that the browser perceives as being the clicked or tapped element.
I’ve found that setting .my-elem { pointer-events: none; } on the obscuring element resolves the problem and gets you back on track.
Polypane: The browser for responsive web development and design
Polypane is a browser built specifically for developing responsive websites. It can present typical device resolutions side-by-side (for example iPhone SE next to iPhone 7 next to iPad) but also has some nice features such as automatically creating views based on your stylesheet’s media query breakpoints.
It’s a subscription service and at the moment I’m happy using a combination of Firefox Nightly and Chrome so I think I’ll wait this one out for the time being. But I’ll be keeping my eye on it!
Using aria-current is a win-win situation
The HTML attribute aria-current allows us to indicate the currently active element in a sequence. It’s not only great for accessibility but also doubles as a hook to style that element individually.
By using [aria-current] as your CSS selector (rather than a .current class) you also neatly bind and sync the way you cater to the visual experience and the screen reader experience, reducing the chance of the latter being forgotten about.
As Léonie Watson explains, according to WAI-ARIA 1.1 there are a number of useful values that the aria-current attribute can take:
- page to indicate the current page within a navigation menu or pagination section;
- step for the current step in a step-based process;
- date for the current date;
- time for the current time.
I’ve been using the aria-current="page" technique on a couple of navigation menus recently and it’s working well.
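For example (the markup and styling here are illustrative):
<nav>
  <a href="/" aria-current="page">Home</a>
  <a href="/posts/">Posts</a>
  <a href="/about/">About</a>
</nav>
[aria-current="page"] {
  text-decoration: underline;
}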
Also: my thanks go to Ethan Marcotte, David Kennedy and Lindsey. Ethan recently suggested that the industry should try harder regarding accessibility and recommended subscribing to David Kennedy’s a11y Weekly newsletter. I duly subscribed (it’s great!) and one of the issues linked to Lindsey’s article An Introduction to ARIA states, in which I learned about aria-current.
Solar Design System by Bulb
It’s a collection of shared patterns and practices that allow our team to build quality user interfaces consistently and quickly.
Certbot Troubleshooting
When taking the DIY approach to building a new server, Certbot is a great option for installing secure certificates. However, sometimes you can run into problems. Here, I review the main recurring issues I’ve encountered and how I fixed them.
When creating new servers for my projects I use Certbot as a means of installing free Let’s Encrypt secure certificates.
It’s great to be able to get these certificates for free and the whole process is generally very straightforward. However, since working with Let’s Encrypt certificates over the last few years I’ve found that the same recurring questions tend to plague me.
This is a note to “future me” (and anyone else it might help) with answers to the questions I’ve pondered in the past.
How do I safely upgrade from the old LE system to Certbot?
For servers where you previously used the 2015/2016, pre-Certbot Let’s Encrypt system for installing SSL certs, you can just install Certbot on top and it will just work. It will supersede the old certificates without conflict.
How do I upgrade Certbot now that Let’s Encrypt have removed support for domain validation with TLS-SNI-01?
Essentially the server needs Certbot v0.28 or above. See Let’s Encrypt’s post on how to check your Certbot version and steps to take after upgrading to check everything is OK. To apply the upgrade I performed apt-get update && apt-get upgrade -y as root, although depending on when you last did this it might be a bit risky as it could update a lot of packages rather than just the Certbot ones. It might be better to just try sudo apt-get install certbot python-certbot-apache.
To what extent should I configure my 443 VirtualHost block myself or is it done for me?
When creating a new vhost on your Linode, DigitalOcean (or other cloud hosting platform) server, you need only add the <VirtualHost *:80> directive. There’s no need to add a <VirtualHost *:443> section, nor worry about pointing to LE certificate files, nor bother writing rules to redirect http to https like I used to. When you install your secure certificate, certbot will automatically add the redirect into your original file and create an additional vhost file (with extension -le-ssl.conf) based on the contents of your existing file but handling <VirtualHost *:443> and referencing all the LE SSL certificate files it installed elsewhere on the system.
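So a minimal vhost file only needs something like this (the domain and paths are placeholders), after which Certbot fills in the rest:
<VirtualHost *:80>
    ServerName example.com
    ServerAlias www.example.com
    DocumentRoot /var/www/example.com/public_html
</VirtualHost>
Then run, for example: sudo certbot --apache -d example.com -d www.example.com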
How should I manage automated renewals?
There’s no longer any need to manually add a cron job for certificate renewal. Auto-renewal of certificates is now handled by a cron job which comes bundled with the certbot package you initially install – in my case usually a certbot ppa package for Ubuntu 16.04 or 18.04. However, you won’t find that cron job in the crontab for either your limited user or the root user. Instead, it is installed at a lower level (/etc/cron.d) and should just work, unless you’ve done something fancy with systemd in your system, which in my case is unlikely.
How can I tell if renewals are working and what should I do if they’re not?
If you notice that the SSL certificate for your domain is within 30 days of expiry and hasn’t yet auto-renewed, then you know that something has gone wrong with the auto-renewal process. You can test for problems by running sudo certbot renew --dry-run. You may find that there is, for example, a syntax error in your apache2.conf or nginx config file which needs corrected – not that I’ve ever been guilty of this, you understand…
W3C HTML Element Sampler
In all my years of spinning up “HTML Typographic Elements” lists or pages as a reference for designers, I didn’t realise that the W3C provide the very thing I needed in their HTML Element Sampler. These pages provide comprehensive dummy content covering all the main typographic elements which is really handy when designing a website’s typographic styles and pattern library.
Atomic Design by Brad Frost
The original call-to-arms and manual for Design Systems.
Brad Frost explains why Design Systems are necessary, provides a methodology for Pattern Library creation, discusses Design System benefits and offers strategies for maintenance.
Image Color
A handy tool for identifying colours – provided in numerous different CSS-ready formats – and creating a complementary colour palette from an image you upload or provide as a URL.
Small Victories
No CMS, no installation, no server, no coding required.
Another quick and clever way of creating a website; this time by collecting a bunch of files (HTML, video, images, bookmarks) into a folder, connecting Dropbox and Small Victories to that, choosing a theme and Hey Presto, you have a website.
I could see this as maybe being useful for some sort of transient campaign idea that doesn’t need a CMS and that you want others to be able to collaborate on.
Note: to get a custom domain and host CSS and JS files, you need to sign up to a paid plan, but at $4/month or $36/year it’s pretty cheap.
Carrd - simple, free, fully responsive one-page sites for pretty much anything
These days when friends tell me they want a personal website, it’s often just a single-page profile that they’re really after rather than something pricier and more complicated.
In the past there were services like http://flavors.me/ but they seem to have fallen by the wayside. This looks like a decent option to point friends toward if they’re not looking for a blog or want to take baby steps toward that. Incidentally, I came across Carrd through Chris Ferdinandi’s Vanilla JS List, which features organisations which favour Vanilla JavaScript over JS frameworks.
Namecheap
I’ve heard a couple of people mention that when they buy domain names, they use Namecheap because they are cheap and trustworthy. I tend not to have any particular favourites but I’ll maybe give Namecheap a go next time.
A Dao of Web Design (on A List Apart)
John Allsopp’s classic article in which he looks at the medium of web design through the prism of the Tao Te Ching, and encourages us to embrace the web’s inherent flexibility and fluidity.
It’s time to throw out the rituals of the printed page, and to engage the medium of the web and its own nature.
It’s choc-full of quotable lines, but here are a few of my favourites:
We must “accept the ebb and flow of things.”
Everything I’ve said so far could be summarized as: make pages which are adaptable.
…and…
The web’s greatest strength, I believe, is often seen as a limitation, as a defect. It is the nature of the web to be flexible, and it should be our role as designers and developers to embrace this flexibility, and produce pages which, by being flexible, are accessible to all. The journey begins by letting go of control, and becoming flexible.
Grid by Example
Great resource from CSS Grid expert Rachel Andrew, with the Patterns and Examples sections which provide quick-start grid layouts being particularly handy.
Meet the New Dialog Element
Introducing dialog: a new, easier, standards-based means of rendering a popup or modal dialogue.
The new element can be styled via CSS and comes with JavaScript methods to show and close a dialog. We can also listen for and react to its close and cancel events.
Although currently only supported in Chrome, the Google Chrome dev team have provided a polyfill which patches support in all modern browsers and back to IE9.
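A minimal sketch (the element ids are made up):
<dialog id="confirm-dialog">
  <p>Are you sure?</p>
  <button id="close-dialog">Close</button>
</dialog>
const dialog = document.querySelector('#confirm-dialog');
document.querySelector('#close-dialog').addEventListener('click', () => dialog.close());
dialog.showModal(); // or dialog.show() for a non-modal dialog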
The best way to Install Node.js and NPM on a Mac
In modern front-end development, we tend to use a number of JavaScript-based build tools (such as task runners like Gulp) which have been created using Node.js and which we install using NPM. Here’s the best way I’ve found for installing and maintaining Node and NPM on a Mac.
To install and use NPM packages, we first need to install Node.js and NPM on our computer (in my case a Mac).
I’ve found that although the Node.js website includes an installer, using Homebrew is a better way to install Node and NPM on a Mac. Choosing the Homebrew route means you don’t have to install using sudo (or non-sudo but with complicated workarounds), which is great because it presents less risk of things going wrong later down the line. It also means you don’t need to mess around with your system $PATH.
Most importantly, it makes removing or updating Node really easy.
Installation
The whole process (after you have XCode and Homebrew installed) should only take you a few minutes.
Just open your Terminal app and type brew install node.
Updating Node and NPM
First, check whether or not Homebrew has the latest version of Node. In your Terminal type brew update.
Then, to upgrade Node, type brew upgrade node.
Uninstalling Node and NPM
Uninstalling is as easy as running brew uninstall node.
Credits
This post was based on information from an excellent article on Treehouse.