
Tagged “development”

Lightning Fast Web Performance course by Scott Jehl

I purchased Scott’s course back in 2021 and immediately liked it, but hadn’t found the space to complete it until now. I’m glad I finally have, as it’s well structured and full of insights and practical tips. I’ll use this post to summarise my main takeaways.

Having completed the course I have much more rounded knowledge in the following areas:

  • Why performance matters to users and business
  • Performance-related metrics and “moments”: defining fast and slow
  • Identifying performance problems: the tools and how to use them
  • Making things faster, via various good practices and fixes

I’ll update this post soon to add some key bullet-points for each of the above headings.

It's 2023, here is why your web design sucks (by Heather Buchel)

Heather explores why we no longer have “web designers”.

It's been belittled and othered away. It's why we've split that web design role into two; now you're either a UX designer and you can sit at that table over there or you're a front-end developer and you can sit at the table with the people that build websites.

Heather makes lots of good points in this post. But the part that resonates most with me is the observation that we have split design and engineering in a way that is dangerous for building proper websites.

We all lost when the web design role was split in two.

if our design partners are now at a different table, how do we expect them to acquire the deeply technical knowledge they need to know? The people we task with designing websites, I've found, often have huge gaps in their understanding of… the core concepts of web design.

Heather argues that designers don’t need to learn to code but that “design requires a deep understanding of a subject”. Strong agree!

Likewise, the front-end role has arrived and evolved such that i) developers are separate from designers; and ii) within development there’s a further split, where the majority lack “front of the front-end” skills. This has meant that:

We now live in a world where our designers aren't allowed to… acquire the technical design knowledge they need to actually do their job and our engineers never learn about the technical design knowledge that they need to build the thing correctly.

Heather’s post does a great job of articulating the problem. They understandably don’t have all the answers, but suggest two contributing factors: gaps in education, and how companies hire. So changes in those areas could be impactful.

Blog development decisions

Here are some recurring development decisions I make when maintaining my personal website/blog, with some accompanying rationale.

Where should landmark-related HTML elements be in the source?

I set one header, one main and one footer element as direct children of the body element.

<body>
  <header></header>
  <main></main>
  <footer></footer>
</body>

This isn’t arbitrary. A header at this level will be treated as a banner landmark. A footer is regarded as the page’s contentinfo landmark. Whereas when they are nested more deeply such as within a “wrapper” div they are not automatically given landmark status. You’d have to bolt-on ARIA attributes. My understanding is that it’s better to use elements with implicit semantics than to bolt on semantics manually.
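For comparison, the bolt-on alternative would look something like this (illustrative only; you’d be restating semantics the native elements give you for free):

```html
<!-- Generic containers need explicit ARIA roles to become landmarks -->
<body>
  <div role="banner"></div>
  <div role="main"></div>
  <div role="contentinfo"></div>
</body>
```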

How should I centre the main content in a way that’s responsive and supports full-width backgrounds?

My hard-learned approach is to use composition rather than try to do everything with “god” layouts.

I mentally break the page up from top to bottom into slices that correspond to logical groups of content and/or parts that need a dedicated full-width background. I give each slice padding on all sides. The lateral padding handily gives you the gutters you need on narrow screens. (You could use a Box layout for these sections. I tend not to consider them to be “true boxes” because usually their lateral padding differs from their vertical padding. So I just apply their styles on a case by case basis.)

Within each section, nest a dedicated Center layout to handle your fluid width-constraining wrappers.

This approach offers the best of all worlds. It doesn’t constrain your markup, which I find useful for achieving appropriate semantics and accessibility. You don’t need to put a “wrapper div” around everything. Instead you can have landmark-related elements as direct children of body, applying padding to those and nesting centred wrappers inside them.

By making proper use of padding, this approach also avoids problems of “collapsing margins” and other margin weirdness that make life difficult when you have sections with background colours. You don’t want to be using vertical margins in situations where “boxes with padding” would be more appropriate. Relatedly, I find that flow (or stack) layouts generally work best within each of your nested wrappers rather than at the top level.
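Sketching that composition in CSS (the class names here are mine, not a definitive implementation):

```css
/* Each full-width "slice" owns its background and padding; the
   lateral padding doubles as the gutter on narrow screens. */
.section {
  padding-block: 3rem;
  padding-inline: 1rem;
}

.section--promo {
  background-color: #0b3954; /* example full-width background */
}

/* Nested Center layout: a fluid, width-constrained wrapper */
.center {
  max-inline-size: 65ch;
  margin-inline: auto;
}
```

Each landmark element (header, main, footer) can then act as a slice, with a centred wrapper nested directly inside it.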

How should I mark up lists of articles?

Should they be a bunch of sibling articles? Should they be in a list element like a ul?

Different switched-on developers tackle this differently, so it’s hard to offer a definitive best approach. Some developers even do it differently across different pages on their own site! Here are some examples in the wild:

So, clear as mud!

I currently use sibling articles with no wrapping list. Using article elements feels right because each (per MDN’s definition of article) “represents a self-contained composition… intended to be independently distributable or reusable (e.g., in syndication)”. I could be persuaded to wrap these in a list because that would announce to screen reader users upfront that it’s a list of articles and say how many there are. (It tends to make your CSS a tad gnarlier but that’s not the end of the world.)
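For reference, the list-wrapped variant would look something like this sketch:

```html
<ul role="list">
  <li>
    <article>…</article>
  </li>
  <li>
    <article>…</article>
  </li>
</ul>
```

The explicit role="list" guards against some browser and screen reader combinations dropping list semantics once you remove the bullets with list-style: none.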

Should a blog post page be marked up as an article or just using main?

I mark it up as an article for the same reasons as above. That article is nested inside a main because all of my pages have one main element wrapping around the primary, non-repeated content of the page.

To be continued

I’ll add more to this article over time.

A blog post which uses every HTML element (by Patrick Weaver)

An interesting article which helps the author – and his readers – understand some of the lesser-used and more obscure HTML elements.

Patrick confesses he is still learning certain things, so I won’t regard his implementations as gospel in the way I might an article by someone with greater HTML and accessibility expertise, such as Adrian Roselli. Even so, I see this as another useful resource to consult when deciding whether an HTML element is the semantically correct tool for a given situation.

Thanks, Patrick!

Shoelace: a forward-thinking library of web components

I’m interested by Shoelace’s MO as a collection of pre-rolled, customisable web components. The idea is that it lets individuals and teams start building with web components – components that are web-native, framework-agnostic and portable – way more quickly.

I guess it’s a kind of Bootstrap for web components? I’m interested to see how well it’s done, how customisable the components are, and how useful it is in real life. Or if nothing else, I’m interested to see how they built their components!

It’s definitely an interesting idea.

I'll delve into Shoelace in more detail in the future when I have time, but in the meantime I was able to very quickly knock together a codepen that renders a Dropdown instance.

Thanks to Chris Ferdinandi for sharing Shoelace.

Use z-index only when necessary

There’s a great section on Source order and layers in Every Layout’s Imposter layout. It’s a reminder that when needing to layer one element on top of the other you should:

  1. favour a modern layout approach such as CSS Grid over absolute positioning; and
  2. not apply z-index unless it’s necessary.

which elements appear over which is, by default, a question of source order. That is: if two elements share the same space, the one that appears above the other will be the one that comes last in the source.

z-index is only necessary where you want to layer positioned elements irrespective of their source order. It’s another kind of override, and should be avoided wherever possible.

An arms race of escalating z-index values is often cited as one of those irritating but necessary things you have to deal with using CSS. I rarely have z-index problems, because I rarely use positioning, and I’m mindful of source order when I do.
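For the record, here’s a small sketch of layering with grid and source order instead of positioning:

```css
/* Both children occupy the same grid cell; with no z-index involved,
   the element that comes later in the source paints on top. */
.layers {
  display: grid;
}

.layers > * {
  grid-area: 1 / 1;
}
```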

The fear of keeping up (on gomakethings)

Great post by Chris here on the double-edged-sword of our rapidly-evolving web standards, and how to stay sane. On the one hand the latest additions to the HTML, CSS and JavaScript standards are removing the need for many custom tools which is positive. However:

it can also leave you feeling like it’s impossible to keep up or learn it all. And that’s because you can’t! The field is literally too big to learn everything. “Keeping up” is both impossible and overrated. It’s the path to burnout.

Chris’s suggestion – something I find reassuring and will return to in moments of doubt – is that we focus on:

  • a good understanding of the fundamentals,
  • staying aware of general trends in the industry (such as important forthcoming native HTML elements, or the different approaches to building a website),
  • problem-solving: being good at “solving problems with tech” rather than just knowing a bunch of tools.

Design Systems should avoid “God components” and Swiss Army Knives

Something we often talk about in our Design System team is that components should not be like Swiss Army Knives. It’s better for them to be laser-focused because by limiting their scope to a single task they are more reusable and support a more extensible system through composition.

Discussions often arise when we consider the flip-side – components which do too much, know too much, or care too much! When they cover too much ground or make assumptions about their context, things go wrong. Here are some examples.

Card

In websites where many elements have a “rounded panel”-like appearance so as to pop off the background, you can run into problems. Because of the somewhat Card-like appearance, people start to regard many semantically distinct things as “Cards” (rather than limiting the meaning of Card to a more conventional definition). Here are some of the problems this can cause:

  • If the name covers a million use cases, then how can you describe it sensibly, or define its boundaries?
  • When do you stop piling on different things it can mean? How do you stop it growing? How do you avoid bloat?
  • Ongoing naming/confusion issues: you’re setting yourself up for continued confusion and code disparity. If something is “semantically” a note, or a comment, or a message etc then you can expect that future staff are gonna describe it as that rather than a Card! They’ll likely (understandably) write code that feels appropriate too. The problem will continue.

I appreciate that often we need pragmatic solutions, so if our designs have lots of similar-looking elements then there is still something we can do. If the repeated thing is more of a “shape” than something with a common purpose, then call it out as that! That could either be by name – for example, Every Layout’s Box layout could be a starting point – or by categorisation, i.e. by moving the non-ideally named thing into a clearly demarcated Utilities (or similar) category in your Design System.

Flex

It seems that a number of Design Systems have a Flex component. My feeling, though, is that these represent an early reaction to the emergence of CSS’s Flexbox, rather than necessarily being sensible system-friendly or consumer-friendly components. CSS layout covers a lot and I think breaking this down into different smaller tools (Stack, Inline, Grid etc) works better.

Button

I’ve talked before about the “Everything is a button” mindset and how it’s harmful. Buttons and links are fundamentally different HTML elements with totally different purposes, and bundling them together has various ill effects that I see on a regular basis.


Displaying tables on narrow screens

Responsive design for tables is tricky. Sure, you can just make the table’s container horizontally scrollable but that’s more a developer convenience than a great user experience. And if you instead try to do something more clever, you can run into challenges as I did in the past. Still, we should strive to design good narrow screen user experiences for tables, alongside feasible technical solutions to achieve them.

In terms of UI design, I was interested to read Erik Kennedy’s recent newsletter on The best way to display tables on mobile. Erik lists three different approaches, which are (in reverse order of his preference):

  1. Hide the least important columns
  2. Cards with rows of Label-Value pairs
  3. More radical “remix” as a “Mobile List”

Another article worth checking is Andrew Coyle’s The Responsive Table. He describes the following approaches:

  1. Horizontal overflow table (inc. fixed first column)
  2. Transitional table
  3. Priority responsive table

For the transitional table, Andrew links to Charlie Cathcart’s Responsive & Accessible Data Table codepen. It looks similar to Adrian Roselli’s Responsive Accessible Table (perhaps better looking, though not quite as accessible).

Full disclosure

Whether I’m thinking about inclusive hiding, hamburger menus or web components one UI pattern I keep revisiting is the disclosure widget. Perhaps it’s because you can use this small pattern to bring together so many other wider aspects of good web development. So for future reference, here’s a braindump of my knowledge and resources on the subject.

A disclosure widget is for collapsing and expanding something; you might alternatively describe that as hiding and showing something. The reason we collapse content is to save space. The thinking goes that users have a finite amount of screen real estate (and attention), so we might want to reduce the space taken up by secondary content, finer details or repeated content, pushing the page’s key messages to the fore and saving the user some scrolling. With a disclosure widget we collapse detailed content into a smaller snippet that acts as a button the user can activate to expand the full details (and collapse them again).

Adrian Roselli’s article Disclosure Widgets is a great primer on the available native and custom ARIA options, how to implement them and where each might be appropriate. Adrian’s article helpfully offers that a disclosure widget (the custom ARIA flavour) can be used as a base in order to achieve some other common UI requirements so long as you’re aware there are extra considerations and handle those carefully. Examples include:

  • link and disclosure widget navigation
  • table with expando rows
  • accordion
  • hamburger navigation
  • highly custom select alternatives, when listbox is inappropriate because it needs to include items that do not have the option role
  • a toggle-tip

Something Adrian addresses (and I’ve previously written about) is the question of which collapse/expand use cases can safely use the native details element. There’s a lot to mention, but since I’d prefer to present a simple heuristic, let’s go meta here and use a details:

Use details for basic narrative content and panels but otherwise use a DIY disclosure

It’s either a bad idea or at the very least “challenging” to use a native `details` for:

  • a hamburger menu
  • an accordion

In styling terms, it’s tricky to use a `details` for:

  • a custom appearance
  • animation

The above styling issues are perhaps not insurmountable. It depends on what level of customisation you need.
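For the “basic narrative content” cases where details is a good fit, it needs no script at all (placeholder content, obviously):

```html
<details>
  <summary>More about this approach</summary>
  <p>This content is collapsed by default and expanded when the user
  activates the summary. Open/closed state is handled natively.</p>
</details>
```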

Note to self: add more detail and links to this section when I get the chance.

I’ve also noticed that Adrian has a handy pen combining code for numerous disclosure widget variations.

Heydon Pickering’s Collapsible sections on Inclusive Components is excellent, and includes consideration of progressive enhancement and an excellent web component version. It’s also oriented toward multiple adjacent sections (an accordion although it doesn’t use that term) and includes fantastic advice regarding:

  • appropriate markup including screen reader considerations
  • how best to programmatically switch state (such as open/closed) within a web component
  • how to make that state accessible via an HTML attribute on the web component (e.g. <toggle-section open=true>)
  • how that attribute is then accessible outside the component, for example to a button and script that collapses and expands all sections simultaneously

There’s my DIY Disclosure widget demo on Codepen. I first created it to use as an example in a talk on Hiding elements on the web, but since then its implementation has taken a few twists and turns. In its latest incarnation I’ve taken some inspiration from the way Manuel Matuzovic’s navigation tutorial uses a template in the markup to prepare the “hamburger toggle” button.

I’ve also been reflecting on how the hidden attribute’s boolean nature is ideal for a toggle button in theory – it’s semantic and therefore programmatically conveys state – but how hiding with CSS can be more flexible, chiefly because hidden (like CSS’s display) is not animatable. If you hide with CSS, you could opt to use visibility: hidden (perhaps augmented with position so as to avoid taking up space while hidden), which similarly hides from everyone in terms of accessibility.

As it happens, the first web component I created was a disclosure widget. It could definitely be improved by some tweaks and additions along the lines of Heydon Pickering’s web component mentioned above. I’ll try to do that soon.

Troubleshooting

For some disclosure widget use cases (such as a custom link menu, often called a Dropdown) there are a few events that typically should collapse the expanded widget. One is pressing the Escape key. Another is when the user moves focus outside the widget: the user might activate the trigger button, assess the expanded options, decide none are suitable and move elsewhere. The act of clicking/tapping elsewhere should collapse the widget. However, there’s a challenge. In order for the widget to fire a blur (or focusout) event that an event listener can act upon, it must have received focus in the first place. And in Safari – unlike other browsers – buttons do not automatically receive focus when activated. (I think Firefox used to be the same but was updated.) The workaround is to set focus manually via focus() in your click event listener for the trigger button.
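To make that concrete, here’s a minimal sketch of the collapse-on-Escape and collapse-on-focusout behaviour, including the Safari focus() workaround. The function and class names are my own invention, not taken from any of the articles above:

```javascript
// Wire up collapse behaviour for a disclosure widget. Expects the
// widget container, its trigger button and its collapsible panel.
function initDisclosure(widget, trigger, panel) {
  const setExpanded = (expanded) => {
    trigger.setAttribute('aria-expanded', String(expanded));
    panel.hidden = !expanded;
  };

  trigger.addEventListener('click', () => {
    const isExpanded = trigger.getAttribute('aria-expanded') === 'true';
    setExpanded(!isExpanded);
    // Safari doesn't give buttons focus on click, so set it manually;
    // without this, the focusout listener below may never fire there.
    trigger.focus();
  });

  // Escape collapses the widget and returns focus to the trigger
  widget.addEventListener('keydown', (event) => {
    if (event.key === 'Escape') {
      setExpanded(false);
      trigger.focus();
    }
  });

  // Collapse when focus leaves the widget entirely. relatedTarget is
  // where focus is going; null means it left the document.
  widget.addEventListener('focusout', (event) => {
    if (!widget.contains(event.relatedTarget)) {
      setExpanded(false);
    }
  });

  return setExpanded;
}
```

You’d call it with something like initDisclosure(document.querySelector('.menu'), document.querySelector('.menu-trigger'), document.querySelector('.menu-panel')) once the DOM is ready.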

Should I use a button or a link?

I’ve written previously about the important differences between buttons and links. While reviewing some “component refresh” design mocks at work yesterday I noticed the designs were a bit unclear in this regard so I sent the designers a little decision-tree, which I’m noting here for future reference.

It’s important both for our users and for us as practitioners to distinguish between links (the <a> element) and the <button> element. The reason I push this is because they’re fundamentally different functionally, which has important usability implications. Users expect to use mouse, keyboard, browser back-button and assistive tech differently for links than they do for <button>s. And if they can’t visually distinguish one from the other, they’ll try things they expect to work then get confused when they don’t work.

I think this is an area where design and materials can’t be considered separately and need a joined-up approach.

Here’s a flow I hope is helpful.

Ask: does it…

  1. take the user to another page? Then it’ll be a link – the <a> (anchor) element.
  2. cause something to change on the current page, or submit a form? Then it’ll be a button – i.e. the <button> element.

If it’s a link (<a>):

  • it should be underlined so people know it’s a link
  • it should have a hover state, for example stay underlined but change colour
  • in cases where it’s a CTA you might choose to design it to look button-like and remove some standard link affordances. Just be aware you’re only “calling” it a button. In real user-experience terms, it’s still a link.
  • it does not natively have a disabled state. We shouldn’t be disabling links.

If it’s a button (<button>):

  • it should look like a button, i.e. like a pill or rectangle
  • It should not look like a link – that’d confuse users into thinking it takes them to another page.
  • So it shouldn’t be underlined by default or on hover. It should have some other hover state.

Testing the decision tree

Let’s take the example of a control for launching a modal dialogue.

The obvious choice is a button, because the control causes something to change on the current page. In this case it causes a dialogue to appear on the current page.

Some might argue that it could be a link. This is usually influenced by the fact that dialogues are often (perhaps inadvisably) used as a kind of “fake page”. And to get someone to a “page” we use a link, right? Advocates of the link option might also have progressive enhancement in mind. If they present a link either to a named fragment further down the page or to a separate page, that offers a resilient baseline experience regardless of whether or not JavaScript is available. The idea is that they also have JavaScript to enhance the link when the user’s environment supports it, perhaps adding role=button.

However a button is the more accessible and user-friendly approach for launching the modal.

What open-source design systems are built with web components?

Alex Page, a Design System engineer at Spotify, has just asked:

What open-source design systems are built with web components? Anyone exploring this space? Curious to learn what is working and what is challenging. #designsystems #webcomponents

And there are lots of interesting examples in the replies.

I plan to read up on some of the stories behind these systems.

I really like Web Components but given that I don’t take a “JavaScript all the things” approach to development and design system components, I’ve been reluctant to consider that web components should be used for every component in a system. They would certainly offer a lovely, HTML-based interface for component consumers and offer interoperability benefits such as Figma integration. But if we shift all the business logic that we currently manage on the server to client-side JavaScript then:

  • the user pays the price of downloading that additional code;
  • you’re writing client-side JavaScript even for those of your components that aren’t interactive; and
  • you’re making everything a custom element (which, as Jim Nielsen has previously written, brings HTML semantics and accessibility challenges).

However maybe we can keep the JavaScript for our Web Component-based components really lightweight? I don’t know. For now I’m interested to just watch and learn.

Saving CSS changes in DevTools without leaving the browser

Scott Jehl recently tweeted:

Browser devtools have made redesigning a site such a pleasure. I love writing and adjusting a CSS file right in the sources panel and seeing design changes happen as I type, and saving it back to the file. (…) Designing against live HTML allows happy accidents and discoveries to happen that I wouldn't think of in an unconstrained design mockup

I feel very late to the party here. I tend to tinker in the DevTools Element Styles panel rather than save changes. So, inspired by Scott, I’ve just tried this out on my personal website. Here’s what I did.

  1. started up my 11ty-based site locally which launches a localhost URL for viewing it in the browser;
  2. opened Chrome’s DevTools at Sources;
  3. checked the box “Enable local overrides” then followed the prompts to allow access to the folder containing my SCSS files;
  4. opened an SCSS file in the Sources tab for editing side-by-side with my site in the browser;
  5. made a change, hit Cmd-S to save and marvelled at the fact that this updated that file, as confirmed by a quick git status check.
  6. switched to the Elements panel, opened its Styles subpanel, made an element style change there too, then confirmed that this alternative approach also saves changes to a file.

This is a really interesting and efficient way of working in the browser and I can see myself using it.

There are also a couple of challenges which I’ll probably want to consider. Right now when I make a change to a Sass file, the browser takes a while to reflect that change, which diminishes the benefit of this approach. My site is set up such that Eleventy watches for changes to the sass folder as a trigger for rebuilding the static site. This is because, for optimal performance, I’m purging the compiled and combined CSS and inlining it into the <head> of every file… which unfortunately means that when the CSS changes, every file needs to be rebuilt. So I have to wait for Eleventy to finish rebuilding before the page I’m viewing shows my CSS change.

To allow my SCSS changes to be built and reflected faster I might consider no longer inlining CSS, or only inlining a small amount of critical stuff… or maybe (as the best of all worlds) only doing the inlining for production builds but not in development. Yeah, I like that latter idea. Food for thought!
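One way to sketch that production-only inlining in an Eleventy/Nunjucks template (the env object and css variable are assumptions; you’d expose them yourself, for example via a global data file reading process.env.NODE_ENV):

```html
{% if env.isProduction %}
  <style>{{ css | safe }}</style>
{% else %}
  <link rel="stylesheet" href="/css/styles.css">
{% endif %}
```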

Partnering with Google on web.dev (on adactio.com)

At work in our Design System team, we’ve been doing a lot of content and documentation writing for a new reference website. So it was really timely to read Jeremy Keith of Clearleft on the process of writing Learn Responsive Design for Google’s web.dev resource. The course is great, very digestible, and I highly recommend it to all. But I also love this new post’s insight into how Google assisted, provided a Content Handbook as the “house style” for writing on web.dev, and managed the process from docs and spreadsheets to GitHub. I’m sure there will be things my team can learn from that Content Handbook as we go forward with our technical writing.

Building a toast component (by Adam Argyle)

Great tutorial (with accompanying video) from Adam Argyle which starts with a useful definition of what a Toast is and is not:

Toasts are non-interactive, passive, and asynchronous short messages for users. Generally they are used as an interface feedback pattern for informing the user about the results of an action. Toasts are unlike notifications, alerts and prompts because they're not interactive; they're not meant to be dismissed or persist. Notifications are for more important information, synchronous messaging that requires interaction, or system level messages (as opposed to page level). Toasts are more passive than other notice strategies.

There are some important distinctions between toasts and notifications in that definition: toasts are for less important information and are non-interactive. I remember in a previous work planning exercise regarding a toast component a few of us got temporarily bogged down in working out the best JavaScript templating solution for SVG icon-based “Dismiss” buttons… however we were probably barking up the wrong tree with the idea that toasts should be manually dismissable.

There are lots of interesting ideas and considerations in Adam’s tutorial, such as:

  • using the <output> element for each toast
  • some crafty use of CSS Grid and logical properties for layout
  • combining hsl and percentages in custom properties to proportionately modify rather than redefine colours for dark mode
  • animation using keyframes and animation
  • native JavaScript modules
  • inserting an element before the <body> element (TIL that this is a viable option)

Thanks for this, Adam!

(via Adam’s tweet)

There’s some nice code in here but the demo page minifies and obfuscates everything. However the toast component source is available on GitHub.

Related links

Web animation tips

Warning: this entry is a work-in-progress and incomplete. That said, it's still a useful reference to me which is why I've published it. I’ll flesh it out soon!

There are lots of different strands of web development. You try your best to be good at all of them, but there’s only so much time in the day! Animation is an area where I know a little but would love to know more, and from a practical perspective I’d certainly benefit from having some road-ready solutions to common challenges. As ever I want to favour web standards over libraries where possible, and take an approach that’s lean, accessible, progressively-enhanced and performance-optimised.

Here’s my attempt to break down web animation into bite-sized chunks for occasional users like myself.

Defining animation

Animation lets us make something visually move between different states over a given period of time.

Benefits of animation

Animation is a good way of providing visual feedback, teaching users how to use a part of the interface, or adding life to a website and making it feel more “real”.

Simple animation with transition properties

CSS transition is great for simple animations triggered by an event.

We start by defining two different states for an element—for example opacity:1 and opacity:0—and then transition between those states.

The first state would be in the element’s starting styles (either defined explicitly or existing implicitly based on property defaults) and the other in either its :hover or :focus styles or in a class applied by JavaScript following an event.

Without the transition the state change would still happen but would be instantaneous.

You’re not limited to only one property being animated and might, for example, transition between different opacity and transform states simultaneously.

Here’s an example “rise on hover” effect, adapted from Stephanie Eckles’s Smol CSS.

<div class="u-animate u-animate--rise">
  <span>rise</span>
</div>

.u-animate > * {
  --transition-property: transform;
  --transition-duration: 180ms;
  transition: var(--transition-property) var(--transition-duration) ease-in-out;
}

.u-animate--rise:hover > * {
  transform: translateY(-25%);
}

Note that:

  1. using custom properties makes it really easy to transition a different property than transform without writing repetitious CSS.
  2. we use a parent and a child (<div> and <span> respectively in this example): the parent is the hover trigger and the child is the element that animates. This avoids the accidental flicker that can occur when the mouse sits near the border of an element that animates itself.

Complex animations with animation properties

If an element needs to animate automatically (perhaps on page load or when added to the DOM), or is more complex than a simple A to B state change, then a CSS animation may be more appropriate than transition. Using this approach, animations can:

  • run automatically (you don’t need an event to trigger a state change)
  • go from an initial state through multiple intermediate steps to a final state rather than just from state A to state B
  • run forwards, in reverse, or alternate directions
  • loop infinitely

The required approach is:

  1. use @keyframes to define a reusable “template” set of animation states (or frames); then
  2. apply animation properties to an element we want to animate, including one or more @keyframes to be used.

Here’s how you do it:

@keyframes flash {
  0% { opacity: 0; }
  20% { opacity: 1; }
  80% { opacity: 0; }
  100% { opacity: 1; }
}

.animate-me {
  animation: flash 5s infinite;
}

Note that you can also opt to include just one state in your @keyframes rule, usually the initial state (written as either from or 0%) or final state (written as either to or 100%). You’d tend to do that for a two-state animation where the other “state” is in the element’s default styles, and you’d either be starting from the default styles (if your single @keyframes state is to) or finishing on them (if your single @keyframes state is from).
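For example, a fade-in that defines only its from state and finishes on the element’s default styles:

```css
@keyframes fade-in {
  from { opacity: 0; }
}

.alert {
  /* animates from opacity 0 to the element's default opacity of 1 */
  animation: fade-in 300ms ease-out;
}
```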

Should I use transition or animation?

As far as I can tell there’s no major performance benefit of one over the other, so that’s not an issue.

When the animation will be triggered by pseudo-class-based state changes like :hover or :focus and is simple, i.e. based on just two states, transition feels like the right choice.

Beyond that, the choice gets a bit less binary and seems to come down to developer preference. But here are a couple of notes that might help in making a decision.

For elements that need to “animate in” on page load such as an alert, or when newly added to the DOM such as items in a to-do list, an animation with keyframes feels the better choice. This is because transition requires the presence of two CSS rules, and therefore dedicated JavaScript to grab the element and apply a class, whereas animation requires only one rule and can move between initial and final states automatically, including inserting a delay before starting.

For animations that involve many frames; control over the number of iterations; or looping… use @keyframes and animation.

For utility classes and classes that get added by JS to existing, visible elements following an event, either approach could be used. Arguably transition is the slightly simpler and more elegant CSS to write if it covers your needs. Then again, you might want to reuse the animations applied by those classes for both existing, visible elements and new, animated-in elements, in which case you might feel that instead using @keyframes and animation covers more situations.

Performance

A smooth animation should run at 60fps (frames per second). Animations that are too computationally expensive result in frames being dropped, i.e. a reduced fps rate, making the animation appear janky.

Cheap and slick properties

The CSS properties transform and opacity are very cheap to animate. Also, browsers often optimise these types of animation using hardware acceleration. To hint to the browser that it should optimise an animation property (and to ensure it is handled by the GPU rather than passed from CPU to GPU causing a noticeable glitch) we should use the CSS will-change property.

.my-element {
  will-change: transform;
}

Expensive properties

CSS properties which affect layout such as height are very expensive to animate. Animating height causes a chain reaction where sibling elements have to move too. Use transform over layout-affecting properties such as width or left if you can.

Some other CSS properties are less expensive but still not ideal, for example background-color. It doesn't affect layout but requires a repaint per frame.

Test your animations on a popular low-end device.

Timing functions

  • linear goes at the same rate from start to finish. It’s not like most motion in the real world.
  • ease-out starts fast then gets really slow. Good for things that come in from off-screen, like a modal dialogue.
  • ease-in starts slow then gets really fast. Good for moving something off-screen.
  • ease-in-out is the combination of the previous two. It’s symmetrical, having an equal amount of acceleration and deceleration. Good for things that happen in a loop, such as an element fading in and out.
  • ease is the default value and features a brief ramp-up, then a lot of deceleration. It’s a good option for most general case motion that doesn’t enter or exit the viewport.
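By way of illustration (the selectors here are hypothetical), these values are applied via transition-timing-function or animation-timing-function, or as part of the shorthand:

```css
/* Modal dialogue entering from off-screen: decelerate into place */
.modal-enter {
  transition: transform 0.3s ease-out;
}

/* Looping fade: symmetrical acceleration and deceleration */
.pulse {
  animation: flash 2s ease-in-out infinite;
}
```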

Practical examples

You can find lots of animation inspiration in libraries such as animate.css (and be sure to check animate.css on github where you can search their source for specific @keyframe animation styles).

But here are a few specific examples of animations I or teams I’ve worked on have had to implement.

Skip to content

In its default state (State A) the anchor’s position is fixed—i.e. positioned relative to the viewport—but it is moved out of sight above the viewport via transform: translateY(-10em). However its :focus styles define a State B where the initial translate has been undone so that the link is visible (transform: translateY(0em)). If we transition the transform property then we can animate the change of state over a chosen duration, and with our preferred timing function for the acceleration curve.

HTML:

<div class="u-visually-hidden-until-focused">
  <a
    href="#skip-link-target"
    class="u-visually-hidden-until-focused__item"
  >
    Skip to main content</a>
</div>

<nav>
  <ul>
    <li><a href="/">Home</a></li>
    <li><a href="/">News</a></li>
    <li><a href="/">About</a></li>
    <!-- …lots more nav links… -->
    <li><a href="/">Contact</a></li>
  </ul>
</nav>

<main id="skip-link-target">
  <h1>This is the Main content</h1>
  <p>Lorem ipsum <a href="/news/">dolor sit amet</a> consectetur adipisicing elit.</p>
  <p>Lorem ipsum dolor sit amet consectetur adipisicing elit.</p>
</main>

CSS:

.u-visually-hidden-until-focused {
  left: -100vw;
  position: absolute;

  &__item {
    position: fixed;
    top: 0;
    left: 0;
    transform: translateY(-10em);
    transition: transform 0.2s ease-in-out;

    &:focus {
      transform: translateY(0em);
    }
  }
}

To see this in action, visit my pen Hiding: visually hidden until focused and press the tab key.

Animating in an existing element

For this requirement we want an element to animate from invisible to visible on page load. I can imagine doing this with an image or an alert, for example. This is pretty straightforward with CSS only using @keyframes, opacity and animation.

Check out my fade in and out on page load with CSS codepen.

Animating in a newly added element

Stephanie Eckles shared a great CSS-only solution for animating in a newly added element, which handily includes a Codepen demo. She mentions “CSS-only” because it’s common for developers to achieve the fancy animation via transition, but that means needing to “make a fake event” via a JavaScript setTimeout() so that you can transition from the newly-added, invisible and class-free element state to adding a CSS class (perhaps called show) containing the opacity: 1, fancy transforms and a transition. Stephanie’s alternative approach instead combines i) hiding the element in its default styles; with ii) an automatically-running animation that includes the necessary delay and finishes in the keyframe’s single 100% state… achieving the same effect minus the JavaScript.

Avoiding reliance on JS and finding a solution lower down the stack is always good.

HTML:

<button>Add List Item</button>
<ul>
  <li>Lorem ipsum dolor sit amet consectetur adipisicing elit. Nostrum facilis perspiciatis dignissimos, et dolores pariatur.</li>
</ul>

CSS:

li {
  animation: show 600ms 100ms cubic-bezier(0.38, 0.97, 0.56, 0.76) forwards;

  /* Pre-state */
  opacity: 0;
  /* remove transform for just a fade-in */
  transform: rotateX(-90deg);
  transform-origin: top center;
}

@keyframes show {
  100% {
    opacity: 1;
    transform: none;
  }
}

Jhey Tompkins shared another CSS-only technique for adding elements to the DOM with snazzy entrance animations. He also uses just a single @keyframes state but in his case the from state which he uses to set the element’s initial opacity:0, then in his animation he uses an animation-fill-mode of both (rather than forwards as Stephanie used).

I can’t profess to fully understand the difference; however, if you change Jhey’s example to use forwards instead, the element being animated in will briefly appear before the animation starts (which ain’t good) rather than being initially invisible. Changing it to backwards gets us back on track, so I guess the necessary value relates to whether you’re going for from/0% or to/100%… and both simply covers both cases. I’d probably try to use the appropriate one rather than both, just in case there’s a performance implication.
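A minimal sketch of the from-based approach (class and keyframe names hypothetical): animation-fill-mode: backwards applies the from state during the animation-delay, keeping the element invisible until the animation begins.

```css
.animate-in {
  /* "backwards" applies the from/0% styles during the 300ms delay,
     so the element doesn't flash visible before animating */
  animation: appear 600ms 300ms ease-out backwards;
}

@keyframes appear {
  from {
    opacity: 0;
  }
}
```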

Animated disclosure

Here’s an interesting conundrum.

For disclosure (i.e. collapse and expand) widgets, I tend to either use the native HTML <details> element if possible or else a simple, accessible DIY disclosure in which activating a trigger toggles a nearby content element’s visibility. In both cases, there’s no animation; the change from hidden to revealed and back again is immediate.

To my mind it’s generally preferable to keep it simple and avoid animating a disclosure widget. For a start, it’s tricky! The <details> element can’t be (easily) animated. And if using a DIY widget it’ll likely involve animating one of the expensive properties. Animating height or max-height is also gnarly when working with variable (auto) length content and often requires developers to go beyond CSS and reach for JavaScript to calculate computed element heights. Lastly, forgetting the technical challenges, there’s often no real need to animate disclosure; it might only hinder rather than help the user experience.

But let’s just say you have to do it, perhaps because the design spec requires it (like in BBC Sounds’ expanding and collapsing tracklists when viewed on narrow screens).

Options:

  • Animate the <details> element. This is a nice, standards-oriented approach. But it might only be viable for when you don’t need to mess with <details> appearance too much. We’d struggle to apply very custom styles, or to handle a “show the first few list items but not all” requirement like in the BBC Sounds example;
  • Animate CSS Grid. This is a nice idea but for now the animation only works in Firefox*. It’d be great to just consider it a progressive enhancement so it just depends on whether the animation is deemed core to the experience;
  • Animate from a max-height of 0 to “something sufficient” (my pen is inspired by Scott O’Hara’s disclosure example). This is workable but not ideal; you kinda need to set a max-height sweetspot otherwise your animation will be delayed and too long. You could of course add some JavaScript to get the exact necessary height then set it. BBC use max-height for their tracklist animation and those tracklists likely vary in length so I expect they use some JavaScript for height calculation.

* Update 20/2/23: the “animate CSS Grid” option now has wide browser support and is probably my preferred approach. I made a codepen that demonstrates a disclosure widget with animation of grid-template-rows.
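Here’s a minimal sketch of that grid-based approach (class names hypothetical): animating grid-template-rows from 0fr to 1fr lets the row track grow to fit auto-height content, sidestepping the max-height guesswork.

```css
.disclosure__content {
  display: grid;
  grid-template-rows: 0fr;
  transition: grid-template-rows 250ms ease-in-out;
}

/* The child needs overflow: hidden so it clips while the track is collapsed */
.disclosure__content > * {
  overflow: hidden;
}

/* Class toggled by the disclosure trigger */
.disclosure__content.is-expanded {
  grid-template-rows: 1fr;
}
```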

Ringing bell icon

To be written.

Pulsing “radar” effect

To be written.

Accessibility

Accessibility and animation can co-exist, as Cassie Evans explains in her CSS-Tricks article Empathetic Animation. We should consider which parts of our website are suited to animation (for example perhaps not on serious, time-sensitive tasks) and we can also respect reduced motion preferences, either at a global level or in a finer-grained way per component.

Notes

  • transition-delay can be useful for avoiding common annoyances, such as when a dropdown menu that appears on hover disappears when you try to move the cursor to it.
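For example (selectors hypothetical), delaying only the hide transition gives the cursor time to travel from the trigger to the menu:

```css
.menu__dropdown {
  opacity: 0;
  visibility: hidden;
  /* Delay hiding by 300ms so the cursor can reach the dropdown */
  transition: opacity 150ms ease-in-out, visibility 0s linear 300ms;
}

.menu__item:hover .menu__dropdown,
.menu__dropdown:hover {
  opacity: 1;
  visibility: visible;
  /* Show immediately on hover */
  transition-delay: 0s;
}
```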

References

Front-end architecture for a new website (in 2021)

Just taking a moment for some musings on which way the front-end wind is blowing (from my perspective at least) and how that might practically impact my approach on the next small-ish website that I code.

I might lean into HTTP2

Breaking CSS into small modules then concatenating everything into a single file has traditionally been one of the key reasons for using Sass, but in the HTTP2 era where multiple requests are less of a performance issue it might be acceptable to simply include a number of modular CSS files in the <head>, as follows:

<link href="/css/base.css" rel="stylesheet">
<link href="/css/component_1.css" rel="stylesheet">
<link href="/css/component_2.css" rel="stylesheet">
<link href="/css/component_3.css" rel="stylesheet">

The same goes for browser-native JavaScript modules.

This isn’t something I’ve tried yet and it’d feel like a pretty radical departure from the conventions of recent years… but it’s an option!

I’ll combine ES modules and classes

It’s great that JavaScript modules are natively supported in modern browsers. They allow me to remove build tools, work with web standards, and they perform well. They can also serve as a mustard cut that allows me to use other syntax and features such as async/await, arrow functions, template literals, the spread operator etc with confidence and without transpilation or polyfilling.

In the <head>:

<script type="module" src="/js/main.js"></script>

In main.js:

import { Modal } from '/components/modal.js';

const modal = new Modal();
modal.init();

In modal.js:

export class Modal {
  init() {
    // modal functionality here
  }
}

I’ll create Web Components

I’ve done a lot of preparatory reading and learning about web components in the last year. I’ll admit that I’ve found the concepts (including Shadow DOM) occasionally tough to wrap my head around, and I’ve also found it confusing that everyone seems to implement web components in different ways. However Dave Rupert’s HTML with Superpowers presentation really helped make things click.

I’m now keen to create my own custom elements for JavaScript-enhanced UI elements; to give LitElement a spin; to progressively enhance a Light DOM baseline into Shadow DOM fanciness; and to check out how well the lifecycle callbacks perform.

I’ll go deeper with custom properties

I’ve been using custom properties for a few years now, but at first it was just as a native replacement for Sass variables, which isn’t really exploiting their full potential. However at work we’ve recently been using them as the special sauce powering component variations (--gap, --mode etc).

In our server-rendered components we’ve been using inline style attributes to apply variations via those properties, and this brings the advantage of no longer needing to create a CSS class per variation (e.g. one CSS class for each padding variation based on a spacing scale), which in turn keeps code and specificity simpler. However as I start using web components, custom properties will prove really handy here too. Not only can they be updated by JavaScript, but furthermore they provide a bridge between your global CSS and your web component because they can “pierce the Shadow Boundary”, making it easier to style Shadow DOM HTML in custom elements.
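As an illustrative sketch (the component markup and default value are my own; the --gap property is as described above), a server-rendered variation might look like this:

```html
<!-- HTML: the variation is set via an inline custom property -->
<div class="stack" style="--gap: 2rem">
  <p>First item</p>
  <p>Second item</p>
</div>
```

```css
/* CSS: one rule covers every gap variation, with a fallback default */
.stack > * + * {
  margin-block-start: var(--gap, 1rem);
}
```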

I’ll use BEM, but loosely

Naming and structuring CSS can be hard, and is a topic which really divides opinion. Historically I liked to keep it simple using the cascade, element and contextual selectors, plus a handful of custom classes. I avoided “object-oriented” CSS methodologies because I found them verbose and, if I’m honest, slightly “anti-CSS”. However it’s fair to say that in larger applications and on projects with many developers, this approach lacked a degree of structure, modularisation and predictability, so I gravitated toward BEM.

BEM’s approach is a pretty sensible one and, compared to the likes of SUIT, provides flexibility and good documentation. And while I’ve been keeping a watchful eye on new methodologies like CUBE CSS and can see that they’re chock-full of ideas, my feeling is that BEM remains the more robust choice.

It’s also important to me that BEM has the concept of a mix because this allows you to place multiple block classes on the same element so as to (for example) apply an abstract layout in combination with a more implementation-specific component class.

<div class="l-stack c-news-feed">

Where I’ll happily deviate from BEM is to favour use of certain ARIA attributes as selectors (for example [aria-current=page] or [aria-expanded=true]) because this enforces good accessibility practice and helps create equivalence between the visual and non-visual experience. I’m also happy to use the universal selector (*) which is great for owl selectors, and I’m fine with adjacent sibling (and related) selectors.
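For example (a sketch; selectors hypothetical), hanging styles off the ARIA state means the visual and accessible states can’t drift apart:

```css
/* Highlight the current page's nav item via its ARIA attribute */
nav [aria-current="page"] {
  text-decoration: underline;
  font-weight: bold;
}

/* Rotate a disclosure icon when its trigger is expanded */
.disclosure__trigger[aria-expanded="true"] .icon {
  transform: rotate(180deg);
}
```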

Essentially I’m glad of the structure and maintainability that BEM provides but I don’t want a straitjacket that stops me from using my brain and applying CSS properly.

Learn Responsive Design (on web.dev)

Jeremy Keith’s new course for Google’s web.dev learning platform is fantastic and covers a variety of aspects of responsive design including layout (macro and micro), images, icons and typography.

Resources for learning front-end web development

A designer colleague recently asked me what course or resources I would recommend for learning front-end web development. She mentioned React at the beginning but I suggested that it’d be better to start by learning HTML, CSS, and JavaScript. As for React: it’s a JavaScript library, so it makes sense to understand vanilla JS first.

For future reference, here are my tips.

Everything in one place

Google’s web.dev training resource has been adding some excellent guides.

Another great one-stop shop is MDN Web Docs. Not only is MDN an amazing general quick reference for all HTML elements, CSS properties, JavaScript APIs etc but for more immersive learning there are also MDN’s guides.

Pay attention to HTML

One general piece of advice: whichever courses you choose, and whether or not you’re new to web development, make sure to learn HTML. People tend to underestimate how complicated, fast-moving and important HTML is.

Also, everything else – accessibility, CSS, JavaScript, performance, resilience – requires a foundation of good HTML. Think HTML first!

Learning CSS, specifically

CSS is as much about concepts and features – e.g. the cascade and specificity, layout, responsive design, typography, custom properties – as it is about syntax. In fact probably more so.

Most tutorials will focus on the concepts but not necessarily so much on practicalities like writing-style or file organisation.

Google’s Learn CSS course should be pretty good for the modern concepts.

Google also have Learn Responsive Design.

If you’re coming from a kinda non-CSS-oriented perspective, Josh W Comeau’s CSS for JavaScript Developers (paid course) could be worth a look.

If you prefer videos, you could check out Steve Griffith’s video series Learning CSS. Steve’s videos are comprehensive and well-paced, covering a whole range of topics (over 100!) and starting from basics like the CSS box model.

In terms of HTML and CSS writing style (BEM etc) and file organisation (ITCSS etc), here’s a (version of a) “style guide” that my team came up with for one of our documentation websites. I think it’s pretty good!

CSS and HTML Style Guide (to do: add link here)

For more on ITCSS and Harry Roberts’s thoughts on CSS best practices, see:

Learning JavaScript

I recommend choosing a course or courses from CSS-Tricks’ post Beginner JavaScript notes, especially as it includes Wes Bos’s Beginner JavaScript Notes + Reference.

If you like learning by video, check out Steve Griffith’s JavaScript playlist.

Once you start using JS in anger, I definitely recommend bookmarking Chris Ferdinandi’s Methods and APIs reference guide.

If you’re then looking for a lightweight library for applying sprinkles of JavaScript, you could try Stimulus.

Learning Responsive Design

I recommend Jeremy Keith’s Learn Responsive Design course on web.dev.

Lists of courses

You might choose a course or courses from CSS-Tricks’ post Where do you learn HTML and CSS in 2020?

Recommended books

  • Resilient Web Design by Jeremy Keith. A fantastic wide-screen perspective on what we’re doing, who we’re doing it for, and how to go about it. Read online or listen as an audiobook.
  • Inclusive Components by Heydon Pickering. A unique, accessible approach to building interactive components, from someone who’s done this for BBC, Bulb, Spotify.
  • Every Layout by Heydon Pickering & Andy Bell. Introducing layout primitives, for handling responsive design in Design Systems at scale (plus so many insights about the front-end)
  • Atomic Design by Brad Frost. A classic primer on Design Systems and component-composition oriented thinking.
  • Practical SVG by Chris Coyier. Learn why and how to use SVG to make websites more aesthetically sharp, performant, accessible and flexible.
  • Web Typography by Richard Rutter. Elevate the web by applying the principles of typography via modern web typography techniques.

Collapsible sections, on Inclusive Components

It’s a few years old now, but this tutorial from Heydon Pickering on how to create an accessible, progressively enhanced user interface comprised of multiple collapsible and expandable sections is fantastic. It covers using the appropriate HTML elements (buttons) and ARIA attributes, how best to handle icons (minimal inline SVG), turning it into a web component and plenty more besides.

BBC WebCore Design System

A Storybook UI explorer containing the components and layouts for making the front end of a BBC web experience.

Buttons and links: definitions, differences and tips

On the web buttons and links are fundamentally different materials. However some design and development practices have led to them becoming conceptually “bundled together” and misunderstood. Practitioners can fall into the trap of seeing the surface-level commonality that “you click the thing, then something happens” and mistakenly thinking the two elements are interchangeable. Some might even consider them as a single “button component” without considering the distinctions underneath. However this mentality causes our users problems and is harmful for effective web development. In this post I’ll address why buttons and links are different and exist separately, and when to use each.

Problematic patterns

Modern website designs commonly apply the appearance of a button to a link. For isolated calls to action this can make sense; however, as a design pattern it is often overused and under-cooked, which can cause confusion for developers implementing the designs.

Relatedly, it’s now common for Design Systems to have a Button component which includes button-styled links that are referred to simply as buttons. Unless documented carefully this can lead to internal language and comprehension issues.

Meanwhile developers have historically used faux links (<a href="#">) or worse, a DIY clickable div, as a trigger for JavaScript-powered functionality where they should instead use native buttons.

These patterns in combination have given rise to a collective muddle over buttons and links. We need to get back to basics and talk about foundational HTML.

Buttons and anchors in HTML

There are two HTML elements of interest here.

Hyperlinks are created using the HTML anchor element (<a>). Buttons (by which I mean real buttons rather than links styled to appear as buttons) are implemented with the HTML button element (<button>).

Although a slight oversimplification, I think David MacDonald’s heuristic works well:

If it GOES someWHERE use a link

If it DOES someTHING use a button

A link…

  • goes somewhere (i.e. navigates to another place)
  • normally links to another document (i.e. page) on the current website or on another website
  • can alternatively link to a different section of the same page
  • historically and by default appears underlined
  • when hovered or focused offers visual feedback from the browser’s status bar
  • uses the “pointing hand” mouse pointer
  • results in the browser making an HTTP GET request by default. It’s intended to get a page or resource rather than to change something
  • offers specific right-click options to mouse users (open in new tab, copy URL, etc)
  • typically results in an address which can be bookmarked
  • can be activated by pressing the return key
  • is announced by screen readers as “Link”
  • is available to screen reader users within an overall Links list

A button…

  • does something (i.e. performs an action, such as “Add”, “Update” or "Show")
  • can be used as <button type=submit> within a form to submit the form. This is a modern replacement for <input type=submit /> and much better as it’s easier to style, allows nested HTML and supports CSS pseudo-elements
  • can be used as <button type=button> to trigger JavaScript. This type of button is different to the one used for submitting a <form>. It can be used for any type of functionality that happens in-place rather than taking the user somewhere, such as expanding and collapsing content, or performing a calculation.
  • historically and by default appears in a pill or rounded rectangle
  • uses the normal mouse pointer arrow
  • can be activated by pressing return or space.
  • implicitly gets the ARIA button role.
  • can be extended with further ARIA button-related states like aria-pressed
  • is announced by screen readers as “Button”
  • unlike a link is not available to screen reader users within a dedicated list
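The two flavours of real button mentioned above look like this (the form action and attribute values are illustrative):

```html
<!-- Submits its parent form; supports nested HTML -->
<form action="/search" method="get">
  <input type="search" name="q">
  <button type="submit"><strong>Search</strong></button>
</form>

<!-- Triggers in-place JavaScript (e.g. a disclosure); never submits a form -->
<button type="button" aria-expanded="false" aria-controls="details">
  Show details
</button>
```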

Our responsibilities

It’s our job as designers and developers to use the appropriate purpose-built element for each situation, to present it in a way that respects conventions so that users know what it is, and to then meet their expectations of it.

Tips

  • Visually distinguish button-styled call-to-action links from regular buttons, perhaps with a more pill-like appearance and a right-pointing arrow
  • Avoid a proliferation of call-to-action links by linking content itself (for example a news teaser’s headline). Not only does this reduce “link or button?” confusion but it also saves space, and provides more accessible link text.
  • Consider having separate Design System components for Button and ButtonLink to reinforce important differences.
  • For triggering JavaScript-powered interactions I’ll typically use a button. However in disclosure patterns where the trigger and target element are far apart in the DOM it can make sense to use a link as the trigger.
  • For buttons which are reliant on JavaScript, it’s best to employ progressive enhancement: render them not on the server but with client-side JavaScript. That way, if the client-side JavaScript is unsupported or fails, the user won’t be presented with a broken button.

Update: 23 November 2024

Perhaps a better heuristic than David MacDonald’s mentioned above, is:

Links are for a simple connection to a resource; buttons are for actions.

What I prefer about the mention of a resource is that while the “goes somewhere” definition of a link breaks down for anchors that instruct the linked resource to download (via the download attribute) rather than render in the browser, this definition doesn’t. I also like the inclusion of simple because some buttons (like the submit button of a search form) might finish by taking you to a resource (the search results page), but that’s a complex action not a simple connection; you’re searching a database using your choice of search query.

References

Broken Copy, on a11y-101.com

Here’s an accessibility tip that’s new to me. When the content of a heading, anchor, or other semantic HTML element contains smaller “chunks” of span and em (etc), the VoiceOver screen reader on Mac and iOS annoyingly fails to announce the content as a single phrase and instead repeats the parent element’s role for each inner element. We can fix that by adding an inner “wrapper” element inside our parent and giving it role=text.

Make sure not to add this role directly to your parent element since it will override its original role causing it to lose its intended semantics.

The text role is not yet in the official ARIA spec but is supported by Safari.
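Here’s a sketch of the fix described above (the heading content is illustrative); note the wrapper sits inside the parent, whose own role is untouched:

```html
<!-- Without the wrapper, VoiceOver may repeat "heading" for each inner chunk -->
<h2>
  <span role="text">
    Broken <em>copy</em> with <span class="highlight">inner spans</span>
  </span>
</h2>
```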

(via @Seraphae and friends on Twitter)

Motion One: The Web Animations API for everyone

A new animation library, built on the Web Animations API for the smallest filesize and the fastest performance.

This JavaScript-based animation library—which can be installed via npm—leans on an existing web API to keep its file size low and uses hardware accelerated animations where possible to achieve impressively smooth results.

For fairly basic animations, this might provide an attractive alternative to the heavier Greensock. The Motion docs do however flag the limitation that it can only animate “CSS styles”. They also say “SVG styles work fine”. I hope by this they mean SVG presentation attributes rather than inline CSS on an SVG, although it’s hard to tell. However their examples look promising.

The docs website also contains some really great background information regarding animation performance.

Testing ES modules with Jest

Here are a few troubleshooting tips to enable Jest, the JavaScript testing framework, to be able to work with ES modules without needing Babel in the mix for transpilation. Let’s get going with a basic set-up.

package.json

…,
"scripts": {
  "test": "NODE_ENV=test NODE_OPTIONS=--experimental-vm-modules jest"
},
"type": "module",
"devDependencies": {
  "jest": "^27.2.2"
}

Note: the crucial "type": "module" part is the least-documented bit and your most likely omission!

After that set-up, you’re free to import and export to your heart’s content.

javascript/sum.js

export const sum = (a, b) => {
  return a + b;
}

spec/sum.test.js

import { sum } from "../javascript/sum.js";

test('adds 1 + 2 to equal 3', () => {
  expect(sum(1, 2)).toBe(3);
});

Hopefully that’ll save you (and future me) some head-scratching.

(Reference: Jest’s EcmaScript Modules docs page)

Harry Roberts says “Get Your Head Straight”

Harry Roberts (who created ITCSS for organising CSS at scale but these days focuses on performance) has just given a presentation about the importance of getting the content, order and optimisation of the <head> element right, including lots of measurement data to back up his claims. Check out the slides: Get your Head Straight

While some of the information about asset loading best practices is not new, the stuff about ordering of head elements is pretty interesting. I’ll be keeping my eyes out for a video recording of the presentation though, as it’s tricky to piece together his line of argument from the slides alone.

However one really cool thing he’s made available is a bookmarklet for evaluating any website’s <head>:
ct.css

The accessibility of conditionally revealed questions (on GOV.UK)

Here’s something to keep in mind when designing and developing forms. GOV.UK’s accessibility team found last year that there are some accessibility issues with the “conditional reveal” pattern, i.e. when selecting a particular radio button causes more inputs to be revealed.

The full background story is really interesting but the main headline seems to be: Keep it simple.

  1. Don’t reveal any more than a single input. If more content needs to be revealed, it shouldn’t be in a show-and-hide at all but rather in its own form in the next step of the process.
  2. Conditionally show questions only (i.e. another form input such as Email address)—do not show or hide anything that’s not a question.

Doing otherwise causes some users confusion making it difficult for them to complete the form.

See also the Conditionally revealing a related question section of the Radios component in the GDS Design System.
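A simplified sketch of the pattern (IDs, names and the toggling of hidden are hypothetical; the GDS Design System component handles the wiring for you):

```html
<fieldset>
  <legend>How would you like to be contacted?</legend>

  <label>
    <input type="radio" name="contact" value="email"
           aria-controls="contact-by-email">
    Email
  </label>

  <!-- Revealed (hidden removed by JS) when the radio above is selected:
       a single question only -->
  <div id="contact-by-email" hidden>
    <label for="email">Email address</label>
    <input type="email" id="email" name="email">
  </div>

  <label>
    <input type="radio" name="contact" value="phone">
    Phone
  </label>
</fieldset>
```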

W3C Design System

The W3C have just published a new Design System. It was developed by British digital agency Studio 24, who are also working (in the open) on the redesign of the W3C website.

My initial impression is that this Design System feels pretty early-stage and work-in-progress. I’m not completely sold on all of the technical details, however it definitely contains a number of emergent best practices and lots of interesting parts.

I particularly liked the very detailed Forms section which assembles lots of good advice from Adam Silver and GOV.UK, and I also found it interesting and useful that they include a Page Templates section rather than just components and layouts.

It’s cool to see an institution like the W3C have a Design System, and I’m looking forward to seeing how it evolves.

Accessibility Testing (on adactio.com)

In this journal entry, Jeremy Keith argues that when it comes to accessibility testing it’s not just about finding issues—it’s about finding the issues at the right time.

Here’s my summary:

  • Accessibility Audits performed by experts and real Assistive Technology users are good!
  • But try to get the most out of them by having them focus on the things that you can’t easily do yourself.
  • We ourselves can handle things like colour contrast. It can be checked at the design stage before a line of code is written.
  • Likewise HTML structure such as ensuring accessible form labels, ensuring images have useful alt values, using landmarks like main and nav, heading structure etc. These are not tricky to find and fix ourselves and they have a big accessibility impact.
  • As well as fixing those issues ourselves, we should also put in place new processes, checks and automation where possible to stop them recurring.
  • As for custom interactive elements (tabs, carousels, navigation, dropdowns): these are specific to our site and complicated/error-prone by nature, so those are the things we should be aiming to have professional Accessibility Audits focus on in order to get best value for money.

Practical front-end performance tips

I’ve been really interested in the subject of Web Performance since I read Steve Souders’ book High Performance Websites back in 2007. Although some of the principles in that book are still relevant, it’s also fair to say that a lot has changed since then so I decided to pull together some current tips. Disclaimer: This is a living document which I’ll expand over time. Also: I’m a performance enthusiast but not an expert. If I have anything wrong, please let me know.

Inlining CSS and or JavaScript

The first thing to know is that both CSS and JavaScript are render-blocking by default, meaning that when a browser encounters a standard .css or .js file in the HTML, it pauses rendering until that file has been downloaded and processed.

The second thing to know is that there’s a “magic file size” when it comes to HTTP requests. File data is transferred in small chunks, and the first roundtrip of a new connection can carry roughly 14 kb. So if a file is larger than 14 kb, it requires multiple roundtrips.

If you have a lean page and minimal CSS and/or JavaScript, to the extent that the page combined with the minified, gzipped CSS/JS content would weigh 14 kb or less, you can achieve better performance by inlining your CSS and/or JavaScript into the HTML. That way there’s only one request, and the browser gets everything it needs to start rendering the page from that single request. So your page is gonna be fast.

If your page including CSS/JS is over 14 kb after minifying and gzipping, you’d be better off not inlining those assets. It’s better for performance to link to external assets and let them be cached, rather than shipping a bloated HTML file that requires multiple roundtrips and gets no benefit from static asset caching.
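As a sketch, the inlined version looks like this (the styles and script are placeholders):

```html
<head>
  <!-- Small enough to inline: no extra requests needed
       before rendering can begin -->
  <style>
    body { margin: 0; font-family: sans-serif; }
  </style>
  <script>
    document.documentElement.classList.add("js");
  </script>
</head>
```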

Avoid CSS @import

CSS @import is really slow! The browser can’t discover an imported file until the stylesheet containing the @import has itself been downloaded and parsed, so the two requests happen in series rather than in parallel.
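For illustration (file names invented):

```css
/* main.css — avoid this: typography.css can only be requested
   once main.css itself has downloaded and been parsed, so the
   two files load in series */
@import url("typography.css");
```

Two separate link rel="stylesheet" elements in the HTML let the browser request both files in parallel instead.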

JavaScript modules in the head

Native JavaScript modules are included on a page using the following:

<script type="module" src="main.js"></script>

Unlike standard <script> elements, module scripts are deferred (non render-blocking) by default. Rather than placing them before the closing </body> tag, I place them in the <head> to allow the script to be downloaded early, in parallel with the DOM being processed. That way, the JavaScript is already available as soon as the DOM is ready.

Background images

Sometimes developers implement an image as a CSS background image rather than a “content image”, either because they feel it’ll be easier to manipulate that way—a typical example being a responsive hero banner with overlaid text—or simply because it’s decorative rather than meaningful. However it’s worth being aware of how that impacts the way that image loads.

Outgoing requests for images defined in CSS rather than HTML won’t start until the browser has created the Render Tree. The browser must first download and parse the CSS then construct the CSSOM before it knows that “Element X” should be visible and has a background image specified, in order to then decide to download that image. For important images, that might feel too late.

As Harry Roberts explains it’s worth considering whether the need might be served as well or better by a content image, since by comparison that allows the browser to discover and request the image nice and early.

By moving the images to <img> elements… the browser can discover them far sooner—as they become exposed to the browser’s preload scanner—and dispatch their requests before (or in parallel to) CSSOM completion

However, if a background image still makes sense and performance is important, Harry recommends including an accompanying hidden image inline, or preloading it in the <head> via link rel=preload.

Preload

From MDN’s preload docs, preload allows:

specifying resources that your page will need very soon, which you want to start loading early in the page lifecycle, before browsers' main rendering machinery kicks in. This ensures they are available earlier and are less likely to block the page's render, improving performance.

The benefits are most clearly seen on large and late-discovered resources. For example:

  • Resources that are pointed to from inside CSS, like fonts or images.
  • Resources that JavaScript can request such as JSON and imported scripts.
  • Larger images and videos.

I’ve recently used the following to assist performance of a large CSS background image:

<link rel="preload" href="bg-illustration.svg" as="image" media="(min-width: 60em)">

Self-host your assets

Using third-party hosting services for fonts or other assets no longer offers the previously-touted benefit of the asset potentially already being in the user’s browser cache. All major browsers now partition their caches by site, so cross-site cache reuse is no longer possible.

You can still take advantage of the benefits of CDNs for reducing network latency, but preferably as part of your own infrastructure.

Miscellaneous

Critical CSS is often a wasted effort: in many cases CSS isn’t the real bottleneck, so the extra tooling and complexity are generally not worth it.

Doppler: Type scale with dynamic line-height

line-height on the web is a tricky thing, but this tool offers a clever solution.

It’s relatively easy to set a sensible unitless default ratio for body text (say 1.5), but that tends to need tweaking and testing for headings (where spacious line-height doesn’t quite work, but tight line-height is nice until the heading wraps, etc).

Even for body text it’s not one-size-fits-all: a line-height like 1.5 isn’t appropriate for every font.

Then you’ve got different devices to consider. For confined spaces, tighter line-height works better. But this can mean you might want one line-height for narrow viewports and another for wide.

Then, factor in vertical rhythm based on your modular type and spacing scales if you really want to blow your mind.

It can quickly get really complicated!

Doppler is an interesting idea and tool that I saw in CSS-Tricks’ newsletter this morning. It lets you apply line-height using calc() based on one em-relative value (for example 1em) and one rem-relative value (for example 0.25rem).

In effect you’ll get something like:

set line-height to the font-size of the current element plus a quarter of the user’s preferred font-size
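In CSS terms that works out as something like:

```css
h2 {
  /* the element’s own font-size plus a quarter of the user’s
     preferred (root) font-size */
  line-height: calc(1em + 0.25rem);
}
```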

The examples look pretty promising and seem to work well across different elements. I think I’ll give it a spin.

Accessible Color Generator

There are many colour contrast checking tools but I like this one from Erik Kennedy (of Learn UI Design) a lot. It features an intuitive UI using simple, human language that mirrors the task I’m there to achieve, and it’s great that if your target colour doesn’t have sufficient contrast to meet accessibility guidelines it will intelligently suggest alternatives that do.

I’m sure everyone has their favourite tools; I just find this one really quick to use!

SVG Gobbler

SVG Gobbler is a browser extension that finds the vector content on the page you’re viewing and gives you the option to download, optimize, copy, view the code, or export it as an image.

This is a pretty handy Chrome extension that grabs all the SVGs on a webpage and lets you see them all in a grid.

Progressively enhanced burger menu tutorial by Andy Bell

Here’s a smart and comprehensive tutorial from Andy Bell on how to create a progressively enhanced narrow-screen navigation solution using a custom element. Andy also uses Proxy for “enabled” and “open” state management, ResizeObserver on the custom element’s containing header for a Container Query like solution, and puts some serious effort into accessible focus management.

One thing I found really interesting was that Andy was able to style child elements of the custom element (as opposed to just elements present in the original unenhanced markup) from his global CSS. My understanding is that, aside from inheritable properties, styles can’t cross the Shadow Boundary, so this had me scratching my head. I think the explanation is that Andy doesn’t attach the elements he creates in JavaScript to a Shadow DOM but instead rewrites and re-renders the element’s innerHTML. This is an interesting approach and solution for getting around web component styling issues. I’ve seen the innerHTML-based approach frowned upon elsewhere online, however Andy doesn’t “throw out” the original markup but instead augments it.

Should I use the HTML5 section element and if so, where?

Unlike other HTML5 elements such as header, footer and nav, it’s never been particularly clear to me when it’s appropriate to use section. This is due in large part to many experts having expressed that it doesn’t quite work as intended.

I like HTMHell’s rule-of-thumb regarding section:

If you’re not sure whether to use a <section>, it’s probably best to avoid it.

They go on to recommend that it’s much more important to create a sound document outline. That phrase can be confusing given the history of the browser document outline algorithm (or lack thereof), but I think what the author means here is to use and nest headings logically, because that alone will give you a “document outline” and also helps AT users scan and skip around the page.

Relatedly: don’t let the original intended use of section tempt you into putting multiple H1s on a page in the vain hope that browsers and assistive technology will interpret their nesting level to handle hierarchy appropriately. That would rely on a document outline algorithm, and no browser implements one.

One sensible application of section is to provide additional information to screen reader users about the semantic difference between two adjoining content areas, when that distinction is otherwise only being made visually with CSS.

Here’s an example. Smashing Magazine’s blog articles begin with a quick summary, followed by a horizontal line separating the summary from the article proper. But the separator is purely decorative, so if the summary were wrapped in a div then a screen reader user wouldn’t know where it ends and the article begins. However by instead wrapping the summary in <section aria-label="quick summary">:

  • our wrapper has the built-in ARIA role of region. A region is a type of generic landmark element, and as a landmark a screen reader user will find it listed in a summary of the page and can navigate to it easily.
  • by giving it an accessible name (here via aria-label) it will be announced by a screen reader, with “Quick summary region” before and “Quick summary region end” after.

Update 07/11/22

Adrian Roselli’s twitter thread on section is gold. Here’s what I’ve gleaned from it:

The reason you would use a section element for accessibility purposes is to create a region landmark. If you are using headings properly, in most cases your content is already well-structured and will not require a region landmark. If you do need a section, note that from an accessibility perspective using the section tag alone is meaningless without providing an accessible name. To provide this, ensure your section has a heading and connect the section to that using aria-labelledby.

You can use section without the above measures and it will not harm users. But be aware it’s aiding your developer experience only, because it’s not helping users. And it may also mislead you and others into thinking you are providing semantics and accessibility which in reality you are not.
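Putting that advice together, a section that genuinely earns its keep looks something like this:

```html
<section aria-labelledby="summary-heading">
  <h2 id="summary-heading">Quick summary</h2>
  <p>…</p>
</section>
```

The aria-labelledby connection gives the region its accessible name, so screen readers can announce it and list it among the page’s landmarks.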

Manage Design Tokens in Eleventy

One interesting aspect of the Duet Design System is that they use Eleventy to not only generate their reference website but also to generate their Design Tokens.

When I think about it, this makes sense. Eleventy is basically a sausage-machine; you put stuff in, tell it how you want it to transform that stuff, and you get something new out the other end. This isn’t just for markdown-to-HTML, but for a variety of formatA-to-formatB transformation needs… including, for example, using JSON to generate CSS.
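As a minimal sketch of that JSON-to-CSS idea (the token names and structure here are invented, not Duet’s; in Eleventy the same function would live in a filter or template whose output is permalinked to a .css file):

```javascript
// tokens.json equivalent (values are illustrative)
const tokens = {
  color: { primary: "#0055aa", surface: "#ffffff" },
  space: { sm: "0.5rem", md: "1rem" }
};

// Recursively flatten nested token groups into CSS custom properties,
// e.g. { color: { primary } } becomes "--color-primary: …;"
function tokensToCss(obj, prefix = "-") {
  return Object.entries(obj).flatMap(([key, value]) =>
    typeof value === "object"
      ? tokensToCss(value, `${prefix}-${key}`)
      : [`${prefix}-${key}: ${value};`]
  );
}

const css = `:root {\n  ${tokensToCss(tokens).join("\n  ")}\n}`;
console.log(css);
```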

Now this is definitely a more basic approach than using a design token tool like StyleDictionary. StyleDictionary handles lots of low-level stuff that would otherwise be tricky to implement. So I’m not suggesting that this is a better approach than using StyleDictionary. However it definitely feels pretty straightforward and low maintenance.

As Heydon Pickering explains, it also opens up the opportunity to make the Design Tokens CMS-editable in Netlify CMS without content editors needing to go near the code.

So you’d have a tokens.json file containing your design tokens, but it’d be within the same repo as your reference website. That’s probably not as good as having the tokens in a separate repo and making them available as a package, but of course a separate 11ty repo is an option too if you prefer.

For a smaller site at least, the “manage design tokens with 11ty” approach is a nice option, and I think I might give it a try on my personal website.

Duet Design System

Here’s a lovely Design System that interestingly uses Eleventy for its reference website and other generated artefacts:

We use Eleventy for both the static documentation and the dynamically generated parts like component playgrounds and design tokens. We don’t currently use a JavaScript framework on the website, except Duet’s own components.

I find Duet interesting both from the Design System perspective (it contains lots of interesting component techniques and options) but also in terms of how far 11ty can be pushed.

Favourite Eleventy (11ty) Resources

Here are my current go-to resources when building a new site using Eleventy (11ty).

Build an Eleventy site from scratch by Stephanie Eckles. As the name suggests, this is for starting from a blank canvas. It includes a really simple and effective way of setting up a Sass watch-and-build pipeline that runs alongside that of Eleventy, using only package.json scripts rather than a bundler.
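From memory, that scripts-only pipeline looks roughly like this (package and path names are illustrative, not necessarily Stephanie’s exact setup):

```json
{
  "scripts": {
    "build:sass": "sass src/scss:public/css",
    "watch:sass": "sass --watch src/scss:public/css",
    "serve": "eleventy --serve",
    "start": "npm-run-all build:sass --parallel watch:sass serve"
  }
}
```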

Eleventy Base Blog from 11ty. If rather than a blank canvas you want a boilerplate that includes navigation, a blog, an RSS feed and prism CSS for code block styling (among other things) then this is a great option. Of course, you can also just cherry-pick the relevant code you need, as I often do.

Eleventy Navigation Plugin. This allows you to set a page or post as a navigation item. It handily supports ordering and hierarchical nesting (for subnavigation). You can then render out your navigation from a layout in a one-liner or in a custom manner.

Eleventy Cache Assets Plugin. This is really handy for caching fetched data so as not to exceed API limits or do undue work on every build.

11ty Netlify Jumpstart is another from Stephanie Eckles but this time a “quick-start boilerplate” rather than blank canvas. It includes a minimal Sass framework, generated sitemap, RSS feed and social share preview images. The About page it generates contains lots of useful info on its features.

forestry.io settings for 11ty Base Blog and forestry.io settings for Hylia (Andy Bell’s 11ty starter)

Add Netlify CMS to an 11ty-based website

All my posts tagged “11ty”

More to follow…

Astro

Astro looks very interesting. It’s in part a static site builder (a bit like Eleventy) but it also comes with a modern (revolutionary?) developer experience which lets you author components as web components or in a JS framework of your choice, then renders those to static HTML for optimal performance. Oh, and as far as I can tell there’s no build pipeline!

Astro lets you use any framework you want (or none at all). And if most sites only have islands of interactivity, shouldn’t our tools optimize for that?

People have been posting some great thoughts and insights on Astro already, for example:

(via @css)

clipboard.js - Copy to clipboard without Flash

Here’s a handy JS package for “copy to clipboard” functionality that’s lightweight and installable from npm.

It also appears to have good legacy browser support, plus a means of checking/confirming support, which should assist if your approach is to add a “copy” button to the DOM only as a progressive enhancement.
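Typical usage looks like this (the data-clipboard-text attribute and isSupported() check are part of clipboard.js’s documented API; the class name is mine):

```html
<button class="js-copy" data-clipboard-text="Text to copy">Copy</button>

<script src="clipboard.min.js"></script>
<script>
  // Only wire up the enhanced behaviour where it will actually work
  if (ClipboardJS.isSupported()) {
    new ClipboardJS(".js-copy");
  }
</script>
```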

(via @chriscoyier)

How to Favicon in 2021 (on CSS-Tricks)

Some excellent favicon tips from Chris Coyier, referencing Andrey Sitnik’s recent article of the same name.

I always appreciate someone looking into and re-evaluating the best practices of something that literally every website needs and has a complex set of requirements.

Chris is using:

<link rel="icon" href="/favicon.ico"><!-- 32x32 -->
<link rel="icon" href="/icon.svg" type="image/svg+xml">
<link rel="apple-touch-icon" href="/apple-touch-icon.png"><!-- 180x180 -->
<link rel="manifest" href="/manifest.webmanifest">

And in manifest.webmanifest:

{
  "icons": [
    { "src": "/192.png", "type": "image/png", "sizes": "192x192" },
    { "src": "/512.png", "type": "image/png", "sizes": "512x512" }
  ]
}

(via @mxbck)

Front-of-the-front-end and back-of-the-front-end web development (by Brad Frost)

The Great Divide between so-called front-end developers is real! Here, Brad Frost proposes some modern role definitions.

A front-of-the-front-end developer is a web developer who specializes in writing HTML, CSS, and presentational JavaScript code.

A back-of-the-front-end developer is a web developer who specializes in writing JavaScript code necessary to make a web application function properly.

Brad also offers:

A succinct way I’ve framed the split is that a front-of-the-front-end developer determines the look and feel of a button, while a back-of-the-front-end developer determines what happens when that button is clicked.

I’m not sure I completely agree with his definitions—I see a bit more nuance in it. Then again, maybe I’m biased by my own career experience. I’m sort-of a FOTFE developer, but one who has also always done both BOTFE and “actual” back-end work (building Laravel applications, or working in Ruby on Rails etc).

I like the fact that we are having this discussion, though. The expectations on developers are too great and employers and other tech people need to realise that.

Issues with Source Code Pro in Firefox appear to be fixed

Last time I tried Source Code Pro as my monospaced typeface for code examples in blog posts, it didn’t work out. When viewed in Firefox it would only render in black, meaning that I couldn’t display it in white-on-black for blocks of code. This led to me conceding defeat and using something simpler.

It now looks like I can try Source Code Pro again because the issue has been resolved. This is great news!

So, I should grab the latest release and give it another go. Actually, for optimum subsetting and performance I reckon in this case I can just download the default files from Source Code Pro on Google Webfonts Helper and that’ll give me the lightweight woff2 file I need.

I’d also mentioned the other day that I was planning to give Source Serif another bash, so if everything works out, with these two allied to my existing Source Sans Pro I could have a nice complementary set.

Design system components, recipes, and snowflakes (on bradfrost.com)

An excellent article from Brad Frost in which he gives us some vocabulary for separating context-agnostic components intended for maximal use from specific variants and one-offs.

In light of some recent conversations at work, this was in equal measure interesting, reassuring, and thought-provoking.

On the surface, a design system and process can seem generally intuitive but in reality every couple of weeks might throw up practical dilemmas for engineers. For example:

  • this new thing should be a component in programming terms but is it a Design System component?
  • is everyone aware that component has a different meaning in programming terms (think WebComponent, ViewComponent, React.Component) than in design system terms? Or do we need to talk about that?
  • With this difference in meaning, do we maybe need to all be more careful with that word component and perhaps define its meaning in Design Systems terms a bit better, including its boundaries?
  • should we enshrine a rule that even though something might be appropriate to be built as a component in programming terms under-the-hood, if it’s not a reusable thing then it doesn’t also need to be a Design System component?
  • isn’t it better for components to be really simple because the less opinionated one is, the more reusable it is, therefore the more we can build things by composition?

When I read Brad’s article last night it kind of felt like it was speaking to many of those questions directly!

Some key points he makes:

  • If in doubt: everything should be a component
  • The key thing is that the only ones you should designate as “Design System Components” are the ones for maximal reuse which are content and context-agnostic.
  • After that you have 1) Recipes—specific variants which are composed of existing stuff for a specific purpose rather than being context-agnostic; and 2) Snowflakes (the one-offs).

Then there was this part that actually felt like it could be talking directly to my team given the work we have been doing on the technical implementation details of our Card recently:

This structure embraces the notion of composition. In our design systems, our Card components are incredibly basic. They are basically boxes that have slots for a CardHeader, CardBody, and CardFooter.

We’ve been paring things back in exactly the same way and it was nice to get this reassurance we are on the right track.

(via @jamesmockett)

A First Look at aspect-ratio (on CSS-Tricks)

Chris Coyier takes the new CSS aspect-ratio property for a spin and tests how it works in different scenarios.

Note that he’s applying it here to elements which do not have an intrinsic aspect ratio. So, think a container element (div or whatever is appropriate) rather than an img. This is in line with Jen Simmons’ recent replies to me when I asked her whether or not we should apply aspect-ratio to an img, after she announced support for aspect-ratio in Safari Technology Preview 118.

A couple of interesting points I took from Chris’s article:

  • this simple new means of declaring aspect-ratio should soon hopefully supersede all the previous DIY techniques;
  • if you apply a CSS aspect-ratio to an element which has no explicit width set, we still get the effect because the element’s auto (rendered) width is used, then by combining that with the CSS aspect-ratio the browser can calculate the required height, then apply that height;
  • if the content would break out of the target aspect-ratio box, then the element will expand to accommodate the content (which is nice). If you ever need to override this, you can do so by applying min-height: 0;
  • if the element has either a height or a width set, the other of the two is calculated from the aspect ratio;
  • if the element has both a height and width set, aspect-ratio is ignored.
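A couple of those cases sketched in CSS:

```css
/* No explicit width: the element’s rendered width is used and
   the height is calculated from the ratio */
.banner {
  aspect-ratio: 16 / 9;
}

/* Explicit width: the height is derived from it */
.avatar {
  width: 4rem;
  aspect-ratio: 1;
}
```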

Regarding browser support: at the time of writing aspect-ratio is supported in Chrome and Edge (but not IE), is coming in Firefox and Safari, but as yet there’s no word regarding mobile. So I’d want to use it as a progressive enhancement rather than for something mission critical.

Vanilla JS List

Here’s Chris Ferdinandi’s curated list of organisations which use vanilla JS to build websites and web apps.

You don’t need a heavyweight JavaScript framework, and vanilla JS does scale.

At the time of writing the list includes Marks & Spencer, Selfridges, Basecamp and GitHub.

(via @ChrisFerdinandi)

Use CSS Clamp to create a more flexible wrapper utility (on Piccalilli)

Here’s Andy Bell recommending using CSS clamp() to control your wrapper/container width because it supports setting a preferred value in vw to ensure sensible gutters combined with a maximum tolerance in rem—all in a single line of code.

If we use clamp() to use a viewport unit as the ideal and use what we would previously use as the max-width as the clamp’s maximum value, we get a much more flexible setup.

The code looks like this:

.container {
  width: clamp(16rem, 90vw, 70rem);
  margin-left: auto;
  margin-right: auto;
}

This is pretty cool because I know from experience that coding responsive solutions for wrappers can be tricky and you can end up with a complex arrangement of max-width and media queries whilst still—as Andy highlights—not providing optimal readability for medium-sized viewports.

Using CSS Grid with minmax() is one possible approach to controlling wrappers however this article offers another (potentially better) tool for your kit.

It’s worth noting that Andy could probably have just used width: min(90vw, 70rem) here (as Christopher suggested) because setting the lower bound provided by clamp() is only necessary if your element is likely to shrink unexpectedly and a regular block-level element wouldn’t do that. The clamp approach might be handy for flex items, though.

(via @piccalilli_)

Accessible interactions (on Adactio)

Jeremy Keith takes us through his thought process regarding the choice of link or button when planning accessible interactive disclosure elements.

A button is generally a solid choice as it’s built for general interactivity and carries the expectation that when activated, something somewhere happens. However in some cases a link might be appropriate, for example when the trigger and target content are relatively far apart in the DOM and we feel the need to move the user to the target / give it focus.

For a typical disclosure pattern where some content is shown/hidden by an adjacent trigger, a button suits perfectly. The DOM elements are right next to each other and flow into each other so there’s no need to move or focus anything.

However in the case of a log-in link in a navigation menu which—when enhanced by JavaScript—opens a log-in form inside a modal dialogue, a link might be better. In this case you might use an anchor with a fragment identifier (<a href="#login-modal">Log in</a>) pointing to a login-form far away at the bottom of the page. This simple baseline will work if JavaScript is unavailable or fails, however when JavaScript is available we can intercept the link’s default behaviour and enhance things. Furthermore because the expectation with links is that you’ll go somewhere and modal dialogues are kinda like faux pages, the link feels appropriate.
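A sketch of that enhancement (openLoginModal() is a hypothetical stand-in for whatever modal implementation you use):

```html
<!-- Baseline: a plain link that jumps to the form at the foot of the page -->
<a href="#login-modal" class="js-login">Log in</a>

<!-- …much further down the page… -->
<div id="login-modal">
  <!-- log-in form lives here -->
</div>

<script>
  // Enhancement: intercept the link and open the form in a modal instead
  document.querySelector(".js-login").addEventListener("click", (event) => {
    event.preventDefault();
    openLoginModal();
  });
</script>
```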

While not explicit in the article, another thing I take from this is that by structuring your no-JavaScript experience well, this will help you make appropriate decisions when considering the with-JavaScript experience. There’s a kind of virtuous circle there.

Meta Tags - Preview, Edit and Generate

A handy tool which lets you type in a URL then inspects that page’s meta tags and shows you how it will be presented on popular websites.

This is really useful for testing how an article will look as a Google search result or when shared on Facebook, Slack and Twitter based on different meta tag values.

Comparing Browsers for Responsive Design (on CSS-Tricks)

Chris Coyier checks out Sizzy, Polypane et al and decides which suits him best.

There are a number of these desktop apps where the goal is showing your site at different dimensions all at the same time. So you can, for example, be writing CSS and making sure it’s working across all the viewports in a single glance.

I noticed Andy Bell recommending Sizzy so I’m interested to give it a go. Polypane got Chris’s vote, but is a little more expensive at ~£8 per month versus ~£5, so I should do a little shoot-out of my own.

Progressively enhanced JavaScript In Real Life

Over the last couple of days I’ve witnessed a good example of progressive enhancement “In Real Life”. And I think it’s good to log and share these validations of web development best practices when they happen so that their benefits can be seen as real rather than theoretical.

A few days ago I noticed that the search function on my website wasn’t working optimally. As usual, I’d click the navigation link “Search” then some JavaScript would reveal a search input and set keyboard focus to it, prompting me to enter a search term. Normally, the JavaScript would then “look ahead” as I type characters, searching the website for matching content and presenting (directly underneath) a list of search result links to choose from.

The problem was that although the search input was appearing, the search result suggestions were no longer appearing as I typed.

Fortunately, back when I built the feature I had just read Phil Hawksworth’s Adding Search to a Jamstack site which begins by creating a non-JavaScript baseline using a standard form which submits to Google Search (scoped to your website), passing as search query the search term you just typed. This is how I built mine, too.

So, just yesterday at work I was reviewing a PR which prompted me to search for a specific article on my website using the term “aria-label”. And although the enhanced search wasn’t working, the baseline search functionality was there to deliver me to a Google search results page (site:https://fuzzylogic.me/ aria-label) with the exact article I needed appearing top of the results. Not a Rolls-Royce experience, but perfectly serviceable!
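For reference, the no-JavaScript baseline is essentially a form that submits straight to Google. This sketch reflects my understanding of the approach; the doubled q parameter (which Google merges into a single query) is an assumption worth verifying against Phil’s article:

```html
<form action="https://www.google.com/search" method="get" role="search">
  <label for="search">Search</label>
  <input type="search" id="search" name="q">
  <!-- Scopes the query to this site -->
  <input type="hidden" name="q" value="site:fuzzylogic.me">
</form>
```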

Why had the enhanced search solution failed? It was because the .json file which is the data source for the lookahead search had at some point allowed in a weird character and become malformed. And although the site’s JS was otherwise fine, this malformed data file was preventing the enhanced search from working.

JavaScript is brittle and fails for many reasons and in many ways, making it different from the rest of the stack. Added to that there’s the “unavailable until loaded” aspect, or as Jake Archibald put it:

all your users are non-JS while they’re downloading your JS.

The best practices that we as web developers have built up for years are not just theoretical. Go watch a screen reader user browse the web if you want proof that providing descriptive link text rather than “click here”, or employing headings and good document structure, or describing images properly with alt attributes are worthwhile endeavours. Those users depend on those good practices.

Likewise, JavaScript will fail to be available on occasion, so building a baseline no-JS solution will ensure that when it does, the show still goes on.

A Utility Class for Covering Elements (on CSS { In Real Life })

Need to overlay one HTML element on top of and fully covering another, such as a heading with translucent background on top of an image? Michelle Barker has us covered with this blog post in which she creates an overlay utility to handle this. She firstly shows how it can be accomplished with positioning, then modernises her code using the inset CSS logical property, before finally demonstrating a neat CSS Grid based approach.

I like this and can see myself using it – especially the Grid-based version because these days I try to avoid absolute positioning and use modern layout tools instead where possible.
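From memory, the Grid-based version boils down to something like this (class name is mine):

```css
/* The parent establishes a grid and every child is placed in the
   same (first) cell, so they stack on top of one another */
.with-overlay {
  display: grid;
}

.with-overlay > * {
  grid-area: 1 / 1;
}
```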

I’ve mocked up a modified version on Codepen, sticking with CSS Grid for simplicity. I was going to also wrap it in an @supports (display: grid) check, however the styles are all grid-based, so in the case of no grid support they simply wouldn’t run rather than causing any problems.

My Command Line Cheatsheet

Here’s a list of useful terminal commands for my reference and yours.

Using iTerm2 (Terminal Emulator)

Composer

Make editing long lines easier by using Composer.

Open Composer: Shift-Cmd-.

Type your long command and enjoy using standard text editing controls:

git commit -am "A very long commit message"

Send Composer command to terminal: Shift-Return

Snippets

Toolbelt > Show Toolbelt

Then add, edit and delete commands.

Select a command then use Send to insert it on the terminal.

References

My DevTools Cheatsheet

Here’s a list of useful (Mac) browser DevTools tips, tricks and keyboard shortcuts for my reference and yours. This is a work in progress and I’ll update it as I go.

Console Panel

Return currently selected element to work with

$0

Then you can execute its methods or inspect its attribute values, for example:

$0.offsetParent

Debug event-based behaviour

In Chrome, right-click on the relevant element (e.g. a button) and select “Inspect Element”. By default, the Styles panel is selected but instead select the Event Listeners panel. In there you can see all events (e.g. click) currently being listened for on that element (and its parent elements so as to include instances of event delegation).

Each event can be expanded to show which element has the event listener attached – for example it might be the current element or might be document. From here you can get to the script containing the code. Click a line number within the code to add a breakpoint. This will pause code execution on that line until you click the play button to continue. You might also log the current value of a variable here.

Pause JavaScript execution

Cmd + backslash

Firefox

Get responsive img element’s currentSrc

Inspect the element, right click and select Show DOM Properties from the context menu.

Google Chrome

Open the Command Menu

Command+Shift+P

Disable JavaScript

Open the Command Menu then type “disable” and you’ll see the option.

Get responsive img element’s currentSrc

Inspect the element, click the properties tab, toggle open the top item.

Throttle network/bandwidth

Go to the Network tab, then change Throttling to your desired setting, for example “Slow 3G” or “Offline”.

References

Browser Support Heuristics

In web development it’s useful when we can say “if the browser supports X, then we know it also supports Y”.

There was a small lightbulb moment at work earlier this year when we worked out that:

if the user’s browser supports CSS Grid, then you know it also supports custom properties.

Knowing this means that if you wrap some CSS in an @supports(display:grid) then you can also safely use custom properties within that block.

I love this rule of thumb! It saves you looking up caniuse.com for each feature and comparing the browser support.
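As a sketch of the heuristic in practice (the class and property names here are hypothetical):

```css
/* Safe to use custom properties inside this block, because
   grid support implies custom property support */
@supports (display: grid) {
  .page-layout {
    --gutter: 1.5rem;
    display: grid;
    grid-gap: var(--gutter);
  }
}
```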

This weekend I did some unplanned rabbit-holing on the current state of (and best practices for using) ES modules in the browser, as-is and untranspiled. That revealed another interesting rule of thumb:

any browser that supports <script type="module"> also supports let and const, async/await, the spread operator, etc.

One implication of this is that if you currently build a large JavaScript bundle (due to being transpiled down to ES 3/5 and including lots of polyfills) and ship this to all browsers including the modern ones… you could instead improve performance for the majority of your visitors by configuring your bundler to generate two bundles from your code then doing:

<!-- only one of these will be used -->
<script type="module" src="lean-and-modern.js"></script>
<script nomodule src="bulky-alternative-for-old-browsers.js"></script>

I might make a little page or microsite for these rules of thumb. They’re pretty handy!

My Screen Reader Cheatsheet

Here’s a list of useful Screen Reader commands and tips for my reference and yours. This is a work in progress and I’ll update it as I go.

VoiceOver (Mac)

Initial setup:

  1. Open Safari > Preferences > Advanced; then
  2. check the checkbox “Press tab to highlight each item on a webpage”.

Usage

  • Open the page you want to test in your web browser (you might favour Safari for VoiceOver).
  • Cmd-F5 to turn VoiceOver on.
  • Cmd-F5 (again) to turn VoiceOver off.

Get Cmd-F5 for “toggling on and off” into your muscle memory!

Then:

  • Ctrl-Option-A to have VoiceOver read the entire page.
  • Ctrl to pause VoiceOver, and Ctrl again to resume.
  • Find any unexpected issues using the techniques below.

Tabbing

Tab through items on the page using the tab key. This moves to the next focusable item (button, link, input). You can verify that:

  • all interactive elements have a focus style;
  • all interactive elements are reachable by keyboard;
  • off-screen or hidden elements don’t receive focus when they shouldn’t; and
  • the spoken label for each interactive element has enough context to be understood (“click here” and “menu” aren’t sufficient).

Navigating with the right-pointing arrow key

Navigate through all the content using Ctrl-Option-→. While this is not how most screen reader users will read the page, it doesn’t take long and lets you confirm that everything VoiceOver announces makes sense.

Using Rotor to scan and jump to specific elements

  • Ctrl-Option-U to open Rotor
  • Browse categories using left and right arrows. This includes the Landmarks menu.
  • Down arrow to browse within the categories
  • press Return to select an item

This is a great way to check if your content structure makes sense to a screen reader. Checking the headings illustrates the outline of the page. Viewing the links helps ensure they all have a name that makes sense without visual context. Checking landmarks helps ensure that the proper ARIA roles have been applied. You might find that a list of articles is not titled appropriately or that headings are not properly nested.

Tables

  • Navigate to a table using Ctrl-Option-Cmd-T
  • It should read a caption and give you info about the size of the table
  • Ctrl-Cmd-{arrowkey} to navigate inside the table.

References

How-to: Create accessible forms - The A11Y Project

Here are five bite-sized and practical chunks of advice for creating accessible forms.

  1. Always label your inputs.
  2. Highlight input elements on focus.
  3. Break long forms into smaller sections/pages.
  4. Provide error messages (rather than just colour-based indicators)
  5. Avoid horizontal layout forms unless necessary.

I already apply some of these principles, but even within those I found some interesting takeaways. For example, the article advises that when labelling your inputs it’s better not to nest the input within a <label> because some assistive technologies (such as Dragon NaturallySpeaking) don’t support it.
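In other words, prefer explicit association over nesting. A minimal sketch (the field names are hypothetical):

```html
<!-- Preferred: explicit label, associated via for/id -->
<label for="email">Email address</label>
<input id="email" name="email" type="email">

<!-- Avoid: implicit label via nesting (weaker assistive technology support) -->
<label>Email address <input name="email" type="email"></label>
```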

I particularly like the idea of using CSS to make the input which has focus more obvious than it would be by relying solely on the text cursor (or caret).

input:focus {
  outline: 2px solid royalblue;
  box-shadow: 1px 1px 8px 1px royalblue;
}

(via @adactio)

Cheating Entropy with Native Web Technologies (on Jim Nielsen’s Weblog)

This is why, over years of building for the web, I have learned that I can significantly cut down on the entropy my future self will have to face by authoring web projects in vanilla HTML, CSS, and JS. I like to ask myself questions like:

  • Could this be done with native ES modules instead of using a bundler?
  • Could I do this with DOM scripting instead of using a JS framework?
  • Could I author this in CSS instead of choosing a preprocessor?

Fantastic post from Jim Nielsen about how your future self will thank you if you keep your technology stack simple now.

(via @adactio)

How to hide elements on a web page

In order to code modern component designs we often need to hide then reveal elements. At other times we want to provide content to one type of user but hide it from another because it’s not relevant to their mode of browsing. In all cases accessibility should be front and centre in our thoughts. Here’s my approach, heavily inspired by Scott O’Hara’s definitive guide Inclusively Hidden.

Firstly, avoid the need to hide stuff. With a bit more thought and by using existing fit-for-purpose HTML tools, we can perhaps create a single user interface and experience that works for all. That approach not only feels like a more equal experience for everyone but also removes margin for error and code maintenance overhead.

With that said, hiding is sometimes necessary and here are the most common categories:

  1. Hide from everyone
  2. Hide visually (i.e. from sighted people)
  3. Hide from Assistive Technologies (such as screen readers)

Hide from everyone

We usually hide an element from everyone because the hidden element forms part of a component’s interface design. Typical examples are tab panels, off-screen navigation, and modal dialogues that are initially hidden until an event occurs which should bring them into view. Initially these elements should be inaccessible to everyone but after the trigger event, they become accessible to everyone.

Implementation involves using JavaScript to toggle an HTML attribute or class on the relevant element.

For basic, non-animated show-and-hide interactions you can either:

  1. toggle a class which applies display: none in CSS; or
  2. toggle the boolean hidden attribute, which has the same effect but is native to HTML5.

Both options work well but for me using the hidden attribute feels a little simpler and more purposeful. My approach is to ensure resilience by making the content available in the first instance in case JavaScript should fail. Then, per Inclusive Components’ Tabs example, JavaScript applies both the “first hide” and all subsequent toggling.
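As a sketch of that pattern (the element names and wiring here are hypothetical, not Inclusive Components’ exact code), JavaScript applies both the “first hide” and all subsequent toggling:

```javascript
// Wire up a disclosure. The panel's content stays visible
// if this script never runs, which is the resilient default.
function wireDisclosure(button, panel) {
  // The "first hide" happens here, in JS, not in the HTML
  panel.hidden = true;
  button.setAttribute('aria-expanded', 'false');

  button.addEventListener('click', () => {
    const expanded = button.getAttribute('aria-expanded') === 'true';
    button.setAttribute('aria-expanded', String(!expanded));
    panel.hidden = expanded; // hide when it was expanded, show otherwise
  });
}
```

Usage might look like `wireDisclosure(document.querySelector('#menu-button'), document.querySelector('#menu'))`.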

Here’s some CSS that supports both methods. (The hidden attribute doesn’t strictly need this but it’s handy to regard both options as high-specificity, “trump-everything-else” overrides.)

.u-hidden-from-everyone,
[hidden] {
  display: none !important;
}

For cases where you are animating or sliding the hidden content into view, toggle the application of CSS visibility: hidden because this also removes the element from the accessibility tree but unlike display, can be animated. Note that with visibility: hidden the physical space occupied by the element is still retained, therefore it’s best to pair it with position: absolute or max-height: 0px; overflow: hidden to prevent that “empty space while hidden” effect. For example:

.off-canvas-menu {
  visibility: hidden;
  position: absolute;
  transform: translateX(-8em);
  transition: 250ms ease-in;
}

[aria-expanded="true"] + .off-canvas-menu {
  visibility: visible;
  transform: translateX(0);
  transition: visibility 50ms, transform 250ms ease-out;
}

Hide visually (i.e. from sighted people)

We’ll usually want to hide something visually (only) when its purpose is solely to provide extra context to Assistive Technologies. An example would be appending additional, visually-hidden text to a “Read more” link such as “about Joe Biden” since that would be beneficial to screen reader users.

We can achieve this with a visually-hidden class in CSS and by applying that class to our element.

.visually-hidden:not(:focus):not(:active) {
  clip: rect(0 0 0 0);
  clip-path: inset(50%);
  height: 1px;
  overflow: hidden;
  position: absolute;
  white-space: nowrap;
  width: 1px;
}

Essentially this hides whatever it’s applied to unless it’s a focusable element currently being focused by screen reader controls or the tab key, in which case it is revealed.

Note that when adding to link text to make it more accessible, always append rather than inserting words into the middle of the existing text. That way, you avoid solving an accessibility problem for one group while creating a new one for another (users of Dragon speech recognition software).

Visually hidden until focused

There are other CSS approaches to hiding visually. One approach is to not only add position: absolute (removing the element from the document flow) but also position it off-screen with left: -100vw or similar. The use case for this approach might be when you want your visually hidden element to support being revealed and for that reveal to occur via a transition/animation from off-screen into the viewport. See Scott O’Hara’s off screen skip-links example.

Hide from Assistive Technologies (such as screen readers)

We sometimes hide visual elements from Assistive Technologies because they are decorative and have accompanying text, for example a “warning” icon with the text “warning” alongside. If we did not intervene then Assistive Technologies would read out “warning” twice which is redundant.

To achieve this we can apply aria-hidden="true" to our element so that screen readers know to ignore it. In the following examples we hide the SVG icons within buttons and links, safe in the knowledge that the included “Search” text is providing each interactive element with its accessible name.

<button>
  <svg aria-hidden="true" focusable="false"><!--...--></svg>
  Search
</button>

<a href="/search">
  <svg aria-hidden="true" focusable="false"><!--...--></svg>
  Search
</a>

Reference: Contextually Marking up accessible images and SVGs

A Guide To The State Of Print Stylesheets In 2018 - Smashing Magazine

Rachel Andrew explains how to write CSS for a nicely optimised printed page that uses a minimum of ink and paper and ensures that content is easy to read.

I really like the section on Workflow that compares the options of

  1. organising your print styles as a separate stylesheet loaded via a <link> in the <head> (this is the “traditional” approach); versus
  2. using @media print {} in your main styles, which opens up the opportunity to locate each component’s print styles beside its main styles.

As Rachel notes, the first option might feel tidy (and keeping print styles separate reduces the size of your main stylesheet) however on larger sites this approach can lead to print styles being “out of sight, out of mind” and poorly maintained.

I think there will always be a need for 80% global print styles, supplemented by a sprinkling of component-specific print styles (and maybe even the odd utility class). It’s just a case of how you organise this.

I had an idea that you could maybe put the global print styles in a separate sheet and locate the component print styles beside components in the main stylesheet. However, because we tend to want global print styles to add to and override main styles, you’d want the print_globals file coming after the main styles, and that then screws up the order of the component-specific print styles. When @layer with <link> is supported perhaps this could all work! Until then, the future of print CSS for large design systems is perhaps Option 2: colocate print styles with screen styles.
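Colocated (Option 2), a component’s print styles might look like this (the component name is hypothetical):

```css
.site-nav {
  display: flex;
}

/* The same component's print styles live right beside its screen styles */
@media print {
  .site-nav {
    display: none; /* navigation is no use on paper */
  }
}
```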

Better Alt Text

I’ve just read The A11Y Project’s page on alt text.

As most of us know, the HTML alt attribute is for providing “alternate text” descriptions of images to help ensure people do not miss out on information conveyed by graphics. This can help people using assistive technology such as screen readers, and in situations where images are slow or fail to load.

The article made some interesting points and even though I’ve been using the alt attribute for years I found three common cases where I could improve how I do things.

Avoid starting with “photo of…”

Don’t begin alternative text with “photo of…” or “picture of…”. Assistive technologies already indicate the role of the element as an “image” or “graphic”. Redundancy makes for a poor user experience.

Avoid including the word “logo” in logo images

If the image is a company’s logo, the alt should be the company’s name. Adding the word “logo” as part of the alternative text is neither necessary nor useful. (One thing I found helpful here is to think of the way I, as a sighted person, perceive Apple’s logo. I just think “Apple”, not “Apple’s logo”, so I guess the same principle applies.)

If using an image multiple times on the page, tailor the alt text

Using an image several times in a website doesn't necessarily mean the alt attribute should be the same for each instance. For example, when using a logo in the website’s header this often doubles as a link back to the home page. In this example, the alt would be most useful as “Apple - Homepage”. If that same logo were used in the footer of the site alongside the text “Apple, copyright 20XX”, then the logo should have an empty alt (alt="") so as to avoid creating a redundant announcement of the company’s name.
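Putting those last two points together (the file paths are hypothetical):

```html
<!-- Header: the logo doubles as a link home -->
<a href="/">
  <img src="/img/logo.svg" alt="Apple - Homepage">
</a>

<!-- Footer: the company name appears in adjacent text, so the logo is decorative -->
<img src="/img/logo.svg" alt="">
<p>Apple, copyright 20XX</p>
```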

Setting an accessibility standard for a UK-based commercial website

When advocating accessible web practices for a commercial website, the question of “what does the law require us to do?” invariably arises.

The appropriate answer to that question should really be that it doesn’t matter. Regardless of the law there is a moral imperative to do the right thing unless you are OK with excluding people, making their web experiences unnecessarily painful, and generally flouting the web’s founding principles.

However as Web Usability’s article What is the law on accessibility? helpfully advises, in the UK the legal situation is as follows:

“The accessibility of a UK web site is covered by the Equality Act 2010” (which states that) “Site owners are required to make ‘reasonable adjustments’ to make their sites accessible to people with disabilities”. While “there is no legal precedent about what would constitute a ‘reasonable adjustment’”, “given that the Government has adopted the WCAG 2.1 level AA as a suitable standard for public sector sites and it is more broadly recognised as a ‘good’ approach, any site which met these guidelines would have a very strong defence against any legal action.”

So, WCAG 2.1 Level AA is the sensible accessibility standard for your commercial UK-based website to aim for.

While not aimed specifically at the UK market, deque.com’s article What to look for in an accessibility audit offers similar advice:

The most common and widely-accepted standard to test against is WCAG, a.k.a. Web Content Accessibility Guidelines. This standard created by the World Wide Web Consortium (W3C) defines technical guidelines for creating accessible web-based content.

WCAG Success Criteria are broken down into different “levels of conformance”: A (basic conformance), AA (intermediate conformance), and AAA (advanced conformance). The current standard for compliance is both WCAG 2.1 Level A and AA.

If you don’t have specific accessibility regulations that apply to your organization but want to avoid legal risk, WCAG 2.1 A and AA compliance is a reasonable standard to adopt.

Additional references

itty.bitty

Here’s an interesting tool for creating and sharing small-ish web pages without having to build a website or organise hosting.

itty.bitty takes html (or other data), compresses it into a URL fragment, and provides a link that can be shared. When it is opened, it inflates that data on the receiver’s side.

While I find this idea interesting, I’m not yet 100% sure how or when I’ll use it! I’m sure it’ll come in handy at some point, though.

Here’s my first “itty bitty” page, just for fun.

(via @chriscoyier)

When there is no content between headings

Hidde de Vries explains why an HTML heading should never be immediately followed by another.

When you use a heading element, you set the expectation of content.

I have always prided myself on using appropriate, semantic HTML, however it’s recently become clear to me that there’s one thing I occasionally do wrongly. Sometimes I follow a page’s title (usually an h1 element) with a subtitle which I mark up as an h2. I considered this the right element for the job and my choice had nothing to do with aesthetics.

However a recent article on subtitles by Chris Ferdinandi and now this article by Hidde have made me reconsider.

HTML headings are essentially ”names for content sections”. On screen readers they operate like a Table of Contents – one can use them to navigate to content.

Therefore I now reckon I should only use a hx heading when it will be immediately followed by (non-heading) content – paragraphs and so on – otherwise I should choose a different element.

I should probably mark up my subtitles as paragraphs.

The difference between aria-label and aria-labelledby (Tink - Léonie Watson)

The aria-label and aria-labelledby attributes do the same thing but in different ways. Sometimes the two attributes are confused and this has unintended results. This post describes the differences between them and how to choose the right one.

The key takeaways for me were:

  • Many HTML elements have an accessible name (which we can think of as its “label”) and this can be derived from the element’s content, an attribute, or from an associated element;
  • for aria-labelledby, reference the id of another element; that element’s text then becomes your element’s accessible name;
  • use native HTML over ARIA where possible, but when you need ARIA it’s better to reuse than duplicate so if an appropriate label already exists in the document use aria-labelledby; otherwise use aria-label;
  • an ARIA attribute will trump any other accessible name (such as the element’s content)
  • there are some elements on which these ARIA attributes do not work consistently so check these before using.
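A minimal sketch of the two approaches (the ids and text are hypothetical):

```html
<!-- aria-labelledby: reuse text that already exists in the document -->
<h2 id="billing-heading">Billing address</h2>
<form aria-labelledby="billing-heading">…</form>

<!-- aria-label: no suitable visible text exists, so supply a name directly -->
<nav aria-label="Breadcrumb">…</nav>
```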

Create a line break while maintaining inline status (on Piccalilli)

Sometimes you want to create a line break after an inline element, while retaining that inline element’s inline status.

A lovely trick from Andy Bell for breaking after an inline element (such as a form label) using a pseudo-element and the white-space property, so that we can avoid setting the element to display: block (thereby becoming full-width etc) when we don’t want that.

Here’s my own codepen for posterity.
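For posterity, the heart of the trick is a pseudo-element that renders a line feed; a minimal sketch (the label selector is just an example, see Andy’s post for the full version):

```css
label::after {
  content: "\A";    /* a line feed character… */
  white-space: pre; /* …which this makes render as an actual break */
}
```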

Daniel Post shared a really cool performance-optimisation trick for Eleventy on Twitter the other day. When statically generating your site you can loop through your pages and, for each, use PurgeCSS to find the required CSS, then inline that into the <head>. This way, each page contains only the CSS it needs and no more!

Check out the code.

I’ve just installed this on my personal site. I was already inlining my CSS into the <head> but the promise of only including the minimum CSS that each specific page needs was too good to resist.

Turned out it was a breeze to get working, a nice introduction to Eleventy transforms, and so far it’s working great!

Thoughts on inline JavaScript event handlers in the <head>

I’ve been thinking about Scott Jehl’s “simplest way to load external CSS asynchronously” technique. I’m interested in its use of an inline (onload) event handler for running JavaScript-based enhancements in the <head>, in the context of some broader ruminations on how best to progressively enhance UI elements with JavaScript (for example adding toggle show/hide) without causing layout jank.

One really interesting aspect of using inline event handlers to apply enhancements was highlighted by Chris Ferdinandi today: as JavaScript goes, it’s pretty resilient.

Because we’re dealing with an HTML element directly in the document, and because the relevant JS is inline on that element and not dependent on any external files, the only case where the JS won’t run is if someone has JS completely turned off – a sub-1% proportion. The other typical JavaScript resilience pitfalls – such as network connections timing out, CDN failure and JS errors elsewhere blocking your code from running – simply don’t apply here.

Inclusive Datepicker (by Tommy Feldt)

A human-friendly datepicker. Supports natural language manual input through Chrono.js. Fully accessible with keyboard and screen reader.


Sign-in form best practices (on web.dev)

Sam Dutton advises how to use cross-platform browser features to build sign-in forms that are secure, accessible and easy to use.

The tips of greatest interest to me were:

  • on using autocomplete="new-password" on registration forms and autocomplete="current-password" on sign-in forms to tap into browser password suggestion and password manager features;
  • on how best to provide “Show Password” functionality; and
  • on using aria-describedby when providing guidance on password rules.
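The first tip might look like this on a sign-in form (field names and the action URL are hypothetical):

```html
<form action="/signin" method="post">
  <label for="email">Email</label>
  <input id="email" name="email" type="email" autocomplete="username" required>

  <label for="current-password">Password</label>
  <input id="current-password" name="password" type="password"
         autocomplete="current-password" required>

  <button>Sign in</button>
</form>
```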

Three CSS Alternatives to JavaScript Navigation (on CSS-Tricks)

In general this is a decent article on non-JavaScript-based mobile navigation options, but what I found most interesting is the idea of having a separate page for your navigation menu (at the URL /menu, for example).

Who said navigation has to be in the header of every page? If your front end is extremely lightweight or if you have a long list of menu items to display in your navigation, the most practical method might be to create a separate page to list them all.

I also noted that the article describes a method where you can “spoof” a slide-in hamburger menu without JS by using the checkbox hack. I once coded a similar “HTML and CSS -only” hamburger menu, but opted instead to use the :target pseudo-class in combination with the adjacent sibling selector, as described by Chris Coyier back in 2012.

The Simplest Way to Load CSS Asynchronously (Filament Group)

Scott Jehl of Filament Group demonstrates a one-liner technique for loading external CSS files without them delaying page rendering.

While this isn’t really necessary in situations where your (minified and compressed) CSS is small, say 14k or below, it could be useful when you’re working with large CSS files and want to deliver critical CSS separately and the rest asynchronously.

Today, armed with a little knowledge of how the browser handles various link element attributes, we can achieve the effect of loading CSS asynchronously with a short line of HTML. Here it is, the simplest way to load a stylesheet asynchronously:

<link rel="stylesheet" href="my.css" media="print" onload="this.media='all'">

Note that if JavaScript is disabled or otherwise not available your stylesheet will only load for print and not for screen, so you’ll want to follow up with a “normal” (non-print-specific) stylesheet within <noscript> tags.

Color Theme Switcher (on mxb.dev)

Max shows us how to build a colour theme switcher to let users customise your website. He uses a combination of Eleventy, JSON, Nunjucks with macros, a data attribute on the html element, CSS custom properties and a JavaScript based switcher.

Thanks, Max!

Sass and clamp (on Adactio: Journal)

Given what we can now do with CSS, do we still need Sass?

Sass was the hare. CSS is the tortoise. Sass blazed the trail, but now native CSS can achieve much the same result.

Jeremy’s post starts by talking about the new CSS clamp function and how it can be used for scalable type, then veers into a question of whether we still need Sass or if modern CSS now covers our needs.

This is really interesting and definitely gives me pause to consider whether I can simplify my development stack by removing a tool.

However I guess one reason (not mentioned in Jeremy’s post) you might want Sass is that many of the CSS functions which provide similar effects to mixins, variables etc are currently only supported in the most modern, standards-compliant browsers. Sass can pre-process its variables and mixins into older, more broadly-supported CSS. So choosing the pure CSS, processor-free option within a progressive enhancement oriented approach might mean that your broadly-supported baseline is more basic than it would be by using Sass. That’s the sort of decision I could take fairly lightly for my personal website, but I could see it being less palatable for stakeholders working on larger sites.

For example, if your site needs to support IE11 and theming which includes custom colour schemes, unfortunately you don’t have the luxury of putting all your eggs in the native CSS custom properties basket.

Best practice techniques for SVG Icons

Here’s how I’d handle various common SVG icon scenarios with accessibility in mind.

Just an icon

So this is an icon that’s not within a link or button and has no adjacent text. This might be, for example, an upward-pointing arrow icon in a <td> in a “league table” where the arrow is intended to indicate a trend such as “The figure has increased” or “Moving up the table”.

The point here is that in this scenario the SVG is content rather than decoration.

<svg
  role="img"
  focusable="false"
  aria-labelledby="arrow-title"
>
  <title id="arrow-title">Balance has increased</title>
  <path>…</path>
</svg>

Note: Fizz Studio’s article Reliable valid SVG accessibility suggests that the addition of aria-labelledby pointing to an id for the <title> (as Léonie originally recommended) is no longer necessary. That’s encouraging, but as it does no harm to keep it I think I’ll continue to include it for the moment.

The same article also suggests that maybe we shouldn’t use the SVG <title> element (and should provide an accessible name with aria-label instead) because <title> produces a potentially undesirable tooltip, much like the HTML title attribute does. To be honest I’m OK with this and don’t see it as a problem; besides, as I mention later, I’ve heard about arguably worse problems with aria-label, so I’ll stick with <title>.

Button (or link) with icon plus text

This is easy. Hide the icon from Assistive Technology using aria-hidden to avoid unnecessary repetition and rely on the text as the accessible name for the button or link.

<button>
  <svg aria-hidden="true" focusable="false"><!--...--></svg>
  Search
</button>

<a href="/search">
  <svg aria-hidden="true" focusable="false"><!--...--></svg>
  Search
</a>

Button (or link) with icon alone

In this case the design spec is for a button with no accompanying text, therefore we must add the accessible name for Assistive Technologies ourselves.

<button>
  <svg focusable="false" aria-hidden="true"><!--...--></svg>
  <span class="visually-hidden">Search</span>
</button>

<a href="/search">
  <svg focusable="false" aria-hidden="true"><!--...--></svg>
  <span class="visually-hidden">Search</span>
</a>

The reason I use text that’s visually-hidden using CSS for the accessible name rather than adding aria-label on the button or link is because I’ve heard that the former option is more reliable. In greater detail: aria-label is announced inconsistently and not always translated.

References

Font Match

A font pairing app that helps you match fonts – useful for pairing a webfont with a suitable fallback. You can place the fonts on top of each other, side by side, or in the same line. You can adjust your fallback font’s size and position to get a great match.

Font style matcher

If you’re using a web font, you’re bound to see a flash of unstyled text (or FOUT) between the initial render of your websafe font and the webfont that you’ve chosen. This usually results in a jarring shift in layout, due to sizing discrepancies between the two fonts. To minimize this discrepancy, you can try to match the fallback font and the intended webfont’s x-heights and widths. This tool helps you do exactly that.


Debouncing vs. throttling with vanilla JS (on Go Make Things)

Chris explains how debouncing and throttling are two related but different techniques for improving performance and user experience when working with frequently invoked JavaScript event handlers.

With throttling, you run a function immediately, then wait a specified amount of time before running it again. Any additional attempts to run it before that time period is over are ignored.

With debouncing, after the relevant event fires a specified time period must pass uninterrupted in order for your function to run. When the time period has passed uninterrupted, that last attempt to run the function is the one that runs, with any previous attempts ignored.

You might debounce code in event handlers for scroll events to run when the user is completely done scrolling so as not to negatively affect browser performance and user experience.

For interactions that update the UI, throttling might make more sense, so that the updates run at predictable intervals.
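
The two patterns can be sketched in a few lines of vanilla JS. This is my own minimal sketch, not Chris’s exact code, and the handler names in the usage comments are made up:

```javascript
// throttle: run the function immediately, then ignore further calls
// until `delay` milliseconds have passed
function throttle(fn, delay) {
  let last = 0;
  return function (...args) {
    const now = Date.now();
    if (now - last >= delay) {
      last = now;
      fn.apply(this, args);
    }
  };
}

// debounce: only run the function once `delay` milliseconds have
// passed uninterrupted; only the last attempt runs
function debounce(fn, delay) {
  let timer;
  return function (...args) {
    clearTimeout(timer);
    timer = setTimeout(() => fn.apply(this, args), delay);
  };
}

// Example wiring (handler names are illustrative):
// window.addEventListener("scroll", throttle(updateScrollIndicator, 100));
// window.addEventListener("scroll", debounce(saveScrollPosition, 250));
```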

NB I’ve previously found Trey Huffine’s debounce tutorial and example function really useful, too.

Striking a Balance Between Native and Custom Select Elements (on CSS-Tricks)

We’re not going to try to replicate everything that the browser does by default with a native select element. We’re going to literally use a select element when any assistive tech is used. But when a mouse is being used, we’ll show the styled version and make it function as a select element.

This custom-styled select solution satisfies those who insist on a custom component but retains all the built-in accessibility we get from native form controls. I also really like the use of a @media (hover: hover) media query to detect an environment with hover (such as a computer with a mouse rather than a mobile browser on a handheld device).

How to use npm as a build tool

Keith Cirkel explains how using npm to run the scripts field of package.json is a great, simple alternative to more complex build tools. The article is now quite old but because it contains so many goodies, and since I’ve been using the approach more and more (for example to easily compile CSS on my personal website), it’s definitely worth bookmarking and sharing.

npm’s scripts directive can do everything that these build tools can, more succinctly, more elegantly, with less package dependencies and less maintenance overhead.


It’s also worth mentioning that (as far as I can tell) Yarn provides the same facility.

Related references:

JavaScript Arrow Functions

JavaScript arrow functions are one of those bits of syntax about which I occasionally have a brain freeze. Here’s a quick refresher for those moments.

Differences between arrow functions and traditional functions

Arrow functions are shorter than traditional function syntax.

They don’t bind their own this value. Instead, the this value of the scope in which the function was defined is accessible. That makes them poor candidates for methods, since this won’t be a reference to the object the method is defined on. However, it makes them good candidates for everything else, including use within methods: unlike standard functions, an arrow function inside a method can refer to (for example) this.name just like its parent method, because it has no overriding this binding of its own.
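
To make that concrete, here’s a small sketch (the object and property names are made up):

```javascript
const team = {
  name: "Design System",
  members: ["Ann", "Bo"],

  // A traditional function works as a method: `this` is `team`
  describe: function () {
    // The arrow function passed to map() inherits `this` from
    // describe(), so `this.name` is still the team’s name
    return this.members.map((member) => `${member} is on ${this.name}`);
  },

  // An arrow function makes a poor method: it has no own `this`,
  // so `this` here is NOT `team` and doesn’t see `name`
  badDescribe: () => `Team: ${this && this.name}`,
};

team.describe(); // ["Ann is on Design System", "Bo is on Design System"]
```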

TL;DR: typical usage

const doStuff = (foo) => {
  // stuff that spans multiple lines
};

// short functions
const add = (num1, num2) => num1 + num2;

Explainer

// Traditional Function
function (a) {
  return a + 100;
}

// Arrow Function Breakdown

// 1. Remove "function", place => between argument and opening curly
(a) => {
  return a + 100;
}

// 2. Remove braces and word "return". The return is implied.
(a) => a + 100;

// 3. Remove the argument parentheses
a => a + 100;

References

How to optimise performance when using Google-hosted fonts (on CSS Wizardry)

A combination of asynchronously loading CSS, asynchronously loading font files, opting into FOFT, fast-fetching asynchronous CSS files, and warming up external domains makes for an experience several seconds faster than the baseline.

Harry Roberts suggests that, while self-hosting your web fonts is likely to be the overall best solution to performance and availability problems, we’re able to design some fairly resilient measures to help mitigate a lot of these issues when using Google Fonts.

Harry then kindly provides a code snippet that we can use in the <head> of our document to apply these measures.

Modern CSS Solutions

Modern CSS Solutions for Old CSS Problems

Stephanie Eckles with a beautifully presented series of articles on how to use modern CSS to tackle some of the enduring challenges of web development including dropdown navigation, centring and styling buttons.

CSS Section Separator Generator (on wweb.dev)

A handy tool that generates the required HTML and CSS for various section separator effects (including diagonal lines, spikes, and waves) by cleverly manipulating backgrounds and generated content.

I have to reluctantly agree on this one. I’ve interviewed quite a few candidates for “front-end developer” (or similarly named) positions over recent years, and the recurring pattern is that they are strong on JavaScript (though not necessarily on the right time to use it) and weak on HTML, CSS and the “bigger picture”.

grep.app

grep.app searches code from over a half million public repositories on GitHub.

This could be useful when you’re struggling to use a certain new CSS property, or npm package, and want to see how other programmers are using it.

4 Ways to Animate the Color of a Text Link on Hover | CSS-Tricks

Let’s create a pure CSS effect that changes the color of a text link on hover – but slide that new color in instead of simply swapping colors.

Katherine’s post explores four different techniques to achieve the effect, and their comparative pros and cons with regard to accessibility, performance, and browser support.

Technique 4, which uses a CSS transform, seems the most flexible and best-performing and has the best cross-browser support. However, because it requires adding a semantically redundant <span> into the anchor, I would use it sparingly rather than on all links by default.

Screen - Work together like you're in the same room

Fast screen sharing with multiplayer control, drawing & video.

An application which allows collaborating (and drawing) on files and could be useful for pair programming. Free during the current COVID-19 situation.

(via @lylo)

Block Links: A tricky UI Problem

You have a “card” component which includes a heading, some text, an image, and a link to the full article, and it’s working great. Then along comes a UX requirement that the full card (not just the button or link) should be clickable. This is where things get complicated.

TL;DR

I was recently faced with this challenge while building a component at work and opted to implement a tailored version of Heydon Pickering’s Redundant Click Trick. This felt like the best approach, or perhaps more accurately “the lesser of three evils”. I’ll be monitoring how that performs, but in light of the knowledge and experience gained carrying out this task I’m also starting to think that – like Chris Coyier recently suggested – maybe full-card clickable regions are a bad idea.

Setting the Scene

Let’s say our starting HTML is this:

<div class="card">
  <h2>Card Title</h2>
  <img src="/path/to/img.jpg" />
  <p>This is the body copy for the card. It consists of a few sentences.</p>
  <a href="/">Read more</a>
</div>

And the requirement we’ve been given is to make the whole card clickable rather than just the “Read more” link.

Option 1: Stuff everything inside an anchor

Here’s the thing – since the dawn of HTML5 we’ve been able to wrap the inline anchor (<a>) element around block-level content such as headings, paragraphs, and <div>s… so isn’t the answer just to do that?

<a href="/">
  <div class="card">
    <h2>Card Title</h2>
    <img src="/path/to/img.jpg" />
    <p>This is the body copy for the card. It consists of a few sentences.</p>
  </div>
</a>

Well, as with many HTML challenges, just because you can do something doesn’t mean you should. I always had a nagging doubt about stuffing all that disparate content inside a single anchor, and Adrian Roselli has recently confirmed that for screen reader users this approach is harmful.

Perhaps the worst thing you can do for a block link is to wrap everything in the <a href>… for a screen reader user the entire string is read when tabbing through controls… taking about 25 seconds to read before announcing it as a link.

Furthermore, images nested in this way are not clearly announced as they normally would be.

So if you care about the user experience for those people, this feels like a no-no.

Option 2: Stretch a standard anchor using pseudo-content

An alternate approach that’s gained traction over the last couple of years involves leaving the anchor or button in its initial position within the card (thereby avoiding the above-mentioned accessibility problem) and using pseudo-content to stretch it to cover the entire card. This CSS-only trick involves setting the card to position: relative, then giving the anchor (or button) ::after pseudo-content and absolutely positioning it to the card’s four corners. This makes the whole card clickable like a button.

The problem with this approach is that any text in the card is no longer selectable.

Some might say that this is OK. Personally I feel that it is a fundamental usability requirement that text on a web page be selectable. Not being able to do so calls to mind the bad old days before web fonts, when we used images for headings, and I like to think we’ve evolved from those kinds of practices. Also, any claim from us developers and designers that “losing the ability to select text is OK” lacks validity because we are biased: we’re happy to justify taking away something fundamental from our users because we’re more concerned with getting a (frankly non-essential) feature over the line.

If we don’t like this compromise but are still determined to make the full card clickable, there’s one further option.

Option 3: The Redundant Click Trick

This technique, conceived by Heydon Pickering, uses JavaScript rather than CSS to make the card clickable.

Essentially we add an event listener for clicks on the card and, when one is detected, trigger a faux click on the inner anchor or button.

One challenge inherent in this approach is that a user attempting to select text would unintentionally trigger our faux link click. However we can again use JavaScript (using the onmousedown and onmouseup events) to detect the length of their press to infer whether they are selecting text or clicking, then take appropriate action.
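
Here’s a rough sketch of that timing check, adapted from Heydon’s approach. The 200ms threshold and the selectors are my own illustrative choices, not values from his article:

```javascript
// Distinguish a click from a text selection by how long the mouse
// button was held down (threshold in milliseconds is a judgement call)
function isQuickPress(downTime, upTime, threshold = 200) {
  return upTime - downTime < threshold;
}

// Wire-up (guarded so the pure function above also runs outside a browser)
if (typeof document !== "undefined") {
  document.querySelectorAll(".card").forEach((card) => {
    const link = card.querySelector("a");
    let downAt = 0;
    card.addEventListener("mousedown", () => {
      downAt = Date.now();
    });
    card.addEventListener("mouseup", () => {
      // Short press: treat as a click and trigger the faux link click.
      // Long press: assume the user is selecting text, so do nothing.
      if (isQuickPress(downAt, Date.now())) link.click();
    });
  });
}
```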

The pros of this approach are that we avoid the screen reader problems and the inability to select text.

The cons are i) that it requires a more complicated, JavaScript-based approach; and ii) that the need for a “check how long the mouse has been pressed down” part isn’t ideal.

With this approach, if analytics tracking is part of your mix, I’d make sure to check that it works as expected across browsers and devices.

Summing up

So there we go – three ways to achieve a “block link” or button. But given the compromises they involve, perhaps the question should be – is it worth it? And given the tools we currently have available, I lean towards “no”.

References

Block Links, Cards, Clickable Regions etc by Adrian Roselli.

Cards by Heydon Pickering (in Inclusive Components).

Block Links are a pain and maybe just a bad idea by Chris Coyier on CSS-Tricks.

Fixing Github Command Line Authentication Issues

On at least two occasions I’ve found myself scratching my head when an attempted push to a newly-created Github repo is met with authentication failures, despite me being sure I’m using the correct credentials.

Here’s the lowdown on the issue and how to resolve it.

Essentially the problem relates to Github expecting a personal access token rather than a password (although it provides no helpful hints that this is the case).

This might be because your Github account has 2FA enabled, and/or for security purposes because your account is part of an organisation that uses SAML single sign-on (SSO).

In my case, I had previously created a personal access token with the requisite privileges (in my Github account’s Developer Settings > Tokens section) for the purposes of API access, so I was able to just reuse that. However, if need be I could have created a new one.

Thanks to Ginny Fahs who had the same problem and documented her solution.

Github’s Help page Creating a personal access token for the command line is also useful.

My Codepen Cheatsheet

I’m finding Codepen to be more and more valuable not only for testing out new code and ideas, but also – when working on large applications – as a time-saving rapid prototyping environment which sidesteps the overhead of back-end set-up. Here are some tips which I’ve found useful, for future reference.

Control the Editor View layout

Append /left/, /right/, or /top/ to the URL to set the editor layout.

Append ?editors=1111 (change numbers as appropriate) to the URL to set which panels are maximised (in order of HTML, CSS, JavaScript, and console).

For example:

https://codepen.io/fuzzylogicx/pen/BEEYQL/left/?editors=1100

References

My VS Code Cheatsheet

Here’s a list of useful (Mac-based) VS Code tips for my reference and yours.

Use the Command Palette

Command-Shift-P

Then type your search term, for example “settings”.

Settings

My preferences (in settings.json or via Preferences→Settings):

{
  "workbench.editor.showTabs": true,
  "editor.formatOnSave": true,
  "explorer.confirmDragAndDrop": false,
  "editor.minimap.enabled": false,
  "extensions.ignoreRecommendations": false,
  "explorer.compactFolders": false,
  "explorer.autoReveal": false,
  "editor.accessibilitySupport": "off",
  "ruby.codeCompletion": "rcodetools",
  "emmet.includeLanguages": {
    "nunjucks": "html",
    "erb": "html"
  },
  "emmet.triggerExpansionOnTab": true
}

Note: the Emmet ones are really useful for code autocompletion.

Additional Emmet Settings

To add a keyboard shortcut for adding an arbitrary wrapper element (say, div.wrap) around some selected code:

Open the Command Palette then search “emmet wrap”. When you see the option “Emmet: wrap with abbreviation”, click the settings icon beside it. Enter your preferred keyboard shortcut. I currently use:

Command-Shift-A

Open current terminal directory in VS Code

code .

Toggle Terminal

Ctrl-`

Toggle sidebar visibility

Command-B

Edit multiple rows simultaneously

Select one instance of the text that appears in multiple locations. Use Command-D to select all, then edit.

Open file to side (for side-by-side editing)

Option–click on a file in the Explorer.

You Don't Need

A nice list of tips and tools on how to use simpler browser standards and APIs to avoid the added weight of unnecessary JavaScript and libraries.

Lodash, Moment and other similar libraries are expensive and we don’t always need them. This Github repo contains a host of nice tips, snippets and code-analysing tools.

One cautionary note regarding the idea of replacing JS with CSS: although the idea of using CSS rather than JavaScript for components like tabs and modals seems nice at first, it doesn’t properly consider that we often need JS for reasons of accessibility, in order to apply the correct aria attributes when the state of a UI component is modified.

Via Will Matthewson at work (FreeAgent) during our group conversation on JavaScript strategy.

Testing Stimulus Controllers

Stimulus JS is great but doesn’t provide any documentation for testing controllers, so here are some notes I’ve picked up.

Required 3rd-party libraries

Basic Test

// hello_controller.test.js
import { Application as StimulusApp } from "stimulus";
import HelloController from "path/to/js/hello_controller";

describe("HelloController", () => {
  beforeEach(() => {
    // Insert the HTML and register the controller
    document.body.innerHTML = `
      <div data-controller="hello">
        <input data-target="hello.name" type="text">
        <button data-action="click->hello#greet">
          Greet
        </button>
        <span data-target="hello.output">
        </span>
      </div>
    `;
    StimulusApp.start().register('hello', HelloController);
  });

  it("inserts a greeting using the name given", () => {
    const helloOutput = document.querySelector("[data-target='hello.output']");
    const nameInput = document.querySelector("[data-target='hello.name']");
    const greetButton = document.querySelector("button");
    // Change the input value and click the greet button
    nameInput.value = "Laurence";
    greetButton.click();
    // Check we have the correct greeting
    expect(helloOutput).toHaveTextContent("Hello, Laurence!");
  });
});

RegExr: Learn, Build, and Test RegEx

RegExr is an online tool to learn, build, & test Regular Expressions.

This handy, interactive tool is a bit like Postman but for RegEx. You can create RegEx patterns and save them for easy retrieval later.

I also like the way you can start by making a list of example text strings you want your pattern to i) match and ii) not match before starting work on your RegEx pattern, adopting a sort-of “Test Driven RegEx” approach.

Async and Await

My notes and reminders for handling promises with async and await In Real Life.

As I see it, the idea is to switch to using await when working with promise-returning, asynchronous operations (such as fetch) because it lends itself to more flexible and readable code.

async functions

The async keyword, when used before a function declaration (like so: async function f()):

  • defines an asynchronous function i.e. a function whose processes run after the main call stack and doesn’t block the main thread.
  • always returns a promise. (Its return value is implicitly wrapped in a resolved promise.)
  • allows us to use await.

The await operator

  • use the await keyword within async functions to wait for a Promise.
  • Example usage: const users = await fetch('/users').
  • It makes the async function pause until that promise settles and returns its result.
  • It makes sense that it may only be used inside async functions so as to scope the “waiting” behaviour to that dedicated context.
  • It’s a more elegant syntax for getting a promise’s result than promise.then.
  • If the promise resolves successfully, await returns the result.
  • If the promise rejects, await throws the error, just as if there were a throw statement at that line.
  • That throw causes execution of the current function to stop (so the next statements won't be executed), with control passed to the first catch block in the call stack. If no catch block exists among caller functions, the program will terminate.
  • Given this “continue or throw” behaviour, wrapping an await in a try...catch is a really nice and well-suited pattern for including error handling, providing flexibility and aiding readability.

Here’s a try...catch -based example. (NB let’s assume that we have a list of blog articles and a “Load more articles” button which triggers the loadMore() function):

export default class ArticleLoader {

  async loadMore() {
    const fetchURL = "https://mysite.com/blog/";
    try {
      const newItems = await this.fetchArticles(fetchURL);
      // If we’re here, we know our promise fulfilled.
      // We might add some additional `await`, or just…
      // render our new HTML items into the DOM.
      this.renderItems(newItems);
    } catch (err) {
      this.displayError(err);
    }
  }

  async fetchArticles(url) {
    const response = await fetch(url, { method: "GET" });
    if (response.ok) {
      return response.text();
    }
    throw new Error("Sorry, there was a problem fetching additional articles.");
  }

  displayError(err) {
    const errorMsgContainer = document.querySelector("[data-target='error-msg']");
    errorMsgContainer.innerHTML = `<span class="error">${err}</span>`;
  }
}

Here’s another example. Let’s say that we needed to wait for multiple promises to resolve:

const allUsers = async () => {
  try {
    let results = await Promise.all([
      fetch(userUrl1),
      fetch(userUrl2),
      fetch(userUrl3)
    ]);
    // we’ll get here if the promise returned by await
    // resolved successfully.
    // We can output a success message.
    // ...
  } catch (err) {
    // an arrow function has no `this` of its own at the top level,
    // so handle the error directly rather than via this.displayError
    console.error(err);
  }
}

Using await within a try...catch is my favourite approach, but sometimes it’s not an option because we’re at the top level of the code and therefore not inside an async function. In these cases it’s good to remember that we can call an async function and work with its returned value like any promise, i.e. using then and catch.

For example:

async function loadUser(url) {
  const response = await fetch(url);
  if (response.status === 200) {
    const json = await response.json();
    return json;
  }
  throw new Error(response.status);
}

loadUser('no-user-here.json')
  .then((json) => {
    // the promise resolved, so do something with the json
    // ...
  })
  .catch((err) => {
    // then() returns a promise, so .catch() is chainable after it.
    // The promise rejected, so handle the error.
    document.body.innerHTML = `<span class="error">${err}</span>`;
  });

References:

Modest JS Works

Pascal Laliberté has written a short, free, web-based book which advocates a modest and layered approach to using JavaScript.

I make the case for The JS Gradient, a principle whereby your app can have multiple coexisting modern JS approaches, starting from the global sprinkles to spot view-models to, yes, an SPA if that’s really necessary. At each point in the gradient, you’ll see when it’s a good idea to go a step further toward heavier JavaScript, or not.

Pascal’s philosophy starts with the following ideals:

  • prefer server-generated HTML over JavaScript-generated HTML. If we need to add more complex JavaScript layers we may deviate from that ideal, but this should be the starting point;
  • we should be able to swap and replace the HTML on a page on a whim. We can then support techniques like pjax (replacing the whole body of a page with new HTML, such as with Turbolinks) and ahah (asynchronous HTML over HTTP: replacing parts of a page with new HTML), so as to make our app feel really fast while still favouring server-generated HTML;
  • favour native Browser APIs over proprietary libraries. Use the tools the browser gives us (History API, Custom Event handlers, native form elements, CSS and the cascade) and polyfill older browsers.

He argues that a single application can combine the options along the JS Gradient, but also that we need only move to a new level if and when we reach the current level’s threshold.

He defines the levels as follows:

  • Global Sprinkles: general app-level enhancements that occur on most pages, achieved by adding event listeners at document level to catch user interactions and respond with small updates. Such updates might include dropdowns, fetching and inserting HTML fragments, and Ajax form submission. This might be achieved via a single, DIY script (or something like Trimmings) that is available globally and provides reusable utilities via data- attributes;
  • Component Sprinkles: specific page component behaviour defined in individual .js files, where event listeners are still ideally set on the document;
  • Stimulus components: where each component’s HTML holds its state and defines its behaviour, with a companion controller .js file which wires up event handlers to elements;
  • Spot View-Models: using a framework such as Vue or React only in specific spots, for situations where our needs are more complex and generating the HTML on the server would be impractical. Rather than taking over the whole page, this just augments a specific page section with a data-reactive view-model.
  • A single-page application (SPA): typically an all-JavaScript affair, where whole pages are handled by Reactive View-Models like Vue and React and the browser’s handling of clicks and the back button are overridden to serve different JavaScript-generated views to the user. This is the least modest approach but there are times when it is necessary.

One point to which Pascal regularly returns is that it’s better to add event listeners to the document (with a check that the event occurred on the relevant element) than to the element itself. I already knew that event delegation is better for browser performance; however, Pascal’s point is that in the context of wanting to support swapping and replacing HTML on a whim, if event listeners are attached directly to an element and that element is replaced (or a duplicate added), we would need to keep adding more event listeners. That isn’t necessary when the listener is added to the document.
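
A minimal sketch of that document-level delegation pattern (the data-behavior selector is made up):

```javascript
// One listener on the document survives any amount of HTML swapping,
// because we inspect the event’s target at call time.
function delegatedTarget(event, selector) {
  // closest() matches the element itself or an ancestor, so clicks
  // on children of the matching element still count
  return event.target.closest ? event.target.closest(selector) : null;
}

if (typeof document !== "undefined") {
  document.addEventListener("click", (event) => {
    const toggle = delegatedTarget(event, "[data-behavior='toggle-menu']");
    if (toggle) {
      // ...respond; `toggle` may have been added to the DOM at any time...
    }
  });
}
```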

Note: Stimulus applies event handlers to elements rather than the document, however one of its USPs is that it’s set up so that as elements appear or disappear from the DOM, event handlers are automatically added and removed. This lets you swap and replace HTML as you need without having to manually define and redefine event handlers. He calls this Automated Behaviour Orchestration and notes that while adding event listeners to the document is the ideal approach, the Stimulus approach is the next best thing.

Also of particular interest to me was his Stimulus-based Shopping Cart page demo where he employs some nice techniques including:

  • multiple controllers within the same block of HTML;
  • multiple Stimulus actions on a single element;
  • controller action methods which use document.dispatchEvent to dispatch Custom Events as a means of communicating changes up to other components;
  • an element with an action which listens for the above custom event occurring on the document (as opposed to an event on the element itself).

I’ve written about Stimulus before and noted a few potential cons when considering complex interfaces, however Pascal’s demo has opened my eyes to additional possibilities.

My Ruby and Rails Cheatsheet

I’m no Ruby engineer however even as a front-end developer I’m sometimes called upon to work on Rails applications that require me to know my way around. Here are my notes and reminders.

This is not intended to be an authoritative guide but merely my notes from various lessons. It’s also a work-in-progress and a living, changing document.

Table of contents

The Rails Console

The console command lets you interact with your Rails application from the command line.

# launch a console (short version)
rails c

# long version
bundle exec rails console

Quickly find where a method is located:

Myobj.method(:methodname).source_location

# Returns a file and line which you can command-click
=> ["/local/path/to/mymodelname/model.rb", 99]

See an object’s methods:

Myobj.methods

# Search for a method using a search string
# this returns all of the object's methods whose names match /pay/
Myobj.methods.grep(/pay/)

Rspec

Run it like so:

bin/rspec spec/path/to/foo_spec.rb

# Run a particular line/method
bin/rspec spec/path/to/foo_spec.rb:195

If adding data variables to use in tests, declare them in a let block so as to keep them isolated and avoid them leaking elsewhere.

let(:example_data_obj) {
  {
    foo: "bar",
    baz: "bat",
  }
}

Note: if you need multiple data variables so as to handle different scenarios, it’s generally more readable to define the data being tested right next to the test.

Debugging

I’ll cover debugging related to more specific file types later but here’s a simple tip. You can check the value of a variable or expression at a given line in a method by:

  1. add byebug on a line of its own at the relevant place in your file, then save the file
  2. switch to the browser and reload your page
  3. in the terminal tab that’s running the Rails server (which should now be stopped at the debugging breakpoint), type the variable name of interest. You won’t see any text, but trust that your typing is taking effect. Press return
  4. you’ll now see the value of that variable as it is at the debugging breakpoint
  5. when you’re done, remove your byebug. You may need to type continue (or c for short) followed by return at the command prompt to get the server back on track

Helpers

Helper methods are there to support your views. They’re for extracting small code routines or logic that don’t belong in a controller and are too complex or reusable to be coded literally into your view. They’re reusable across views because they become available to all your views automatically.

Don’t copy and reuse method names from other helpers. You’ll get conflicts because Helpers are leaky. Instead, start your helper methods with an appropriate namespace.

Unlike object methods (e.g. myobj.do_something) helper methods (e.g. render_something) are not available for us to use in the Rails console.

Helper specs

Basic format:

# frozen_string_literal: true
require "rails_helper"

RSpec.describe Foos::BarHelper do
  let(:foo) { FactoryBot.create(:foo) }

  describe "#foo_bars_sortable_link" do
    context "when bat is not true" do
      it "does a particular thing" do
        expect(helper.foo_bars_sortable_link(foo, bat: "false")).to have_link(
          # …
        )
      end
    end

    context "when bat is true" do
      it "does something else" do
        expect(helper.foo_bars_sortable_link(foo, bat: "true")).to have_link(
          # …a different link from previous test
        )
      end
    end
  end
end

Notes:

  • start with describe: it’s a good top-level.
  • describe a helper method using hash (describe "#project_link" do)
  • Helper methods should not directly access controller instance variables because it makes them brittle, less reusable and less maintainable. If you find you’re doing that you might see it as an opportunity to refactor your helper method.

Debugging Helper methods

If you want to debug a helper method by running it and stepping through it at the command line you should lean on a test to get into the method’s context.

# in foo_helper.rb, insert above line of interest
binding.pry # or byebug

# at command line, run helper’s spec (at relevant line/assertion)
bin/rspec spec/path/to/foo_helper_spec.rb:195

# the “debugger” drops you in at the line where you added your breakpoint
# and shows the body of the function being run by the line of the spec we requested.
From: /path/to/app/helpers/foo_helper.rb:26 FooHelper#render_foo:

# you’re now debugging in the context of the running helper method…
# with the arguments passed in by the test available to manipulate.
# this means you can run constituent parts of the method at the debugger prompt…
# for example…
# run this to get back the HTML being rendered.
render_user_profile(user)

blank? versus empty?

If you want to test whether something is “empty” you might use empty? if you’re testing a string; however, it’s not appropriate for testing object properties (such as person.nickname) because objects can be nil and the nil object has no empty? method. (Run nil.empty? at the console for proof.) Instead use blank?, e.g. person.nickname.blank?.

frozen_string_literal: true

I’ll often see this at the top of files, for example Ruby classes. It’s a good practice: it prevents the string literals in the file from being mutated, which lets Ruby reuse the same frozen string objects and thereby improves performance.

# frozen_string_literal: true

Class-level methods

They’re called class-level methods because they are called on the class itself rather than on an instance; often they run in the class body as the class is being defined. They are also known as macros.

Examples include attr_reader and ViewComponent’s renders_one.

Constants

Here’s an example where we define a new constant and assign an array to it.

ALLOWED_SIZES = [nil, :medium, :large]

Interestingly while the constant cannot be redefined later—i.e. it could not later be set to something other than an array—elements can still be added or removed. We don’t want that here. The following would be better because it locks things down which is likely what we want.

ALLOWED_SIZES = [nil, :medium, :large].freeze

Symbols

They’re not variables. They’re more like strings than variables; however, strings are used to work with data whereas symbols are identifiers.

You should use symbols as names or labels for things (for example methods). They are often used to represent method & instance variable names:

# here, :title is a symbol representing the @title instance variable
attr_reader :title

# refer to the render_foo method using a symbol
Myobj.method(:render_foo).source_location

# you can also use symbols as hash keys
hash = {a: 1, b: 2, c: 3}

From what I can gather, a colon identifies something as a Symbol; the colon comes at the beginning when it’s a method name or instance variable, and at the end when it’s a hash key.

Hashes

A Hash is a dictionary-like collection of unique keys and their values. They’re also called associative arrays. They’re similar to Arrays, but where an Array uses integers as its index, a Hash allows you to use any object type.

Example:

hash = {a: 1, b: 2, c: 3}

The fetch method for Hash

Use the fetch method as a neat one-liner to get the value of a Hash key or return something (such as false) if it doesn’t exist in the hash.

@options.fetch(:flush, false)
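For instance (options here stands in for the @options hash above):

```ruby
options = { flush: true }

options.fetch(:flush, false)   # => true  (key present, its value returned)
options.fetch(:border, false)  # => false (key absent, default returned)
options[:border]               # => nil   (a plain lookup gives nil instead)
```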

ViewComponents

ViewComponents (specifically the my_component.rb file) are just controllers which do not access the database.

They use constructors like the following:

def initialize(size: nil, full_height: false, data: nil)
  super
  @size = allowed_value?(ALLOWED_CARD_SIZES, size)
  @full_height = full_height
  @data = data
end

(Note that you would never include a constructor in a Rails controller or model.)

ViewComponents in the Rails console

view = ActionView::Base.new
view.render(CardComponent.new)

Instance variables

def initialize(foo: nil)
  super
  @foo = foo
end

In the above example @foo is an instance variable. These are available to an instance of the controller and private to the component. (This includes ViewComponents, which are also controllers.)

In a view, you can refer to it using @foo.

In a subsequent method within the controller, refer to it simply as foo (given a getter such as attr_reader :foo). There’s no preceding colon (it’s not a symbol; in a conditional a symbol literal would always evaluate to true) and no preceding @.

def classes
  classes = ["myThing"]
  classes << "myThing-foo" if foo
  classes
end

Making instance variables publicly available

The following code makes some instance variables of a ViewComponent publicly available.

attr_reader :size, :full_height, :data

Using attr_reader like this automatically generates a “getter” for a given instance variable so that you can refer to that instead of the instance variable inside your class methods. My understanding is that doing so is better than accessing the instance variable directly because, among other benefits, it provides better error messages (a typo in a getter name raises NoMethodError, whereas a typo in an instance variable silently evaluates to nil). More about using attr_reader.

The ViewComponent docs also use attr_reader.

Methods

Every method returns a value. You don’t need to explicitly use return; without it, the method returns the value of the last expression it evaluates.

def hello
  "hello world"
end

Define private methods

Add private above the instance methods which are only called from within the class in which they are defined and not from outside. This makes it clear for other developers that they are internal and don’t affect the external interface. This lets them know, for example, that these method names could be changed without breaking things elsewhere.

Also: keep your public interface small.
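A minimal sketch (Greeter is invented for this example):

```ruby
class Greeter
  def greet(name)
    "#{salutation} #{name}"  # private methods are callable from inside
  end

  private

  # Internal detail: safe to rename without breaking callers
  def salutation
    "Hello"
  end
end

Greeter.new.greet("Ruth")  # => "Hello Ruth"
# Greeter.new.salutation   # would raise NoMethodError (private method)
```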

Naming conventions

The convention I have worked with is that any method that returns a boolean should end with a question mark. This saves having to add prefixes like “is-” to method names. If a method does not return a boolean, its name should not end with a question mark.
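For example (Account is invented here):

```ruby
class Account
  def initialize(balance)
    @balance = balance
  end

  # Ends in ? because it returns a boolean; no "is_" prefix needed
  def overdrawn?
    @balance.negative?
  end
end

Account.new(-5).overdrawn?  # => true
Account.new(10).overdrawn?  # => false
```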

Parameters

The standard configuration of method parameters (no colon and no default value) sets them as required arguments that must be passed in order when you call the method. For example:

def write(file, data, mode)

end

write("cats.txt", "cats are cool!", "w")

By setting a parameter to have a default value, it becomes an optional argument when calling the method.

def write(file, data, mode = "w")

end

write("shopping_list.txt", "bacon")

Named Parameters

Configuring your method with named parameters makes the method call read a little more clearly (via the inclusion of the keywords in the call) and increases flexibility because the order of arguments is not important. After every parameter, add a colon. Parameters are mandatory unless configured with a default value.

Here’s an example.

def write(file:, data:, mode: "ascii")

end

write(data: 123, file: "test.txt")

And here’s how you might do things for a Card ViewComponent.

def initialize(size: nil, full_height: false, data: nil)

end

<%= render(CardComponent.new(size: :small, full_height: true)) do %>
  Card content here.
<% end %>

Check if thing is an array and is non-empty

Rather than chaining several explicit checks, you can streamline this to:

thing.is_a?(Array) && thing.present?

The shovel operator

The shovel operator (<<) appends an element to an array (it also works on strings). Here’s an example where we build up an HTML class attribute for a BEM-like structure:

def classes
  classes = []
  classes << "card--#{size}" if size
  classes << "card--tall" if full_height
  classes.join(" ")
end

Double splat operator

My understanding is that when you pass **foo in a method call, it expands the hash returned by a method (def foo) or held in a variable into individual keyword arguments. The contents of that hash might differ under different circumstances, which is why you’d use the double splat rather than specifying literal attributes and values. If there are multiple items in the hash, it spreads them out as multiple key-value pairs (e.g. as multiple HTML attribute name and attribute value pairs). This is handy when you don’t know which attributes you need to include at the time of rendering a component and want the logic for determining that to reside in the component internals. Here’s an example, based on a ViewComponent for outputting accessible SVG icons:

In the icon_component.html.erb template:

<%= tag.svg(
  class: svg_class,
  fill: "currentColor",
  **aria_role
) do %>

<% end %>

In IconComponent.rb:

def aria_role
  title ? { role: "img" } : { aria: { hidden: true } }
end

The **aria_role argument resolves to the hash output by the aria_role method, resulting in valid arguments for calling Rails’s tag.svg.
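The same mechanism can be seen in plain Ruby, outside of Rails. Here build_tag and html_attrs are made-up stand-ins for tag.svg and aria_role:

```ruby
# Returns different attribute hashes depending on circumstances,
# like the aria_role method above
def html_attrs(decorative)
  decorative ? { "aria-hidden": "true" } : { role: "img" }
end

# **attributes gathers arbitrary keyword arguments into a hash
def build_tag(name, **attributes)
  attrs = attributes.map { |key, value| %(#{key}="#{value}") }.join(" ")
  "<#{name} #{attrs}>"
end

# ** expands the returned hash into individual keyword arguments
build_tag("svg", fill: "currentColor", **html_attrs(true))
# => "<svg fill=\"currentColor\" aria-hidden=\"true\">"
```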

require

require allows you to bring other resources into your current context.

Blocks

The do…end structure in Ruby is called a “block”, and more specifically a multi-line block.

<%= render CardComponent.new do |c| %>
  Card stuff in here.
<% end %>

Blocks are essentially anonymous functions.

When writing methods that take a block, we can capture the block explicitly as a parameter. For example:

def do_something(param, &block)

Here, the ampersand (&) converts the passed block into a Proc object named block, which can be called with block.call or passed along to another method. (Note that it doesn’t make the block mandatory; to enforce that you’d check block_given? and raise if no block was given.)

yield

When you have a method with a yield statement, it is usually running the block that has been passed to it.

You can also pass an argument to yield e.g. yield(foo) and that makes foo available to be passed into the block.

See the yield keyword for more information.
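A couple of sketches (with_timing and each_doubled are invented for illustration):

```ruby
# yield runs the block passed to the method
def with_timing
  start = Time.now
  result = yield
  [result, Time.now - start]
end

result, elapsed = with_timing { 2 + 2 }
result   # => 4
elapsed  # => a small Float (seconds taken)

# yield(foo) passes an argument into the block
def each_doubled(numbers)
  numbers.each { |n| yield(n * 2) }
end

doubled = []
each_doubled([1, 2, 3]) { |n| doubled << n }
doubled  # => [2, 4, 6]
```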

Single-line block

Sometimes we don’t need to use a multiline block. We can instead employ a single-line block. This uses curly braces rather than do…end.

For example in a spec we might use:

render_inline(CardComponent.new) { "Content" }
expect(rendered_component).to have_css(".fe-CardV2", text: "Content")

The above two lines really just construct a “string” of the component and let you test for the presence of things in it.

Rendering HTML

We have the content_tag helper method for rendering HTML elements. However you’re arguably just as well writing the actual HTML rather than bothering with it, especially for the likes of div and span elements.

link_to is a little more useful and makes more sense to use.

Multi-line HTML string

Return a multi-line HTML string like so:

output = "<p>As discussed on the phone, the additional work would involve:</p>
<ol>
  <li>Item 1</li>
  <li>Item 2</li>
  <li>Item 3</li>
</ol>
<p>This should get your historic accounts into a good shape.</p>".html_safe
output

Interpolation

Here’s an example where we use interpolation to return a string that has a text label alongside an inline SVG icon, both coming from variables.

"#{link[:text]} #{icon_svg}".html_safe

tag.send()

send() is not just for use on tag. It’s a means of calling a method dynamically, i.e. using a variable. I’ve used it so as to have a single line create either a th or a td dynamically, dependent on context.

Only use it when you are in control of the arguments. Never use it with user input or something coming from a database.

Random IDs or strings

object_id gives you the internal Ruby object id for what you’re working on. I used this in the past to append a unique id to an HTML id attribute value so as to automate an accessibility feature. However don’t use it for that purpose like I did there.

It’s better to use something like rand, or SecureRandom or SecureRandom.hex.
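For example, SecureRandom comes with Ruby’s standard library (field_id here is just an illustration):

```ruby
require "securerandom"

SecureRandom.hex(4)  # => e.g. "f3a91c02" (4 random bytes as 8 hex chars)
SecureRandom.uuid    # => e.g. "2d931510-d99f-494a-8c67-87feb05e1594"

# Handy for generating a unique HTML id attribute value:
field_id = "description-#{SecureRandom.hex(4)}"
```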

Views

If you have logic you need to use in a view, this would tend to live in a helper method rather than in the controller.

Policies

You might create a method such as allowed_to? for purposes of authorisation.

Start (local) Rails server

Note: the following is shorthand for bin/rails server (add -b 0.0.0.0 if you want the server reachable from other devices on your network).

rails s

Miscellaneous

Use Ruby to create a local web server.

# to serve your site at localhost:5000 run this in the project’s document root
ruby -run -e httpd . -p 5000

Web fonts: where to put them in the Rails file structure

See https://gist.github.com/anotheruiguy/7379570.

The Database

Reset/wipe the database.

bundle exec rake db:reset

Routing

Get routes for model from terminal

Let’s say you’re working on the index page for pet_foods and want to create sort-by-column anchors, where each link’s href points to the current page with some querystring parameters added. You first need the route for the current page, in the correct format.

To find the existing routes for pet_foods you can run:

rails routes | grep pet_foods


Subgrid for CSS Grid launches in Firefox 71

Subgrid for CSS Grid Layout has arrived in Firefox and it looks great. Here’s how I wrapped my head around the new concepts.

While MDN has some nice examples, I generally find I need a little extra time, trial and error and note-making in order to fully grasp CSS Grid concepts.

For example, I always need to remind myself that parts of the syntax – such as grid-template-columns selectors – refer to grid lines rather than columns.

So I created a Subgrid example pen with guideline notes for future reference. Note: make sure to open the pen in Firefox rather than any other browser, because at the time of writing only Firefox supports Subgrid!

Using CSS Custom Properties to streamline animation

Thanks to a great tip from Lucas Hugdahl on Twitter, here’s how to use CSS custom properties (variables) in your transforms so you don't need to rewrite the whole transform rule in order to transition (animate) a single property.

Let’s take the simple example of a button that we want to increase in size when hovered.

By using a custom property for the scale value, we can keep things DRYer in our :hover rule by only updating that variable rather than rewriting the entire transform rule.

The button HTML:

<button>Hover over me</button>

CSS:

button {
  transition: transform .5s ease-in-out;
  transform: translateX(-50%) translateY(-50%) scale(var(--scale, 1));
}

button:hover {
  --scale: 2;
}

See it in action on Codepen.

“Your interview test for junior developer” (from Bruce Lawson on Twitter)

"Ok, as part of your interview test for junior developer, we want you to put some words, an image and some links onto a webpage. We use Node, Docker, Kubernetes, React, Redux, Puppeteer, Babel, Bootstrap, Webpack, <div> and <span>. Go!"

Bruce Lawson nicely illustrates how ridiculous many job adverts for web developers are. This (see video) never fails to crack me up. It’s funny ‘cos it’s true!

(via @brucel)

Progressively Enhanced JavaScript with Stimulus

I’m dipping my toes into Stimulus, the JavaScript micro-framework from Basecamp. Here are my initial thoughts.

I immediately like the ethos of Stimulus.

The creators’ take is that in many cases, using one of the popular contemporary JavaScript frameworks is overkill.

We don’t always need a nuclear solution that:

  • takes over our whole front end;
  • renders entire, otherwise empty pages from JSON data;
  • manages state in JavaScript objects or Redux; or
  • requires a proprietary templating language.

Instead, Stimulus suggests a more “modest” solution – using an existing server-rendered HTML document as its basis (either from the initial HTTP response or from an AJAX call), and then progressively enhancing.

It advocates readable markup – being able to read a fragment of HTML which includes sprinkles of Stimulus and easily understand what’s going on.

And interestingly, Stimulus proposes storing state in the HTML/DOM.

How it works

Stimulus’ technical purpose is to automatically connect DOM elements to JavaScript objects which are implemented via ES6 classes. The connection is made by data- attributes (rather than id or class attributes).

data-controller values connect and disconnect Stimulus controllers.

The key elements are:

  • Controllers
  • Actions (essentially event handlers) which trigger controller methods
  • Targets (elements which we want to read or write to, mapped to controller properties)

Some nice touches

I like the way you can use the connect() method – a lifecycle callback invoked whenever a given controller is connected to the DOM – as a place to test browser support for a given feature before applying a JS-based enhancement.

Stimulus also readily supports the ability to have multiple instances of a controller on the page.

Furthermore, actions and targets can be added to any type of element without the controller JavaScript needing to know or care about the specific element, promoting loose coupling between HTML and JavaScript.

Managing State in Stimulus

Initial state can be read in from our DOM element via a data- attribute, e.g. data-slideshow-index.

Then in our controller object we have access to a this.data API with has(), get(), and set() methods. We can use those methods to set new values back into our DOM attribute, so that state lives entirely in the DOM without the need for a JavaScript state object.

Possible Limitations

Stimulus feels a little restrictive if dealing with less simple elements – say, for example, a data table with lots of rows and columns, each differing in multiple ways.

And if, like in our data table example, that element has lots of child elements, it feels like there might be more of a performance hit to update each one individually rather than replace the contents with new innerHTML in one fell swoop.

Summing Up

I love Stimulus’s modest and progressive enhancement friendly approach. I can see me adopting it as a means of writing modern, modular JavaScript which fits well in a webpack context in situations where the interactive elements are relatively simple and not composed of complex, multidimensional data.

How to manage JavaScript dependencies

Managing JavaScript dependencies is about as much fun as a poke in the eye. However even if—like me—you prefer to keep things lean and dependency-free as far as possible, it’s something you’re going to need to do either in large work projects or as your personal side-project grows. In this post I tackle it head-on to reduce the problem to some simple concepts and practical techniques.

In modern JavaScript applications, we can add tried-and-tested open source libraries and utilities by installing packages from the NPM registry. This can aid development by letting you concentrate on your application’s unique features rather than reinventing the wheel for already-solved common tasks.

A typical example might be to add axios or node-fetch to a Node.js project to provide a means of making API calls.

We can use a package manager such as yarn or npm to install packages. When our package manager installs a package it logs it as a project dependency which is to say that the project depends upon its presence to function properly.

It then follows that anyone who wants to run the application should first install its dependencies.

And it’s the responsibility of the project owner (you and your team) to manage the project’s dependencies over time. This involves:

  • updating packages when they release security patches;
  • maintaining compatibility by staying on package upgrade paths; and
  • removing installed packages when they are no longer necessary for your project.

While it’s important to keep your dependencies updated, in a recent survey by Sonatype 52% of developers said they find dependency management painful. And I have to agree that it’s not something I generally relish. However over the years I’ve gotten used to the process and found some things that work for me.

A simplified process

The whole process might go something like this (NB install yarn if you haven’t already).

# Start installing and managing 3rd-party packages.
# (only required if your project doesn’t already have a package.json)
yarn init # or npm init

# Install dependencies (in a project which already has a package.json)
yarn # or npm i

# Add a 3rd-party library to your project
yarn add package_name # or npm i package_name

# Add package as a devDependency.
# For tools only required in the local dev environment
# e.g. CLIs, hot reload.
yarn add -D package_name # or npm i package_name --save-dev

# Add package but specify a particular version or semver range
# https://devhints.io/semver
# It’s often wise to do this to ensure predictable results.
# caret (^) is useful: allows upgrade to minor but not major versions.
# is >=1.2.3 <2.0.0
yarn add package_name@^1.2.3

# Remove a package
# use this rather than manually deleting from package.json.
# Updates yarn.lock, package.json and removes from node_modules.
yarn remove package_name # or npm r package_name

# Update one package (optionally to a specific version/range)
yarn upgrade package_name
yarn upgrade package_name@^1.3.2

# Review (in a nice UI) all packages with pending updates,
# with the option to upgrade whichever you choose
yarn upgrade-interactive

# Upgrade to latest versions rather than
# semver ranges you’ve defined in package.json.
yarn upgrade-interactive --latest

Responding to a security vulnerability in a dependency

If you host your source code on GitHub it’s a great idea to enable Dependabot. Essentially Dependabot has your back with regard to any dependencies that need updated. You set it to send you automated security updates by email so that you know straight away if a vulnerability has been detected in one of your project dependencies and requires action.

Helpfully, if you have multiple Github repos and more than one of those include the vulnerable package you also get a round-up email with a message something like “A new security advisory on lodash affects 8 of your repositories” with links to the alert for each repo, letting you manage them all at once.

Dependabot also works for a variety of languages and technologies—not just JavaScript—so for example in a Rails project it might email you to suggest bumping a package in your Gemfile.

Automated upgrades

Sometimes the task is straightforward. The Dependabot alert email tells you about a vulnerability in a package you explicitly installed and the diligent maintainer has already made a patch release available.

A simple upgrade to the relevant patch version would do the job, however Dependabot can even take care of that for you! Dependabot can automatically open a new Pull Request which addresses the vulnerability by updating the relevant dependency. It’ll give the PR a title like

Bump lodash from 4.17.11 to 4.17.19

You just need to approve and merge that PR. This is great; it’s really simple and takes care of lots of cases.

Note 1: if you work on a corporate repo that is not set up to “automatically open PRs”, often you can still take advantage of Github’s intelligence with just one or two extra manual steps. Just follow the links in your Github security alert email.

Note 2: Dependabot can also be set to do automatic version updates even when your installed version does not have a vulnerability. You can enable this by adding a dependabot.yml to your repo. But so far I’ve tended to avoid unpredictability and excess noise by having it manage security updates only.

Manual upgrades

Sometimes Dependabot will alert you to an issue but is unable to fix it for you. Bummer.

This might be because the package owner has not yet addressed the security issue. If your need to fix the situation is not super-urgent, you could raise an issue on the package’s Github repo asking the maintainer (nicely) if they’d be willing to address it… or even submit a PR applying the fix for them. If you don’t have the luxury of time, you’ll want to quickly find another package which can do the same job. An example here might be that you look for a new CSS minifier package because your current one has a longstanding security issue. Having identified a replacement you’d then remove package A, add package B, then update your code which previously used package A to make it work with package B. Hopefully only minimal changes will be required.

Alternatively the package may have a newer version or versions available but Dependabot can’t suggest a fix because:

  1. the closest new version’s version number is beyond the allowed range you specified in package.json for the package; or
  2. Dependabot can’t be sure that upgrading wouldn’t break your application.

If the package maintainer has released newer versions then you need to decide which to upgrade to. Your first priority is to address the vulnerability, so often you’ll want to minimise upgrade risk by identifying the closest non-vulnerable version. You might then run yarn upgrade <package…>@1.3.2. Note also that you may not need to specify a specific version because your package.json might already specify a semver range which includes your target version, and all that’s required is for you to run yarn upgrade or yarn upgrade <package> so that the specific “locked” version (as specified in yarn.lock) gets updated.

On other occasions you’ll read your security advisory email and the affected package will sound completely unfamiliar… likely because it’s not one you explicitly installed but rather a sub-dependency. Y’see, your dependencies have their own package.json and dependencies, too. It seems almost unfair to have to worry about these too, however sometimes you do. The vulnerability might even appear several times as a sub-dependency in your lock file’s dependency tree. You need to check that lock file (it contains much more detail than package.json), work out which of your top-level dependencies are dependent on the sub-dependency, then go check your options.

Update: use yarn why sockjs (replacing sockjs as appropriate) to find out why a module you don’t recognise is installed. It’ll let you know what module depends upon it, to help save some time.

When having to work out the required update to address a security vulnerability in a package that is a subdependency, I like to quickly get to a place where the task is framed in plain English, for example:

To address a vulnerability in xmlhttprequest-ssl we need to upgrade karma to the closest available version above 4.4.1 where its dependency on xmlhttprequest-ssl is >=1.6.2

Case Study 1

I was recently alerted to a “high severity” vulnerability in package xmlhttprequest-ssl.

Dependabot cannot update xmlhttprequest-ssl to a non-vulnerable version. The latest possible version that can be installed is 1.5.5 because of the following conflicting dependency: @11ty/eleventy@0.12.1 requires xmlhttprequest-ssl@~1.5.4 via a transitive dependency on engine.io-client@3.5.1. The earliest fixed version is 1.6.2.

So, breaking that down:

  • xmlhttprequest-ssl versions less than 1.6.2 have a security vulnerability;
  • that’s a problem because my project currently uses version 1.5.5 (via semver range ~1.5.4), which I was able to see from checking package-lock.json;
  • I didn’t explicitly install xmlhttprequest-ssl. It’s at the end of a chain of dependencies which began at the dependencies of the package @11ty/eleventy, which I did explicitly install;
  • To fix things I want to be able to install a version of Eleventy which has updated its own dependencies such there’s no longer a subdependency on the vulnerable version of xmlhttprequest-ssl;
  • but according to the Dependabot message that’s not possible because even the latest version of Eleventy (0.12.1) is indirectly dependent on a vulnerable version-range of xmlhttprequest-ssl (~1.5.4);
  • based on this knowledge, Dependabot cannot recommend simply upgrading Eleventy as a quick fix.

So I could:

  1. decide it’s safe enough to wait some time for Eleventy to resolve it; or
  2. request Eleventy apply a fix (or submit a PR with the fix myself); or
  3. stop using Eleventy.

Case Study 2

A while ago I received the following security notification about a vulnerability affecting a side-project repo.

dot-prop < 4.2.1 “Prototype pollution vulnerability in dot-prop npm package before versions 4.2.1 and 5.1.1 allows an attacker to add arbitrary properties to JavaScript language constructs such as objects.”

I wasn’t familiar with dot-prop but saw that it’s a library that lets you “Get, set, or delete a property from a nested object using a dot path”. This is not something I explicitly installed but rather a sub-dependency—a lower-level library that my top-level packages (or their dependencies) use.

Github was telling me that it couldn’t automatically raise a fix PR, so I had to fix it manually. Here’s what I did.

  1. looked in package.json and found no sign of dot-prop;
  2. started thinking that it must be a sub-dependency of one or more of the packages I had installed, namely express, hbs, request or nodemon;
  3. looked in package-lock.json and via a Cmd-F search for dot-prop I found that it appeared twice;
  4. the first occurrence was as an element of package-lock.json’s top-level dependencies object. This object lists all of the project’s dependencies and sub-dependencies in alphabetical order, providing for each the details of the specific version that is actually installed and “locked”;
  5. I noted that the installed version of dot-prop was 4.2.0, which made sense in the context of the Github security message;
  6. the other occurrence of dot-prop was buried deeper within the dependency tree as a dependency of configstore;
  7. I was able to work backwards and see that dot-prop is required by configstore, then Cmd-F search for configstore to find that it was required by update-notifier, which in turn is required by nodemon;
  8. I had worked my way up to a top-level dependency nodemon (installed version 1.19.2) and worked out that I would need to update nodemon to a version that had resolved the dot-prop vulnerability (if such a version existed);
  9. I then googled “nodemon dot-prop” and found some fairly animated Github issue threads between Remy the maintainer of nodemon and some users of the package, culminating in a fix;
  10. I checked nodemon’s releases and ascertained that my only option if sticking with nodemon was to install v2.0.3—a new major version. I wouldn’t ideally install a version which might include breaking changes but in this case nodemon was just a devDependency, not something which should affect other parts of the application, and a developer convenience at that so I went for it safe in the knowledge that I could happily remove this package if necessary;
  11. I opened package.json and within devDependencies manually updated nodemon from ^1.19.4 to ^2.0.4. (If I was in a yarn context I’d probably have done this at the command line). I then ran npm i nodemon to reinstall the package based on its new version range which would also update the lock file. I was then prompted to run npm audit fix which I did, and then I was done;
  12. I pushed the change, checked my Github repo’s security section and noted that the alert (and a few others besides) had disappeared. Job’s a goodun!

Proactively checking for security vulnerabilities

It’s a good idea on any important project to not rely on automated alerts and proactively address vulnerabilities.

Check for vulnerabilities like so:

yarn audit

# for a specific level only
yarn audit --level critical
yarn audit --level high

Files and directories

When managing dependencies, you can expect to see the following files and directories.

  • package.json
  • yarn.lock
  • node_modules (this is the directory into which packages are installed)

Lock files

As well as package.json, you’re likely to also have yarn.lock (or package.lock or package-lock.json) under source control too. As described above, while package.json can be less specific about a package’s version and suggest a semver range, the lock file will lock down the specific version to be installed by the package manager when someone runs yarn or npm install.

You shouldn’t manually change a lock file.

Choosing between dependencies and devDependencies

Whether you save an included package under dependencies (the default) or devDependencies comes down to how the package will be used and the type of website you’re working on.

The important practical consideration here is whether the package is necessary in the production environment. By production environment I don’t just mean the customer-facing website/application but also the environment that builds the application for production.

In a production “build process” environment (i.e. one which likely has the environment variable NODE_ENV set to production) the devDependencies are not installed. devDependencies are packages considered necessary for development only and therefore to keep production build time fast and output lean, they are ignored.

As an example, my personal site is JAMstack-based using the static site generator (SSG) Eleventy and is hosted on Netlify. On Netlify I added a NODE_ENV environment variable and set it to production (to override Netlify’s default setting of development) because I want to take advantage of faster build times where appropriate. To allow Netlify to build the site on each push I have Eleventy under dependencies so that it will be installed and is available to generate my static site.

By contrast, tools such as Netlify’s CLI and linters go under devDependencies. Netlify’s build process does not require them, nor does any client-side JavaScript.

Upgrading best practices

  • Check the package CHANGELOG or releases on Github to see what has changed between versions and if there have been any breaking changes (especially when upgrading to the latest version).
  • Use a dedicated PR (Pull Request) for upgrading packages. Keep the tasks separate from new features and bug fixes.
  • Upgrade to the latest minor version (using yarn upgrade-interactive) and merge that before upgrading to major versions (using yarn upgrade-interactive --latest).
  • Test your work on a staging server (or Netlify preview build) before deploying to production.


Jank-free Responsive Images

Here’s how to improve performance and prevent layout jank when browsers load responsive images.

Since the advent of the Responsive Web Design era many of us, in our rush to make images flexible and adaptive, stopped applying the HTML width and height attributes to our images. Instead we’ve let CSS handle the image, setting a width or max-width of 100% so that our images can grow and shrink but not extend beyond the width of their parent container.

However there was a side-effect in that browsers load text first and images later, and if an image’s dimensions are not specified in the HTML then the browser can’t assign appropriate space to it before it loads. Then, when the image finally loads, this bumps the layout – affecting surrounding elements in a nasty, janky way.

CSS-tricks have written about this several times however I’d never found a solid conclusion.

Chrome’s Performance Warning

The other day I was testing this here website in Chrome and noticed that if you don’t provide images with inline width and height attributes, Chrome will show a console warning that this is negatively affecting performance.

Based on that, I made the following updates:

  1. I added width and height HTML attributes to all images; and
  2. I changed my CSS from img { max-width: 100%; } to img { width: 100%; height: auto; }.

NB the reason behind #2 was that I found this CSS works better with an image that has inline dimensions than max-width does.

Which dimensions should we use?

Since an image’s actual rendered dimensions will depend on the viewport size and we can’t anticipate that viewport size, I plumped for a width of 320 (a narrow mobile width) × height of 240, which fits with this site’s standard image aspect ratio of 4:3.

I wasn’t sure if this was a good approach. Perhaps I should have picked values which represented the dimensions of the image on desktop.

Jen Simmons to the rescue

Jen Simmons of Mozilla has just posted a video which not only confirmed that my above approach was sound, but also provided lots of other useful context.

Essentially, we should start re-applying HTML width and height attributes to our images, because in soon-to-drop Firefox and Chrome updates the browser will use these dimensions to calculate the image’s aspect ratio and thereby be able to allocate the exact required space.

The actual dimensions we provide don’t matter too much so long as they represent the correct aspect ratio.

Also, if we use the modern srcset and sizes syntax to offer the browser different image options (like I do on this site), so long as the different images are the same aspect ratio then this solution will continue to work well.

There’s no solution at present for the Art Direction use case – where we want to provide different aspect ratios dependent on viewport size – but hopefully that will come along next.

I just tested this new feature in Firefox Nightly 72, using the Inspector’s Network tab to set “throttling” to 2G to simulate a slow-loading connection, and it worked really well!

Lazy Loading

One thing I’m keen to test is that my newly added inline width and height attributes play well with loading="lazy". I don’t see why they shouldn’t – in theory they should support each other well. In tests so far everything seems good. However, since loading="lazy" is currently only implemented in Chrome, I should re-test images in Chrome once it adds support for the new image aspect ratio calculating feature, around the end of 2019.

Relearn CSS layout: Every Layout

Every now and then something comes along in the world of web design that represents a substantial shift. The launch of Every Layout, a new project from Heydon Pickering and Andy Bell, feels like one such moment.

In simple terms, we get a bunch of responsive layout utilities: a Box, a Stack, a Sidebar layout and so on. However Every Layout offers so much more—in fact for me it has provided whole new ways of thinking and talking about modern web development. In that sense I’m starting to regard it in terms of classic, game-changing books like Responsive Web Design and Mobile First.

Every Layout’s components, or primitives, are self-governing and free from media queries. This is a great step forward because media queries tie layout changes to the viewport, and that’s suboptimal when our goal is to create modular components for Design Systems which should adapt to variously-sized containers. Every Layout’s authors describe their components as existing in a quantum state: simultaneously offering both narrow and wide configurations. Importantly, the way their layouts adapt is also linked to the dynamic available space in the container and the intrinsic width of its contents, which leads to more fluid, organic responsiveness.

Every Layout’s approach feels perfect for the new era of CSS layout where we have CSS Grid and Flexbox at our disposal. They use these tools to suggest rather than dictate, letting the browser make appropriate choices based on its native algorithms.

Native lazy-loading for the web

Now that we have the HTML attribute loading we can set loading="lazy" on our website’s media, and the loading of non-critical, below-the-fold media will be deferred until the user scrolls to them.

This can really improve performance so I’ve implemented it on images and iframes (youtube video embeds etc) throughout this site.
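For example, a minimal sketch (the media URLs are illustrative):

```html
<!-- Defer below-the-fold media until the user scrolls near it -->
<img src="/images/example.jpg" width="320" height="240"
     loading="lazy" alt="An illustrative image">

<iframe src="https://www.youtube.com/embed/VIDEO_ID" width="560" height="315"
        loading="lazy" title="An illustrative video embed"></iframe>
```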

This is currently only supported in Chrome, but that still makes it well worth doing.

Fringe Making

Last Tuesday, 20/8/19 I made the train trip east for a day at the Edinburgh Festival Fringe.

There are always a variety of interesting shows to catch in the month-long festival and this year I was particularly looking forward to Darren McGarvey AKA Loki’s Scotland Today. Having read and enjoyed McGarvey’s book Poverty Safari last year I was keen to see and hear him in the flesh.

Another reason for the trip was that during a recent stint working with Bright Signals I developed the FringeMaker web app – a Pokémon Go style game where you win points by “checking into” Fringe gig venues – so I was excited to hit the Edinburgh streets to give it a spin for real.

First port of call was to meet my friends Mick and Laura at George Square for a catch up and pre-gig beer. Having made it through the festival crowds and pouring rain to find them, we took temporary refuge with a pint and some tasty pizza from the nearby stalls, before setting off for our first gig: Tony Slattery’s Slattery Will Get You Nowhere.

Slattery will get you nowhere

In the early nineties I enjoyed watching Tony on Whose Line is it Anyway? and I was moved by a recent Guardian interview which revealed that in the years following the show ending he fell off the rails somewhat due to his bipolar disorder allied to alcohol and drug addictions, and was now looking for an agent and new opportunities.

The format of this show was just Tony and comedy historian Robert Ross sat at a table, with Tony answering a series of unscripted questions. Over the course of an hour he stepped through his career, from winning the Fringe’s inaugural Perrier Award with Cambridge Footlights pals Stephen Fry, Hugh Laurie and Emma Thompson (“yeah but where are they now?”), to pant-splitting (literally) and deleted scenes on Whose Line is it Anyway?, to acting roles in films such as Peter’s Friends. It was filled with funny anecdotes involving the likes of Rik Mayall and Kenneth Branagh, plus a few might-have-been stories such as when he narrowly missed out (to Sylvester McCoy) on the role of Doctor Who.

Overall I really enjoyed this. Despite having problems which have taken their toll, Tony Slattery is still a funny and engaging performer and is also doing his bit to help raise awareness of bipolar disorder. He seems like a good egg.

Scotland Today

Onwards to Darren McGarvey’s show at The Stand’s New Town Theatre, and he unexpectedly begins with a TED Talk style discussion on space and quantum mechanics, setting up the idea that there are two contradictory versions of himself.

There’s the pre- Poverty Safari, lower working class CDE2 Darren; and the new, “poster child for working class politics”, middle-class, ABC1 Darren.

During the show he mostly speaks as the “new Darren”, describing how his situation has improved and priorities changed since no longer having to constantly worry about financial security. He is still angry about the injustices of life in the UK – citing the inadequate response to the Grenfell Tower Disaster as an example – but also realises that he has become a contradiction given his new status.

He moves on to suggest that the ABC1 group are in a privileged position, uniquely placed to get ahead in life while others can’t; and that because they don’t properly understand the circumstances of the CDE2s they are therefore not in a position to be making the decisions that affect them.

He contrasts the comfortable lives of the ABC1s with those of the CDE2s who live in quicksand – constantly being dragged down by financial and other societal problems, with no prospect of getting out and a feeling that by attempting to escape you only make matters worse.

McGarvey finishes by switching to his angry, in-your-face, baseball cap wearing alter-ego from another possible timeline; not blessed with the fortune of middle-class Darren and furious about the injustices of his situation and life in Tory-led Britain.

Again, I really enjoyed this show, and felt that McGarvey was just as powerful in the flesh as on paper, if not more so. There were maybe a few too many narrative devices and gimmicks going on than necessary, but he’s a really interesting voice and continually says things which make me think and challenge myself.

I think his new BBC show, Darren McGarvey's Scotland will definitely be one to watch.

From dynamic to static

“I’ll just make a few small tweaks to my website…” said I. Cut to three sleep-deprived days later and I’ve rebuilt it, SSG/JAMstack-stylee with Eleventy and Netlify and entirely re-coded the front-end. Silly, but so far so good, and it’s greasy fast!

So yes, I’ve just updated my website from being a dynamic, LAMP-stack affair which used Perch CMS and was hosted on Linode to being statically-generated using Eleventy and hosted on Netlify.

It mostly went smoothly. And the environment and continuous deployment boilerplate that Netlify provides is fantastic, and will be a real time-saver compared with my previous “set up and maintain a Linode server” approach.

In terms of challenges and troubleshooting, I did have to find a solution to the issue of FOUT on repeat visits. It seems this was happening as a result of Netlify’s interesting approach to asset caching which works well for most requirements but wasn’t so great for self-hosted webfonts. My solution was to add specific headers for .woff and .woff2 files in my application’s Netlify config file.
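For reference, the override looks something like this in a netlify.toml file (the font path is illustrative; this is a sketch rather than my exact config):

```toml
# Let returning visitors reuse cached self-hosted webfonts rather than
# re-validating them (which caused the FOUT)
[[headers]]
  for = "/fonts/*.woff2"
  [headers.values]
    Cache-Control = "public, max-age=31536000, immutable"
```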

Saying bye-bye to autoprefixer

For a while now I’ve been using gulp-autoprefixer as part of my front-end build system. However, I’ve just removed it from my boilerplate. Here’s why.

The npm module gulp-autoprefixer takes your standard CSS then automatically parses the rules and generates any necessary vendor-prefixed versions, such as ::-webkit-input-placeholder to patch support for ::placeholder in older Webkit browsers.

I’ve often felt it excessive—like using a hammer to crack a nut. And I’ve wondered if it might be doing more harm than good, by leading me to believe I have a magical sticking plaster for non-supporting browsers when actually (especially in the case of IE) the specific way in which a browser lacks support might be more nuanced. Furthermore I’ve never liked the noise generated by all those extra rules in my CSS output, especially when using the inspector to debug what would otherwise be just a few lines of CSS.

But I always felt it was a necessary evil.

However, I’ve just removed gulp-autoprefixer from my boilerplate. Why? Because:

  1. Browsers are no longer shipping any new CSS with prefixes and, as of 2019, haven’t been for years;
  2. With the browsers that do require prefixed CSS now old and in the minority, it feels like progressive enhancement rather than “kitchen sink” autoprefixing should take care of them. (Those browsers might not get the enhanced experience but what they’ll get will be fine.)

Jen Simmons’ tweet on this topic was the push I needed.

So I’ve removed one layer of complexity from my set-up, and so far nothing has exploded. Let’s see how it goes.

Cookie Consent by Osano

The most popular drop-in solution to the EU Cookie Law requirements.

Over the last year I’ve been successfully using Cookie Consent by Osano on a number of commercial websites. Essentially this is a banner which appears at the bottom (or top) of your website and asks the visitor to explicitly give (or decline to give) consent for the cookies your website uses. It’s a great free resource which handles the requisite GDPR requirements (and more) and offers a number of customisation options.

It’s very simple to include and use – you just step through their WYSIWYG generator, include the generated JavaScript-based settings in your site, and point to their CSS and JavaScript libraries. I like self-hosting my own static assets so I integrate the libraries into my code rather than linking to their externally hosted files, but that’s just my personal preference.

Why do we need this?

In 2018, the European Union’s General Data Protection Regulation (GDPR) went into effect, establishing a number of principles governing the collection of personal information. Any company or individual that processes the personal information of European Union citizens must comply with the GDPR, regardless of where the data is stored or processed.

Cookies often collect information about their users that is not specifically identified with one individual, but if that information, combined with other data, can be used to identify an individual, it becomes “personal information” for the purposes of the GDPR and must be treated as such.

The clearest and most effective way to notify a user in advance of the collection of information using cookies is to provide a web banner or “pop-up” cookie notice that appears automatically when the home page is accessed for the first time, and requires some affirmative action.

Real Favicon Generator

Knowing how best to serve, size and format favicons and other icons for the many different device types and operating systems can be a minefield. My current best practice approach is to create a 260px × 260px (or larger) source icon then upload it to Real Favicon Generator.

This is the tool recommended by CSS-Tricks and it takes care of most of the pain by not only generating all the formats and sizes you need but also providing some code to put in your <head> and manifest.webmanifest file.
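By way of illustration, the generated <head> markup typically looks something like this (the exact file names and sizes depend on the options you choose):

```html
<link rel="apple-touch-icon" sizes="180x180" href="/apple-touch-icon.png">
<link rel="icon" type="image/png" sizes="32x32" href="/favicon-32x32.png">
<link rel="icon" type="image/png" sizes="16x16" href="/favicon-16x16.png">
<link rel="manifest" href="/site.webmanifest">
```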

Katherine Kato’s personal website

Some simple but inspiring work here from Seattle-based web developer Katherine Kato. I really like the use of space, the typography, the colour palette and the use of CSS grid for layout.

CSS pointer-events to the rescue

Sometimes, for reasons unknown, we find that clicking or tapping an element just isn’t working. Here’s a CSS-based approach that might help.

I’ve recently encountered the scenario – usually in reasonably complex user interfaces – where I have an anchor (or occasionally a button) on which clicks or taps just aren’t working, i.e. they don’t trigger the event I was expecting.

On further investigation I found that this is often due to having an absolutely positioned element which is to some extent overlaying (or otherwise interfering with) our target clickable element. Alternatively, it may be because we needed a child/nested element inside our anchor or button and it is this element that the browser perceives as being the clicked or tapped element.

I’ve found that setting .my-elem { pointer-events: none; } on the obscuring element resolves the problem and gets you back on track.
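Here’s a contrived sketch of the overlay scenario (class names are illustrative):

```html
<style>
  .card { position: relative; }
  /* A decorative overlay covering the card would swallow clicks… */
  .card .overlay {
    position: absolute;
    top: 0;
    left: 0;
    width: 100%;
    height: 100%;
    /* …so let clicks and taps pass through to the link beneath */
    pointer-events: none;
  }
</style>

<div class="card">
  <a href="/some-post/">Read more</a>
  <div class="overlay"></div>
</div>
```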

Here’s some more information on CSS pointer events.

Polypane: The browser for responsive web development and design

Polypane is a browser built specifically for developing responsive websites. It can present typical device resolutions side-by-side (for example iPhone SE next to iPhone 7 next to iPad) but also has some nice features such as automatically creating views based on your stylesheet’s media query breakpoints.

It’s a subscription service and at the moment I’m happy using a combination of Firefox Nightly and Chrome so I think I’ll wait this one out for the time being. But I’ll be keeping my eye on it!

Using aria-current is a win-win situation

The HTML attribute aria-current allows us to indicate the currently active element in a sequence. It’s not only great for accessibility but also doubles as a hook to style that element individually.

By using [aria-current] as your CSS selector (rather than a .current class) this also neatly binds and syncs the way you cater to the visual experience and the screen reader experience, reducing the ability for the latter to be forgotten about.

As Léonie Watson explains, according to WAI-ARIA 1.1 there are a number of useful values that the aria-current attribute can take:

  • page to indicate the current page within a navigation menu or pagination section;
  • step for the current step in a step-based process;
  • date for the current date; and
  • time for the current time.

I’ve been using the aria-current="page" technique on a couple of navigation menus recently and it’s working well.
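As a sketch of the approach (class and page names are illustrative):

```html
<style>
  /* One selector serves both the visual and screen reader experience */
  nav a[aria-current="page"] {
    font-weight: bold;
    text-decoration: none;
  }
</style>

<nav>
  <a href="/">Home</a>
  <a href="/posts/" aria-current="page">Posts</a>
  <a href="/about/">About</a>
</nav>
```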

Also: my thanks go to Ethan Marcotte, David Kennedy and Lindsey. Ethan recently suggested that the industry should try harder regarding accessibility and recommended subscribing to David Kennedy’s a11y Weekly newsletter. I duly subscribed (it’s great!) and one of the issues linked to Lindsey’s article An Introduction to ARIA states, in which I learned about aria-current.

Using CSS display: contents to snap grandchild elements to a grid

I realised last night while watching a presentation by Lea Verou that I could streamline my CSS Grid layouts.

I’d been creating an overall page grid by setting body { display: grid; } then some grid areas but realised that this only worked for direct children and didn’t help with aligning more deeply nested elements to that outer grid.

For example in the case of the main header if I wanted its child logo, nav and search elements to snap to the body grid then I found myself having to duplicate the display: grid and grid-template-areas again on the header.

It didn’t feel very DRY but my understanding was that while we await subgrid, it was a necessary evil.

What I should have been using is display: contents.

If you set your header to display: contents then the parent (body) grid layout will apply to the header’s contents (logo, nav, etc) as if the header element (the “real” direct child of the grid) wasn’t there. This gives us good semantics without the need to redefine the grid on the header.

Here’s a codepen to illustrate.
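And for quick reference, the gist of it (the grid definition is simplified):

```html
<style>
  body {
    display: grid;
    grid-template-columns: 1fr auto auto;
  }
  /* The header’s children now snap to the body grid as if the
     header element itself wasn’t there */
  header {
    display: contents;
  }
</style>

<body>
  <header>
    <a class="logo" href="/">Logo</a>
    <nav>…</nav>
    <form class="search" role="search">…</form>
  </header>
  <main>…</main>
</body>
```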

I was aware of the existence of display: contents but somehow it hadn’t sunk in because I’d only read about it in the abstract. Lea Verou’s explanation made all the difference. Cheers, Lea!

Update 4/5/2019

Frustratingly, I’ve learned that although we can apply this technique, we shouldn’t… or at least not for a while.

Due to a bug in all supporting browsers the property will currently also remove the element (header in our case) from the accessibility tree meaning that its semantics will be lost.

Update 11/4/2021

Thanks to Rachel Andrew for the heads-up that this issue is now fixed in both Firefox and Chrome.

We’re now just waiting for Edge and Safari to roll out fixes before we can regard this as a safe option.

How to control SVG icon size and colour in context

A while back I read a great SVG icon tip from Andy Bell which I’d been meaning to try and finally did so today. Andy recommended that for icons with text labels we set the width and height of the icons to 1em since that will size them proportionately to the adjacent text and additionally lets us use font-size to make any further sizing tweaks.

As previously mentioned, I’ve recently been working on my SVG skills.

Andy Bell’s SVG icon-sizing technique is really clever and feels like it adds lots of flexibility and future-friendliness so I was keen to try it out.

Here’s how it works.

The HTML:

<a class="call-to-action" href="/">
  <span>I’m a link</span>
  <svg class="cta-icon"
       aria-hidden="true"
       width="1em"
       height="1em"
       viewBox="0 0 14 13"
       xmlns="http://www.w3.org/2000/svg">
    <path fill="currentColor"
          fill-rule="evenodd"
          d="M3.49.868l7.683 3.634a2 2 0 0 1 .052 3.59l-7.682 3.913a2 2 0 0 1-2.908-1.782V2.676A2 2 0 0 1 3.49.868z" />
  </svg>
</a>

<a class="call-to-action call-to-action-alt" href="/">
  <span>I’m a large link</span>
  <svg class="cta-icon"
       aria-hidden="true"
       width="1em"
       height="1em"
       viewBox="0 0 14 13"
       xmlns="http://www.w3.org/2000/svg">
    <path fill="currentColor"
          fill-rule="evenodd"
          d="M3.49.868l7.683 3.634a2 2 0 0 1 .052 3.59l-7.682 3.913a2 2 0 0 1-2.908-1.782V2.676A2 2 0 0 1 3.49.868z" />
  </svg>
</a>

The CSS:

a { color: rgb(183, 65, 14); }

a:hover { color: #6A2000; }

.call-to-action {
  display: inline-flex;
  align-items: center;
  font-weight: bold;
}

.call-to-action-alt {
  font-size: 2rem;
}

.cta-icon {
  margin-left: .5em;
  font-size: .8em;
}

Here are my key takeaways:

  • By applying width and height of 1em to our icon it is predictably sized by default.
  • It can now have its size further tweaked in CSS using font-size, for example with ems (where 1em = the font-size of the parent anchor element).
  • This technique requires the viewBox attribute to be present on the svg.
  • Apply the width and height of 1em as inline attributes on the svg. We could apply them using CSS, however the inline approach avoids potentially massive icons showing in cases where CSS doesn’t load.
  • To get the colour matching, apply fill="currentColor" as an inline attribute on the svg’s path.
  • Now, when you apply a hover colour to the anchor in CSS, the icon will just pick that up. Nice!
  • Applying inline-flex to the anchor makes the vertical-alignment of text and icon easier.
  • Apply aria-hidden to the icon because it’s mainly decorative so we don’t want it read out by screen readers.

And here’s a demo I created to test-drive the technique.

Check localhost development on your iPhone

Here’s how to check the application you’re running locally on your MacBook on your iPhone.

It’s pretty much a case of connecting your iPhone to your MacBook by USB, tweaking some settings, then browsing to the application via a given IP in iOS Safari.

Box Shadow around the full box

Sometimes when coding a UI element you want a shadow around the whole box. However, most CSS box-shadow examples/tutorials tend to show inset box-shadows or ones that otherwise sit off to the side.

Here’s how to apply box-shadow to the whole box for a simple but nice effect.

.box-with-shadow {
  box-shadow: 0 0 4px #ccc;
}

And here’s how it looks: a box with a soft, even shadow around all four sides.

Certbot Troubleshooting

When taking the DIY approach to building a new server, Certbot is a great option for installing secure certificates. However, sometimes you can run into problems. Here, I review the main recurring issues I’ve encountered and how I fixed them.

When creating new servers for my projects I use Certbot as a means of installing free Let’s Encrypt secure certificates.

It’s great to be able to get these certificates for free and the whole process is generally very straightforward. However, since working with Let’s Encrypt certificates over the last few years I’ve found that the same recurring questions tend to plague me.

This is a note to “future me” (and anyone else it might help) with answers to the questions I’ve pondered in the past.

How do I safely upgrade from the old LE system to Certbot?

For servers where you previously used the 2015/2016, pre-Certbot Let’s Encrypt system for installing SSL certs, you can just install Certbot on top and it will just work. It will supersede the old certificates without conflict.

How do I upgrade Certbot now that Let’s Encrypt have removed support for domain validation with TLS-SNI-01?

Essentially the server needs Certbot v0.28 or above. See Let’s Encrypt’s post on how to check your Certbot version and steps to take after upgrading to check everything is OK. To apply the upgrade I performed apt-get update && apt-get upgrade -y as root although depending on when you last did it this might be a bit risky as it could update a lot of packages rather than just the Certbot ones. It might be better to just try sudo apt-get install certbot python-certbot-apache.

To what extent should I configure my 443 VirtualHost block myself or is it done for me?

When creating a new vhost on your Linode, DigitalOcean (or other cloud hosting platform) server, you need only add the <VirtualHost *:80> directive. There’s no need to add a <VirtualHost *:443> section, nor worry about pointing to LE certificate files, nor write rules to redirect http to https like I used to. When you install your secure certificate, Certbot will automatically add the redirect to your original file and create an additional vhost file (with the extension -le-ssl.conf) based on the contents of your existing file but handling <VirtualHost *:443> and referencing all the LE SSL certificate files it installed elsewhere on the system.
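So the vhost file you write yourself need only look something like this (the domain and paths are illustrative):

```apache
<VirtualHost *:80>
    ServerName example.com
    ServerAlias www.example.com
    DocumentRoot /var/www/example.com/public
    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>
# After running Certbot, this file gains the http-to-https redirect and a
# companion -le-ssl.conf file appears containing the *:443 VirtualHost.
```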

How should I manage automated renewals?

There’s no longer any need to manually add a cron job for certificate renewal. Auto-renewal of certificates is now handled by a cron job which comes bundled with the certbot package you initially install – in my case usually a certbot ppa package for Ubuntu 16.04 or 18.04. However you won’t find that cron job in the crontab for either your limited user or the root user. Instead, it is installed at a lower level (/etc/cron.d) and should just work, unless you’ve done something fancy with systemd (unlikely in my case).

How can I tell if renewals are working and what should I do if they’re not?

If you notice that the SSL certificate for your domain is within 30 days of expiry and hasn’t yet auto-renewed, then you know that something has gone wrong with the auto-renewal process. You can test for problems by running sudo certbot renew --dry-run. You may find that there is, for example, a syntax error in your apache2.conf or nginx config file which needs corrected – not that I’ve ever been guilty of this, you understand…

W3C HTML Element Sampler

In all my years of spinning up “HTML Typographic Elements” lists or pages as a reference for designers, I didn’t realise that the W3C provide the very thing I needed in their HTML Element Sampler. These pages provide comprehensive dummy content covering all the main typographic elements which is really handy when designing a website’s typographic styles and pattern library.

My Git Cheatsheet

I’ve used Git for many years but it can still trip me up. At times I’ve worked primarily in a GUI (like Sourcetree or Fork), and other times directly on the command line. I’ve worked on projects where I’ve been the sole developer and others where I’m part of a large team. Regardless of the tools or context, I’ve learned there are certain need-to-knows. Here’s a list of useful Git concepts and commands for my reference and yours.

Note: the following is not an exhaustive list but rather the things I keep coming back to and/or regularly forget. For deeper explanations, see the list of resources at the foot of the article.

Starting work

Create a remotely-hosted repo

Option 1: Create a new repo in your Github account

This generates a new, empty repo (optionally initialised with a README).

Do this when you will be working on a new, dedicated project rather than contributing changes to a pre-existing one.

Option 2: Create repo from a “template repository” (owned by you or someone else)

This generates a new repo with the same directory structure and files as the template. It’s a good option for starting your own new, potentially long-lived project from a solid starting point.

Unlike a fork it does not include the entire commit history of the parent repository. Instead it starts with a single commit.

Github Reference: Creating a repository from a template

Option 3: Fork an existing repo (usually owned by someone else)

This generates a new repo which is a copy of another repo, including its commit history. Your commits will update your copy rather than the original repo.

Do this by clicking the Fork button in the header of a repository.

This is good for (often short-lived) collaboration on an existing repo. You can contribute code to someone else’s project, via PRs.

Github Reference: Working with forks

Start locally by cloning

clone creates a local copy on your computer of a remote (Github-hosted) repo.

cd projects
git clone https://github.com/githubusername/projectname.git optionallocaldirectoryname

You might be cloning a repo you own, or one owned by someone else (to use its features in your project).

Your local copy will, by default, have its origin remote set to the Github repo you cloned.

I cloned an empty new project

We’re on easy street. The default remote is set exactly as you want it. Just write code, push at your leisure, and pull if/when you need to.

I cloned a pre-existing project (owned by me or someone else):
I plan to use it in my own, separate project

You might want to cut all ties and have a clean slate, git-wise.

rm -rf .git
git init
git remote add origin https://github.com/mygithubusername/mynewproject.git
git push -u origin master

Alternatively you might want to keep the original remote available so you can pull in its future project updates, but reset the origin remote to your new/target repo.

git remote rename origin upstream
git remote add origin https://github.com/mygithubusername/mynewproject.git
git push origin master
git pull origin master
# in the future the original repo gets an update
git pull upstream master

The source repo is my fork of a project to which I want to contribute

See Working with forks for how best to stay in sync and open PRs.

Duplicating (also known as “duplicate without forking”)

This is a special type of clone. I know this is an option, but it‘s not one I’m familiar with or have had call to use. I can refer to Duplicating a repository if need be.

Start locally from a blank slate

Although cloning is the easiest way to get started locally, occasionally I start by coding from scratch instead.

mkdir myproject && cd myproject
echo "# Welcome to My Project Repo" >> README.md
git init
git add README.md
git commit -m "first commit"

# go to Github and create an empty repo, if you haven’t already.
# then add as a remote
git remote add origin https://github.com/mygitusername/myproject.git

# push up, passing -u to set the remote branch as the default upstream branch our local branch will track
# this saves typing out ‘origin master’ repeatedly in future.
git push -u origin master

Remotes

Remove a remote from your local settings:

git remote rm <name>

Rename a remote:

git remote rename oldname newname

Configuration

Configure your favourite editor to be used for commit messages:

git config --global core.editor "nano"

Use git st as a shortcut for git status (to stop me mistyping as “statsu”):

git config --global alias.st status

Configure any setting:

git config [--global] <key> <value>

git config --global user.email "myname@domain.com"

Staging, unstaging and deleting files

# stage all unstaged files
git add .

# stage individual file/s
git add filename.txt

Unstage with reset (the opposite of git add):

# unstage all staged files
git reset .

# unstage individual file/s
git reset filename.txt

Delete a physical file and stage the deletion for the next commit:

git rm folder/filename.txt

Committing updates

Commit with a multi-line message:

git commit

Commit with short message:

git commit -m "fix: typo in heading"

Stage and commit all changes in a single command (note: doesn’t work with new, untracked files):

git commit -am "fix: typo in heading"

Branches

Show all local branches:

git branch

Show all local and remote branches:

git branch -a

Show branches you last worked on (most recently commited to):

git branch --sort=-committerdate

Save current state to new branch but don’t yet switch to it (useful after committing to wrong branch):

git branch newbranchname

Create and switch to new branch (main or whatever branch you want to branch off):

git checkout -b mynewbranch

Note that if you branch off foo_feature then when creating a PR in GitHub for your changes in mynewbranch you can change the Base branch from the default of main to foo_feature. This specifies that you are requesting your changes be merged into foo_feature rather than main and makes the comparison of changes relative to foo_feature rather than main.

Switch to an existing branch:

git checkout branchname

Save typing by setting the upstream remote branch for your local branch:

# git branch -u remotename/branchname
git branch -u fuzzylogic/v3

# now there’s no need to type origin master
git pull

Delete local branch:

git branch -d name_of_branch

# need to force it because of some merge issue or similar?
git branch -D name_of_branch

Save changes temporarily

stash is like a clipboard for git.

# Before changing branch, save changes you’re not ready to commit
git stash

# change branch, do other stuff. Then when return:
git stash pop

Staying current and compatible

Fetch a remote branch and merge it in one step:

git pull remotename branchname

# common use case is to update our local copy of master
git pull origin master

# shorthand when a default upstream branch has been set
git pull

# an alternative is to update (fetch) which does not auto-merge, then 'reset' to the latest commit on the remote
# https://stackoverflow.com/questions/55731891/effects-of-git-remote-update-origin-prune-on-local-changes
git checkout master
git remote update --prune
git reset --hard origin/master

Merge another branch (e.g. master) into current branch:

git merge otherbranch

# a common requirement is to merge in master
git merge master

Rebasing

git rebase can be used as:

  1. an alternative to merge; and
  2. a means of tidying up our recent commits.

As an alternative to merge, its main pro is that it leads to a more linear, and therefore easier-to-read, history. Note, however, that it is potentially more disruptive and therefore not right for every situation.

Say I’ve been working on a feature branch and I think it’s ready.

I might want to just tidy up my feature branch’s commits and can do this with an “interactive rebase”. This technique allows me to tidy my feature branch work to remove trivial, exploratory and generally less relevant commits so as to keep the commit history clean.

I might also want to bring in master to ensure synchronicity and compatibility. rebase sets the head of my feature branch to the head of master then adds my feature branch’s commits on top.

While it’s a good idea to rebase before making a PR, don’t rebase after making a PR: from that point on the branch is public, and rebasing a public branch can cause problems for collaborators. (The only exception is if you’re likely to be the only person working on the PR branch.)

Rebuild your feature branch’s changes on top of master:

git checkout master
git pull origin master
git checkout myfeaturebranch
git rebase master

Force push your rebased branch (again, only when you’re unlikely to have/require collaborators on the PR):

git push --force origin myfeaturebranch

Tidy a feature branch before making a PR:

git checkout myfeaturebranch
git rebase -i master

# just tidy the last few (e.g. 3) commits
git rebase -i HEAD~3

# this opens a text editor listing all commits due to be moved, e.g.:
pick 33d5b7a Message for commit #1
pick 9480b3d Message for commit #2
pick 5c67e61 Message for commit #3

# change 'pick' to 'fixup' to condense commits, say if #2 was just a small fix to #1
pick 33d5b7a Message for commit #1
fixup 9480b3d Message for commit #2
pick 5c67e61 Message for commit #3

# alternatively, if you use 'squash', after saving it will open an editor
# and prompt you to set a new commit message for the combined stuff.
pick 33d5b7a Message for commit #1
squash 9480b3d Message for commit #2
squash 5c67e61 Message for commit #3

More on squash, including a handy little video in case I forget how it works.

Undo a rebase:

git reset --hard ORIG_HEAD

For more detail, read Atlassian’s guide to rebasing.

Reviewing your activity

Show commit history (most recent first; q to quit):

git log

# compact version
git log --oneline

# limit scope to commits on a branch
git log branchname

Check if your feature branch is trailing behind:

# show commits in master that are not yet in my feature branch
git log --oneline my-feature..master

# show commits on remote branch that are not yet in my local branch
git log --pretty='format:%h - %an: %s' new-homepage..origin/new-homepage

# show commits by me that included “heroku” and that changed file Gemfile
git log --author=Demaree --grep=heroku --oneline Gemfile

Show the changes introduced by the most recent commit, or by a given commit:

git show

# show changes in a given commit
git show 591672e

Review differences between staged changes and last commit:

git diff --cached

Review changes between a given version/commit and the latest:

git diff 591672e..master

Fixing Things

Discard all your as-yet uncommitted changes:

git restore .

Get your local feature branch out of a problem state by resetting it to its state on the remote (e.g. as at your last push):

git reset --hard origin/my-branch

Undo all the changes in a given commit:

git revert 591672e

Alter the previous commit (change the message and/or include further updates):

# we are amending the previous commit rather than creating a new commit.
# if file changes are staged, it amends previous commit to include those.
# if there are no staged changes, it lets us amend the previous commit’s message only.
git commit --amend

Move current branch tip backward to a given commit, reset the staging area to match, but leave the working directory alone:

git reset 591672e

# additionally reset the working directory to match the given commit
git reset --hard 591672e

See what the app/site was like (e.g. whether things worked or were broken) at a given previous commit, noting the following:

  • You’re now “detached”, in that your computer’s HEAD is pointing at a commit rather than a branch.
  • You’re expected to merely review, not to make commits. Any commits you make would be “homeless”, since commits are supposed to go in branches. (However you could then branch off.)
git checkout 591672e
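If you do decide to branch off from that detached state, it looks like this (a self-contained demo in a throwaway repo; the branch name is illustrative):

```shell
# self-contained demo: detach at an old commit, then branch off it
cd "$(mktemp -d)"
git init -q
git -c user.name=demo -c user.email=demo@example.com commit --allow-empty -qm "first"
git -c user.name=demo -c user.email=demo@example.com commit --allow-empty -qm "second"

git checkout -q "$(git rev-list --max-parents=0 HEAD)"  # detached at the first commit
git checkout -b fix-from-old-commit                     # keep any work on a real branch
```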

Return one or more files to the state they were in at a previous commit, without reverting everything else.

git checkout 3aa647dac9a8a251ca223a693d4c140fd3c1db11 /path/to/file.md /path/to/file2.erb

# if happy you then need to commit those changes
git commit

When git st reveals a list of staged files including lots of strange files you don’t want there mixed with others you do…

# unstage everything first
git reset

# re-stage only the files you want to keep
git add path/to/file-I-want-1.rb path/to/file-I-want-2.md

# then discard the unwanted changes from the working copy
git checkout .

Grab one or more commits from elsewhere and drop into your current branch:

git cherry-pick 591672e

# grab the last commit from a branch e.g. master
git cherry-pick master

Fix a pull that went wrong / shouldn’t have been done:

git pull origin branchname
# whoops!

git reflog
# shows a list of everything you've
# done in git, across all branches!
# each one has an index HEAD@{index}
# find the one before you broke everything

git reset HEAD@{index}
# magic time machine

Miscellaneous handy things

Revert to the previous branch you were on

git checkout -

Useful GitHub stuff

Useful external resources

Rubadub App

Rubadub have a new mobile app that delivers the RaD crew’s top vinyl recommendations (the best around) direct to your phone.

At a time when lots of vinyl releases are highly limited, this gets you early access to the latest heat before it disappears. It should also generally save untold hours browsing/searching since in their recommendations Rubadub have already done the job of separating the wheat from the chaff.

The app was developed by me and the team at Greenhill.

It was quite tricky because, aside from developing the mobile app itself, there was a lot of API work needed to integrate it with Rubadub’s stock and e-commerce systems. We also built middleware specifically for machine learning of customer tastes.

The current v1 app handles the core feature of letting people listen to, save and buy records but there’s a lot of cool stuff vis-a-vis personalised messaging and taste-based recommendations on the roadmap. I’ve written about this in more detail over on Greenhill’s site.

The long-term idea is that it becomes the app equivalent of the actual record shop experience, i.e. going into Rubadub on Howard St and one of the guys/gals handing you a pile of tunes with a side of witty repartee.

If you’re a vinyl junkie like me or into electronic music in general, I recommend checking it out.

A Dao of Web Design (on A List Apart)

John Allsopp’s classic article in which he looks at the medium of web design through the prism of the Tao Te Ching, and encourages us to embrace the web’s inherent flexibility and fluidity.

It’s time to throw out the rituals of the printed page, and to engage the medium of the web and its own nature.

It’s chock-full of quotable lines, but here are a few of my favourites:

We must “accept the ebb and flow of things.”

Everything I’ve said so far could be summarized as: make pages which are adaptable.

…and…

The web’s greatest strength, I believe, is often seen as a limitation, as a defect. It is the nature of the web to be flexible, and it should be our role as designers and developers to embrace this flexibility, and produce pages which, by being flexible, are accessible to all. The journey begins by letting go of control, and becoming flexible.

Meet the New Dialog Element

Introducing dialog: a new, easier, standards-based means of rendering a popup or modal dialogue.

The new element can be styled via CSS and comes with JavaScript methods to show and close a dialog. We can also listen for and react to the show and close events.

Although currently only supported in Chrome, the Google Chrome dev team have provided a polyfill which patches support in all modern browsers and back to IE9.

The best way to Install Node.js and NPM on a Mac

In modern front-end development, we tend to use a number of JavaScript-based build tools (such as task runners like Gulp) which have been created using Node.js and which we install using NPM. Here’s the best way I’ve found for installing and maintaining Node and NPM on a Mac.

To install and use NPM packages, we first need to install Node.js and NPM on our computer (in my case a Mac).

I’ve found that although the Node.js website includes an installer, using Homebrew is a better way to install Node and NPM on a Mac. Choosing the Homebrew route means you don’t have to install using sudo (or non-sudo but with complicated workarounds) which is great because it presents less risk of things going wrong later down the line. It also means you don’t need to mess around with your system $PATH.

Most importantly, it makes removing or updating Node really easy.

Installation

The whole process (after you have Xcode and Homebrew installed) should only take you a few minutes.

Just open your Terminal app and type brew install node.

Updating Node and NPM

First, check whether or not Homebrew has the latest version of Node. In your Terminal type brew update.

Then, to upgrade Node, type brew upgrade node.

Uninstalling Node and NPM

Uninstalling is as easy as running brew uninstall node.

Credits

This post was based on information from an excellent article on Treehouse.
