
Tagged “performance”

Top rankin’

In my recent post Website updates I said:

I’ve added myself to 11ty’s community and am now listed as an Eleventy author. I hope that some day I’ll find that fuzzylogic.me has been added to 11ty’s Speedlify Leaderboard too – that’d be cool!

Well, about a week ago I noticed that I have indeed been added to 11ty’s Speedlify Leaderboard, and not only that, I’ve gone straight in at the very respectable rank of #23. I’m among some really talented company there, so I’m proud of that!

Lightning Fast Web Performance course by Scott Jehl

I purchased Scott’s course back in 2021 and immediately liked it, but hadn’t found the space to complete it until now. Anyway, I’m glad I have as it’s well structured and full of insights and practical tips. I’ll use this post to summarise my main takeaways.

Having completed the course I have much more rounded knowledge in the following areas:

  • Why performance matters to users and business
  • Performance-related metrics and “moments”: defining fast and slow
  • Identifying performance problems: the tools and how to use them
  • Making things faster, via various good practices and fixes

I’ll update this post soon to add some key bullet-points for each of the above headings.

My new syntax for modern, responsive blog images

I’ve started trialling different HTML and technologies for the “simple” responsive images (i.e. those not art-directed per breakpoint) used in blog articles on this site. I’m continuing to lean on Cloudinary as my free image host, CDN and format-conversion service. But at the HTML level I’ve moved away from a complicated <img srcset>-based approach involving many resized versions of the same image. I now use a simpler <picture> and <source>-based pattern that keeps the number of images and breakpoints low and instead – by using the source element’s type attribute – takes advantage of the performance gains offered by the new AVIF and WebP image formats.

My new approach is based on advice in Jake Archibald’s brilliant article Halve the size of images by optimising for high density displays. Jake explains that the majority of your traffic likely consists of users with high density screens so when we can combine optimising for that and making performance gains in a progressively enhanced way, we should!

Jake offers a “lazy but generally good enough” approach:

Here's the technique I use for most images on this blog: I take the maximum size the image can be displayed in CSS pixels, and I multiply that by two, and I encode it at a lower quality, as it'll always be displayed at a 2x density or greater. Yep. That's it. For 'large' images in blog posts like this, they're at their biggest when the viewport is 799px wide, where they take up the full viewport width. So I encode the image 1,598 pixels wide.

<picture>
  <source type="image/avif" srcset="red-panda.avif" />
  <source type="image/webp" srcset="red-panda.webp" />
  <img src="red-panda.jpg" width="1598" height="1026" alt="A red panda" />
</picture>

So, if you want your images to be as sharp as possible, you need to target images at the user's device pixels, rather than their CSS pixels. To encode a 2x image, I throw it into Squoosh.app, and zoom it out until it's the size it'll be displayed on a page. Then I just drag the quality slider as low as it'll go before it starts looking bad.

Taking Jake’s guidance and tweaking it for my Cloudinary-based context, my recent post April 2022 mixtape included its image like so:

<figure>
  <picture>
    <source type="image/avif" srcset="https://res.cloudinary.com/…/f_avif,q_auto,w_1292/v1654433393/mato_1500_squooshed_mozjpg_xjrkhl.jpg" />
    <source type="image/webp" srcset="https://res.cloudinary.com/…/f_webp,q_auto,w_1292/v1654433393/mato_1500_squooshed_mozjpg_xjrkhl.jpg" />
    <img class="u-full-parent-width" src="https://res.cloudinary.com/…/f_jpg,q_auto,w_1292/v1654433393/mato_1500_squooshed_mozjpg_xjrkhl.jpg" width="1292" height="1292" alt="Side A of the 7-inch vinyl release of Mato’s “Summer Madness”" loading="lazy" decoding="async" />
  </picture>
  <figcaption>Mato’s “Summer Madness”, as featured on the mix</figcaption>
</figure>

And my process was as follows:

  1. Take a photo (for me, that’ll be on my phone). Without doing anything special it’ll already be wide enough.
  2. Apple uses the HEIC format. To get around that: Share, Copy, open Files > HEIC to JPG, paste, and it’ll save as a JPG.
  3. Drop it into Squoosh and do the following:
     • rotate it if necessary
     • resize it to 1300 wide (in my current layout the image displays at most 646 CSS pixels wide, so twice that is 1292, which I just round up to 1300)
     • reduce its quality a bit (stopping before it gets noticeably bad)
  4. Encode it as MozJPEG, which gave the best size reduction and, as far as I can tell, is a safe format to use.
  5. Upload it to my Cloudinary account, then copy its new Cloudinary URL.
  6. Prepare the image HTML per the above snippet. The first source tells Cloudinary to use format avif, the second webp, and the fallback img jpg.
  7. Check the rendered image in a browser to confirm that the modern formats are being used.

I’ll DRY-up that HTML into an 11ty shortcode in due course.
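
As a rough idea of the shape that shortcode might take – a sketch only, where the shortcode name, the MY_CLOUD_NAME placeholder and the parameter list are all my assumptions rather than settled code:

// .eleventy.js
module.exports = function (eleventyConfig) {
  // Hypothetical shortcode wrapping the <picture> pattern above
  eleventyConfig.addShortcode("cloudinaryImage", function (fileName, alt, width, height) {
    const base = "https://res.cloudinary.com/MY_CLOUD_NAME/image/upload";
    const src = (format) => `${base}/f_${format},q_auto,w_${width}/${fileName}`;
    return `<picture>
  <source type="image/avif" srcset="${src("avif")}" />
  <source type="image/webp" srcset="${src("webp")}" />
  <img class="u-full-parent-width" src="${src("jpg")}" width="${width}" height="${height}" alt="${alt}" loading="lazy" decoding="async" />
</picture>`;
  });
};

A template could then call it with something like {% cloudinaryImage "v123/my-photo.jpg", "Alt text here", 1292, 1292 %}.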

I’ve no doubt that I’ll be getting some of this wrong – this stuff gets pretty complicated! For example I note my image file size is still quite large so I wonder if I should be manually creating the avif and webp versions in Squoosh myself to ensure getting the savings that make this approach worthwhile, rather than handing the conversion off to Cloudinary. (However this would mean having to host more images…)

In the meantime however, I’m happy that this approach has simplified the mental overhead of handling modern, responsive blog images, and optimising it can be a work in progress.

Additional references

Avoiding img layout shifts: aspect-ratio vs width & height attributes (on Jake Archibald's blog)

Recently I’ve noticed some developers recommending using the CSS aspect-ratio property directly on images. My understanding of aspect-ratio was that it’s not so much intended for elements like img which already have an intrinsic aspect ratio, but rather for the likes of div which do not. Furthermore, when the goal is to prevent the layout shift that can occur after an image loads we should supply our images with width and height HTML attributes rather than using CSS.

In this timely post, Jake helpfully explains how width and height attributes are used by CSS as presentation hints to automatically set an aspect-ratio that will also, in cases where the attributes were set wrongly, fall back to the image’s intrinsic aspect ratio. Therefore, concentrating on HTML alone is ideal for our content images. My previous approach seems sound but I now know a little more about why.

If I'm adding an image to an article on my blog, that's content. I want the reserved space to be the aspect ratio of the content. If I get the width and height attributes wrong, I'd rather the correct values were used from the content image. Therefore, width and height attributes feel like the best fit. This means I can just author content, I don't need to dip into inline styles.

If it's a design requirement that the layout of an image is a particular aspect ratio, enforcing that with aspect-ratio in CSS can be appropriate. For example, a hero image that must be 16 / 9 – if the image isn't quite 16 / 9 I don't want it messing up my design, I want the design to take priority. Although, if the image isn't actually that aspect ratio, it'll either end up stretched (object-fit: fill), letter-boxed (object-fit: contain), or cropped (object-fit: cover). None of which are ideal.
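
To make the distinction concrete, here’s my own quick sketch (not Jake’s code) of the two approaches:

<!-- Content image: width/height attributes reserve space at the image’s own ratio -->
<img src="red-panda.jpg" width="1598" height="1026" alt="A red panda" />

<!-- Design-led hero: CSS enforces 16 / 9 regardless of the image’s actual ratio -->
<img class="hero" src="hero.jpg" alt="The office exterior at dusk" />
<style>
  .hero {
    width: 100%;
    aspect-ratio: 16 / 9;
    object-fit: cover; /* crop to fit rather than stretch or letter-box */
  }
</style>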

Learn Responsive Design (on web.dev)

Jeremy Keith’s new course for Google’s web.dev learning platform is fantastic and covers a variety of aspects of responsive design including layout (macro and micro), images, icons and typography.

Harry Roberts says “Get Your Head Straight”

Harry Roberts (who created ITCSS for organising CSS at scale but these days focuses on performance) has just given a presentation about the importance of getting the content, order and optimisation of the <head> element right, including lots of measurement data to back up his claims. Check out the slides: Get your Head Straight

While some of the information about asset loading best practices is not new, the stuff about ordering of head elements is pretty interesting. I’ll be keeping my eyes out for a video recording of the presentation though, as it’s tricky to piece together his line of argument from the slides alone.

However one really cool thing he’s made available is ct.css, a bookmarklet for evaluating any website’s <head>.

Practical front-end performance tips

I’ve been really interested in the subject of Web Performance since I read Steve Souders’ book High Performance Websites back in 2007. Although some of the principles in that book are still relevant, it’s also fair to say that a lot has changed since then so I decided to pull together some current tips. Disclaimer: This is a living document which I’ll expand over time. Also: I’m a performance enthusiast but not an expert. If I have anything wrong, please let me know.

Inlining CSS and/or JavaScript

The first thing to know is that both CSS and JavaScript are (by default) render-blocking, meaning that when a browser encounters a standard .css or .js file in the HTML, it waits until that file has finished downloading before rendering anything else.

The second thing to know is that there is a “magic file size” when it comes to HTTP responses. Thanks to TCP slow start, roughly the first 14 kB of a response arrives in the first round trip; anything beyond that requires additional round trips.

If you have a lean page and minimal CSS and/or JavaScript, to the extent that the page combined with its CSS/JS content would weigh 14 kB or less after minifying and gzipping, you can achieve better performance by inlining your CSS and/or JavaScript into the HTML. There’d then be only one request, allowing the browser to get everything it needs to start rendering the page from that single response. So your page is gonna be fast.

If your page including CSS/JS is over 14 kB after minifying and gzipping then you’d be better off not inlining those assets. It’d be better for performance to link to external assets and let them be cached, rather than having a bloated HTML file that requires multiple round trips and doesn’t get the benefit of static asset caching.
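
In practice the decision boils down to one of two patterns (a sketch; the file name is illustrative):

<!-- Small pages (≤ ~14 kB total): inline everything, one round trip -->
<head>
  <style>/* entire minified site CSS here */</style>
</head>

<!-- Bigger pages: keep assets external and cacheable across pages -->
<head>
  <link rel="stylesheet" href="/css/main.css" />
</head>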

Avoid CSS @import

CSS @import is really slow! The browser can’t discover an imported file until it has downloaded and parsed the stylesheet that references it, so each @import adds a sequential round trip to the critical path. Prefer multiple <link> elements, or bundle the files at build time.
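
For example (illustrative file names):

<!-- Slow: type.css is only discovered after main.css has downloaded and been parsed -->
<link rel="stylesheet" href="main.css" />
<!-- …where main.css contains: @import url("type.css"); -->

<!-- Faster: both files are discovered immediately and download in parallel -->
<link rel="stylesheet" href="main.css" />
<link rel="stylesheet" href="type.css" />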

JavaScript modules in the head

Native JavaScript modules are included on a page using the following:

<script type="module" src="main.js"></script>

Unlike standard <script> elements, module scripts are deferred (non render-blocking) by default. Rather than placing them before the closing </body> tag I place them in the <head> so as to allow the script to be downloaded early and in parallel with the DOM being processed. That way, the JavaScript is already available as soon as the DOM is ready.

Background images

Sometimes developers implement an image as a CSS background image rather than a “content image”, either because they feel it’ll be easier to manipulate that way—a typical example being a responsive hero banner with overlaid text—or simply because it’s decorative rather than meaningful. However it’s worth being aware of how that impacts the way that image loads.

Outgoing requests for images defined in CSS rather than HTML won’t start until the browser has created the Render Tree. The browser must first download and parse the CSS then construct the CSSOM before it knows that “Element X” should be visible and has a background image specified, in order to then decide to download that image. For important images, that might feel too late.

As Harry Roberts explains it’s worth considering whether the need might be served as well or better by a content image, since by comparison that allows the browser to discover and request the image nice and early.

By moving the images to <img> elements… the browser can discover them far sooner—as they become exposed to the browser’s preload scanner—and dispatch their requests before (or in parallel to) CSSOM completion

However, if it still makes sense to use a background image and performance is important, Harry recommends including an accompanying hidden image inline, or preloading it in the <head> via link rel=preload.

Preload

From MDN’s preload docs, preload allows:

specifying resources that your page will need very soon, which you want to start loading early in the page lifecycle, before browsers' main rendering machinery kicks in. This ensures they are available earlier and are less likely to block the page's render, improving performance.

The benefits are most clearly seen on large and late-discovered resources. For example:

  • Resources that are pointed to from inside CSS, like fonts or images.
  • Resources that JavaScript can request such as JSON and imported scripts.
  • Larger images and videos.

I’ve recently used the following to assist performance of a large CSS background image:

<link rel="preload" href="bg-illustration.svg" as="image" media="(min-width: 60em)">

Self-host your assets

Using third-party hosting services for fonts or other assets no longer offers the previously-touted benefit of the asset potentially already being in the user’s browser cache: all major browsers now partition their HTTP caches per site, so a file cached for one site can’t be reused by another.

You can still take advantage of the benefits of CDNs for reducing network latency, but preferably as part of your own infrastructure.

Miscellaneous

Critical CSS – extracting and inlining just the above-the-fold styles – is often a wasted effort, since CSS is rarely the real bottleneck, so it’s generally not worth doing.

Astro

Astro looks very interesting. It’s in part a static site builder (a bit like Eleventy) but it also comes with a modern (revolutionary?) developer experience which lets you author components as web components or in a JS framework of your choice, but then renders those to static HTML for optimal performance. Oh, and as far as I can tell there’s no build pipeline!

Astro lets you use any framework you want (or none at all). And if most sites only have islands of interactivity, shouldn’t our tools optimize for that?

People have been posting some great thoughts and insights on Astro already.

(via @css)

Ruthlessly eliminating layout shift on netlify.com, by Zach Leatherman

I love hearing about clever front-end solutions which combine technologies and achieve multiple goals. In Zach’s post we hear how Netlify’s website suffered from layout shift when conditionally rendering dismissible promo banners, and how he addressed this by rethinking the problem and shifting responsibilities around the stack.

Here’s my summary of the smart ideas covered in the post:

  • decide on the appropriate server-rendered content… in this case showing rather than hiding the banner, making the most common use case faster to load
  • have the banner “dismiss” button’s event handling script store the banner’s href in the user’s localStorage as an identifier accessible on return visits
  • process lightweight but critical JavaScript logic early in the <head>… in this case a check for this banner’s identifier existing in localStorage
  • under certain conditions – in this case when the banner was previously seen and dismissed – set a “state” class (banner--hide) on the <html> element, leading to the component being hidden seamlessly by CSS
  • build the banner as a web component, the first layer of which being a custom element <announcement-banner> and the second a JavaScript class to enhance it
  • delegate responsibility for presenting the banner’s “dismiss” button to the same script responsible for the component’s enhancements, meaning that a broken button won’t be presented if that script were to break.

So much to like in there!
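
Pulling those pieces together, the early <head> logic might look something like this – a minimal sketch where the banner--hide class and <announcement-banner> element come from the post, while the localStorage key and stored value are my assumptions:

<script>
  // Runs inline, early in the <head>, before the banner is server-rendered below
  try {
    // Was this banner dismissed on a previous visit?
    if (localStorage.getItem("dismissed-banner") === "/current-promo/") {
      document.documentElement.classList.add("banner--hide");
    }
  } catch (e) {
    // localStorage unavailable; fall back to showing the banner
  }
</script>
<style>
  .banner--hide announcement-banner {
    display: none;
  }
</style>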

Here are some further thoughts the article provoked.

Web components FTW

It feels like creating a component such as this one as a web component leads to a real convergence of benefits:

  • tool-free, async loading of the component JS as an ES module
  • fast, native element discovery (no need for a document.querySelector)
  • enforces using a nice, idiomatic class providing encapsulation and high-performing native callbacks
  • resilience and progressive enhancement by putting all your JS-dependent stuff into the JS class and having that enhance your basic custom element. If that JS breaks, you still have the basic element and won’t present any broken elements.

Even better, you end up with a framework-independent, standards-based component that you could share with others for reuse elsewhere, just like Zach did.
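
A minimal sketch of that two-layer structure (the <announcement-banner> element name is Zach’s; the class internals here are my own illustration, matching the head-script sketch above):

<announcement-banner>
  <p>Big news! <a href="/current-promo/">Read the announcement</a></p>
  <button hidden>Dismiss</button>
</announcement-banner>

<script type="module">
  class AnnouncementBanner extends HTMLElement {
    connectedCallback() {
      const button = this.querySelector("button");
      const link = this.querySelector("a");
      // Only reveal the dismiss button once this enhancement has run,
      // so users never see a button that does nothing. (Zach toggles
      // opacity rather than hidden, to avoid reflow – see below.)
      button.hidden = false;
      button.addEventListener("click", () => {
        localStorage.setItem("dismissed-banner", link.getAttribute("href"));
        this.remove();
      });
    }
  }
  customElements.define("announcement-banner", AnnouncementBanner);
</script>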

Multiple banners

I could see there being a case where there are multiple banners during the same time period. I guess in that situation the localStorage banner value could be a stringified object rather than a simple, single-URL string.

Setting context on the root

It’s really handy to have a way to exert just-in-time control over the display of a server-rendered element in a way that avoids flashes of content… and adding a class to the <html> element offers that. In this approach, we run the small amount of JavaScript required to test a local condition (e.g. checking for a value in localStorage) really early. That lets us process our conditional logic before the element is rendered… although this also means that it’s not yet available in the DOM for direct manipulation. But adding a class to the HTML element means that we can pre-prepare CSS to use that class as a contextual selector for hiding the element.

We’re already familiar with the technique of placing classes on the root element from libraries like modernizr and some font-loading approaches, but this article serves as a reminder that we can employ it whenever we need it.

Handling the close button

Zach’s approach to handling the banner’s dismiss button was interesting. He makes sure that it’s not shown unless the web component’s JavaScript runs successfully which is great, but rather than inject it with JavaScript he includes it in the initial HTML but hidden with CSS, and his method of hiding is opacity.

We use opacity to toggle the close button so that it doesn’t reflow the component when it’s enabled via JavaScript.

I think what Zach’s saying is that the alternatives – inserting the button with JS, or toggling the hidden attribute or its CSS counterpart display:none – would affect geometry causing the browser to perform layout… whereas modifying opacity does not.

I love that level of diligence! Typically I prefer to delegate responsibility for inserting JS-dependent buttons to JavaScript because in comparison to including a button in the server-rendered HTML then hiding it, it feels more resilient and a more maintainable separation of concerns. However as always the best solution depends on the situation.

If I were going down Zach’s route I think I’d replace opacity with visibility, since visibility: hidden removes the hidden element from the accessibility tree (whereas opacity: 0 leaves it exposed to screen readers), which feels more accessible, while still avoiding the reflow that display would trigger.
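
In CSS terms that alternative would look something like this (my sketch, not from the article; the enhanced class is hypothetical):

<style>
  /* No reflow when toggled, and hidden from the accessibility
     tree while inactive (unlike opacity: 0) */
  announcement-banner button {
    visibility: hidden;
  }
  announcement-banner.enhanced button {
    visibility: visible;
  }
</style>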

Side-thoughts

In a server-side scripted application – one using Rails or PHP, for example – you could alternatively handle persisting state with cookies rather than localStorage… allowing you to test for the presence of the cookie on the server then handle conditional rendering of the banner on the server too, rather than needing classes which trigger hiding. I can see an argument for that. Thing is though, not everyone’s working in that environment. Zach has provided a standalone solution.

Big picture performance analysis using Lighthouse Parade (on Cloud Four)

I like the sound of this performance analysis tool from the clever folks at Cloud Four, especially because it covers your entire site rather than just a single page.

Lighthouse Parade is a Node.js command line tool that crawls a domain and gathers lighthouse performance data for every page. With a single command, the tool will crawl an entire site, run a Lighthouse report for each page, and then output a spreadsheet with the aggregated data.

It was also easy to run with no local installation, via:

npx lighthouse-parade https://fuzzylogic.me

However it was taking a long time to run over all the pages of my statically generated site (there are lots of pages), so much so that I stopped it because my laptop sounded like it was about to take off. If I run it again I’ll probably only do so on a smaller site, or first check the tool’s documentation to see whether scope-limiting or sitemap-based crawling support has been added.

Creating websites with prefers-reduced-data (on polypane.app)

Even though more and more people get access to the internet every day, not all of them have fast gigabit connections or unlimited data. Using the media query prefers-reduced-data we can keep our sites accessible to everyone.

I’ve long wondered if there could be a way to tailor content to a user based on the speed of their internet connection, especially when considering features like responsive images. This looks like it might be the way to do that (although browser support is currently non-existent). It also allows us to respect the user’s wishes on how much data / battery life etc they’re willing to spare for your website.

(via @adactio)
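
As a sketch of how it might be used once support arrives (the selectors and font choices are illustrative):

<style>
  /* Degrades gracefully: browsers that don’t recognise the query
     simply ignore this block and load the full experience */
  @media (prefers-reduced-data: reduce) {
    body {
      font-family: system-ui, sans-serif; /* skip the webfont download */
    }
    .hero {
      background-image: none; /* skip the big decorative image */
    }
  }
</style>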

Daniel Post shared a really cool performance-optimisation trick for Eleventy on Twitter the other day. When statically generating your site you can loop through your pages and, for each, use PurgeCSS to find the required CSS, then inline that into the <head>. This way, each page contains only the CSS it needs and no more!

Check out the code.

I’ve just installed this on my personal site. I was already inlining my CSS into the <head> but the promise of only including the minimum CSS that each specific page needs was too good to resist.

Turned out it was a breeze to get working, a nice introduction to Eleventy transforms, and so far it’s working great!
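
For anyone curious, the overall shape of such a transform is roughly this – a sketch based on my understanding of the technique rather than Daniel’s actual code, with an assumed stylesheet path:

// .eleventy.js
const { PurgeCSS } = require("purgecss");

module.exports = function (eleventyConfig) {
  eleventyConfig.addTransform("purge-and-inline-css", async function (content, outputPath) {
    // Only process rendered HTML pages
    if (!outputPath || !outputPath.endsWith(".html")) return content;

    // Keep only the CSS rules this particular page uses…
    const [purged] = await new PurgeCSS().purge({
      content: [{ raw: content, extension: "html" }],
      css: ["_includes/css/main.css"],
    });

    // …and inline them into the <head>
    return content.replace("</head>", `<style>${purged.css}</style></head>`);
  });
};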

Thoughts on inline JavaScript event handlers in the <head>

I’ve been thinking about Scott Jehl’s “simplest way to load external CSS asynchronously” technique. I’m interested in its use of an inline (onload) event handler for running JavaScript-based enhancements in the <head>, in the context of some broader ruminations on how best to progressively enhance UI elements with JavaScript (for example adding toggle show/hide) without causing layout jank.

One really interesting aspect of using inline event handlers to apply enhancements was highlighted by Chris Ferdinandi today: as JavaScript goes, it’s pretty resilient.

Because we’re dealing with an HTML element directly in the document, and because the relevant JS is inline on that element and not dependent on any external files, the only case where the JS won’t run is if someone has JS completely turned off – a sub-1% proportion. The other typical JavaScript resilience pitfalls – such as network connections timing out, CDN failure and JS errors elsewhere blocking your code from running – simply don’t apply here.

The Simplest Way to Load CSS Asynchronously (Filament Group)

Scott Jehl of Filament Group demonstrates a one-liner technique for loading external CSS files without them delaying page rendering.

While this isn’t really necessary in situations where your (minified and compressed) CSS is small, say 14 kB or below, it could be useful when you’re working with large CSS files and want to deliver critical CSS separately and the rest asynchronously.

Today, armed with a little knowledge of how the browser handles various link element attributes, we can achieve the effect of loading CSS asynchronously with a short line of HTML. Here it is, the simplest way to load a stylesheet asynchronously:

<link rel="stylesheet" href="my.css" media="print" onload="this.media='all'">

Note that if JavaScript is disabled or otherwise not available your stylesheet will only load for print and not for screen, so you’ll want to follow up with a “normal” (non-print-specific) stylesheet within <noscript> tags.
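
So the complete pattern, with the fallback Scott describes, is:

<link rel="stylesheet" href="my.css" media="print" onload="this.media='all'">
<noscript>
  <link rel="stylesheet" href="my.css">
</noscript>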

How to optimise performance when using Google-hosted fonts (on CSS Wizardry)

A combination of asynchronously loading CSS, asynchronously loading font files, opting into FOFT, fast-fetching asynchronous CSS files, and warming up external domains makes for an experience several seconds faster than the baseline.

Harry Roberts suggests that, while self-hosting your web fonts is likely to be the overall best solution to performance and availability problems, we’re able to design some fairly resilient measures to help mitigate a lot of these issues when using Google Fonts.

Harry then kindly provides a code snippet that we can use in the <head> of our document to apply these measures.

We’ve ruined the Web. Here’s how we fix it. (This is HCD podcast)

During the COVID situation, people have an urgent need to access critical information online. But in 2020, the average webpage is rammed full of large JavaScript files, huge images etc, and as a result is slow to load. This problem is likely to be most keenly felt by those who don’t have the luxury of fast internet – potentially the same people who need access to that critical information the most.

Here’s a brilliant discussion between Gerry McGovern and Jeremy Keith on that problem, suggesting tactics to help fix things such as performance budgets, introducing tactics at the design stage to mimic slow connections and other access constraints, optimising for return visits, progressive enhancement and more.

Loved this!

(via @adactio)

Emergency Website Kit (Max Böck)

In cases of emergency, many organizations need a quick way to publish critical information. But existing (CMS) websites are often unable to handle sudden spikes in traffic.

Fantastic effort by Max Böck here, stepping up during the COVID-19 pandemic to provide this Eleventy template for organisations to use to publish critical information as lean, resilient, static HTML pages.

CSS Triggers

Check whether or not a CSS property is a good candidate for smooth animation based on whether updates to its value trigger expensive changes (to, for example, “element geometry”) causing layout updates and repaints.

width and height are both poor candidates whereas transform is good.
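
For example, to animate an element growing, scaling via transform stays off the layout-and-paint path (a quick sketch):

<style>
  /* Janky: animating width triggers layout on every frame */
  .grow {
    width: 100px;
    transition: width 0.3s ease;
  }
  .grow:hover {
    width: 120px;
  }

  /* Smooth: transform is handled by the compositor */
  .scale {
    width: 100px;
    transition: transform 0.3s ease;
  }
  .scale:hover {
    transform: scaleX(1.2);
  }
</style>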

When should you add the defer attribute to the script element? (on Go Make Things)

For many years I’ve placed script elements just before the closing body tag rather than in the <head>. Since a standard <script> element is render-blocking, the theory is that by putting it at the end of the document – after the main content of the page has loaded – it’s no longer blocking anything, and there’s no need to wrap it in a DOMContentLoaded event listener.

It turns out that my time-honoured default is OK, but there is a better approach.

Chris has done the research for us and ascertained that placing the <script> in the <head> and adding the defer attribute has the same effect as putting that <script> just before the closing body tag but offers improved performance.

This treads fairly complex territory but my general understanding is this:

Using defer on a <script> in the <head> allows the browser to download the script earlier, in parallel, so that it is ready to be used as soon as the DOM is ready rather than having to be downloaded and parsed at that point.
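
That is, the winning combination is simply (script name illustrative):

<head>
  <!-- Downloads in parallel with HTML parsing, but only executes
       once the document has been parsed (in source order) -->
  <script src="main.js" defer></script>
</head>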

Some additional points worth noting:

  • Only use the defer attribute when the src attribute is present. Don’t use it on inline scripts because it will have no effect.
  • The defer attribute has no effect on module scripts (script type="module"). They defer by default.

Lazy load embedded YouTube videos (CSS-Tricks)

This is a very clever idea via Arthur Corenzan. Rather than use the default YouTube embed, which adds a crapload of resources to a page whether the user plays the video or not, use the little tiny placeholder webpage that is just an image you can click that is linked to the YouTube embed.

Here’s a performance-enhancing trick for embedding YouTube videos which eschews the default YouTube embed (and all the resources it adds to a page whether the video plays or not) in favour of rendering a tiny placeholder webpage via the iframe’s srcdoc attribute.

Importantly, the user experience with the video remains the same.
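
The trick looks roughly like this (my paraphrase of the technique; VIDEO_ID is a placeholder):

<iframe
  src="https://www.youtube.com/embed/VIDEO_ID"
  srcdoc="<a href='https://www.youtube.com/embed/VIDEO_ID?autoplay=1'>
            <img src='https://img.youtube.com/vi/VIDEO_ID/hqdefault.jpg' alt='Play the video'>
          </a>"
  title="The video’s title"
  loading="lazy"
  allow="autoplay"
  allowfullscreen
></iframe>

The browser renders the lightweight srcdoc document (just a linked thumbnail) instead of the heavy embed; only when the user clicks does the iframe navigate to the real player.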

Webfont loading strategies

When it comes to webfonts, if you want to serve an accessible and high performance experience across device types it’s not as straightforward as just specifying your fonts in CSS then hoping for the best.

We likely have goals such as the following:

  • avoid Flash of Invisible Text (FOIT). Flash of Unstyled Text (FOUT) is preferable.
  • provide a good experience for users on slow connections.
  • while favouring FOUT over FOIT, take care to minimise jarring reflows.

Browser support is also a factor. For example font-display: swap seems like a great option however as Chris Ferdinandi pointed out in Why use the vanilla JS fontfaceset.load method instead of CSS font-display: swap it doesn’t have comprehensive mobile support. Since mobile users are regularly on slower connections they are likely to be hit hard by load times incurred due to custom fonts, therefore a solution which doesn’t cater to them is less attractive.

I tend to go for FOUT with a class—an approach that is reliable and good enough for most use cases. The idea is to use JavaScript to detect when a font has loaded successfully, and only then add a fonts-loaded class to the HTML element causing CSS scoped to that class to take effect. To date I’ve handled the detection using Bram Stein’s Font Face Observer to benefit from greater browser support (it handles IE) than I’d get using the native FontFaceSet.load() however I expect to gravitate toward the native API solution soon.
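
With the native API, the detection might look like this (a sketch; the font name and fallback stack are illustrative):

<script>
  if ("fonts" in document) {
    // Request the weights we actually use, then flip the class
    Promise.all([
      document.fonts.load("1em Example Serif"),
      document.fonts.load("700 1em Example Serif"),
    ]).then(function () {
      document.documentElement.classList.add("fonts-loaded");
    });
  }
</script>
<style>
  body {
    font-family: Georgia, serif; /* fallback first: FOUT, not FOIT */
  }
  .fonts-loaded body {
    font-family: "Example Serif", Georgia, serif;
  }
</style>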

I’ve also experimented with pre-loading fonts in the <head> in combination with font-display: swap on this, my personal website. This was inspired by the font-loading approach Zach Leatherman took with CSS-Tricks. However due to the fonts I’m using right now I’d currently be preloading too heavy a load for slow connections (thus temporarily blocking initial page render) so I’d need to go further and introduce two-stage loading into that mix.

I’ve also found that your choice of font and font host can really influence (constrain) the flexibility and options you have for font features and loading.

  • font-display: swap can only be applied on a self-hosted font or one hosted on Google Fonts (via a special querystring parameter on the font URL). So this CSS-only approach is not available with (for example) fonts hosted on Adobe Fonts or fonts.com.
  • link rel=preload is only available for self-hosted fonts because you need to be able to rely on the URL not changing, and you can’t rely on that for externally hosted fonts.
  • You might want to use the opentype features of a font but are they available in your font hosting/loading solution? Some of the fonts hosted on Google Fonts have nice OpenType features but by default these features are pruned out.
  • Getting access to the source .ttf file is beneficial as it allows you to subset the font, keeping the special features and characters you need while cutting out any your website does not require. Google Fonts have an open license, which means we don’t need to lean on their hosting but can instead download fonts as .ttf, subset them (I use Glyphhanger) and self-host them.

This is all a work in progress and I’m still working this stuff out.

Native lazy-loading for the web

Now that we have the HTML attribute loading we can set loading="lazy" on our website’s media, and the loading of non-critical, below-the-fold media will be deferred until the user scrolls to them.

This can really improve performance so I’ve implemented it on images and iframes (youtube video embeds etc) throughout this site.

This is currently only supported in Chrome, but that still makes it well worth doing.

Stuffing the front end

Here’s Bridget Stewart, a developer from Ohio, with some thoroughly enjoyable “curmudgeonly” thoughts on why your website doesn’t necessarily need a Javascript framework.

With websites taking a median time of 5.8 seconds to show primary content (source: HTTPArchive) and Google warning us that 53% of mobile visits leave pages that take longer than three seconds to load, it’s hard to justify stuffing the front-end unnecessarily.

Bridget’s article also contains this quote which I love:

First, solve the problem. Then, write the code.

— John Johnson

Essentially, don’t select the tooling first—a principle I fully endorse.
