
Performance: Add Lighthouse CI

Warren Gifford requested to merge tr/lighthouse-ci-add into main

Created by: umpox

This PR adds Lighthouse CI to track performance metrics on different branches.


It runs against local production instances of the Sourcegraph web app.

CI Flow

Lighthouse CI currently runs only on the sourcegraph-async pipeline. Because it does not yet assert against anything, it should not block any builds. This is also a good opportunity to check it for flakiness without risking broken builds.

Building:

  1. Build a production bundle of the Sourcegraph web app
  2. Upload the bundle to Buildkite as an artifact so that the upcoming Lighthouse runs can be parallelized for better pipeline performance

Running:

  1. Download prebuilt production bundle artifact
  2. Run Lighthouse through Puppeteer
     2.1. The config is derived from lighthouse.js, plus some additional flags that are only relevant in CI (a sketch of a possible config follows this list)
  3. Report back to the Lighthouse GitHub app using the LHCI_GITHUB_APP_TOKEN set in Buildkite
     3.1. Status checks are attached to PRs with the results of each performance run
     3.2. If running on main, we store the results of the build so future runs can generate comparison reports
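
To make the CI side concrete, here is a minimal sketch of what a Lighthouse CI config could look like for this flow. The file name (lighthouserc.js), server command, URL, run count, and Chrome flags are assumptions for illustration; the actual values live in lighthouse.js and the Buildkite environment.

```js
// lighthouserc.js — hypothetical sketch, not the actual config from this PR.
// Read by Lighthouse CI (@lhci/cli) when `lhci collect` / `lhci upload` run in CI.
module.exports = {
  ci: {
    collect: {
      // Serve the prebuilt production bundle downloaded from Buildkite
      // (the command is an assumption for this example).
      startServerCommand: 'yarn serve:prod',
      // Pages to audit (example URL; the real set comes from lighthouse.js).
      url: ['http://localhost:3080/search'],
      // Multiple runs smooth out variance in metrics such as First Contentful Paint.
      numberOfRuns: 3,
      settings: {
        // CI-only Chrome flags, e.g. for running headless inside a container.
        chromeFlags: '--headless --no-sandbox',
      },
    },
    upload: {
      // With LHCI_GITHUB_APP_TOKEN set in the environment, the Lighthouse GitHub app
      // attaches a status check with the results to the PR.
      target: 'temporary-public-storage',
    },
  },
};
```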

Local flow

Locally there is no need to upload reports or update PR status checks. You can still configure Lighthouse through the CLI, or by updating lighthouse.js. Just run yarn test-lighthouse and a report will open automatically when the tests have finished.
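
As a rough illustration of the local flow, a script behind yarn test-lighthouse could run Lighthouse through Puppeteer along these lines. The script, page URL, and options below are assumptions; the real configuration lives in lighthouse.js.

```js
// Hypothetical sketch of a local Lighthouse run through Puppeteer; not the actual
// implementation behind `yarn test-lighthouse`.
const puppeteer = require('puppeteer');
const lighthouse = require('lighthouse');
const { writeFileSync } = require('fs');

async function run() {
  // Launch Chrome via Puppeteer and reuse its remote-debugging port for Lighthouse.
  const browser = await puppeteer.launch();
  const port = Number(new URL(browser.wsEndpoint()).port);

  // Audit a page served by the local production build (URL is an assumption).
  const { report, lhr } = await lighthouse('http://localhost:3080/search', {
    port,
    output: 'html',
    logLevel: 'info',
  });

  // Write the HTML report and print the headline performance score.
  writeFileSync('lighthouse-report.html', report);
  console.log('Performance score:', lhr.categories.performance.score);

  await browser.close();
}

run().catch(err => {
  console.error(err);
  process.exit(1);
});
```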

Notes

It is quite difficult to get our local 'production' server to mimic how it would be on a deployed site. There are some caveats to address:

  1. Local reports have a lower SEO score than in production. This is because the meta description tag is attached dynamically by the actual production server, which we don't mimic locally, and because we do not serve our robots.txt file locally.
  2. Local reports have a lower 'Best practices' score than in production. This is mainly because we do not support HTTPS locally.
  3. The 'First Contentful Paint' is consistently around 0.6s in local reports, whereas in production it averages between 1.0s and 1.6s. We need to investigate further what is happening here.

Aside from these caveats, local results are quite close to production results.

Next steps

  • Improve accuracy by making the local production server closer to the actual production server.
  • Enforce minimum performance requirements by creating assertions and a performance budget (a sketch follows this list).
    • The best implementation here would likely be to initially set our performance budget low, and slowly raise it as we improve various performance metrics until we hit our targets.
  • Enforce other assertions available through Lighthouse checks (e.g. accessibility, best practices, SEO)
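
To make the assertion idea concrete, a performance budget could be expressed in the Lighthouse CI config roughly like this. The category scores and metric thresholds below are placeholder values for illustration, not targets this PR commits to.

```js
// Hypothetical assert block for lighthouserc.js — placeholder thresholds only.
module.exports = {
  ci: {
    assert: {
      assertions: {
        // Start with a deliberately low budget and ratchet it up over time.
        'categories:performance': ['error', { minScore: 0.5 }],
        // Other Lighthouse categories can be asserted the same way.
        'categories:accessibility': ['warn', { minScore: 0.8 }],
        'categories:best-practices': ['warn', { minScore: 0.8 }],
        'categories:seo': ['warn', { minScore: 0.8 }],
        // Metric-level budgets, in milliseconds.
        'first-contentful-paint': ['warn', { maxNumericValue: 2000 }],
      },
    },
  },
};
```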

Closes https://github.com/sourcegraph/sourcegraph/issues/24870
