A Lighthouse score is not a tuning tool but a mirror reflecting the health of the architecture.

Why Do Similar Sites End Up with Different Scores?

If you are aiming for high Lighthouse scores, it is not enough to keep compressing images, deferring script loading, and patching layout shifts. Observing actual projects reveals that the difference between sites that consistently maintain high scores and those whose scores drop with each new feature lies not in how much effort is made, but in the choices made during the design phase. Sites that require less processing from the browser during page load tend to have more stable scores.

What Lighthouse Truly Evaluates

Lighthouse is not a tool for judging which framework is superior, but a means of quantifying actual user experience.

  • Speed until content is visible on screen (First Contentful Paint, Largest Contentful Paint)
  • How much JavaScript blocks the main thread (Total Blocking Time)
  • Layout stability during page load (Cumulative Layout Shift)
  • Accessibility and crawlability of the document structure

These metrics reflect how early architectural decisions shape performance. Pages that depend heavily on large client-side bundles inevitably score lower; pages built on static HTML tend to perform more predictably.
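
Each of these categories can be measured outside the browser UI, which makes regressions observable as they happen. A minimal sketch using the lighthouse and chrome-launcher npm packages (the URL is a placeholder) could look like this:

    import lighthouse from 'lighthouse';
    import * as chromeLauncher from 'chrome-launcher';

    // Launch headless Chrome and point Lighthouse at the deployed page.
    const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
    const result = await lighthouse('https://example.com/', {
      port: chrome.port,
      onlyCategories: ['performance'],
    });

    if (result) {
      // Category scores are reported on a 0-1 scale.
      console.log('Performance:', Math.round((result.lhr.categories.performance.score ?? 0) * 100));
      console.log('LCP:', result.lhr.audits['largest-contentful-paint'].displayValue);
    }

    await chrome.kill();

Wiring a script like this into CI turns the score from something checked occasionally into something observed continuously.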

JavaScript and Hydration: The Main Culprits of Performance Decline

Across many audit projects, JavaScript execution consistently emerges as the biggest factor dragging down Lighthouse scores. This is not a matter of code quality but a fundamental constraint of the browser's single-threaded execution model.

Hydration is particularly demanding: initializing the framework runtime, analyzing the dependency graph, and restoring component state must all complete before the page becomes interactive. It is not uncommon for minimal interactivity to require a disproportionately large JavaScript bundle.

Architectures that assume JavaScript by default require ongoing optimization to maintain performance. On the other hand, architectures that treat JavaScript as an explicit opt-in produce more stable results.
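
Frameworks like Astro formalize this opt-in with islands, but the idea can be sketched in plain TypeScript: fetch and mount a widget's code only when its placeholder scrolls into view. The '#comments' selector, module path, and mount() export below are hypothetical:

    // Hydrate a widget on demand instead of booting a framework on page load.
    const placeholder = document.querySelector<HTMLElement>('#comments');

    if (placeholder) {
      const observer = new IntersectionObserver(async (entries) => {
        if (entries.some((entry) => entry.isIntersecting)) {
          observer.disconnect();
          // The widget's code is fetched only once the user can see it.
          const { mount } = await import('./comments-widget.js');
          mount(placeholder);
        }
      });
      observer.observe(placeholder);
    }

Until the visitor reaches the placeholder, the page ships and executes no widget code at all, so the main thread stays free during load.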

The Certainty Brought by Static Generation

Delivering pre-rendered HTML removes several variables from the performance equation:

  • No per-request server-side rendering delay
  • No client-side bootstrap work before the page is usable
  • The browser receives complete, predictable HTML

As a result, key metrics such as TTFB, LCP, and CLS improve almost automatically. Static generation does not guarantee a perfect score, but it significantly narrows the range of ways a page can fail.
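
As a rough sketch of what "pre-rendered" means in practice, a build step simply writes one complete HTML file per page. The Post shape and output paths below are illustrative, not a specific generator's API:

    import { mkdir, writeFile } from 'node:fs/promises';

    // Illustrative content shape: each post already carries rendered HTML.
    interface Post {
      slug: string;
      title: string;
      html: string;
    }

    async function build(posts: Post[]): Promise<void> {
      await mkdir('dist', { recursive: true });
      for (const post of posts) {
        // Each page is finished at build time: no per-request rendering,
        // no client-side bootstrap before content is visible.
        const page = '<!doctype html>\n<html lang="en">\n'
          + `<head><meta charset="utf-8"><title>${post.title}</title></head>\n`
          + `<body><main><h1>${post.title}</h1>${post.html}</main></body>\n</html>\n`;
        await writeFile(`dist/${post.slug}.html`, page);
      }
    }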

Case Study: Lessons Learned from Rebuilding a Personal Blog

When rebuilding my blog, I tried several standard approaches. A React-based setup that relied on hydration by default was flexible, but every new feature raised fresh questions about rendering modes, data fetching, and bundle size.

I decided to experiment with a different approach: making static HTML the default and treating JavaScript as an exception. I chose Astro for this experiment because its default constraints aligned with the hypotheses I wanted to test.

What stood out was not the initial high score but how little effort was needed to maintain that score over time. Publishing new content did not cause regressions, and small interactive elements did not trigger unrelated warnings. The baseline simply remained unperturbed.

No One-Size-Fits-All Solution

This approach is not necessarily optimal in all cases. For applications requiring authenticated user data, real-time updates, or complex client-side state management, a static-first architecture may fall short.

Client-side rendering frameworks are more advantageous where such flexibility is needed, though they come with increased runtime complexity. The key point is that the choice of architecture directly influences Lighthouse metrics.

What Influences the Stability of Lighthouse Scores

Lighthouse exposes not effort but entropy.

Systems that depend on runtime calculations tend to accumulate complexity as features are added. Conversely, systems that shift processing to build time can inherently suppress this complexity.

This difference explains why some sites require constant performance tuning, while others remain stable with minimal intervention.
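
A small, hypothetical illustration of that shift: a blog's tag index can be computed once at build time and shipped as a static file, instead of being recomputed in every visitor's browser:

    import { writeFile } from 'node:fs/promises';

    // Hypothetical shape: each post carries a list of tags.
    interface Post {
      slug: string;
      tags: string[];
    }

    async function buildTagIndex(posts: Post[]): Promise<void> {
      const index: Record<string, string[]> = {};
      for (const post of posts) {
        for (const tag of post.tags) {
          (index[tag] ??= []).push(post.slug);
        }
      }
      // Shipped as a static asset: the runtime cost collapses to one cached fetch.
      await writeFile('dist/tag-index.json', JSON.stringify(index));
    }

The work is identical; what changes is whether it runs once at build time or on every page view.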

Conclusion: Scores Are Not to Be Chased, But to Be Observed

High Lighthouse scores are less a result of aggressive optimization efforts and more a natural outcome of architectures that minimize the work browsers do during page load.

When performance is embedded as a design constraint rather than a goal, Lighthouse becomes less a metric to chase and more an indicator of system health to observe. The key is not choosing the right framework but deciding where complexity is acceptable.
