Vite + React Lighthouse Part 2: 81 → 100 (render-blocking, code-splitting, fonts)
In Part 1 I got this site’s Lighthouse performance to 81 by prerendering HTML at build time (SSG). The next step was to push it to 100 by fixing what was still dragging the score down: render-blocking resources, unused JavaScript, and layout shift.
This post covers the changes that took the score from 81 to 100: moving Tailwind and fonts off the critical path, code-splitting routes and post content, and self-hosting the grain texture.
Results (measured)
Measured on a production build (pnpm build + pnpm preview) using Lighthouse mobile (same as Part 1).
| Metric | Before (end of Part 1) | After (this post) |
|---|---|---|
| Performance (score) | 81 | 100 |
| FCP | 2.92 s | 1.0 s |
| LCP | 2.92 s | 1.9 s |
| Speed Index | 2.92 s | 1.0 s |
| CLS | 0.186 | 0 |
| TBT | 0 ms | 0 ms |
| Main thread | 1.2 s | 0.5 s |
| Bootup time | 0.8 s | 0.1 s |
| Render-blocking (est. savings) | 1,470 ms | 50 ms |
| Unused JavaScript | 125 KiB (~450 ms) | 32 KiB (~150 ms) |
The “Before” column uses Part 1’s final (post-SSG) numbers (81, 2.92 s FCP/LCP) so the two posts line up: Part 1 went 73 → 81 with SSG; Part 2 goes 81 → 100 with the fixes below. Main thread, bootup, render-blocking, and unused JS come from that same baseline run, before any Part 2 optimizations.
Takeaway: Removing render-blocking Tailwind and Google Fonts, self-hosting fonts, and code-splitting cut FCP/LCP and main-thread work sharply; CLS went to zero once fonts were no longer loading late from a third party.
Tip: Run Lighthouse against pnpm preview, not pnpm dev, or dev-server overhead will distort the score.
What was still wrong at 81
After SSG, the HTML had real content and TBT was already 0. The remaining issues were:
- Render-blocking (~1.47 s): The Tailwind CDN script (147 KB) and Google Fonts CSS were blocking the initial render; Lighthouse estimated ~875 ms and ~901 ms respectively. They delayed FCP/LCP.
- Unused JavaScript (~125 KiB): A single large main bundle meant a lot of JS was parsed but not used on the initial route; Lighthouse reported ~450 ms potential savings.
- CLS (0.186): Cumulative Layout Shift was above the 0.1 “good” threshold; in practice this was driven by late-loading fonts (Google Fonts) and layout without reserved space.
- Third-party requests: tailwindcss.com, Google Fonts (fonts.googleapis.com + fonts.gstatic.com), and grainy-gradients.vercel.app (noise.svg) added latency and blocking.
I fixed these by: (1) moving Tailwind to build-time and self-hosting fonts, (2) code-splitting routes and post content and adding manualChunks, (3) self-hosting the grain texture.
Fix 1 — Tailwind from CDN to build-time
Previously, index.html loaded Tailwind from the CDN and had a large inline tailwind.config and <style type="text/tailwindcss"> block. That meant the browser had to fetch and parse the CDN script, then run Tailwind at runtime. Both blocked first paint.
I removed from index.html:
- The Tailwind CDN `<script src="https://cdn.tailwindcss.com?plugins=forms,typography,..."></script>` tag
- The Google Fonts `<link>` and preconnect
- The inline `<script>` with `tailwind.config`
- The entire `<style type="text/tailwindcss">` block (base, grain, prose, keyframes, etc.)
- The `<link rel="stylesheet" href="/index.css">` stylesheet link
Tailwind is now run at build time via PostCSS (@tailwindcss/postcss, tailwindcss v4). Theme, plugins (forms, typography), and custom utilities live in a root styles.css that is imported from the client entry so Vite bundles one CSS file. The <head> stays minimal: only meta, title, and the prerender placeholder script.
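For reference, the PostCSS side of this is just the Tailwind plugin. A minimal sketch of the config for Tailwind v4 (the exact file name, here postcss.config.mjs, is an assumption; your setup may differ):

```ts
// postcss.config.mjs (sketch): run Tailwind v4 through PostCSS at build time
export default {
  plugins: {
    '@tailwindcss/postcss': {},
  },
};
```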
Client entry now pulls in fonts and styles first:
```ts
import './fonts.css';
import './styles.css';
import React from 'react';
// ...
```
That way the CSS is generated at build time and shipped as a single static file, so first paint no longer waits on a third-party script or on Tailwind compiling styles at runtime.
Fix 2 — Fonts non-blocking and self-hosted
Google Fonts added a render-blocking stylesheet and a late font swap, which hurt FCP and CLS. I replaced them with self-hosted subsets via @fontsource/inter and @fontsource/jetbrains-mono.
I added a fonts.css that imports only the weights I use (Latin subsets of Inter and JetBrains Mono) and import it in entry-client.tsx before styles.css. The font files are bundled with the app, so there’s no extra third-party request and no late swap: the browser fetches font data from the same origin and the layout stays stable. That eliminated the font-driven CLS and cut render-blocking to near zero.
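If you’d rather not keep a separate fonts.css, the same subsets can be pulled in with side-effect imports straight from the entry. A sketch, assuming Latin subsets and a few weights (adjust to whatever the design actually uses):

```ts
// entry-client.tsx (sketch): self-hosted Fontsource subsets; the weights listed are assumptions
import '@fontsource/inter/latin-400.css';
import '@fontsource/inter/latin-600.css';
import '@fontsource/inter/latin-700.css';
import '@fontsource/jetbrains-mono/latin-400.css';
```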
Fix 3 — Code-split routes
The app used static imports for every page, so one big bundle was loaded on first paint even though the user only needed one route. I switched to React.lazy and wrapped the route tree in Suspense:
```tsx
import React, { Suspense, lazy } from 'react';
import { BrowserRouter, Routes, Route } from 'react-router-dom';

const Home = lazy(() => import('./pages/Home'));
const Experience = lazy(() => import('./pages/Experience'));
const Projects = lazy(() => import('./pages/Projects'));
const Writing = lazy(() => import('./pages/Writing'));
const Post = lazy(() => import('./pages/Post'));
const Contact = lazy(() => import('./pages/Contact'));
const NotFound = lazy(() => import('./pages/NotFound'));

export const AppRoutes: React.FC = () => (
  <Suspense fallback={null}>
    <Routes>
      <Route path="/" element={<Home />} />
      {/* ... */}
    </Routes>
  </Suspense>
);
```
Now the initial load only fetches the route chunk for the current URL; other pages load on demand. That cut unused JS on the homepage and improved bootup time (Lighthouse “bootup time” went from 0.8 s to 0.1 s).
Fix 4 — Split post metadata vs full content
The blog list only needs post metadata (title, excerpt, date, etc.), but previously constants.ts imported every post module, so the full markdown of every post was in the main bundle. I split that:
- Client: `constants.ts` now imports `POSTS_METADATA` from a generated file (`posts-metadata.generated.ts`) produced by a small script that reads only metadata from each post module (sketched after this list). So the client bundle gets a small list of posts, not their bodies.
- Full content on demand: When the user navigates to `/blog/:id`, the `Post` page calls `getPostContent(id)`, which uses `import.meta.glob('./posts/*.ts', { eager: false })` to dynamically load only that post’s module.
- SSR/prerender: `entry-server.tsx` imports `POSTS` from `constants-server.ts`, which still imports full post modules so prerendered blog pages have the full content in HTML.
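The generation script itself is small. A rough sketch of what it can look like, run with tsx before the build; the ./types import and directory layout are assumptions, not the site’s actual script:

```ts
// scripts/generate-posts-metadata.ts (sketch): emit metadata only, dropping post bodies
import { readdir, writeFile } from 'node:fs/promises';
import { pathToFileURL } from 'node:url';
import path from 'node:path';

async function main() {
  const postsDir = path.resolve('src/posts');
  const files = (await readdir(postsDir)).filter((f) => f.endsWith('.ts'));

  const metadata: Array<Record<string, unknown>> = [];
  for (const file of files) {
    // Load the post module, then strip the body so only metadata is written out
    const mod = await import(pathToFileURL(path.join(postsDir, file)).href);
    const { content, ...meta } = mod.default;
    metadata.push(meta);
  }

  const out =
    '// Auto-generated; do not edit by hand.\n' +
    "import type { PostMetadata } from './types';\n\n" +
    `export const POSTS_METADATA: PostMetadata[] = ${JSON.stringify(metadata, null, 2)};\n`;

  await writeFile(path.resolve('src/posts-metadata.generated.ts'), out);
}

main();
```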
constants.ts after the change:
```ts
import { POSTS_METADATA } from './posts-metadata.generated';

/** Metadata only; full content is loaded on demand via getPostContent (client) or constants-server (SSR). */
export const POSTS: PostMetadata[] = POSTS_METADATA;
```
Post.tsx loads content when there’s no serverPost (client navigation) and shows a skeleton while loading:
```tsx
const [clientPost, setClientPost] = useState<(PostMetadata & { content?: string }) | null>(null);
const [loading, setLoading] = useState(!serverPost && !!id);

useEffect(() => {
  if (serverPost || !id) return;
  getPostContent(id)
    .then(setClientPost)
    .catch(() => setNotFound(true))
    .finally(() => setLoading(false));
}, [id, serverPost]);

if (loading || !post) {
  return (
    <Layout>
      <div className="animate-pulse space-y-4">
        {/* skeleton placeholders */}
      </div>
    </Layout>
  );
}
```
getPost.ts uses Vite’s glob import so each post is a separate chunk:
```ts
const postLoaders = import.meta.glob<{ default: PostMetadata & { content?: string } }>('./posts/*.ts', {
  eager: false,
});

export async function getPostContent(id: string): Promise<PostMetadata & { content?: string }> {
  const loader = postLoaders[`./posts/${id}.ts`];
  if (!loader) {
    // Unknown id: throw so the caller's .catch() can flip to the not-found state
    throw new Error(`No post module found for id "${id}"`);
  }
  const mod = await loader();
  return mod.default;
}
```
For prerender, the server uses full posts:
```ts
import { POSTS } from './constants-server';
```
So the client pays only for metadata + the one post the user opens; SSR still gets full content for static HTML.
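constants-server.ts itself isn’t shown above; a minimal sketch of what it could look like, using an eager glob so every post module (body included) is available to the prerender pass (the real file may list posts explicitly instead):

```ts
// constants-server.ts (sketch): eagerly import full post modules for SSR/prerender
import type { PostMetadata } from './types'; // path is an assumption

const modules = import.meta.glob<{ default: PostMetadata & { content?: string } }>('./posts/*.ts', {
  eager: true,
});

// Sort however the site expects (e.g. by date) before exporting
export const POSTS = Object.values(modules).map((m) => m.default);
```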
Fix 5 — Vite manualChunks
I added manualChunks in vite.config.ts so React/React DOM/React Router and the markdown stack are in separate chunks. That improves caching (vendor and markdown change less often than app code) and keeps the main bundle smaller:
```ts
build: {
  rollupOptions: {
    output: {
      manualChunks(id) {
        if (id.includes('node_modules/react/') || id.includes('node_modules/react-dom/')) {
          return 'vendor-react';
        }
        if (id.includes('node_modules/react-router')) {
          return 'vendor-react';
        }
        if (id.includes('node_modules/react-markdown') || id.includes('node_modules/remark-gfm')) {
          return 'markdown';
        }
      },
    },
  },
},
```
Fix 6 — Self-host grain texture
The grain overlay used background-image: url("https://grainy-gradients.vercel.app/noise.svg"), which added a third-party request and a small delay. I dropped a noise.svg into public/ and updated the CSS (in styles.css) to url("/noise.svg"). Same visual, no extra origin, no blocking.
What we didn’t fix / next
Lighthouse still reports ~32 KiB unused JavaScript and ~50 ms render-blocking; the score is 100 because the impact is small. If you need to optimize further, you could trim more dependencies, lazy-load the markdown bundle only on blog post routes, or profile hydration with the Performance panel and React Profiler.
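For example, deferring the markdown stack would look much like the route splitting above; a sketch with placeholder names:

```tsx
// PostBody.tsx (sketch): load react-markdown only when a post body actually renders
import React, { Suspense, lazy } from 'react';

const Markdown = lazy(() => import('react-markdown'));

export const PostBody: React.FC<{ content: string }> = ({ content }) => (
  <Suspense fallback={null}>
    <Markdown>{content}</Markdown>
  </Suspense>
);
```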