Performance profiling in React: from "feels slow" to measured wins
Last month, a product manager pinged me: "The dashboard feels slow." No video, no steps to reproduce, just that sentence. This is how most performance work starts—vague reports that could mean anything from a 50ms delay to a 10-second hang.
Here's the workflow I've developed over years of production React work to turn those reports into measured, validated improvements that actually matter to users.
From "it's slow" to a reproducible baseline
The first step is always the same: get a reproducible case and a number.
Reproduce the slow path
Ask questions until you can see it yourself:
- Which page or feature?
- What data was loaded (account size, filters applied)?
- What action triggered the slowness (initial load, clicking something, scrolling)?
- What device/browser?
Often "slow" means different things. Initial load slowness is a different problem than interaction jank. A dashboard that's fine with 50 rows but unusable with 5,000 is a different problem than one that's slow on first render.
Capture a baseline number
Once you can reproduce it, measure it. I use three tools depending on what I'm investigating:
- Chrome DevTools Performance panel – Record a trace, look at the flame chart. This tells you where time is spent: scripting, rendering, painting, or idle.
- React DevTools Profiler – Shows component render times and what triggered re-renders. Essential for React-specific issues.
- performance.mark() and performance.measure() – For custom measurements. I'll often wrap a specific interaction:
performance.mark('filter-start');
applyFilters(data);
performance.mark('filter-end');
performance.measure('filter-duration', 'filter-start', 'filter-end');
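You can then read the measurement back in code, since performance.measure() records an entry the Performance API lets you query:
const [measure] = performance.getEntriesByName('filter-duration');
console.log(`Filtering took ${measure.duration.toFixed(1)}ms`);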
Write down your baseline. "Dashboard initial render: 2,400ms" or "Filter interaction: 800ms." You need this to validate that your changes actually helped.
What to measure: load, interaction, memory
Different symptoms require measuring different things.
Load performance
If the complaint is about initial load:
- FCP – First pixels show up (diagnostic).
- LCP – Main content visible (primary).
- CLS – Layout stability (primary).
- TTFB – Server responsiveness (diagnostic; often explains slow LCP).
- (Lab only) TBT – Proxy for interactivity issues during load.
Core Web Vitals are typically evaluated at the 75th percentile and segmented by mobile vs. desktop; web.dev spells out both the thresholds and this guidance.
Check the Network panel for large bundles or slow API calls. Check the Performance panel for long tasks blocking the main thread during load.
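If you'd rather log long tasks programmatically than hunt for them in the flame chart, the Long Tasks API can surface them (Chromium-based browsers only). A minimal sketch:
// Report main-thread tasks longer than 50ms; buffered asks for entries recorded before the observer existed
const longTaskObserver = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log(`Long task: ${Math.round(entry.duration)}ms at ${Math.round(entry.startTime)}ms`);
  }
});
longTaskObserver.observe({ type: 'longtask', buffered: true });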
Interaction performance
If it's sluggish to use after loading:
- INP – Field metric (real users): time from click/keypress to visual feedback. Lighthouse-style lab runs can't measure INP without real user input.
- TBT – Lab metric (e.g. Lighthouse): proxy for INP when you don't have real user input; reflects time spent in long tasks.
In the React Profiler, look for:
- Components that re-render when they shouldn't
- Components with high "self" time (expensive render logic)
- Cascading re-renders (one state change triggering renders across the tree)
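The DevTools Profiler is interactive; when I want render timings I can log or chart over time, React's <Profiler> component gives similar data programmatically. A minimal sketch, where ResultsTable and rows stand in for whatever subtree you suspect:
import { Profiler } from 'react';
function onRenderCallback(id, phase, actualDuration) {
  // actualDuration: time spent rendering this subtree for the current commit
  console.log(`${id} [${phase}]: ${actualDuration.toFixed(1)}ms`);
}
function ProfiledTable({ rows }) {
  return (
    <Profiler id="results-table" onRender={onRenderCallback}>
      <ResultsTable rows={rows} />
    </Profiler>
  );
}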
Memory
If the app degrades over time or with heavy use:
- Take heap snapshots in DevTools Memory panel
- Compare snapshots before and after actions to find leaks
- Watch for detached DOM nodes or growing arrays
I once tracked down a memory leak where event listeners weren't being cleaned up on unmount—the app would slow to a crawl after 20 minutes of use.
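The fix for that class of leak is to return a cleanup function from the effect that registers the listener. A minimal sketch of the pattern, using a window resize handler as a stand-in:
import { useEffect } from 'react';
function useWindowResize(onResize) {
  useEffect(() => {
    window.addEventListener('resize', onResize);
    // Without this cleanup, every mount adds another listener that never goes away
    return () => window.removeEventListener('resize', onResize);
  }, [onResize]);
}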
Techniques that consistently move the needle
Once you know what's slow and have a baseline, here are the techniques I reach for most often.
Code-splitting
If your bundle is large and blocking initial render, split it.
import { lazy, Suspense } from 'react';
// The chart code is fetched only when this component first renders it
const HeavyChart = lazy(() => import('./HeavyChart'));
function Dashboard({ data }) {
  return (
    <Suspense fallback={<ChartSkeleton />}>
      <HeavyChart data={data} />
    </Suspense>
  );
}
Code-splitting helps when:
- You have features not everyone uses (admin panels, export tools)
- You have heavy dependencies used in specific routes (charting libraries, editors)
- Initial bundle is over 200–300KB of JavaScript (compressed)
It doesn't help if users always need the code immediately anyway—you're just trading one wait for another.
Memoization
React.memo, useMemo, and useCallback prevent unnecessary work. But they're not free—they add memory overhead and comparison costs.
Use memoization when:
- A component re-renders often but its props rarely change
- You're computing derived data that's expensive to recalculate
- You're passing callbacks to memoized children
import { memo, useMemo } from 'react';
// Memoize expensive filtering so it reruns only when data or status changes
const filteredData = useMemo(
  () => data.filter(item => item.status === status),
  [data, status]
);
// Memoize a component that renders often with the same props
const DataRow = memo(function DataRow({ item, onSelect }) {
  return (/* ... */);
});
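The third bullet matters here: since DataRow is memoized, callbacks passed to it need a stable identity or the prop comparison fails on every render. useCallback pairs with memo for exactly that. A sketch, assuming a parent component that owns the selection state:
import { useState, useCallback } from 'react';
function DataTable({ items }) {
  const [selectedId, setSelectedId] = useState(null);
  // A fresh arrow function on each render would defeat memo()'s shallow prop check on DataRow
  const handleSelect = useCallback((id) => setSelectedId(id), []);
  return items.map(item => (
    <DataRow key={item.id} item={item} onSelect={handleSelect} />
  ));
}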
Don't scatter memo() everywhere hoping it helps. Profile first, identify which re-renders are actually expensive, then memoize those.
Virtualization
If you're rendering hundreds or thousands of items, virtualize.
import { FixedSizeList } from 'react-window';
function VirtualList({ items }) {
  return (
    <FixedSizeList
      height={600}
      width="100%"
      itemCount={items.length}
      itemSize={50}
    >
      {({ index, style }) => (
        // react-window positions each visible row via the style prop
        <div style={style}>{items[index].name}</div>
      )}
    </FixedSizeList>
  );
}
Virtualization helps when:
- You have lists or tables with 100+ items
- Each item isn't trivial to render
- Users don't need all items visible simultaneously
It adds complexity (scroll position management, dynamic heights, keyboard navigation), so don't reach for it until you've confirmed rendering is the bottleneck.
Debouncing and throttling
For interactions that fire rapidly (typing, scrolling, resizing):
import debounce from 'lodash.debounce';
import { useCallback, useMemo, useEffect } from 'react';
function SearchBox() {
  const fetchResults = useCallback((query) => {
    // ... fetch or filter results for the query
  }, []);
  // Recreate the debounced wrapper only if fetchResults ever changes
  const debouncedSearch = useMemo(
    () => debounce((query) => fetchResults(query), 300),
    [fetchResults]
  );
  // Cancel any pending call on unmount so it can't fire afterwards
  useEffect(() => () => debouncedSearch.cancel(), [debouncedSearch]);
  return <input onChange={(e) => debouncedSearch(e.target.value)} />;
}
This is often the fix for "typing in the search box is laggy"—the app was fetching or filtering on every keystroke. Keep fetchResults stable (e.g. via useCallback) so the debounced function doesn't call a stale closure; cancel on unmount so you don't get late state updates after the component is gone.
Moving work off the main thread
For truly expensive computations, consider Web Workers. Worker wiring depends on your bundler; the pattern below uses the new URL(..., import.meta.url) form that both Vite and webpack understand. Create the worker once and terminate it on unmount so you don't leak it:
import { useEffect, useMemo, useState } from 'react';
function FilteredResults({ data, filters }) {
  const [filteredData, setFilteredData] = useState([]);
  // Create the worker once per component instance
  const worker = useMemo(
    () => new Worker(new URL('./filterWorker.js', import.meta.url), { type: 'module' }),
    []
  );
  // Terminate on unmount so the worker doesn't outlive the component
  useEffect(() => () => worker.terminate(), [worker]);
  // Receive results coming back from the worker
  useEffect(() => {
    worker.onmessage = (e) => setFilteredData(e.data);
    return () => { worker.onmessage = null; };
  }, [worker]);
  // When there's data to process, hand it off instead of blocking the main thread
  useEffect(() => {
    worker.postMessage({ data, filters });
  }, [worker, data, filters]);
  // ... render filteredData
}
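On the other side of the postMessage boundary, the worker file just listens for messages and posts results back. A sketch of what filterWorker.js might contain, assuming a simple status filter (the actual logic is whatever you're offloading):
// filterWorker.js – runs off the main thread
self.onmessage = (e) => {
  const { data, filters } = e.data;
  // Hypothetical filter: keep rows matching the requested status
  const result = data.filter((item) => item.status === filters.status);
  self.postMessage(result);
};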
I use this sparingly—it adds complexity and data serialization costs—but it's valuable when you're processing large datasets and blocking the main thread.
Validating improvements
After making changes, measure again with the same methodology. Compare to your baseline.
A few rules I follow:
- Measure multiple times. Performance varies. I usually take 3-5 measurements and use the median (see the sketch after this list).
- Test with realistic data. That dashboard might be fast with 10 rows but slow with 10,000. Test with production-scale data.
- Test on realistic devices. Your M3 MacBook Pro is not representative. Use DevTools CPU throttling (4x or 6x slowdown) or test on actual mid-range devices.
- Disable extensions. Browser extensions can skew results significantly.
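For synchronous work, the repeat-and-take-the-median step is easy to script rather than eyeball. A minimal sketch, reusing the applyFilters(data) call from earlier (medianDuration is a hypothetical helper, not a library function):
// Run an operation several times and report the median duration
function medianDuration(label, runs, fn) {
  const durations = [];
  for (let i = 0; i < runs; i++) {
    const start = performance.now();
    fn();
    durations.push(performance.now() - start);
  }
  durations.sort((a, b) => a - b);
  const median = durations[Math.floor(durations.length / 2)];
  console.log(`${label}: median ${median.toFixed(1)}ms over ${runs} runs`);
  return median;
}
// Usage: medianDuration('apply filters, 5,000 rows', 5, () => applyFilters(data));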
If your change didn't improve the number, it didn't help. Revert it and try something else.
Closing the loop with RUM and Web Vitals
Local profiling tells you what's slow on your machine with your data. Real User Monitoring (RUM) tells you what's slow for actual users.
Collecting Web Vitals
Use the web-vitals library to capture Core Web Vitals. Send name, value, and id so you can dedupe, aggregate, and compute percentiles per page or segment. The library uses PerformanceObserver with the buffered flag, so it doesn't have to load super early to be accurate—you can usually defer it.
import { onLCP, onINP, onCLS } from 'web-vitals';
function sendToAnalytics(metric, context = {}) {
  const payload = {
    name: metric.name,
    value: metric.value,
    id: metric.id,
    ...context, // page, route, device class, AB bucket, etc.
  };
  const body = new Blob([JSON.stringify(payload)], { type: 'application/json' });
  (navigator.sendBeacon && navigator.sendBeacon('/analytics', body)) ||
    fetch('/analytics', {
      method: 'POST',
      body: JSON.stringify(payload),
      headers: { 'Content-Type': 'application/json' },
      keepalive: true,
    });
}
onLCP(m => sendToAnalytics(m, { page: location.pathname }));
onINP(m => sendToAnalytics(m, { page: location.pathname }));
onCLS(m => sendToAnalytics(m, { page: location.pathname }));
What to track
At minimum:
- LCP – Are users seeing content quickly?
- INP – Are interactions responsive?
- CLS – Is the layout stable?
I also track custom metrics for critical interactions:
- Time from clicking "Apply Filters" to results rendering
- Time from page load to data table being interactive
- Time to open a modal with complex content
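The first of those, for example, is just performance.mark() and performance.measure() around the interaction, shipped through the same sendToAnalytics helper shown above (apply-filters-duration is a name I'm making up for illustration):
performance.mark('apply-filters-start');
// ... after the results have rendered:
performance.mark('apply-filters-end');
performance.measure('apply-filters-duration', 'apply-filters-start', 'apply-filters-end');
const [measure] = performance.getEntriesByName('apply-filters-duration', 'measure');
sendToAnalytics(
  { name: measure.name, value: measure.duration, id: crypto.randomUUID() },
  { page: location.pathname }
);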
Using RUM data
RUM data shows you:
- P50, P75, P95 – The median user experience vs. the worst 5%; Core Web Vitals targets are usually judged at P75.
- Segmentation – Mobile users might have different problems than desktop
- Regressions – Did that deploy last Tuesday make things worse?
I've had cases where local testing showed great numbers, but RUM revealed that users with slow connections or older devices had terrible experiences. RUM keeps you honest.
Before and after
When you ship a performance improvement:
- Note the RUM metrics before (P75 LCP was 3.2s)
- Ship the change
- Wait until you have a stable sample size (often a few days; traffic varies)
- Compare the metrics (P75 LCP is now 2.1s)
This is the real validation. Lab measurements are useful for debugging, but RUM tells you if users actually benefited.
The workflow, summarized
- Get a repro – Turn "it's slow" into specific steps
- Measure a baseline – Use DevTools, React Profiler, or custom marks
- Identify the bottleneck – Is it load, interaction, memory? Which component or operation?
- Apply targeted fixes – Code-splitting, memoization, virtualization, debouncing—pick based on what you found
- Validate locally – Did the number improve?
- Ship and monitor RUM – Did real users benefit?
The key is staying grounded in numbers throughout. "Feels faster" isn't good enough—you need measurements before and after, both in the lab and in the field.
Performance work is satisfying when you do it this way. You start with a vague complaint, dig until you find the real problem, apply a fix, and prove it worked with data. No guessing, no premature optimization, just measured wins.