15/10/25 · 10 min read
#graphql #relay #react #architecture

Relay for complex React UIs: fragments, connections, and sane cache strategy

After years of building data-heavy React applications with Relay, I've collected a set of patterns that consistently help when UI complexity is the main challenge—not just fetching data, but managing it across dozens of interconnected components. This isn't a Relay tutorial. It's the stuff I wish I'd known earlier.

TL;DR — Draw fragment boundaries at what each component actually renders. Compose via spreads; keep page queries thin. Use @connection with a stable key (and filters when args affect identity). Prefer refetch over clever store updaters; use directives when they fit. Preload at route boundaries and place Suspense intentionally. When data goes wrong, check the store first.

Fragment boundaries: who owns what data

The most important decision in a Relay codebase is where to draw fragment boundaries. Get this wrong and you'll either overfetch (pulling data you don't need) or underfetch (requiring parent components to know too much about their children).

My rule: a component should declare exactly the data it renders, nothing more. If UserAvatar renders a name and image, its fragment should contain name and avatarUrl. Not the user's email. Not their role. Just what it paints on screen.

fragment UserAvatar_user on User {
  name
  avatarUrl
}

This sounds obvious, but it breaks down in two common scenarios:

1. Derived data. If a component needs to compute something from multiple fields, include those fields—even if they're not directly rendered. A status badge that shows "Active" based on lastSeenAt and isOnline needs both fields in its fragment.

2. Conditional rendering. If you show different UI based on user type, include the discriminator field. Don't rely on a parent to pass it as a prop.

fragment MemberCard_member on TeamMember {
  role  # included because we render differently for admins
  user {
    ...UserAvatar_user
  }
}
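The derived-data rule can be made concrete. Here's a sketch of the status-badge logic, with illustrative field names and thresholds; both fields feed the computation, so both belong in the badge's fragment even though neither is rendered verbatim:

```typescript
// Illustrative shape of the fields the badge's fragment would select.
type StatusFields = {
  isOnline: boolean;
  lastSeenAt: string; // ISO timestamp
};

// Compute the badge label from both fields. The 5-minute threshold
// is an assumption for the example, not from the original article.
function statusLabel(fields: StatusFields, now: Date = new Date()): string {
  if (fields.isOnline) return 'Active';
  const lastSeen = new Date(fields.lastSeenAt);
  const minutesAgo = (now.getTime() - lastSeen.getTime()) / 60_000;
  return minutesAgo < 5 ? 'Away' : 'Offline';
}
```

If either field were missing from the fragment, this function would silently receive stale or absent data, which is exactly the failure mode colocation prevents.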

The anti-pattern I see most often: a parent component fetching a blob of data and passing pieces down as props. This defeats Relay's colocation benefits and makes refactoring painful. When UserAvatar changes what it needs, you have to update the parent's fragment too.

Composing fragments without leaking concerns

Fragment composition is where Relay shines, but it's easy to create implicit dependencies. The pattern I follow:

  • Leaf components define their own fragments
  • Container components spread child fragments, never cherry-pick fields from them
  • Page-level queries compose everything but add only route-level concerns (like IDs from URL params)

# Page query - only adds route-level data
query TeamPageQuery($teamId: ID!) {
  team(id: $teamId) {
    ...TeamHeader_team
    ...TeamMemberList_team
  }
}

# TeamMemberList spreads but doesn't pick fields
fragment TeamMemberList_team on Team {
  members {
    id
    ...MemberCard_member
  }
}

If TeamMemberList needed members.user.name for sorting, I'd add it to the fragment explicitly rather than assuming MemberCard_member includes it. This keeps the dependency graph honest.
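As a sketch of that "declare it explicitly" rule: if the list sorts by name, the sort helper depends on `user { name }` being in TeamMemberList_team's own selection, never on MemberCard_member happening to include it. The types below are illustrative stand-ins for Relay's generated fragment types:

```typescript
// Stand-in for the fragment's generated type: only what the list
// itself selected is visible here.
type MemberForSort = { id: string; user: { name: string | null } | null };

// Sort without mutating the input (Relay data is immutable).
// Missing names sort first via the empty-string fallback.
function sortMembersByName(members: readonly MemberForSort[]): MemberForSort[] {
  return [...members].sort((a, b) =>
    (a.user?.name ?? '').localeCompare(b.user?.name ?? '')
  );
}
```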

Pagination: connections done right

Relay's connection spec is powerful but has ergonomic pitfalls. Here's how I approach paginated lists:

Use @connection for anything that might grow. Even if you're showing 10 items today, if it could be 1000 tomorrow, model it as a connection. The refactor cost later is high.

fragment NotificationList_user on User
  @refetchable(queryName: "NotificationListPaginationQuery")
  @argumentDefinitions(
    count: { type: "Int", defaultValue: 20 }
    cursor: { type: "String" }
    status: { type: "NotificationStatus" }
  ) {
  notifications(first: $count, after: $cursor, status: $status)
    @connection(
      key: "NotificationList_notifications",
      filters: ["status"]
    ) {
    edges {
      node {
        id
        ...NotificationItem_notification
      }
    }
    pageInfo {
      hasNextPage
      endCursor
    }
  }
}

The key matters for cache identity. If you have two components showing the same connection with different filters, give them different keys—or use filters so Relay knows which arguments affect identity. Otherwise Relay merges them and you get weird behavior.
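Conceptually, the connection's cache identity is the @connection key plus the serialized values of the arguments listed in filters; arguments not listed are ignored, so two fetches differing only in those merge into one list. This is an illustration of the idea, not Relay's actual storage format:

```typescript
// Illustrative only — shows why `filters` matters, not Relay internals.
function connectionIdentity(
  key: string,
  args: Record<string, unknown>,
  filters: string[],
): string {
  const parts = filters
    .filter((name) => args[name] !== undefined)
    .map((name) => `${name}:${JSON.stringify(args[name])}`);
  return `${key}(${parts.join(',')})`;
}
```

With `filters: ["status"]`, an UNREAD list and a READ list get distinct identities; with no filters, they collapse into the same connection and you see the "weird behavior" described above.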

Loading states need thought. I use a pattern where the list component handles three states:

  1. Initial load (show skeleton)
  2. Loaded with data (show items + "load more" if hasNextPage)
  3. Loading more (show items + spinner at bottom)

function NotificationList({ user }: Props) {
  const { data, loadNext, hasNext, isLoadingNext } = usePaginationFragment(
    fragment,
    user
  );

  if (!data.notifications) return <Skeleton />;
  const edges = data.notifications.edges ?? [];

  return (
    <>
      {edges.map((edge) =>
        edge?.node ? (
          <NotificationItem key={edge.node.id} notification={edge.node} />
        ) : null
      )}

      {hasNext && (
        <LoadMoreButton
          loading={isLoadingNext}
          onClick={() => loadNext(20)}
        />
      )}
    </>
  );
}

Cursor-based pagination and "jump to page" don't mix well. If the UI needs page numbers, consider offset pagination or redesign the UX. Otherwise you're choosing between: offset-based on the backend (simpler, but consistency issues), "page 5" meaning load 1–5 sequentially (slow but consistent), or infinite scroll / "load more" (often the best fit).
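The "page 5 means load 1–5 sequentially" option boils down to a loop over loadNext. The real loadNext is asynchronous; this synchronous version, with a hypothetical state object standing in for the pagination hook's return values, just shows the control flow:

```typescript
// Hypothetical helper: emulate "jump to page N" on a cursor-based
// connection by loading pages sequentially until enough items exist.
// `state` and `loadNext` mirror usePaginationFragment conceptually.
function loadUpToPage(
  targetPage: number,
  pageSize: number,
  state: { loaded: number; hasNext: boolean },
  loadNext: (count: number) => void,
): void {
  while (state.loaded < targetPage * pageSize && state.hasNext) {
    loadNext(pageSize);
  }
}
```

The cost is obvious from the loop: jumping to page 50 issues 50 round trips, which is why infinite scroll or "load more" is usually the better fit.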

Tables with connections

Data tables are a special case. You often need:

  • Sorting (changes the cursor order)
  • Filtering (changes the result set)
  • Column visibility (changes what fields you fetch)

My approach: treat sort and filter as connection arguments, not client-side transforms.

fragment DataTable_query on Query
  @refetchable(queryName: "DataTableRefetchQuery")
  @argumentDefinitions(
    sortBy: { type: "SortField", defaultValue: CREATED_AT }
    sortDir: { type: "SortDirection", defaultValue: DESC }
    filter: { type: "RowFilter" }
  ) {
  rows(
    first: 50
    sortBy: $sortBy
    sortDirection: $sortDir
    filter: $filter
  ) @connection(key: "DataTable_rows", filters: ["sortBy", "sortDirection", "filter"]) {
    edges {
      node {
        id
        ...TableRow_row
      }
    }
  }
}

When sort or filter changes, I refetch the entire connection. Yes, this discards cached pages. But the alternative—client-side sorting of partially-loaded data—leads to confusing UX where the sorted view is incomplete.

For column visibility, I use @include directives so the server only sends columns the user has enabled:

fragment TableRow_row on Row
  @argumentDefinitions(
    showRevenue: { type: "Boolean!", defaultValue: true }
    showRegion: { type: "Boolean!", defaultValue: false }
  ) {
  id
  name
  revenue @include(if: $showRevenue)
  region @include(if: $showRegion)
}
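A small piece of hypothetical glue derives those @include variables from the user's column settings. Column and variable names mirror the fragment above but are illustrative:

```typescript
// Map of column id → fragment variable name (mirrors TableRow_row).
const COLUMN_VARIABLES: Record<string, string> = {
  revenue: 'showRevenue',
  region: 'showRegion',
};

// Build the variables object to pass when refetching the table
// after the user toggles column visibility.
function columnVariables(visible: ReadonlySet<string>): Record<string, boolean> {
  const vars: Record<string, boolean> = {};
  for (const [column, variable] of Object.entries(COLUMN_VARIABLES)) {
    vars[variable] = visible.has(column);
  }
  return vars;
}
```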

Cache invalidation: when to refetch vs. update

This is where most Relay codebases get messy. The mental model I use:

Refetch when:

  • The mutation affects data you don't have locally (e.g., creating a new item that should appear in a filtered list)
  • The mutation's effects are complex or server-computed (e.g., updating a status that triggers workflows)
  • You're not sure what changed

Update the store when:

  • You know exactly what changed and have the new data in the mutation response
  • The update is a simple field change on an existing record
  • Optimistic updates are important for perceived performance

Refetchable fragments are the middle ground: refetch just the fragment (and its subtree) instead of the whole page or hand-written updaters. Use them when a mutation affects one area and you want fresh data without a full query.

For list mutations (add/remove), I almost always refetch the connection rather than trying to insert into the right position:

const [commit] = useMutation(AddItemMutation);

commit({
  variables: { input },
  onCompleted: () => {
    // Refetch the connection to get the item in the right sorted position
    refetch({ sortBy, filter });
  },
});

(Via useRefetchableFragment on a refetchable fragment, or refetch returned by usePaginationFragment.)

The exception: if the list is unsorted and you're adding to the end, you can use @appendEdge or @prependEdge directives. But sorted or filtered lists? Just refetch.

Store updates without getting clever: Prefer built-in mutation directives (append/prepend/delete helpers, e.g. @appendEdge, @prependEdge) when they fit. Reach for updaters only when directives can't express the change. When sort/filter makes "correct insertion" ambiguous, refetch.

For field updates, use the mutation response and let Relay's normalized cache do its job:

mutation UpdateUserNameMutation($input: UpdateUserNameInput!) {
  updateUserName(input: $input) {
    user {
      id
      name  # Relay auto-updates any component using this user's name
    }
  }
}

Optimistic updates that don't lie

Optimistic updates are great until they're wrong. My rules:

  1. Only use optimistic updates for reversible actions. Toggling a like? Fine. Deleting a record? Show a loading state instead.

  2. Match the optimistic response shape exactly to the real response. If the mutation returns an updatedAt timestamp, include a fake one in the optimistic response. Components that depend on that field will break otherwise.

  3. Handle rollback gracefully. If the mutation fails, Relay rolls back the optimistic update. Make sure your UI doesn't flash weirdly—sometimes a toast explaining the failure is better than silently reverting.

commit({
  variables: { id, newStatus },
  optimisticResponse: {
    updateStatus: {
      item: {
        id,
        status: newStatus,
        updatedAt: new Date().toISOString(), // Fake but necessary
      },
    },
  },
  onError: (error) => {
    toast.error('Failed to update status');
  },
});

Error handling: what to surface, what to swallow

GraphQL's error model is flexible, which means you have to make decisions. Here's my breakdown:

Network errors (request failed entirely): Show a full-page error or retry UI. The user can't do anything useful.

GraphQL errors (partial response with errors array): Depends on severity.

  • Auth errors → redirect to login
  • Not found → show empty state or 404
  • Validation errors → surface to the form that triggered them
  • Internal errors → log, show generic message, maybe offer retry
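One way to encode that breakdown, assuming the server follows the common convention of putting a machine-readable code in extensions.code (a convention, not part of the GraphQL spec — the code values here are illustrative):

```typescript
// Minimal shape of an entry in a GraphQL response's errors array.
type GraphQLErrorLike = { message: string; extensions?: { code?: string } };

type ErrorAction = 'login' | 'not-found' | 'form' | 'generic';

// Route each error to the UI treatment described above.
function classifyError(error: GraphQLErrorLike): ErrorAction {
  switch (error.extensions?.code) {
    case 'UNAUTHENTICATED':
      return 'login';
    case 'NOT_FOUND':
      return 'not-found';
    case 'BAD_USER_INPUT':
      return 'form';
    default:
      return 'generic'; // log, show generic message, maybe retry
  }
}
```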

Null fields (field returned null unexpectedly): This is the sneaky one. Relay's generated types will tell you a field is nullable, but you need to decide what null means in your UI.

I handle this at the component level:

function UserProfile({ user }: Props) {
  // Fragment guarantees we asked for these, but they might be null
  if (!user.email) {
    return <EmailNotSet />;
  }

  return <ProfileCard email={user.email} />;
}

For debugging, the most valuable thing is logging the full operation name and variables when errors occur:

import { getRequest } from 'relay-runtime';

function commitMutation(config) {
  return originalCommitMutation({
    ...config,
    onError: (error) => {
      console.error('Mutation failed:', {
        // getRequest resolves the graphql-tagged node to its concrete
        // request, which carries the operation name
        operation: getRequest(config.mutation).params.name,
        variables: config.variables,
        error,
      });
      config.onError?.(error);
    },
  });
}

This lets you reproduce issues without digging through network tabs.

Data orchestration: avoiding waterfalls with preloading

Most "complex UI" Relay pain comes from orchestrating queries and Suspense boundaries, not just writing fragments. A few habits that help:

  • Use useQueryLoader + usePreloadedQuery at route boundaries. Load the query when the route matches (or on hover/focus), then render with the preloaded reference. That way the tree doesn't suspend on the first paint—you've already started the request.
  • Preload on navigation. When the user navigates, kick off the next route's query immediately. Relay's cache will satisfy what it can; the rest streams in.
  • Place Suspense boundaries intentionally. Page-level vs panel-level vs list-row-level each has different tradeoffs. Too coarse and you block the whole screen; too fine and you get loading flicker. I usually suspend at page or major panel level, and use skeletons for list rows.
  • Choose a fetch policy deliberately. store-or-network is great for fast repeat visits; network-only or store-and-network when you're debugging "stale" reports or need a guaranteed fresh read. When you need to force a refetch for the same variables, bump a fetchKey.
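The fetchKey trick from the last bullet, reduced to plain data: issuing the same query with the same variables but a bumped fetchKey forces a fresh request. In a component the options would live in useState; this standalone reducer is just a sketch of the state transition:

```typescript
// Options as you'd pass them to a preloaded/lazy query hook.
type QueryOptions = {
  fetchKey: number;
  fetchPolicy: 'store-or-network' | 'network-only';
};

// Bump the key and go network-only so the same variables still
// produce a new request (e.g. for a "refresh" button).
function forceRefresh(options: QueryOptions): QueryOptions {
  return { fetchKey: options.fetchKey + 1, fetchPolicy: 'network-only' };
}
```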

Debugging the cache

When data isn't showing up or is stale, the Relay store is usually the culprit. Things I check:

  1. Is the record in the store? Use Relay DevTools to inspect the normalized cache. Look up the record by its ID.

  2. Is the fragment reading the right record? Check that the parent is passing the correct fragment ref. A common bug: passing a query response directly instead of the specific fragment key.

  3. Is the data being overwritten? If a later operation queries a field and the server returns null, it can overwrite a previously non-null value—often due to permissions, conditional resolvers, or differing query contexts. Use @required to catch this at runtime, e.g. email @required(action: THROW).

  4. Is the connection key unique? Duplicate connection keys (or missing filters when args differ) cause data from different queries to merge unexpectedly.
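Point 3 is easiest to see on a toy model of the normalized store: records merge by ID, and a later null wins. This simplified map is a stand-in for Relay's store, not its API:

```typescript
// Toy normalized store: record id → field map.
type StoreRecord = Record<string, unknown>;

// Merge an incoming payload into the record, as normalization does.
// A null in `fields` overwrites an earlier non-null value.
function mergePayload(
  store: Map<string, StoreRecord>,
  id: string,
  fields: StoreRecord,
): void {
  const existing = store.get(id) ?? {};
  store.set(id, { ...existing, ...fields });
}
```

A query that legitimately sees `email` as null (say, due to permissions) clobbers the value another query wrote, and every component reading that record goes blank at once; @required turns that silent clobber into a loud runtime signal.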

Testing fragments without pain

At scale, testing Relay UIs can feel heavy. Two things that help: mock resolvers / test utilities (Relay's relay-test-utils or custom mock environments) so you can render components with controlled data, and testing leaf components at fragment boundaries so you're asserting on the right level—the component's fragment plus its props, not the whole query. That keeps tests fast and stable while still catching "this component expects this shape" regressions.

Putting it together

The patterns above aren't about Relay specifically—they're about managing data dependencies in complex UIs. Relay just makes the dependencies explicit.

The payoff: when a designer says "let's add the user's timezone to this card," I add one line to a fragment. No prop drilling, no "where does this data come from" archaeology. The component declares what it needs, and the query compiler figures out the rest.

That's how I think about fragments, colocation, caching, and UI state for large React apps. The specifics matter less than the principle: make data dependencies visible, keep them close to where they're used, and don't be clever about cache updates when a refetch will do.
