Advanced React
by Nadia Makarevich
Illustrations and book cover design: Nadia
Makarevich
Technical review: Andrew Grischenko
Copyright © 2023
All rights reserved. No part of this book may be
reproduced, distributed, or transmitted in any form or
by any means, including photocopying, recording, or
other electronic or mechanical methods, without the
prior written permission of the copyright holder,
except in the case of brief quotations embodied in
critical reviews and certain other noncommercial uses
permitted by copyright law.
This book is provided for informational purposes
only. The author makes no representations or
warranties with respect to the accuracy or
completeness of the contents of this book and
specifically disclaims any implied warranties of
merchantability or fitness for a particular purpose.
The information contained in this book is based on
the author's knowledge and research at the time of
writing, and the author has made a good faith effort
to ensure its accuracy.
However, the advice and strategies contained herein
may not be suitable for every individual or situation.
Readers are advised to consult with a professional
where appropriate. The author shall not be liable for
any loss, injury, or damage arising from the use or
reliance on the information presented in this book,
nor for any errors or omissions in its contents.
For permission requests, please contact the author at
[email protected]
Table Of Contents
Forewords
Introduction: how to read this book
React is one of the most popular front-end frameworks out there. There is
no arguing about that. As a result, the internet is filled with courses, books,
and blogs about React. Also, the newly released documentation is very
good. So, what is the point of this book? Is there even a gap it can fill?
I believe there is. I have been writing a blog dedicated to advanced patterns
for React developers (https://www.developerway.com) for a while now. One
of the most consistent pieces of feedback I receive is that there is not
enough advanced-level content.
The docs are a very good place to start with React. Millions of books, courses, and
blogs are out there aimed at beginners. But what to do after you've started
successfully? Where to go if you want to understand how things work on a
deeper level? What to read if you've been writing React for a while and
beginner or even intermediate-level courses are not enough? There are not
many resources available for this. This is the gap this book aims to fill.
What it aims to provide is the knowledge that allows you to progress from
the "basic app" to "React guru in my team". It begins right away with
investigating and fixing a performance bug. It digs deep into what re-
renders are and how they affect performance. It walks you through how the
reconciliation algorithm works, how to deal with closures in React, various
composition patterns that can replace memoization, how memoization
works, how to implement debouncing correctly, and much more.
So, I recommend reading the book in the order of the chapters. If your
knowledge already extends beyond the simple "todo" app, it's very likely
that you'll know a lot of the concepts already. For this case, every chapter
has a bullet-point list of things you can expect to learn from it at the
beginning, and a "Key takeaways" section, with a very short bullet-point
summary of the things introduced. Just skimming through these first will
give you a good idea of what's inside.
Let's dive right in, shall we? And let's talk about performance right away:
it's one of the most important topics these days when it comes to building
applications, and as a result, it's an overarching theme of this book.
The problem
Imagine yourself as a developer who inherited a large, complicated, and
very performance-sensitive app. Lots of things are happening there, many
people have worked on it over the years, millions of customers are using it
now. As your first task on the job, you are asked to add a simple button that
opens a modal dialog right at the top of this app.
You look at the code and find the place where the dialog should be
triggered:
Then you implement it. The task seems trivial. We've all done it hundreds
of times:
const App = () => {
  // add some state
  const [isOpen, setIsOpen] = useState(false);

  return (
    <div className="layout">
      {/* add the button */}
      <Button onClick={() => setIsOpen(true)}>Open dialog</Button>
      {/* add the dialog itself */}
      {isOpen ? <ModalDialog onClose={() => setIsOpen(false)} /> : null}
      <VerySlowComponent />
      <BunchOfStuff />
      <OtherStuffAlsoComplicated />
    </div>
  );
};
Just add some state that holds whether the dialog is open or closed. Add the
button that triggers the state update on click. And the dialog itself that is
rendered if the state variable is true .
You start the app, try it out - and oops. It takes almost a second to open that
simple dialog!
But first, let's review what exactly is happening here and why.
Every re-render starts with the state. In React, every time we use a hook
like useState , useReducer , or any of the external state management
libraries like Redux, we add interactivity to a component. From now on, a
component will have a piece of data that is preserved throughout its
lifecycle. If something happens that needs an interactive response, like a
user clicking a button or some external data coming through, we update the
state with the new data.
  return (
    <Button onClick={() => setIsOpen(true)}>Open dialog</Button>
  );
};
After the state is updated and the App component re-renders, the new data
needs to be delivered to other components that depend on it. React does this
automatically for us: it grabs all the components that the initial component
renders inside, re-renders those, then re-renders components nested inside
of them, and so on until it reaches the end of the chain of components.
If you imagine a typical React app as a tree, everything down from where
the state update was initiated will be re-rendered.
In the case of our app, everything that it renders, all those very slow
components, will be re-rendered when the state changes:
  // everything that is returned here will be re-rendered when the state is updated
  return (
    <div className="layout">
      <Button onClick={() => setIsOpen(true)}>Open dialog</Button>
      {isOpen ? <ModalDialog onClose={() => setIsOpen(false)} /> : null}
      <VerySlowComponent />
      <BunchOfStuff />
      <OtherStuffAlsoComplicated />
    </div>
  );
};
As a result, it takes almost a second to open the dialog - React needs to re-
render everything before the dialog can appear on the screen.
The important thing to remember here is that React never goes "up" the
render tree when it re-renders components. If a state update originated
somewhere in the middle of the components tree, only components "down"
the tree will re-render.
The only way for components at the "bottom" to affect components at the
"top" of the hierarchy is for them either to explicitly call state update in the
"top" components or to pass components as functions.
Normal React behavior is that if a state update is triggered, React will re-
render all the nested components regardless of their props. And if a state
update is not triggered, then changing props will be just "swallowed": React
doesn't monitor them.
If I have a component with props, and I try to change those props without
triggering a state update, something like this:
  return (
    <div className="layout">
      {/* nothing will happen */}
      <Button onClick={() => (isOpen = true)}>Open dialog</Button>
      {/* will never show up */}
      {isOpen ? <ModalDialog onClose={() => (isOpen = false)} /> : null}
    </div>
  );
};
It just won't work. When the Button is clicked, the local isOpen variable
will change. But the React lifecycle is not triggered, so the render output is
never updated, and the ModalDialog will never show up.
  return (
    <div className="layout">
      {/* state is used here */}
      <Button onClick={() => setIsOpen(true)}>Open dialog</Button>
      {/* state is used here */}
      {isOpen ? <ModalDialog onClose={() => setIsOpen(false)} /> : null}
      <VerySlowComponent />
      <BunchOfStuff />
      <OtherStuffAlsoComplicated />
    </div>
  );
};
As you can see, it's relatively isolated: we use it only on the Button
component and in ModalDialog itself. The rest of the code, all those very
slow components, doesn't depend on it and therefore doesn't actually need
to re-render when this state changes. It's a classic example of what is called
an unnecessary re-render.
const ButtonWithModalDialog = () => {
  const [isOpen, setIsOpen] = useState(false);

  // render only Button and ModalDialog here
  return (
    <>
      <Button onClick={() => setIsOpen(true)}>Open dialog</Button>
      {isOpen ? <ModalDialog onClose={() => setIsOpen(false)} /> : null}
    </>
  );
};
And then just render this new component in the original big App :
Now, the state update when the Button is clicked is still triggered, and
some components re-render because of it. But! It only happens with
components inside the ButtonWithModalDialog component. And it's just a
tiny button and the dialog that should be rendered anyway. The rest of the
app is safe.
Essentially, we just created a new sub-branch inside our render tree and
moved our state down to it.
As a result, the modal dialog appears instantly. We just fixed a big
performance problem with a simple composition technique!
const useModalDialog = () => {
  const [isOpen, setIsOpen] = useState(false);

  return {
    isOpen,
    open: () => setIsOpen(true),
    close: () => setIsOpen(false),
  };
};
And then use this hook in our App instead of setting state directly:
const App = () => {
  // state is in the hook now
  const { isOpen, open, close } = useModalDialog();

  return (
    <div className="layout">
      {/* just use "open" method from the hook */}
      <Button onClick={open}>Open dialog</Button>
      {/* just use "close" method from the hook */}
      {isOpen ? <ModalDialog onClose={close} /> : null}
      <VerySlowComponent />
      <BunchOfStuff />
      <OtherStuffAlsoComplicated />
    </div>
  );
};
Why did I call this "the danger"? It seems like a reasonable pattern, and the
code is slightly cleaner, because the hook hides the fact that we have state
in the app. But the state is still there! Every time it changes, it will still
trigger a re-render of the component that uses this hook. It doesn't even
matter whether this state is used in the App directly or whether the hook
even returns anything.
If, for example, I want to be fancy with this dialog's positioning and
introduce some state inside that hook that listens for the window's resize:
useEffect(() => {
  const listener = () => {
    setWidth(window.innerWidth);
  };

  window.addEventListener('resize', listener);

  return () => window.removeEventListener('resize', listener);
}, []);
The entire App component will re-render on every resize, even though this
value is not even returned from the hook!
Hooks are essentially just pockets in your trousers. If, instead of carrying a
10-kilogram dumbbell in your hands, you put it in your pocket, it wouldn't
change the fact that it's still hard to run: you have 10 kilograms of
additional weight on your person. But if you put those ten kilograms in a
self-driving trolley, you can run around freely and fresh and maybe even stop
for coffee: the trolley will take care of itself. Components for the state are
that trolley.
Exactly the same logic applies to the hooks that use other hooks: anything
that can trigger a re-render, however deep in the chain of hooks it's
happening, will trigger a re-render in the component that uses that very first
hook. If I extract that additional state into a hook that returns null , App
will still re-render on every resize:
const useResizeDetector = () => {
  const [width, setWidth] = useState(0);

  useEffect(() => {
    const listener = () => {
      setWidth(window.innerWidth);
    };

    window.addEventListener('resize', listener);

    return () => window.removeEventListener('resize', listener);
  }, []);

  return null;
};
In order to fix our app, you'd still need to extract that button, dialog, and the
custom hook into a component:
const ButtonWithModalDialog = () => {
  const { isOpen, open, close } = useModalDialog();

  // ...render Button and ModalDialog here, same as before
};
Key takeaways
This is just the beginning. In the following chapters, we'll dig into more
details on how all of this works. In the meantime, here are some key points
to remember from this Chapter:
The problem
Imagine again that you've inherited a large, complicated, and very
performance-sensitive app. And that app has a scrollable content area.
Probably some fancy layout with a sticky header, a collapsible sidebar on
the left, and the rest of the functionality in the middle.
The code for that main scrollable area looks something like this:
Just a div with a className and CSS overflow: auto underneath. And
lots of very slow components inside that div. On your very first day on the
job, you're asked to implement a very creative feature: a block that shows
up at the bottom of the area when a user scrolls for a bit and slowly moves
to the top as the user continues to scroll down. Or slowly moves down and
disappears if the user scrolls up. Something like a secondary navigation
block with some useful links. And, of course, the scrolling and everything
associated with it should be smooth and lag-free.
  return (
    <div className="scrollable-block" onScroll={onScroll}>
      {/* pass position value to the new movable component */}
      <MovingBlock position={position} />
      <VerySlowComponent />
      <BunchOfStuff />
      <OtherStuffAlsoComplicated />
    </div>
  );
};
However, from the performance and re-renders perspective, this is far from
optimal. Every scroll will trigger a state update, and as we already know,
the state update will trigger a re-render of the App component and every
nested component inside. So all the very slow bunch of stuff will re-render,
and the scrolling experience will be slow and laggy. Exactly the opposite of
what we need.
And as you can see, we can't just easily extract that state into a component
anymore. The setPosition is used in the onScroll function, which is
attached to the div that wraps everything.
So, what to do here? Memoization or some magic with passing Ref around?
Not necessarily! As before, there's a simpler option. We can still extract that
state and everything needed for the state to work into a component:
const ScrollableWithMovingBlock = () => {
  const [position, setPosition] = useState(300);

  const onScroll = () => {...} // same as before

  return (
    <div className="scrollable-block" onScroll={onScroll}>
      <MovingBlock position={position} />
      {/* slow bunch of stuff used to be here, but not anymore */}
    </div>
  );
};
And then just pass that slow bunch of stuff to that component as props.
Something like this:
  return (
    <ScrollableWithMovingBlock content={slowComponents} />
  );
};
  return (
    <div className="scrollable-block" onScroll={onScroll}>
      <MovingBlock position={position} />
      {content}
    </div>
  );
};
Now, onto the state update and re-renders situation. If a state update is
triggered, we will once again trigger a re-render of a component, as usual.
However, in this case, it will be the ScrollableWithMovingBlock
component - just a div with a movable block. The rest of the slow
components are passed through props, they are outside of that component.
In the "hierarchical" components tree, they belong to the parent. And
remember? React never goes "up" that tree when it re-renders a component.
So our slow components won't re-render when the state is updated, and the
scrolling experience will be smooth and lag-free.
Wait a second, some might think here. This doesn't make much sense. Yes,
those components are declared in the parent, but they are still rendered
inside that component with the state. So why don't they re-render? It's
actually a very reasonable question.
As you can see, it's just a function. What makes a component different from
any other function is that it returns Elements, which React then converts
into DOM elements and sends to the browser to be drawn on the screen. If
it has props, those would be just the first argument of that function:
The object definition for our <Child /> element would look something
like this:
{
  type: Child,
  props: {}, // if Child had props
  ... // lots of other internal React stuff
}
This tells us that the Parent component, which returns that definition,
wants us to render the Child component with no props. The return of the
Child component will have its own definitions, and so on, until we reach
the end of that chain of components.
Elements are not limited to components; they can be just normal DOM
elements. Our Child could return an h1 tag, for example:
{
  type: "h1",
  ... // props and internal React stuff
}
The part that matters for this chapter's problem is this: if the object
(Element) before and after re-render is exactly the same, then React will
skip the re-render of the Component this Element represents and its nested
components. And by "exactly the same," I mean whether
Object.is(ElementBeforeRerender, ElementAfterRerender) returns
true . React doesn't perform the deep comparison of objects. If the result of
this comparison is true , then React will leave that component in peace and
move on to the next one.
If the comparison returns false , this is the signal to React that something
has changed. It will look at the type then. If the type is the same, then
React will re-render this component. If the type changes, then it will
remove the "old" component and mount the "new" one. We'll take a look at
it in more detail in Chapter 6. Deep dive into diffing and reconciliation.
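The two checks above can be sketched as a small decision function. This is a deliberately simplified model of the idea, not React's real diffing code (which also handles keys, arrays of children, and much more):

```javascript
// Simplified sketch of the decision React makes for a single element:
const decide = (before, after) => {
  if (Object.is(before, after)) return "skip";     // same reference: skip re-render
  if (before.type === after.type) return "update"; // same type: re-render in place
  return "remount";                                // type changed: unmount old, mount new
};

const Child = () => null;
const el = { type: Child, props: {} };

decide(el, el);                         // "skip" - exactly the same object
decide(el, { type: Child, props: {} }); // "update" - new object, same type
decide(el, { type: "h1", props: {} });  // "remount" - type changed
```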
Let's take a look at the Parent/Child example again and imagine our
Parent has state:
Now, imagine what will happen here if, instead of rendering that Child
component directly, I would pass it as a prop?
  return child;
};

// someone somewhere renders Parent component like this
<Parent child={<Child />} />;
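Why does passing the Element as a prop help? A plain-JavaScript model (again, not real React - a component here is just a function, and a "re-render" is just calling it again) shows the difference in object identity:

```javascript
const Child = () => null;

// Case 1: the element is created inside the render - a new object every call
const renderInline = () => ({ type: Child, props: {} });
const inlineSame = Object.is(renderInline(), renderInline());
// false - re-created on every render, so React can't skip it

// Case 2: the element comes from props - the same object is returned each call
const childElement = { type: Child, props: {} };
const renderFromProps = (props) => props.child;
const propSame = Object.is(
  renderFromProps({ child: childElement }),
  renderFromProps({ child: childElement })
);
// true - identity is preserved between renders, so React can skip it
```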
And this is exactly what we did for our component with the scroll!
const ScrollableWithMovingBlock = ({ content }) => {
  const [position, setPosition] = useState(300);

  const onScroll = () => {...} // same as before

  return (
    <div className="scrollable-block" onScroll={onScroll}>
      <MovingBlock position={position} />
      {content}
    </div>
  );
};
Children as props
While this pattern is cool and totally valid, there is one small problem with
it: it looks weird. Passing the entire page content into some random props
just feels... wrong for some reason. So, let's improve it.
First of all, let's talk about the nature of props. Props are just an object that
we pass as the first argument to our component function. Everything that
we extract from it is a prop. Everything. In our Parent/Child code, if I
rename the child prop to children , nothing will change: it will continue
to work.
// before
const Parent = ({ child }) => {
  return child;
};

// after
const Parent = ({ children }) => {
  return children;
};

// before
<Parent child={<Child />} />

// after
<Parent children={<Child />} />
However, for the children prop, we have special syntax in JSX: that nice
nesting composition that we use all the time with HTML tags, without ever
thinking about it or paying attention to it:
<Parent>
<Child />
</Parent>
This will work exactly the same way as if we were passing the children
prop explicitly:
{
  type: Parent,
  props: {
    // element for Child here
    children: {
      type: Child,
      ...
    },
  }
}
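To see where the nested JSX actually ends up, here is a hypothetical mini createElement (real JSX compiles to similar calls; this simplified version exists only for illustration):

```javascript
// Hypothetical mini createElement, just to show where nesting goes:
// extra arguments become the "children" prop on the props object.
const createElement = (type, props, ...children) => ({
  type,
  props: {
    ...props,
    ...(children.length
      ? { children: children.length === 1 ? children[0] : children }
      : {}),
  },
});

const Child = () => null;
const Parent = ({ children }) => children;

// <Parent><Child /></Parent> becomes roughly:
const element = createElement(Parent, null, createElement(Child, null));

// The nested element is just a regular prop named "children"
// element.props.children.type is Child
```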
And it will have exactly the same performance benefits as passing Elements
as props as well! Whatever is passed through props won't be affected by the
state change of the component that receives those props. So we can re-write
our App from this:
  return (
    <ScrollableWithMovingBlock content={slowComponents} />
  );
};

const ScrollableWithMovingBlock = ({ content }) => {
  // .. the rest of the code
  return (
    <div ...>
      ...
      {content}
    </div>
  );
};
After:
const ScrollableWithMovingBlock = ({ children }) => {
  // .. the rest of the code
  return (
    <div ...>
      ...
      {children}
    </div>
  );
};
Key takeaways
Hope this made sense and you're now confident with the "components as
props" and "children as props" patterns. In the next chapter, we'll take a
look at how components as props can be useful outside of performance. In
the meantime, here are a few things to remember:
<Parent>
<Child />
</Parent>
Let's continue our investigation into how React works. This time, we're
going to build a simple "button with icon" component. What could possibly
be complicated about this one, right? But in the process of building it, you'll
find out:
The problem
Imagine, for example, that you need to implement a Button component.
One of the requirements is that the button should be able to show the
"loading" icon on the right when it's used in the "loading" context. Quite a
common pattern for data sending in forms.
No problem! We can just implement the button and add the isLoading
prop, based on which we'll render the icon.
The next day, this button needs to support all available icons from your
library, not only the Loading . Okay, we can add the iconName prop to the
Button for that. The next day - people want to be able to control the color
of that icon so that it aligns better with the color palette used on the website.
The iconColor prop is added. Then iconSize , to control the size of the
icon. And then, a use case appears for the button to support icons on the left
side as well. And avatars.
Eventually, half of the props on the Button are there just to control those
icons, no one is able to understand what is happening inside, and every
change results in some broken functionality for the customers.
const Button = ({
  isLoading,
  iconLeftName,
  iconLeftColor,
  iconLeftSize,
  isIconLeftAvatar,
  ...
}) => {
  // no one knows what's happening here and how all those props work
  return ...
}
Sound familiar?
Elements as props
Luckily, there is an easy way to drastically improve this situation. All we
need to do is to get rid of those configuration props and pass the icon as an
Element instead:
Whether doing something like this for a Button is a good idea or not is
sometimes debatable, of course. It highly depends on how strict your design
is and how much deviation it allows for those who implement product
features.
Unless your designers are very strict and powerful, chances are you'd need
to have different configurations of those buttons in different dialogs: one,
two, three buttons, one button is a link, one button is "primary," different
texts on those of course, different icons, different tooltips, etc. Imagine
passing all of that through configuration props!
But with elements as props, it couldn't be easier: just create a footer prop
on the dialog:
// two buttons
<ModalDialog
  content={<SomeFormHere />}
  footer={
    <>
      <SubmitButton />
      <CancelButton />
    </>
  }
/>

<ThreeColumnsLayout
  leftColumn={<Something />}
  middleColumn={<OtherThing />}
  rightColumn={<SomethingElse />}
/>
// before
<ModalDialog
  content={<SomeFormHere />}
  footer={<SubmitButton />}
/>

// after
<ModalDialog footer={<SubmitButton />}>
  <SomeFormHere />
</ModalDialog>
Always remember: "children" in this context are nothing more than a prop,
and the "nested" syntax is just syntax sugar for it!
Conditional rendering and
performance
One of the biggest concerns that sometimes arises with this pattern is the
performance of it. Which is ironic, considering that in the previous chapter,
we discussed how to use it to improve performance. So, what's going on?
  return isDialogOpen ? <ModalDialog footer={footer} /> : null;
};
The question here, with which even very experienced developers sometimes
struggle, is this: we declare our Footer before the dialog. While the dialog
is still closed and won't be open for a while (or maybe never). Does this
mean that the footer will always be rendered, even if the dialog is not on the
screen? What about the performance implications? Won't this slow down
the App component?
This Footer will actually be rendered only when it ends up in the return
object of one of the components, not sooner. In our case, it will be the
ModalDialog component. It doesn't matter that the <Footer /> element
was created in the App . It's the ModalDialog that will take it and actually
return it:
This is what makes routing patterns, like those in one of the versions of
React Router, completely safe:
There is no condition here, so it feels like the App owns and renders both
<Page /> and <OtherPage /> at the same time. But it doesn't. It just
creates small objects that describe those pages. The actual rendering will
only happen when the path in one of the routes matches the URL and the
element prop is actually returned from the Route component.
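The same idea can be shown with a toy model in plain JavaScript (an illustration only, not React Router's implementation): route elements are just data sitting in memory, and "rendering" only ever happens for the path that matches.

```javascript
// Toy model: route elements are just descriptions; nothing is rendered
// until a path matches.
const routes = [
  { path: "/", element: { type: "Page" } },
  { path: "/other", element: { type: "OtherPage" } },
];

let rendersCount = 0;
const renderElement = (element) => {
  rendersCount += 1; // only called for the matched route
  return element;
};

const match = (url) => {
  const route = routes.find((r) => r.path === url);
  return route ? renderElement(route.element) : null;
};

const page = match("/other");
// page.type is "OtherPage", and rendersCount is 1 -
// declaring both routes up front cost nothing
```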
One of the objections against passing those icons as props is that this
pattern is too flexible. It's okay for the ThreeColumnsLayout component to
accept anything in the leftColumn prop. But in the Button's case, we don't
really want to pass everything there. In the real world, the Button would
need to have some degree of control over the icons. If the button has the
isDisabled property, you'd likely want the icon to appear "disabled" as
well. Bigger buttons would want bigger icons by default. Blue buttons
would want white icons by default, and white buttons would want black
icons.
Half of the time, it will be forgotten, and the other half misunderstood.
What we need here is to assign some default values to those icons that the
Button can control while still preserving the flexibility of the pattern.
Luckily, we can do exactly that. Remember that these icons in props are just
objects with known and predictable shapes. And React has APIs that allow
us to operate on them easily. In our case, we can clone the icon in the
Button with the help of the React.cloneElement function[2], and assign
any props to that new element that we want. So nothing stops us from
creating some default icon props, merging them together with the props
coming from the original icon, and assigning them to the cloned icon:
  return <button>Submit {clonedIcon}</button>;
};
And now, all of our Button with icon examples will be reduced to just this:
// primary button will have white icons
<Button appearance="primary" icon={<Loading />} />
No additional props on any of the icons, just the default props that are
controlled by the button now! And then, if someone really needs to override
the default value, they can still do it: by passing the prop as usual.
In fact, consumers of the Button won't even know about the default props.
For them, the icon will just work like magic.
  return <button>Submit {clonedIcon}</button>;
};
I will basically destroy the icon's API. People will try to pass different sizes
or colors to it, but it will never reach the target:
Good luck to anyone trying to understand why setting the color of the icon
outside of the button works perfectly, but doesn't work if the icon is passed
as this prop.
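The whole difference between the safe and the broken version comes down to spread order when merging the props objects. A plain-object sketch of both orders:

```javascript
// Merging props is just spreading plain objects - order decides who wins.
const defaultIconProps = { size: "medium", color: "black" };
const iconProps = { color: "red" }; // what the consumer passed to the icon

// Correct: the icon's actual props are spread last, so they win over defaults
const good = { ...defaultIconProps, ...iconProps };
// good.color is "red", good.size is "medium"

// Broken: defaults are spread last and silently override the consumer's props
const bad = { ...iconProps, ...defaultIconProps };
// bad.color is "black" - the consumer's color never reaches the icon
```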
So be very careful with this pattern, and make sure you always override the
default props with the actual props. And if you feel uneasy about it - no
worries. In React, there are a million ways to achieve exactly the same
result. There is another pattern that can be very helpful for this case: render
props. It can also be very helpful if you need to calculate the icon's props
based on the button's state or just plainly pass that state back to the icon.
The next chapter is all about this pattern.
Key takeaways
Before we move on to the Render Props pattern, let's remember:
  return isDialogOpen ? <ModalDialog footer={footer} /> : null;
};
This is where the pattern known as "render props" comes in handy. In this
chapter, you'll learn:
The problem
Here is the Button component that we implemented in the previous
chapter:
const Button = ({ appearance, size, icon }) => {
  // create default props
  const defaultIconProps = {
    size: size === 'large' ? 'large' : 'medium',
    color: appearance === 'primary' ? 'white' : 'black',
  };

  const newProps = {
    ...defaultIconProps,
    // make sure that props that are coming from the icon override default if they exist
    ...icon.props,
  };

  // clone the icon and assign the merged props to it
  const clonedIcon = React.cloneElement(icon, newProps);

  return (
    <button className={`button ${appearance}`}>
      Submit {clonedIcon}
    </button>
  );
};
The Button accepts an icon Element and sets its size and color props
by default.
While this approach works pretty well for simple cases, it is not that good
for something more complicated. What if I want to introduce some state to
the Button and give Button 's consumers access to that state? Like
adjusting the icon while the button is hovered, for example? It's easy
enough to implement that state in the button:
Another problem with this approach is that we're making some major
assumptions about the Element that comes through the icon prop. We
expect it to have at least size and color props. What if we wanted to use
a different library for icons, and those icons didn't have those exact props?
Our default props logic will just stop working with no way of fixing it.
In the case of the Button and its icon, here is how it would look with the
render function:
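The shape of the pattern can be sketched in plain JavaScript - a model only, with simple objects standing in for JSX elements: the Button accepts a renderIcon function and calls it during render.

```javascript
// Plain-function model of the render-props idea (illustration only):
const HomeIcon = (props = {}) => ({ type: "HomeIcon", ...props });

// The Button takes a function that returns the icon Element...
const Button = ({ renderIcon }) => ({
  type: "button",
  children: ["Submit ", renderIcon()],
});

// ...and the consumer decides what that function returns,
// the equivalent of <Button renderIcon={() => <HomeIcon />} />
const rendered = Button({ renderIcon: () => HomeIcon() });
// rendered.children[1].type is "HomeIcon"
```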
And we can still adjust that icon to our needs, of course, same as the regular
Element:
// red icon
<Button renderIcon={() => <HomeIcon color="red" />} />

// large icon
<Button renderIcon={() => <HomeIcon size="large" />} />
So, what's the point of using this function? First of all, icons' props. Now,
instead of cloning elements, which is a bit of a shady move anyway, we can
just pass the object to the function:
const Button = ({ appearance, size, renderIcon }) => {
  // create default props as before
  const defaultIconProps = {
    size: size === 'large' ? 'large' : 'medium',
    color: appearance === 'primary' ? 'white' : 'black',
  };

  // pass the default props to the render function
  return <button>Submit {renderIcon(defaultIconProps)}</button>;
};
And then, on the icon's side, we can accept them and spread them over the
icon:
<Button
  renderIcon={(props) => (
    <HomeIcon {...props} size="large" color="red" />
  )}
/>

<Button
  renderIcon={(props) => (
    <HomeIcon
      fontSize={props.size}
      style={{ color: props.color }}
    />
  )}
/>
Sharing state is also not a problem anymore. We can simply merge that state
value into the object we're passing to the icon:
const iconParams = {
  size: size === 'large' ? 'large' : 'medium',
  color: appearance === 'primary' ? 'white' : 'black',
  // add state here - it's just an object after all
  isHovered,
};
And then on the icon side, we can again do whatever we want with that
hovered state. We can render another icon:
<Parent>
  <Child />
</Parent>

// make it a function
<Parent children={() => <Child />} />
  useEffect(() => {
    const listener = () => {
      const width = window.innerWidth;
      setWidth(width);
    };

    window.addEventListener("resize", listener);

    // the rest of the code
  }, []);

  return ...
}
And you want to make it generic so that different components throughout
the app can track the window width without implementing that code
everywhere. So ResizeDetector needs to share that state with other
components somehow. Technically, we could do this through props, just by
adding the onWidthChange prop to the detector:
const ResizeDetector = ({ onWidthChange }) => {
  const [width, setWidth] = useState();

  useEffect(() => {
    const listener = () => {
      const width = window.innerWidth;
      setWidth(width);
      // trigger onWidthChange prop here
      onWidthChange(width);
    };

    window.addEventListener("resize", listener);

    // the rest of the code
  }, []);

  return ...
}
But this would mean that any component that wants to use it would have to
maintain its own state for it:
  return (
    <>
      <ResizeDetector onWidthChange={setWindowWidth} />
      {windowWidth > 600 ? <WideLayout /> : <NarrowLayout />}
    </>
  );
};
A bit messy.
What we can do instead is just make ResizeDetector accept children as
a function and pass that width to children directly:
const ResizeDetector = ({ children }) => {
  const [width, setWidth] = useState();

  // ...same effect as before

  // pass width to the children function
  return children(width);
};
Then, any component that needs that width can just use it without
introducing unnecessary state for it:
In real life, of course, we'd have a re-renders problem here: we're triggering
state updates on every width change. So we'd have to either calculate the
layout inside the detector or debounce it. But the principle of sharing state
will remain the same.
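Debouncing itself is a small utility. Here is one hedged sketch - a trailing-edge debounce, with an injectable scheduler added purely as a demonstration device so the behavior can be shown without real timers (in real code you would just use setTimeout/clearTimeout directly):

```javascript
// Minimal trailing-edge debounce. The scheduler parameter is an assumption
// made for demonstration, not part of any standard API.
const debounce = (fn, wait, scheduler = {
  set: (cb, ms) => setTimeout(cb, ms),
  clear: (id) => clearTimeout(id),
}) => {
  let timer;
  return (...args) => {
    scheduler.clear(timer);
    timer = scheduler.set(() => fn(...args), wait);
  };
};

// Fake scheduler: lets us "advance time" manually
const pending = new Map();
let nextId = 0;
const fakeScheduler = {
  set: (cb) => (pending.set(++nextId, cb), nextId),
  clear: (id) => pending.delete(id),
};

const updates = [];
const onResize = debounce((width) => updates.push(width), 300, fakeScheduler);

// Three rapid "resize events"...
onResize(800);
onResize(900);
onResize(1024);

// ...and once the wait elapses, only the last one triggers the update
pending.forEach((cb) => cb());
// updates is [1024]
```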
  useEffect(() => {
    const listener = () => {
      const width = ... // get window width here
      setWidth(width);
    };

    window.addEventListener("resize", listener);

    // the rest of the code
  }, []);

  return width;
}
Just extract the entire logic of the ResizeDetector component into a hook
and then use it everywhere:
const Layout = () => {
  const windowWidth = useResizeDetector();

  return windowWidth > 600 ? <WideLayout /> : <NarrowLayout />;
};
const ScrollDetector = ({ children }) => {
  const [scroll, setScroll] = useState();

  return (
    <div onScroll={(e) => setScroll(e.currentTarget.scrollTop)}>
      {children}
    </div>
  );
};
Exactly the same situation as before: you have some value, and you want to
share that value with other components. Props, again, will be messy. And
extracting it into a hook won't be as straightforward as before: this time,
you need to attach the onScroll listener to a div, not to window. So you'd
need to either introduce a Ref and pass it around (more about Refs in
Chapter 9. Refs: from storing data to imperative API) or just use the
render prop pattern:
const ScrollDetector = ({ children }) => {
  const [scroll, setScroll] = useState();

  return (
    <div onScroll={(e) => setScroll(e.currentTarget.scrollTop)}>
      {children(scroll)}
    </div>
  );
};
And use it where you need to do something based on how much the user
scrolled:
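The React version of the consumer would render the detector with a children function in JSX. Stripped of JSX, the shape of the pattern is just this (a minimal, illustrative sketch — the scrollDetector helper, the view function, and the 30px threshold are not from the book's code):

```javascript
// The render-prop shape, stripped of JSX: the "detector" owns the value
// and hands it to the children function; the consumer decides what to
// render based on it.
const scrollDetector = (children, scrollTop) =>
  // in the React version, this call happens inside the returned JSX
  children(scrollTop);

// consumer: show a hypothetical "go to top" block only after some scrolling
const view = (scroll) => (scroll > 30 ? 'show-go-to-top' : 'hide');

console.log(scrollDetector(view, 10)); // 'hide'
console.log(scrollDetector(view, 100)); // 'show-go-to-top'
```

The consumer never owns the scroll state; it only receives the current value as a function argument.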
Key takeaways
Hope all of this makes sense and the pattern is as clear as day now. A few
things to remember from this chapter:
<Button
  renderIcon={(props, state) => (
    <IconComponent {...props} someProps={state} />
  )}
/>
Render props were very useful when we needed to share stateful logic
between components without lifting it up.
But hooks replaced that use case in 99% of cases.
Render props for sharing stateful logic and data can still be useful even
today, for example, when this logic is attached to a DOM element.
Chapter 5. Memoization with
useMemo, useCallback and
React.memo
Now that we know the most important composition patterns and how they
work, it's time to talk about performance some more. More precisely, let's
discuss the topic that is strongly associated with improving performance in
React but, in reality, doesn't work as we intend at least half the time we
use it: memoization. Our favorite useMemo and useCallback hooks, and the
React.memo higher-order component.
And I'm not joking or exaggerating about half the time by the way. Doing
memoization properly is hard, much harder than it seems. By the end of this
chapter, hopefully, you'll agree with me. Here you'll learn:
What is the problem we're trying to solve with memoization (and it's
not performance per se!).
How useMemo and useCallback work under the hood, and what is
the difference between them.
Why memoizing props on a component by itself is an anti-pattern.
What React.memo is, why we need it, and what are the basic rules for
using it successfully.
How to use it properly with the "elements as children" pattern.
What is the role of useMemo in expensive calculations.
const a = 1;
const b = 1;
With objects and anything inherited from objects (like arrays or functions),
it's a different story.
So even if these objects look exactly the same, the values in our fresh a
and b variables are different: they point to different objects in memory. As
a result, a simple comparison between them will always return false:
const a = { id: 1 };
const b = { id: 1 };
const a = { id: 1 };
const b = a;
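Running these comparisons together makes the difference concrete (plain JavaScript, nothing React-specific; variable names are just for illustration):

```javascript
// Primitives are compared by value:
const a = 1;
const b = 1;
console.log(a === b); // true

// Objects are compared by reference — two identical-looking objects
// are still two different objects in memory:
const objA = { id: 1 };
const objB = { id: 1 };
console.log(objA === objB); // false

// The comparison only returns true when both variables point
// to the very same object:
const objC = { id: 1 };
const objD = objC;
console.log(objC === objD); // true
```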
This is what React has to deal with any time it needs to compare values
between re-renders. It does this comparison every time we use hooks with
dependencies, like in useEffect for example:
useEffect(() => {
  // call the function here
  submit();
}, [submit]);
then the value in the submit variable will be the same reference between
re-renders, the comparison will return true , and the useEffect hook that
depends on it won't be triggered every time:
  useEffect(() => {
    submit();
    // submit is memoized, so useEffect won't be triggered on every re-render
  }, [submit]);

  return ...
}
Exactly the same story with useMemo , only in this case, I need to return the
function I want to memoize:
Since both hooks accept a function as the first argument, and since we
declare these functions inside a React component, that means on every re-
render, this function as the first argument will always be re-created. It's your
normal JavaScript, nothing to do with React. If I declare a function that
accepts another function as an argument and then call it multiple times with
an inline function, that inline function will be re-created from scratch with
each call.
// function as an argument - first call
func(() => {});

// function as an argument - second call, new function as an argument
func(() => {});
And our hooks are just functions integrated into the React lifecycle, nothing
more.
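We can verify this re-creation directly with a Set, which stores only unique values (a small, illustrative sketch — func and seen are not from the book's code):

```javascript
// Every inline function expression produces a brand-new function object,
// even when the source text is identical — plain JavaScript behavior.
const seen = new Set();
const func = (callback) => seen.add(callback);

// "first call"
func(() => {});
// "second call" — a new function, so the Set grows
func(() => {});

console.log(seen.size); // 2 — two distinct function objects
```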
let cachedCallback;

const func = (callback) => {
  if (dependenciesEqual()) {
    return cachedCallback;
  }

  cachedCallback = callback;
  return callback;
};
It caches the very first function that is passed as an argument and then just
returns it every time if the dependencies of the hook haven't changed. And
if dependencies have changed, it updates the cache and returns the refreshed
function.
With useMemo , it's pretty much the same, only instead of returning the
function, React calls it and returns the result:
let cachedResult;

const func = (callback) => {
  if (dependenciesEqual()) {
    return cachedResult;
  }

  cachedResult = callback();
  return cachedResult;
};
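Both caches can be sketched as runnable JavaScript. This is a toy model, not React's actual implementation: depsEqual compares dependencies one by one with Object.is, which is roughly what React does for hook dependencies, and the makeUseCallback/makeUseMemo factories stand in for React's per-hook storage:

```javascript
// Compare dependency arrays item by item, like React does for hooks.
const depsEqual = (prev, next) =>
  prev !== undefined &&
  prev.length === next.length &&
  prev.every((dep, i) => Object.is(dep, next[i]));

// "useCallback"-like: cache and return the function itself
const makeUseCallback = () => {
  let cachedDeps;
  let cachedCallback;
  return (callback, deps) => {
    if (depsEqual(cachedDeps, deps)) return cachedCallback;
    cachedDeps = deps;
    cachedCallback = callback;
    return callback;
  };
};

// "useMemo"-like: call the function and cache its result instead
const makeUseMemo = () => {
  let cachedDeps;
  let cachedResult;
  return (callback, deps) => {
    if (depsEqual(cachedDeps, deps)) return cachedResult;
    cachedDeps = deps;
    cachedResult = callback();
    return cachedResult;
  };
};

const useCallbackSketch = makeUseCallback();
const first = useCallbackSketch(() => 'submit', [1]);
const second = useCallbackSketch(() => 'submit', [1]); // deps unchanged
console.log(first === second); // true — the first function is re-used

const useMemoSketch = makeUseMemo();
console.log(useMemoSketch(() => 'computed', ['a'])); // 'computed'
console.log(useMemoSketch(() => 'recomputed', ['a'])); // 'computed' — cached result returned
```

Note that in both cases the inline function passed as the first argument is created on every call; the cache only decides whether that fresh function is used or thrown away.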
Why is all of this important? For real-world applications, it's not, other than
for understanding the difference in the API. However, there is this belief
that sometimes pops up here and there that useMemo is better for
performance than useCallback , since useCallback re-creates the
function passed to it with each re-render, and useMemo doesn't do that. As
you can see, this is not true. The function in the first argument will be re-
created for both of them.
The only time that I can think of where it would actually matter, in theory, is
when we pass as the first argument not the function itself, but a result of
another function execution hardcoded inline. Basically this:
const submit = useCallback(something(), []);
In this case, the something function will be called every re-render, even
though the submit reference won't change. So avoid doing expensive
calculations in those functions.
There are only two major use cases where we actually need to memoize
props on a component. The first one is when this prop is used as a
dependency in another hook in the downstream component.
const Parent = () => {
  // this needs to be memoized!
  // Child uses it inside useEffect
  const fetch = () => {};
What is React.memo
React.memo, or just memo, is a very useful utility that React gives us. It
allows us to memoize the component itself. If a component's re-render is
triggered by its parent (and only then), and if this component is wrapped in
React.memo, then and only then will React stop and check its props. If
none of the props have changed, then the component will not be re-rendered,
and the normal chain of re-renders will be stopped.
This is again the case when React performs that comparison we talked
about at the beginning of the chapter. If even one of the props has changed,
then the component wrapped in React.memo will be re-rendered as usual:
And in the case of the example above, data and onChange are declared
inline, so they will change with every re-render.
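The check React.memo performs by default can be sketched as a shallow comparison (a simplified model, not React's exact source — React compares each prop with Object.is):

```javascript
// A sketch of the shallow props check behind React.memo.
function shallowEqual(prevProps, nextProps) {
  const prevKeys = Object.keys(prevProps);
  const nextKeys = Object.keys(nextProps);
  if (prevKeys.length !== nextKeys.length) return false;
  return prevKeys.every((key) => Object.is(prevProps[key], nextProps[key]));
}

// primitives compare by value — the check passes
console.log(shallowEqual({ id: 1 }, { id: 1 })); // true

// inline objects: a new reference on every render — the check fails
console.log(shallowEqual({ data: {} }, { data: {} })); // false

// memoized object: the same reference on both renders — the check passes
const memoizedData = {};
console.log(shallowEqual({ data: memoizedData }, { data: memoizedData })); // true
```

This is exactly why inline data and onChange break the memoization: they fall into the middle case on every single re-render.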
But making sure that all props are memoized is not as easy as it sounds.
We're doing it wrong in so many cases! And just one single mistake leads to
a broken props check, and as a result, every React.memo, useCallback,
and useMemo becomes completely useless.
const ComponentInBetween = (props) => {
  return <Component {...props} />;
};
How likely is it that those who need to add that additional data to the
InitialComponent will go through every single component inside, deeper
and deeper, to check whether any of them is wrapped in React.memo?
Especially if all of those are spread among different files and are quite
complicated in implementation. It's never going to happen.
So unless you're prepared and able to enforce the rule that every single prop
everywhere should be memoized, using the React.memo function on
components has to follow certain rules.
Rule 1: never spread props that are coming from other components.
Instead of this:
Rule 2: avoid passing non-primitive props that are coming from other
components.
Even the explicit example like the one above is still quite fragile. If any of
those props are non-memoized objects or functions, memoization will break
again.
Rule 3: avoid passing non-primitive values that are coming from custom
hooks.
The submit function is hidden in the useForm custom hook. And every
custom hook will be triggered on every re-render. Can you tell from the
code above whether it's safe to pass that submit to our ChildMemo ?
Nope, you can't. And chances are, it will look something like this:
  return {
    submit,
  };
};

const ChildMemo = React.memo(Child);
{
  type: "div",
  ... // the rest of the stuff
}

  return <ChildMemo>{content}</ChildMemo>;
};
Take a look at your app right now. How many of these have slipped through
the cracks?
{
  type: Parent,
  ... // the rest of React stuff
}
With memoized components, it's exactly the same. The <ParentMemo />
element will be converted into an object of a similar shape. Only the "type"
property will contain information about our ParentMemo .
And this object is just an object, it's not memoized by itself. So again, from
the memoization and props perspective, we have a ParentMemo component
that has a children prop that contains a non-memoized object. Hence,
broken memoization on ParentMemo .
  return <ParentMemo>{child}</ParentMemo>;
};
And then we might not even need the ChildMemo at all. It depends on its
content and our intentions, of course. At least for the purpose of preventing
ParentMemo from re-rendering, ChildMemo is unnecessary, and it can go
back to being just a normal Child:
Executing a regular expression on a text that takes 100ms feels slow. But if
it's run as a result of a button click, once in a blue moon, buried somewhere
deep in the settings screen, then it's almost instant. A regular expression that
takes 30ms to run seems fast enough. But if it's run on the main page on
every mouse move or scroll event, it's unforgivably slow and needs to be
improved.
And finally, useMemo is only useful for re-renders. That's the whole point
of it and how it works. If your component never re-renders, then useMemo
just does nothing.
More than nothing: it forces React to do additional work on the initial
render. Don't forget: the very first time the useMemo hook runs, when the
component is first mounted, React needs to cache its result. It will use a
little bit of memory and computational power for that, which would
otherwise be free.
With just one useMemo , the impact won't be measurable, of course. But in
large apps, with hundreds of them scattered everywhere, it actually can
measurably slow down the initial render. It will be death by a thousand cuts
in the end.
Key takeaways
Well, that's depressing. Does all of this mean we shouldn't use
memoization? Not at all. It can be a very valuable tool in our performance
battle. But considering so many caveats and complexities that surround it, I
would recommend using composition-based optimization techniques as
much as possible first. React.memo should be the last resort when all other
things have failed.
Also, we know that if the reference to that object itself changes between re-
renders, then React will re-render this Element if its type remains the same
and the component in type is not memoized with React.memo .
But this is just the beginning. There are more variables and moving pieces
here, and understanding this process in detail is very important. It will allow
us to fix some very not-obvious bugs, implement the most performant lists,
reset the state when we need it, and avoid one of the biggest performance
killers in React. All in one go. None of it seems connected at first glance,
but all of this is part of the same story: how React determines which
components need to be re-rendered, which components need to be removed,
and which ones need to be added to the screen.
In this chapter, we'll investigate a few very curious bugs, dive very deep
into how things work under the hood, and in the process of doing so, we
will learn:
The code for this app will look something like this:
const Form = () => {
  const [isCompany, setIsCompany] = useState(false);

  return (
    <>
      {/* checkbox somewhere here */}
      {isCompany ? (
        <Input id="company-tax-id-number" placeholder="Enter your company ID" ... />
      ) : (
        <TextPlaceholder />
      )}
    </>
  );
};
What will happen here from a re-rendering and mounting perspective if the
user actually claims that they are a company and the value isCompany
changes from the default false to true ?
No surprises here, and the answer is pretty intuitive: the Form component
will re-render itself, the TextPlaceholder component will be unmounted,
and the Input component will be mounted. If I flip the checkbox back, the
Input will be unmounted again, and the TextPlaceholder will be
mounted.
const Form = () => {
  const [isCompany, setIsCompany] = useState(false);

  return (
    <>
      {/* checkbox somewhere here */}
      {isCompany ? (
        <Input id="company-tax-id-number" placeholder="Enter your company Tax ID" ... />
      ) : (
        <Input id="person-tax-id-number" placeholder="Enter your personal Tax ID" ... />
      )}
    </>
  );
};
What will happen here now?
The answer is, of course, again pretty intuitive and exactly as any sensible
person would expect... The unmounting doesn't happen anymore! If I type
something in the field and then flip the checkbox, the text is still there!
React thinks that both of those inputs are actually the same thing, and
instead of unmounting the first one and mounting the second one, it just re-
renders the first one with the new data from the second one.
If you're not surprised by this at all and can without hesitation say, "Ah,
yeah, it's because of [the reason]," then wow, can I get your autograph? For
the rest of us who got an eye twitch and a mild headache because of this
behavior, it's time to dive into React's reconciliation process to get the
answer.
// somewhere else
<Input placeholder="Input something here" />;
we expect React to add the normal HTML input tag with placeholder
set in the appropriate place in the DOM structure. If we change the
placeholder value in the React component, we expect React to update our
DOM element with the new value and to see that value on the screen.
Ideally, instantly. So, React can't just remove the previous input and append
a new one with the new data. That would be terribly slow. Instead, it needs
to identify that already existing input DOM element and just update its
attributes. If we didn't have React, we'd have to do something like this:
const input = document.getElementById('input-id');
input.placeholder = 'new data';
In React, we don't have to; it handles it for us. It does so by creating and
modifying what we sometimes call the "Virtual DOM." This Virtual DOM
is just a giant object with all the components that are supposed to render, all
their props, and their children - which are also objects of the same shape.
Just a tree. What the Input component from the example above should
render will be represented as something like this:
{
  type: "input", // type of element that we need to render
  props: {...}, // input's props like id or placeholder
  ... // bunch of other internal stuff
}
A label and an input, from React's perspective, would be just an array of
those objects:
[
{
type: 'label',
... // other stuff
},
{
type: 'input',
... // other stuff
}
]
DOM elements like input or label will have their "type" as strings, and
React will know to convert them to the DOM elements directly. But if we're
rendering React components, they are not directly correlated with DOM
elements, so React needs to work around that somehow.
{
  type: Input, // reference to that Input function we declared earlier
  ... // other stuff
}
And then, when React gets a command to mount the app (initial render), it
iterates over that tree and does the following:

If the "type" is a string, it creates the HTML element of that type.
If the "type" is a function (i.e., our component), it calls it and iterates
over the tree that this function returned.
Until it eventually gets the entire tree of DOM nodes that are ready to be
shown. A component like this, for example:
{
  type: 'div',
  props: {
    // children are props!
    children: [
      {
        type: Input,
        props: { id: "1", placeholder: "Text1" }
      },
      {
        type: Input,
        props: { id: "2", placeholder: "Text2" }
      }
    ]
  }
}
So it begins its journey through that tree again, starting from where the state
update was initiated. If we have this code:
{
  type: Input,
  ... // other internal stuff
}
It will compare the "type" field of that object from "before" and "after" the
state update. If the type is the same, the Input component will be marked
as "needs update," and its re-render will be triggered. If the type has
changed, then React, during the re-render cycle, will remove (unmount) the
"previous" component and add (mount) the "next" component. In our case,
the "type" will be the same since it's just a reference to a function, and that
reference hasn't changed.
Then, assuming that the update was triggered by the isCompany value
flipping from true to false, the objects that React will be comparing are:

// Before update, isCompany was "true"
{
  type: Input,
  ...
}

// After update, isCompany is "false"
{
  type: TextPlaceholder,
  ...
}
You guessed the result, right? "Type" has changed from Input to
TextPlaceholder references, so React will unmount Input and remove
everything associated with it from the DOM. And it will mount the new
TextPlaceholder component and append it to the DOM for the first time.
Everything that was associated with the Input field, including its state and
everything you typed there, is destroyed.
{
type: Input,
}
It's just an object that has a "type" property that points to a function.
However, the function is created inside Component . It's local to it and will
be recreated with every re-render as a result. So when React tries to
compare those types, it will compare two different functions: one before re-
render and one after re-render. And as we know from Chapter 5.
Memoization with useMemo, useCallback and React.memo, we can't
compare functions in JavaScript, not like this.
As a result, the "type" of that child will be different with every re-render, so
React will remove the "previous" component and mount the "next" one.
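We can show the underlying mechanics without React at all (an illustrative, JSX-free sketch — Component and ChildInside are stand-in names; in real React, ChildInside would be the "type" of a returned element):

```javascript
// A "component" declared inside another component is a brand-new function
// on every render of the parent, so comparing types by reference always fails.
const Component = () => {
  // local declaration — re-created from scratch on every call
  const ChildInside = () => 'child';
  // stand-in for returning <ChildInside /> from JSX
  return ChildInside;
};

const typeOnFirstRender = Component();
const typeOnSecondRender = Component();
console.log(typeOnFirstRender === typeOnSecondRender); // false — re-mount every time
```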
In the associated code example above, you can see how it behaves: the
input component triggers a re-render with every keystroke, and the
"ComponentWithState" is re-mounted. As a result, if you click on that
component to change its state to "active" and then start typing, that state
will disappear.
Declaring components inside other components like this can be one of the
biggest performance killers in React.
const Form = () => {
  const [isCompany, setIsCompany] = useState(false);

  return (
    <>
      {/* checkbox somewhere here */}
      {isCompany ? (
        <Input id="company-tax-id-number" placeholder="Enter your company Tax ID" ... />
      ) : (
        <Input id="person-tax-id-number" placeholder="Enter your personal Tax ID" ... />
      )}
    </>
  );
};
{
  type: Input,
  ... // the rest of the stuff, including props like id="company-tax-id-number"
}

{
  type: Input,
  ... // the rest of the stuff, including props like id="person-tax-id-number"
}
From the React perspective, the "type" hasn't changed. Both of them have a
reference to exactly the same function: the Input component. The only
thing that has changed, thinks React, are the props: id changed from
"company-tax-id-number" to "person-tax-id-number" , placeholder
changed, and so on.
So, in this case, React does what it was taught: it simply takes the existing
Input component and updates it with the new data. I.e., re-renders it.
Everything that is associated with the existing Input , like its DOM
element or state, is still there. Nothing is destroyed. This results in the
behavior that we've seen: I type something in the input, flip the checkbox,
and the text is still there.
This behavior isn't necessarily bad. I can see a situation where re-rendering
instead of re-mounting is exactly what I would want. But in this case, I'd
probably want to fix it and ensure that inputs are reset and re-mounted
every time I switch between them: they are different entities from the
business logic perspective, so I don't want to re-use them.
There are at least two easy ways to fix it: arrays and keys.
const Form = () => {
  const [isCompany, setIsCompany] = useState(false);

  return (
    <>
      {/* checkbox somewhere here */}
      {isCompany ? (
        <Input id="company-tax-id-number" ... />
      ) : (
        <Input id="person-tax-id-number" ... />
      )}
    </>
  );
};
const Form = () => {
  const [isCompany, setIsCompany] = useState(false);

  return (
    <>
      <Checkbox onChange={() => setIsCompany(!isCompany)} />
      {isCompany ? (
        <Input id="company-tax-id-number" ... />
      ) : (
        <Input id="person-tax-id-number" ... />
      )}
    </>
  );
};
Basically, if I flip the checkbox and trigger the Form re-render, React will
see this array of items:
[
  {
    type: Checkbox,
  },
  {
    type: Input, // our conditional input
  },
];
and will go through them one by one. First element. "Type" before:
Checkbox , "type" after: also Checkbox . Re-use it and re-render it. Second
element. Same procedure. And so on.
Even if some of those elements are rendered conditionally like this:
React will still have a stable number of items in that array. Just sometimes,
those items will be null . If I re-write the Form like this:
const Form = () => {
  const [isCompany, setIsCompany] = useState(false);

  return (
    <>
      <Checkbox onChange={() => setIsCompany(!isCompany)} />
      {isCompany ? <Input id="company-tax-id-number" ... /> : null}
      {!isCompany ? <Input id="person-tax-id-number" ... /> : null}
    </>
  );
};
it will be an array of always three items: Checkbox , Input or null , and
Input or null .
So, what will happen here when the state changes and a re-render runs
throughout the form?

When React starts comparing them, item by item, it will be: first item -
Checkbox before and after, so it's re-used and re-rendered as usual. Second
item - an Input before and null after (or the other way around), so the
previous Input is unmounted. Third item - null before and an Input
after, so a brand-new Input is mounted from scratch.
And voila! Magically, by changing the inputs' position in the render output,
without changing anything else in the logic, the bug is fixed, and inputs
behave exactly as I would expect!
The "key" should be familiar to anyone who has written any lists in React.
React forces us to add it when we iterate over arrays of data:
[
  { type: Input }, // "1" data item
  { type: Input }, // "2" data item
];
But the problem with dynamic lists like this is that they are, well, dynamic.
We can re-order them, add new items at the beginning or end, and generally
mess around with them.
Now, React faces an interesting task: all components in that array are of the
same type. How can it detect which one is which? If the order of those items
changes:
[
  { type: Input }, // "2" data item now, but React doesn't know that
  { type: Input }, // "1" data item now, but React doesn't know that
];
how to make sure that the correct existing element is re-used? Because if it
just relies on the order of elements in that array, it will re-use the instance of
the first element for the data of the second element, and vice versa. This
will result in weird behavior if those items have state: it will stay with the
first item. If you type something in the first input field and re-order the
array, the typed text will remain in the first input.
This is why we need "key": it's basically React's version of a unique
identifier of an element within children's array that is used between re-
renders. If an element has a "key" in parallel with "type," then during re-
render, React will re-use the existing elements, with all their associated state
and DOM, if the "key" and "type" match "before" and "after." Regardless of
their position in the array.
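This matching rule can be modeled as a few lines of plain JavaScript (a toy sketch, not React's real reconciler — matchByKey and the reused flag are illustrative):

```javascript
// For every "after" element, look for a "before" element with the same
// key AND type, regardless of its position in the array.
function matchByKey(before, after) {
  return after.map((element) => {
    const match = before.find(
      (b) => b.key === element.key && b.type === element.type
    );
    return { ...element, reused: Boolean(match) };
  });
}

const before = [
  { type: 'input', key: '1' },
  { type: 'input', key: '2' },
];
// the same elements, re-ordered
const after = [
  { type: 'input', key: '2' },
  { type: 'input', key: '1' },
];

// both re-ordered elements find their match and are re-used
console.log(matchByKey(before, after).every((el) => el.reused)); // true

// an element with a brand-new key has no match and is mounted from scratch
console.log(matchByKey(before, [{ type: 'input', key: '3' }])[0].reused); // false
```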
With this array, the data would look like this. Before re-ordering:
[
  { type: Input, key: '1' }, // "1" data item
  { type: Input, key: '2' }, // "2" data item
];
After re-ordering:
[
  { type: Input, key: '2' }, // "2" data item, React knows that because of "key"
  { type: Input, key: '1' }, // "1" data item, React knows that because of "key"
];
Now, with the key present, React will know that after re-render, it needs to
re-use an already created element that used to be in the first position. So it
will just swap input DOM nodes around. And the text that we typed in the
first element will move with it to the second position.
Interactive example and full code
https://advanced-react.com/examples/06/04
const data = [
  { id: 'business', placeholder: 'Business Tax' },
  { id: 'person', placeholder: 'Person Tax' },
];

const InputMemo = React.memo(Input);

const Component = () => {
  // array's index is fine here, the array is static
  return data.map((value, index) => (
    <InputMemo key={index} placeholder={value.placeholder} />
  ));
};
With dynamic arrays, it's a bit more interesting, and this is where the key
plays a crucial role. What will happen here if what triggered the re-render is
the re-ordering of that array?
If we just use the array's index as a key again, then from React's
perspective, the item with the key="0" will be the first item in the array
before and after the re-render. But the prop placeholder will change: it
will transition from "Business Tax" to "Person Tax." As a result, even if this
item is memoized, from React's perspective, the prop on it changed, so it
will re-render it as if memoization doesn't exist!
The fix for this is simple: we need to make sure that the key matches the
item it identifies. In our case, we can just put the id there:

<InputMemo key={value.id} placeholder={value.placeholder} />
If the data has nothing unique like an id , then we'd need to iterate over that
array somewhere outside of the component that re-renders and add that id
there manually.
In the case of our inputs, if we use the id for key , the item with the
key="business" will still have the prop placeholder="Business Tax,"
just in a different place in the array. So React will just swap the associated
DOM nodes around, but the actual component won't re-render.
And exactly the same story happens if we were adding another input at the
beginning of the array. If we use the array's index as key , then the item
with the key="0" , from React's perspective, will just change its
placeholder prop from "Business Tax" to "New tax"; key="1" item will
transition from "Person Tax" to "Business Tax". So they both will re-render.
And the new item with the key="2" and the text "Person Tax" will be
mounted from scratch.
And if we use the id as a key instead, then both "Business Tax" and
"Person Tax" will keep their keys, and since they are memoized, they won't
re-render. And the new item, with the key "New tax", will be mounted
from scratch.
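The re-order scenario can be simulated in plain JavaScript (an illustrative sketch, not React's code — changedKeys models "which memoized slots see different props and therefore re-render"):

```javascript
// A memoized item re-renders when the props seen under its key change.
function changedKeys(before, after) {
  return after
    .filter((element) => {
      const prev = before.find((b) => b.key === element.key);
      return !prev || prev.placeholder !== element.placeholder;
    })
    .map((element) => element.key);
}

const data = [
  { id: 'business', placeholder: 'Business Tax' },
  { id: 'person', placeholder: 'Person Tax' },
];
const reordered = [data[1], data[0]];

// index as key: after re-ordering, every key sees a different placeholder
const beforeByIndex = data.map((v, i) => ({ key: String(i), placeholder: v.placeholder }));
const afterByIndex = reordered.map((v, i) => ({ key: String(i), placeholder: v.placeholder }));
console.log(changedKeys(beforeByIndex, afterByIndex)); // [ '0', '1' ] — both re-render

// id as key: every key still sees the same placeholder
const beforeById = data.map((v) => ({ key: v.id, placeholder: v.placeholder }));
const afterById = reordered.map((v) => ({ key: v.id, placeholder: v.placeholder }));
console.log(changedKeys(beforeById, afterById)); // [] — nothing re-renders
```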
Interactive example and full code
https://advanced-react.com/examples/06/05
const Form = () => {
  const [isCompany, setIsCompany] = useState(false);

  return (
    <>
      <Checkbox onChange={() => setIsCompany(!isCompany)} />
      {isCompany ? (
        <Input id="company-tax-id-number" ... />
      ) : (
        <Input id="person-tax-id-number" ... />
      )}
    </>
  );
};
[
  { type: Checkbox },
  { type: Input }, // React thinks it's the same input between re-renders
];
All we need to fix the initial bug is to make React realize that those Input
components between re-renders are actually different components and
should not be re-used. If we add a "key" to those inputs, we'll achieve
exactly that.
{isCompany ? (
  <Input id="company-tax-id-number" key="company-tax-id-number" ... />
) : (
  <Input id="person-tax-id-number" key="person-tax-id-number" ... />
)}
Now, the array of children before and after re-render will change.
// Before re-render
[
  { type: Checkbox },
  {
    type: Input,
    key: 'person-tax-id-number',
  },
];

// After re-render
[
  { type: Checkbox },
  {
    type: Input,
    key: 'company-tax-id-number',
  },
];
Voila, the keys are different! React will drop the first Input and mount
from scratch the second one. State is now reset to empty when we switch
between inputs.
But be careful here, though. It's not just "state reset" as you can see. It
forces React to unmount a component completely and mount a new one
from scratch. For big components, that might cause performance problems.
The fact that the state is reset is just a by-product of this total destruction.
const Form = () => {
  const [isCompany, setIsCompany] = useState(false);

  return (
    <>
      <Checkbox onChange={() => setIsCompany(!isCompany)} />
      {isCompany ? <Input id="company-tax-id-number" key="tax-input" ... /> : null}
      {!isCompany ? <Input id="person-tax-id-number" key="tax-input" ... /> : null}
    </>
  );
};
From the data and re-renders' perspective, it will now be like this.
[
{ type: Checkbox },
null,
{
type: Input,
key: 'tax-input',
},
];
React sees an array of children and sees that before and after re-renders,
there is an element with the Input type and the same "key." So it will think
that the Input component just changed its position in the array and will re-
use the already created instance for it. If we type something, the state is
preserved even though the Input s are technically different.
For this particular example, it's just a curious behavior, of course, and not
very useful in practice. But I could imagine it being used for fine-tuning the
performance of components like accordions, tabs content, or some galleries.
and this:
will be exactly the same, just a fragment with two inputs as a children array:
So why, in one case, do we need a "key" for React to behave, and in another
- don't?
The difference is that the first example is a dynamic array. React doesn't
know what you will do with this array during the next re-render: remove,
add, or rearrange items, or maybe leave them as-is. So it forces you to add
the "key" as a precautionary measure, in case you're messing with the array
on the fly.
Where is the fun here, you might ask? Here it is: try to render those inputs
that are not in an array with the same "key," applied conditionally:
Try to predict what will happen if I type something in those inputs and
toggle the boolean on and off.
Does it mean that items after this array will always re-mount themselves?
Basically, is this code a performance disaster or not?
Because if this is transformed into an array of three children - the first two
are dynamic, and the last one static - it will be. If this is the case, then the
definition object will be this:
[
  { type: Input, key: 1 }, // input from the array
  { type: Input, key: 2 }, // input from the array
  { type: Input }, // input after the array
];
And if I add another item to the data array, on the third position there will
be an Input element with the key="3" from the array, and the "manual"
input will move to the fourth position, which would mean from the React
perspective that it's a new item that needs to be mounted.
Luckily, this is not the case. Phew... React is smarter than that.
When we mix dynamic and static elements, like in the code above, React
simply creates an array of those dynamic elements and makes that entire
array the very first child in the children's array. This is going to be the
definition object for that code:
[
  // the entire dynamic array is the first position in the children's array
  [
    { type: Input, key: 1 },
    { type: Input, key: 2 },
  ],
  {
    type: Input, // this is our manual Input after the array
  },
];
Our manual Input will always have the second position here. There will
be no re-mounting. No performance disaster. The heart attack was uncalled
for.
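We can sketch that structure directly to see why the position stays stable (an illustrative model, not React's internals — makeChildren stands in for what the render output is transformed into):

```javascript
// The dynamic part is a nested array in the first position, so the
// "manual" input keeps its index no matter how many items the dynamic
// array gets.
const makeChildren = (data) => [
  data.map((item) => ({ type: 'input', key: item.id })), // dynamic part
  { type: 'input' }, // the manual Input after the array
];

const before = makeChildren([{ id: 1 }, { id: 2 }]);
const after = makeChildren([{ id: 1 }, { id: 2 }, { id: 3 }]);

console.log(before.length, after.length); // 2 2 — top-level shape is stable
console.log(after[0].length); // 3 — the dynamic array grew
console.log(after[1]); // { type: 'input' } — the manual input is still the second child
```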
Key takeaways
Ooof, that was a long chapter! Hope you had fun with the investigation and
the mysteries and learned something cool while doing so. A few things to
remember from all of that:
What is a higher-order
component?
According to the React docs, a Higher-Order Component[6] is an advanced
technique for reusing component logic that is used for cross-cutting
concerns.
In English, it's a function that accepts a component as one of its arguments,
executes some logic, and then returns another component that renders the
component from the argument. The simplest variant of it, that does nothing,
is this:
// accept a Component as an argument
const withSomeLogic = (Component) => {
  // do something

  // return a component that renders the component from the argument
  return (props) => <Component {...props} />;
};
The key here is the return part of the function - it's just a component, like
any other component.
And then, when it's time to use it, it would look like this:
// just a button
const Button = ({ onClick }) => (
  <button onClick={onClick}>Button</button>
);
You pass your Button component to the function, and it returns the new
Button , which includes whatever logic is defined in the higher-order
component. And then this button can be used like any other button:
The simplest and most common use case would be to inject props into
components. We can, for example, implement a withTheme higher-order
component that extracts the current theme of the website (dark or light
mode) and sends that value into the theme prop. It would look like this:
const withTheme = (Component) =>
{
// isDark will come from
something like context
const theme = isDark ? 'dark'
: 'light';
And now, if we use it on our button, it will have the theme prop available for use:

const Button = ({ theme }) => {
  // the theme prop here comes from the higher-order component
  return <button className={theme}>Button</button>;
};

const ButtonWithTheme = withTheme(Button);
As you can see, higher-order components are quite complicated to write and
understand. So when hooks were introduced, it's no wonder everyone
switched to them.
And while hooks have probably replaced 99% of shared logic concerns and
100% of use cases for accessing context, higher-order components can still
be useful even in modern code. Mostly for enhancing callbacks, React
lifecycle events, and intercepting DOM and keyboard events. Only if you're
feeling fancy, of course. Those use cases can also be implemented with
hooks, just not as elegantly.
Enhancing callbacks
Imagine you need to send some sort of advanced logging on some
callbacks. When you click a button, for example, you want to send some
logging events with some data. How would you do it with hooks? You'd
probably have a Button component with an onClick callback:
And then on the consumer side, you'd hook into that callback and send
logging events there:
And that is fine if you want to fire an event or two. But what if you want
your logging events to be consistently fired across your entire app whenever
the button is clicked? We probably can bake it into the Button component
itself:
But then what? For proper logs, you'd have to send some sort of data as
well. We surely can extend the Button component with some
loggingData props and pass it down:
But what if you want to fire the same events when the click has happened
on other components? Button is usually not the only thing people can click
on in our apps. What if I want to add the same logging to a ListItem
component? Copy-paste exactly the same logic there?
This is the first use case where hooks are of no use, but higher-order
components could come in handy.
export const withLoggingOnClick = (Component) => {
  return (props) => {
    const onClick = () => {
      console.log('Log on click happened');
      // don't forget to call the onClick that was passed through props!
      props.onClick();
    };

    // return original component with all the props
    // and overriding onClick with our own callback
    return <Component {...props} onClick={onClick} />;
  };
};
Now, I can just add it to any component that I want. I can have a Button with logging baked in:

export const ButtonWithLoggingOnClick = withLoggingOnClick(SimpleButton);

export const ListItemWithLoggingOnClick = withLoggingOnClick(ListItem);
Or any other component that has an onClick callback that I want to track.
Without a single line of code changed in either Button or ListItem
components!
To send some data along with those logging events, we can inject it into the higher-order component through a second argument:

export const withLoggingOnClickWithParams = (
  Component,
  // adding some params as a second argument to the function
  params,
) => {
  return (props) => {
    const onClick = () => {
      // accessing params that we passed as an argument here
      // everything else stays the same
      console.log('Log on click: ', params.text);
      props.onClick();
    };

    return <Component {...props} onClick={onClick} />;
  };
};
Now, when we wrap our button with a higher-order component, we can pass the text that we want to log:

const ButtonWithLoggingOnClickWithParams =
  withLoggingOnClickWithParams(SimpleButton, {
    text: 'button component',
  });
On the consumer side, we'd just use this button as a normal button component, without worrying about the logging text:

const Page = () => {
  return (
    <ButtonWithLoggingOnClickWithParams
      onClick={onClickCallback}
    >
      Click me
    </ButtonWithLoggingOnClickWithParams>
  );
};
But what if we actually want to worry about this text? What if we want to send different texts in different contexts where the button is used? We wouldn't want to create a million wrapped buttons for every use case. Instead, we can pass that text to the button through props, and then just extract that logText from the props that were sent to the button:
export const withLoggingOnClickWithProps = (Component) => {
  // it will be in the props here, just extract it
  return ({ logText, ...props }) => {
    const onClick = () => {
      // and then just use it here
      console.log('Log on click: ', logText);
      props.onClick();
    };

    return <Component {...props} onClick={onClick} />;
  };
};
Or we can even read props and send logging events on re-renders, when a certain prop has changed:

export const withLoggingOnReRender = (Component) => {
  return ({ id, ...props }) => {
    // fire logging every time the "id" prop changes
    useEffect(() => {
      console.log('log on id change');
    }, [id]);

    return <Component {...props} />;
  };
};
Intercepting DOM events
Another variation of enhancing components is intercepting DOM events. Imagine, for example, that you're implementing some keyboard shortcuts for your page. To do this, you'd attach a global keypress listener to the window:

useEffect(() => {
  const keyPressListener = (event) => {
    // do stuff
  };

  window.addEventListener('keypress', keyPressListener);

  return () =>
    window.removeEventListener('keypress', keyPressListener);
}, []);
And then, you have various parts of your app, like modal dialogs, dropdown
menus, drawers, etc., where you want to block that global listener while the
dialog is open. If it were just one dialog, you could manually add an onKeyPress handler to the dialog itself and call event.stopPropagation() there:

const Modal = () => {
  const onKeyPress = (event) => event.stopPropagation();

  return <div onKeyPress={onKeyPress}>...</div>;
};
But the same story as with onClick logging - what if you have multiple
components where you want to see this logic? Copy-paste that
event.stopPropagation everywhere? Meh.
export const withSuppressKeyPress = (Component) => {
  return (props) => {
    const onKeyPress = (event) => {
      event.stopPropagation();
    };

    return (
      <div onKeyPress={onKeyPress}>
        <Component {...props} />
      </div>
    );
  };
};

const ModalWithSuppressedKeyPress = withSuppressKeyPress(Modal);
const DropdownWithSuppressedKeyPress = withSuppressKeyPress(Dropdown);
// etc
Now, when this modal is open and focused, any key press event will bubble
up through the elements' hierarchy until it reaches our div in
withSuppressKeyPress that wraps the modal and will stop there. Mission
accomplished, and developers who implement the Modal component don't
even need to know or care about it.
Key takeaways
That's enough of a history lesson for the book, I think. A few things to
remember before we jump to the next chapter, with the most exciting and
the most controversial part of React: state management!
A higher-order component is just a function that accepts a component as an argument and returns a new component that renders the component from the argument:

// accept a Component as an argument
const withSomeLogic = (Component) => {
  // inject some logic here

  return (props) => <Component {...props} />;
};
One final and very important piece of the "re-renders in React" puzzle is Context. Context has a bad reputation when it comes to re-renders. I have a feeling that sometimes people treat Context as an evil gremlin that just roams around the app, causing spontaneous and unstoppable re-renders just because it can. As a result, developers sometimes try to avoid using Context at all costs.
From the code perspective, the app looks something like this. It would have
a Page component that assembles the entire app together:
const Page = () => {
return (
<Layout>
<Sidebar />
<MainPart />
</Layout>
);
};
The Sidebar component renders a bunch of links, plugins, menus, etc., and the "expand/collapse" button:
And the MainPart component, which renders lots of slow stuff, and
somewhere at the bottom, it has that block that will render two or three
columns, depending on whether the Sidebar is expanded or collapsed:
The most straightforward way to share that state is to introduce it in the Page component:

const Page = () => {
  // store the state about expanded/collapsed navigation here
  const [isNavExpanded, setIsNavExpanded] = useState();

  return ...
}

And then pass the set function and the state itself through props of the Sidebar and MainPart components to ExpandButton:
const Sidebar = ({ isNavExpanded, toggleNav }) => {
  return (
    <div className="sidebar">
      {/* pass the props here */}
      <ExpandButton
        isExpanded={isNavExpanded}
        onClick={toggleNav}
      />
      {/* ... the rest of the stuff */}
    </div>
  );
};
and AdjustableColumnsBlock :
const MainPart = ({ isNavExpanded }) => {
  return (
    <>
      <VerySlowComponent />
      <AnotherVerySlowComponent />
      <AdjustableColumnsBlock
        isNavExpanded={isNavExpanded}
      />
    </>
  );
};
The full code of the Page component will look like this then:
const Page = () => {
  const [isNavExpanded, setIsNavExpanded] = useState();

  return (
    <Layout>
      <Sidebar
        isNavExpanded={isNavExpanded}
        toggleNav={() => setIsNavExpanded(!isNavExpanded)}
      />
      <MainPart isNavExpanded={isNavExpanded} />
    </Layout>
  );
};
While technically, it will work, it's not the best solution. Firstly, our
Sidebar and MainPart now have props that they don't use but merely
pass to the components below - their API becomes bloated and harder to
read.
And secondly, performance will be pretty bad. What will happen here from
the re-renders perspective? Every time the button is clicked, and navigation
is expanded/collapsed, the state in the Page component will change. And
as we know from Chapter 1. Intro to re-renders, state update will cause this
component and every component inside, including their children, to re-
render. Both Sidebar and MainPart have a lot of components, some of
which are quite slow. So re-rendering of the entire page will be slow,
making navigation expanding/collapsing slow and laggy as a result.
And unfortunately, we can't just use any of the composition techniques from
the previous chapters to prevent this: all of them actually depend on the
state that causes re-rendering. We can probably memoize the intermediate
slow components that don't depend on that state. But the code of that will
become even more bloated: all of them would have to be memoized!
Instead, we can extract that state into a dedicated NavigationController component and pass everything else through children:

const NavigationController = ({ children }) => {
  const [isNavExpanded, setIsNavExpanded] = useState();

  return children;
};
This is the "children as props" pattern. Our Page then uses that controller on top of everything else:

const Page = () => {
  return (
    <NavigationController>
      <Layout>
        <Sidebar />
        <MainPart />
      </Layout>
    </NavigationController>
  );
};
All props will disappear, and most importantly, none of the components in
the Page, like Layout or Sidebar , will be affected by the state change
inside NavigationController . As covered in Chapter 2. Elements,
children as props, and re-renders, children when passed like this are
nothing more than props, and props are not affected by state changes.
Now we just need to add Context to the NavigationController:

const NavigationController = ({ children }) => {
  const [isNavExpanded, setIsNavExpanded] = useState();

  const toggle = () => setIsNavExpanded(!isNavExpanded);

  return <Context.Provider>{children}</Context.Provider>;
};
And finally, the step that will make it work: we pass the value property to
this Context. Just an object that includes the isNavExpanded state value
and the toggle function.
const NavigationController = ({ children }) => {
  const [isNavExpanded, setIsNavExpanded] = useState();

  const toggle = () => setIsNavExpanded(!isNavExpanded);

  const value = { isNavExpanded, toggle };

  return (
    <Context.Provider value={value}>
      {children}
    </Context.Provider>
  );
};
Now every component that happens to be rendered down the tree from that
provider (even if they are passed as props like our children !) will now
have access to that value through the useContext hook.
And then we can use that hook to gain access to the state directly in those components that actually need this information. We'll introduce a useNavigation hook to abstract away the Context, and use it in the expand/collapse button itself:

// a hook to abstract away the Context access
const useNavigation = () => useContext(Context);

const ExpandButton = () => {
  const { isNavExpanded, toggle } = useNavigation();

  return (
    <button onClick={toggle}>
      {isNavExpanded ? 'Collapse' : 'Expand'}
    </button>
  );
};
And directly in the block where we want to render the different number of
columns based on the navigation state:
const AdjustableColumnsBlock = () => {
  const { isNavExpanded } = useNavigation();

  return isNavExpanded ? <TwoColumns /> : <ThreeColumns />;
};
No more passing props around anywhere! Now, when the state changes, the
value prop on the Context provider will change, and only components that
use the useNavigation hook will re-render. All other components inside
Sidebar or MainPart don't use it, so they will be safe and won't re-render. Just like that, with the simple use of Context, we've drastically
improved the performance of the entire app.
It's not all sunshine and roses when dealing with Context, of course.
Otherwise, it wouldn't have such a bad reputation. There are three major
things that you need to know like the back of your hand when introducing
Context into the app:
const NavigationController = ({ children }) => {
  const [isNavExpanded, setIsNavExpanded] = useState();

  const toggle = () => setIsNavExpanded(!isNavExpanded);

  // the value object is re-created on every re-render
  const value = { isNavExpanded, toggle };

  return (
    <Context.Provider value={value}>
      {children}
    </Context.Provider>
  );
};
Every time we change state, the value object changes, so every component
that uses this Context through useNavigation will re-render. This is
natural and expected: we want everyone to have access to the latest value,
and the only way in React to update components is to re-render them.
However, what will happen if the NavigationController re-renders for
any other reason other than its own state change? If, for example, this re-
render is triggered in its parent component? NavigationController will
also re-render: it's React's natural chain of re-renders. The value object
will be re-created, and we're again in the situation where React needs to
compare objects between re-renders. The referential equality problem kicks
in yet again (we've covered it in detail in Chapter 6. Deep dive into diffing
and reconciliation). As a result, the value that we pass to the provider will
change, and every single component that uses this Context will re-render
for no reason.
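The referential equality problem boils down to a plain JavaScript fact that's easy to verify: two objects with identical contents are still two different references.

```javascript
// the value object is re-created on every re-render, so even
// identical contents produce a "new" value from React's perspective
const valueBefore = { isNavExpanded: false };
const valueAfter = { isNavExpanded: false };

console.log(valueBefore === valueAfter); // false: different references
console.log(valueBefore === valueBefore); // true: same reference
```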
In the case of our small app, it's not a problem: the provider sits at the very
top, so nothing above it can re-render. However, this won't always be the
case. And in large, complicated apps, it's more likely than not that someone
will introduce something one day that triggers the re-render of that provider.
For example, in our Page component, I might decide one day to move that
provider inside the Layout component to simplify the Page :
const Layout = ({ children }) => {
  const [scroll, setScroll] = useState();

  useEffect(() => {
    window.addEventListener('scroll', () => {
      setScroll(window.scrollY);
    });
  }, []);

  return (
    <NavigationController>
      <div className="layout">{children}</div>
    </NavigationController>
  );
};

Now every scroll will trigger a state update in Layout and re-render the NavigationController with it, re-creating the value object and re-rendering every Context consumer.
To prevent this, we just need to make sure that the value object doesn't change between re-renders unless it has to: memoize the toggle function and the value itself.

const NavigationController = ({ children }) => {
  const [isNavExpanded, setIsNavExpanded] = useState();

  const toggle = useCallback(() => {
    setIsNavExpanded(!isNavExpanded);
  }, [isNavExpanded]);

  const value = useMemo(() => {
    return { isNavExpanded, toggle };
  }, [isNavExpanded, toggle]);

  return (
    <Context.Provider value={value}>
      {children}
    </Context.Provider>
  );
};
Interactive example and full code
https://advanced-react.com/examples/08/04
This is one of the few cases where always memoizing by default is actually
not premature optimization. It will prevent much bigger problems in the
future that will almost inevitably occur.
Preventing unnecessary Context re-renders: split providers
On top of the fact that all context consumers re-render when the value
changes, it's important to emphasize not only the "value changes" part but
also that all of them will do that. If I introduce open and close functions
to our navigation API that don't actually depend on the state:
const NavigationController = ({ children }) => {
  const [isNavExpanded, setIsNavExpanded] = useState();

  // no dependencies, open won't change
  const open = useCallback(() => setIsNavExpanded(true), []);

  // no dependencies, close won't change
  const close = useCallback(() => setIsNavExpanded(false), []);

  return ...
}
It works like this. Instead of just one Context that holds everything, we can
create two: one will hold the value that changes, and another one will hold
those that don't.
const NavigationController = ({ children }) => {
  ...
  return (
    <ContextData.Provider value={data}>
      <ContextApi.Provider value={api}>
        {children}
      </ContextApi.Provider>
    </ContextData.Provider>
  );
};
The values that we pass to those providers will be the data that has the
state and api that only holds references to open and close functions.
const NavigationController = ({ children }) => {
  const [isNavExpanded, setIsNavExpanded] = useState();

  const open = useCallback(() => setIsNavExpanded(true), []);
  const close = useCallback(() => setIsNavExpanded(false), []);

  // the data value will change with the state
  const data = useMemo(() => ({ isNavExpanded }), [isNavExpanded]);

  // the api value will never change
  const api = useMemo(() => ({ open, close }), [open, close]);

  return (
    <ContextData.Provider value={data}>
      <ContextApi.Provider value={api}>
        {children}
      </ContextApi.Provider>
    </ContextData.Provider>
  );
};
We'd have to drop the toggle function here, unfortunately. It depends on
the state, so we can't put it into the api , and it doesn't really make sense to
include it in the data.
Now, we just need to introduce two hooks to abstract away the Contexts:

const useNavigationData = () => useContext(ContextData);
const useNavigationApi = () => useContext(ContextApi);
Then, in our SomeComponent, we can use the open function freely. It will trigger expand/collapse as intended, but SomeComponent won't re-render because of it:

const SomeComponent = () => {
  // the "api" value never changes, so this component
  // won't re-render on navigation state change
  const { open } = useNavigationApi();

  return ...
}
And the AdjustableColumnsBlock, which actually needs the state, will use the data hook:

const AdjustableColumnsBlock = () => {
  const { isNavExpanded } = useNavigationData();

  return isNavExpanded ? <TwoColumns /> : <ThreeColumns />;
};
This isn't ideal, though. Now, anyone who tries to use that state would have
to implement that toggle functionality themselves:
const ExpandButton = () => {
  const { isNavExpanded } = useNavigationData();
  const { open, close } = useNavigationApi();

  return (
    <button onClick={isNavExpanded ? close : open}>
      {isNavExpanded ? 'Collapse' : 'Expand'}
    </button>
  );
};
This doesn't make much sense. Ideally, the navigation's API should be able
to handle common cases like this by itself.
And it can! All we need is to switch our regular state management from the useState hook to useReducer. Instead of:

const [isNavExpanded, setIsNavExpanded] = useState();

we'll introduce a reducer and dispatch actions:

const [state, dispatch] = useReducer(reducer, {
  isNavExpanded: false,
});

const toggle = () => dispatch({ type: 'toggle-sidebar' });
const open = () => dispatch({ type: 'open-sidebar' });
const close = () => dispatch({ type: 'close-sidebar' });
Notice how none of the functions depend on the state anymore, including
the toggle . All they do is dispatch an action.
Then, we'd introduce the reducer function, inside of which we'll perform all
the state manipulations for all of our actions. The reducer function
controls and changes that state. The function accepts the state it needs to
transform and the "actions" value: the value that we use in the dispatch
above.
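As a sketch of such a reducer (the action names here are assumptions for illustration), flipping the navigation state could look like this:

```javascript
// the reducer receives the current state and an action,
// and returns the new state for that action
const reducer = (state, action) => {
  switch (action.type) {
    case 'open-sidebar':
      return { ...state, isNavExpanded: true };
    case 'close-sidebar':
      return { ...state, isNavExpanded: false };
    case 'toggle-sidebar':
      // the toggle logic lives here now: the reducer reads the current
      // state it receives, not a state closed over in a component
      return { ...state, isNavExpanded: !state.isNavExpanded };
    default:
      return state;
  }
};

reducer({ isNavExpanded: false }, { type: 'toggle-sidebar' }); // { isNavExpanded: true }
```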
And now, when we pass that api value to the provider, none of the
consumers of that Context will re-render on state change: the value never
changes! And we can safely use the toggle function everywhere, without
the fear of causing performance problems in the app.
Interactive example and full code
https://advanced-react.com/examples/08/06
This reducer pattern is especially powerful when you have multiple state
variables and more complex actions to perform on the state, rather than just
flipping a boolean from false to true . But from the re-render
perspective, it's the same as useState : updating the state through
dispatch will force the component to re-render.
Context selectors
But what if you don't want to migrate your state to reducers or split
providers? What if you only need to occasionally use one of the values from
Context in a performance-sensitive area, and the rest of the app is fine? If I
want to close my navigation and force the page to go into full-screen mode
when I focus on some heavy editor component, for example? Splitting providers and going with reducers seems like too extreme a change just to be able to use the open function from Context once, without re-renders.
In something like Redux, we'd use memoized state selectors in this case.
Unfortunately, for Context, this won't work - any change in context value
will trigger the re-render of every consumer.
There is, however, a trick that can mimic the desired behavior and allow us
to select a value from Context that doesn't cause the component to re-
render. We can leverage the power of Higher Order Components for this!
First, we'll create a higher-order component. Second, inside it, we'll use our Context to extract the open function from the provider and pass it as a prop to the component from the arguments:
const withNavigationOpen = (AnyComponent) => {
  return (props) => {
    // access Context here - it's just another component
    const { open } = useContext(Context);

    return <AnyComponent {...props} openNav={open} />;
  };
};
Now, every component that is wrapped in that HOC will have the openNav prop:

// the openNav prop is injected by the higher-order component
const SomeHeavyComponent = withNavigationOpen(
  ({ openNav }) => {
    return <button onClick={openNav}>open side navigation</button>;
  },
);
But that alone won't save us from re-renders: the wrapper component still consumes the Context. The final piece of the puzzle is to memoize the component from the arguments inside the higher-order component:

const withNavigationOpen = (AnyComponent) => {
  // wrap the component from the arguments in React.memo here
  const AnyComponentMemo = React.memo(AnyComponent);

  return (props) => {
    const { open } = useContext(Context);

    // return the memoized component here
    // now it won't re-render because of Context changes
    // make sure that whatever is passed as props here doesn't change between re-renders!
    return <AnyComponentMemo {...props} openNav={open} />;
  };
};
Now, when the Context value changes, the component that uses anything
from Context will still re-render: our unnamed component that we return
from the withNavigationOpen function. But this component renders
another component that is memoized. So if its props don't change, it won't
re-render because of this re-render. And the props won't change: those that
are spread are coming from "outside", so they won't be affected by the
context change. And the open function is memoized inside the Context
provider itself.
Key takeaways
I hope this chapter has given you an idea of how useful Context can be
when it comes to re-renders. And for reducing props on components, for
that matter. I'm not advocating for using Context everywhere, of course: its
caveats are pretty serious. So for larger, more complex apps, it's probably
better to go with an external state management solution right away. Any
solution that supports memoized selectors. But it could work for smaller
apps, where you have just a few places that could benefit from the Context
mental model.
One of the many beautiful things about React is that it abstracts away the
complexity of dealing with the real DOM. Instead of manually querying
elements, scratching our heads over how to add classes to those elements,
or struggling with browser inconsistencies, we can just write components
and focus on the user experience now. There are, however, still cases (very
few though!) when we need to get access to the actual DOM.
Now, React gives us a lot, but it doesn't give us everything. Things like
"focus an element manually" or "shake that element" are not part of the
package. For that, we need to dust off our rusty native JavaScript API skills.
And for that, we need access to the actual DOM element.
const element = document.getElementById('bla');
element.focus();
Or scroll to it:
element.scrollIntoView();
Or anything else our heart desires. Some typical use cases for using the native DOM API in the React world would include things like focusing an element after it appears on the screen, detecting a click outside of a component, or manually measuring the sizes and boundaries of elements.
What is Ref?
A Ref is a mutable object that React preserves between re-renders.
Remember that everything declared within a component will be re-created
all the time?
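That "re-created all the time" behavior can be simulated with plain function calls, treating each call as one "render" (a simplified model for illustration, not React itself):

```javascript
// each "render" call re-creates localObject from scratch
const render = () => {
  const localObject = { id: 'test' };
  return localObject;
};

console.log(render() === render()); // false: a brand new object on every call

// a ref-like object created once outside survives all calls
const ref = { current: 0 };
const renderWithRef = () => {
  ref.current += 1;
  return ref;
};

console.log(renderWithRef() === renderWithRef()); // true: the same object every time
```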
To create a Ref, we can use the useRef hook with the Ref's initial value
passed to it:
const Component = () => {
  const ref = useRef({ id: 'test' });
};

That initial value will now be available via the ref.current property: everything that we pass to the Ref is stored there.

const Component = () => {
  const ref = useRef({ id: 'test' });

  useEffect(() => {
    // access it here
    console.log(ref.current);
  });
};
And all of this looks awfully similar to state, doesn't it? Just the API is different. What's the catch, then? Why do we use state everywhere, but Ref is considered an escape hatch that should not be used? Let's figure this out first before making our form too fancy. Maybe we don't need the state there at all?
Now, in order for our submit to work, we need to extract the input field content somehow. In React, normally, we'd just add an onChange callback to the input, save that information in the state so that it's preserved between re-renders, and then access it in the submit function:

const Form = () => {
  const [value, setValue] = useState();

  const onChange = (e) => {
    setValue(e.target.value);
  };

  const submit = () => {
    // send the value to the backend here
    console.log(value);
  };

  return (
    <>
      <input type="text" onChange={onChange} />
      <button onClick={submit}>submit</button>
    </>
  );
};
But I mentioned a few times already that whatever we store in a Ref is also preserved between re-renders. And, conveniently, anything can be assigned to a Ref. What will happen if I just save the value from the input there instead of the state?

const Form = () => {
  const ref = useRef();

  const onChange = (e) => {
    // save it in the Ref instead of state
    ref.current = e.target.value;
  };

  const submit = () => {
    // read the value from the Ref instead of state
    console.log(ref.current);
  };

  return (
    <>
      <input type="text" onChange={onChange} />
      <button onClick={submit}>submit</button>
    </>
  );
};
It seems to work exactly as with state: I type something in the input field,
then press the button, and the value is sent.
So what's the difference? And why don't we usually see this pattern in our apps? A few reasons for this.

The first one: a Ref update doesn't trigger a re-render. If we add a log to the Form component:

useEffect(() => {
  console.log('Form component re-renders');
});

we'll see that typing in the input field doesn't re-render the Form when the value is stored in a Ref.
On the surface, this seems like great news. Isn't like half of this book
dedicated to re-renders and how to escape them? If Refs don't cause re-
renders, surely they are the solution to all our performance problems?
Not at all. If you remember from the first chapter, re-render is a crucial
piece of the React lifecycle. This is how React updates our components
with new information. If, for example, I want to show the number of letters
typed into the text field under the field, I can't do this with Refs.
const Form = () => {
  const ref = useRef();
  const numberOfLetters = ref.current?.length ?? 0;

  const onChange = (e) => {
    ref.current = e.target.value;
  };

  const submit = () => {
    console.log(ref.current);
  };

  return (
    <>
      <input type="text" onChange={onChange} />
      {/* Not going to work */}
      Characters count: {numberOfLetters}
      <button onClick={submit}>submit</button>
    </>
  );
};

A Ref update doesn't cause re-renders, so our return output will always show 0 for numberOfLetters.
It gets even more interesting than that. A change in the Ref's value won't be picked up by downstream components either, if it's passed down as a primitive prop value. Imagine we have a SearchResults component that accepts the search value as a prop:
const SearchResults = ({ search }) => {
  const [showResults, setShowResults] = useState(false);

  return (
    <>
      Searching for: {search}
      <br />
      {/* This will trigger re-render */}
      <button onClick={() => setShowResults(!showResults)}>
        show results
      </button>
    </>
  );
};
If I use that component in our Form where we saved the value in Ref, it just
won't work.
const Form = () => {
  const ref = useRef();

  const onChange = (e) => {
    ref.current = e.target.value;
  };

  return (
    <>
      <input type="text" onChange={onChange} />
      {/* will never be updated */}
      <SearchResults search={ref.current} />
    </>
  );
};
Another difference: a Ref update is synchronous, while a state update is asynchronous. It becomes very visible when you try to access state and Ref values in the onChange callback after setting both of them:

const onChange = (e) => {
  console.log('state before', value);
  setValue(e.target.value);
  // the state is not updated yet: prints the same value as "before"
  console.log('state after', value);
};
Both "before" and "after" values in the code above will be the same. When
we call setValue , we're not updating the state right away. We're just letting
React know that it needs to schedule a state update with the new data after
it's done with whatever it's doing now.
With a Ref, on the other hand, the update is synchronous:

const onChange = (e) => {
  console.log('ref before', ref.current);
  ref.current = e.target.value;
  // the Ref is already updated here
  console.log('ref after', ref.current);
};

We modified an object, so the data in that object is available right away, but nothing from the React lifecycle is triggered.
So, when can we actually use Refs to store something, then? Ask yourself two questions: is this value used for rendering components' output, now or in the future? And is this value passed to other components as a prop, now or in the future? If the answer to both of these questions is "nope", then Ref is okay to use.

We can use Ref, for example, to store some "dev" information about components. Maybe we're interested in counting how many times a component renders:
const Component = () => {
  const ref = useRef(0);

  useEffect(() => {
    ref.current = ref.current + 1;
    console.log('Render number', ref.current);
  });
};
Or we can implement the classic usePrevious hook that returns the previous value of something:

const usePrevious = (value) => {
  const ref = useRef();

  useEffect(() => {
    // this will be changed after the value is returned
    ref.current = value;
  }, [value]);

  return ref.current;
};
And then use it, for example, to compare the current value of a prop with its previous value:

const Component = ({ value }) => {
  const previousValue = usePrevious(value);

  useEffect(() => {
    if (previousValue.length > value.length) {
      console.log('Text was deleted');
    } else {
      console.log('Text was added');
    }
  }, [previousValue, value]);
};
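The mechanics of this "return first, update after" behavior can be mimicked in plain JavaScript with a persistent ref-like object (a simplified model for illustration, not the actual hook):

```javascript
// returns a tracker function: each call returns the previous value
// it was called with, then remembers the new one
const createPreviousTracker = () => {
  const ref = { current: undefined };

  return (value) => {
    const previous = ref.current;
    // changed only after the previous value is captured,
    // just like the Ref mutation inside useEffect
    ref.current = value;
    return previous;
  };
};

const track = createPreviousTracker();
track('a'); // undefined: no previous value yet
track('ab'); // 'a'
track('a'); // 'ab': previous was longer, so "text was deleted"
```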
And, of course, assign DOM elements to Ref. This is one of Ref's most
important and most popular use cases.
const Component = () => {
  const ref = useRef(null);

  useEffect(() => {
    // this will be a reference to the input DOM element!
    // exactly the same as if I did getElementById for it
    console.log(ref.current);
  });

  return <input ref={ref} />;
};
The important thing to remember here is that ref will be assigned only after the element is rendered by React and its associated DOM element is created. We need something to assign to that Ref, don't we? That means that the ref.current value won't be available right away, and logic like this will just not work:

const Component = () => {
  const ref = useRef(null);

  // trying to access the Ref during render: ref.current is still null here
  ref.current.focus();

  return <input ref={ref} />;
};
We should only read and write ref.current either in the useEffect hook
or in callbacks.
const Form = () => {
  const [name, setName] = useState('');
  const ref = useRef(null);

  const onSubmitClick = () => {
    if (!name) {
      // focus the input if the name is empty
      ref.current.focus();
    } else {
      // submit the data here!
    }
  };

  return (
    <>
      ...
      <input
        onChange={(e) => setName(e.target.value)}
        ref={ref}
      />
      <button onClick={onSubmitClick}>
        Submit the form!
      </button>
    </>
  );
};
Store the values from inputs in the state, create refs for all inputs, and when
the "submit" button is clicked, I would check whether the values are not
empty, and if they are - focus the needed input.
Interactive example and full code
https://advanced-react.com/examples/09/05
Things get more complicated, though, when we extract the input into its own InputField component:

const Form = () => {
  const [name, setName] = useState('');

  const onSubmitClick = () => {
    // deal with the submit here
  };

  return (
    <>
      <InputField label="name" onChange={setName} />
      <button onClick={onSubmitClick}>
        Submit the form!
      </button>
    </>
  );
};
How can I tell the input to "focus itself" from the Form component? The
"normal" way to control data and behavior in React is to pass props to
components and listen to callbacks. I could try to pass the prop "focusItself"
to InputField that I would switch from false to true , but that would
only work once.
const InputField = ({ onChange, focusItself }) => {
  const inputRef = useRef(null);

  useEffect(() => {
    if (focusItself) {
      // focus input if the focusItself prop changes
      // will work only once, when false changes to true
      inputRef.current.focus();
    }
  }, [focusItself]);

  // the rest is the same here
};
I could try to add an "onBlur" callback and reset that focusItself prop to false when the input loses focus, or play around with random values instead of a boolean, or come up with some other creative solution. Or, I can just create the Ref in the Form component instead and pass it down:

const Form = () => {
  // create the Ref in the Form component
  const inputRef = useRef(null);
  ...
}
And the InputField component will have a prop that accepts the Ref and will render an input field that expects a Ref as well. Only the Ref, instead of being created in InputField, will be coming from props:

const InputField = ({ inputRef }) => {
  // the rest of the code is the same

  // pass the Ref from props to the internal input
  return <input ref={inputRef} ... />;
};
Ref is a mutable object and was designed that way. When we pass it to an
element, React underneath just mutates it. And the object that is going to be
mutated is declared in the Form component. So as soon as InputField is
rendered, the Ref object will mutate, and our Form will have access to the
input DOM element in inputRef.current :
const Form = () => {
  const inputRef = useRef(null);

  useEffect(() => {
    // the "input" element that is rendered inside InputField will be here
    console.log(inputRef.current);
  }, []);

  return (
    <>
      {/* Pass Ref as prop to the input field component */}
      <InputField inputRef={inputRef} />
    </>
  );
};
If we want to pass the Ref through the actual ref attribute instead of a regular prop, we need to signal to React that this ref is actually intentional and that we want to do stuff with it. We can do it with the help of the forwardRef function: it accepts our component and injects the Ref from the ref attribute as the second argument of the component's function, right after the props:

// pass the entire component to forwardRef
const InputField = forwardRef((props, ref) => {
  // the "ref" here is the value that was passed
  // through the ref attribute of InputField

  // pass it to the internal input
  return <input ref={ref} ... />;
});
We could even split the above code into two variables for better readability:

const InputFieldWithRef = (props, ref) => {
  // the rest is the same
};

// this is the one that will be used by the Form
export const InputField = forwardRef(InputFieldWithRef);
And now the Form can just pass the Ref to the InputField component as if it were a regular DOM element:

<InputField onChange={setName} ref={inputRef} />
Whether you should use forwardRef or simply pass the Ref as a prop is
just a matter of personal taste: the end result is the same.
Speaking of focus, why does my Form component still have to deal with
the native DOM API to trigger it? Isn't it the responsibility and the whole
point of the InputField to abstract away complexities like this? Why does
the form even have access to the underlying DOM element - it's basically
leaking internal implementation details. The Form component shouldn't
care which DOM element we're using or whether we even use DOM
elements or something else at all. Separation of concerns, you know.
Looks like it's time to implement a proper imperative API for our InputField component. React is declarative and expects us to write our code accordingly. But sometimes we just need a way to trigger something imperatively. Fortunately, React gives us an escape hatch for this: the useImperativeHandle[9] hook. It looks something like this:

useImperativeHandle(
  ref,
  () => ({
    focus: () => {},
    shake: () => {},
  }),
  [],
);
The first argument is our Ref, which is either created in the component
itself, passed from props, or through forwardRef . The second argument is
a function that returns an object - this is the object that will be available as
inputRef.current . The third argument is the array of dependencies, same
as any other React hook.
For our component, let's pass the Ref explicitly as the apiRef prop. And the only thing that is left to do is to implement the actual API. For that, we'll need another Ref - this time internal to InputField, so that we can attach it to the input DOM element and trigger focus as usual:

const InputField = ({ apiRef, ...props }) => {
  // internal Ref, to be attached to the input DOM element
  const inputRef = useRef(null);

  // "merge" our imperative API into apiRef
  useImperativeHandle(
    apiRef,
    () => ({
      focus: () => inputRef.current.focus(),
      shake: () => {
        // shake the input here
      },
    }),
    [],
  );

  return <input ref={inputRef} ... />;
}
Voila! Our Form can just create a ref , pass it to InputField , and will be
able to do simple inputRef.current.focus() and
inputRef.current.shake() , without worrying about their internal
implementation!
const Form = () => {
  const inputRef = useRef(null);
  const [name, setName] = useState('');

  const onSubmitClick = () => {
    if (!name) {
      // shake the input if the name is empty
      inputRef.current.shake();
    } else {
      // submit the data here!
    }
  };

  return (
    <>
      <InputField
        label="name"
        onChange={setName}
        apiRef={inputRef}
      />
      <button onClick={onSubmitClick}>
        Submit the form!
      </button>
    </>
  );
};
Interactive example and full code
https://advanced-react.com/examples/09/08
Pretty cool trick, isn't it? Just remember: the imperative way to trigger
something is more of an escape hatch in React. In 99% of cases, the normal
props/callbacks data flow is more than enough.
Key takeaways
In the next chapter, we'll dive deeper into how to use Refs for storing
functions rather than values, and what the consequences of that are. In the
meantime, a few things to take away:
A Ref is just a mutable object that can store any value. That value will
be preserved between re-renders.
A Ref's update doesn't trigger re-renders and is synchronous.
We can assign a Ref to a DOM element via the ref attribute. After
that element is rendered, we'll see that element in the ref.current
property.
We can pass Refs as regular props to any component.
If we want to pass it as the actual ref prop, we need to wrap that
component in forwardRef . Otherwise, it won't work on functional
components. The second argument of that component will be the ref
itself, which we then need to pass down to the desired DOM element.
Or, we can always just mutate that Ref manually in the useEffect hook:

const Component = () => {
  const ref = useRef(null);

  useEffect(() => {
    // mutate the Ref manually after the elements are rendered
    ref.current = document.getElementById('some-id');
  }, []);
};
In the previous chapter, we learned everything about Refs: what they are,
why we need them, when to use them, and when not to. However, when it
comes to preserving something between re-renders, especially in Refs, there
is one additional topic that we need to discuss: functions. More specifically,
closures and how their existence affects our code.
Let's take a look at a few very interesting and quite typical bugs, how they
appear, and in the process, learn:
What closures are, how they appear, and why we need them.
What a stale closure is, and why they occur.
What the common scenarios in React are that cause stale closures, and
how to fight them.
Warning: if you've never dealt with closures in React, this chapter might
make your brain explode. Make sure to have enough chocolate with you to
stimulate brain cells while you're reading this.
The problem
Imagine you're implementing a form with a few input fields. One of the
fields is a very heavy component from some external library. You don't have
access to its internals, so you can't fix its performance problems. But you
really need it in your form, so you decide to wrap it in React.memo , to
minimize its re-renders when the state in your form changes. Something
like this:
const HeavyComponentMemo = React.memo(HeavyComponent);

const Form = () => {
  const [value, setValue] = useState();

  return (
    <>
      <input
        type="text"
        value={value}
        onChange={(e) => setValue(e.target.value)}
      />
      <HeavyComponentMemo />
    </>
  );
};
So far, so good. This Heavy component accepts just one string prop, let's
say title , and an onClick callback. This one is triggered when you click
a "done" button inside that component. And you want to submit your form
data when this click happens. Also easy enough: just pass the title and
onClick props to it.
const HeavyComponentMemo = React.memo(HeavyComponent);

const Form = () => {
  const [value, setValue] = useState();

  const onClick = () => {
    // submit our form data here
    console.log(value);
  };

  return (
    <>
      <input
        type="text"
        value={value}
        onChange={(e) => setValue(e.target.value)}
      />
      <HeavyComponentMemo
        title="Welcome to the form"
        onClick={onClick}
      />
    </>
  );
};
But we also know that the useCallback hook should have all of its dependencies declared in the dependencies array. So if we want to submit our form data inside, we have to declare that data as a dependency:

const onClick = useCallback(() => {
  // submit data here
  console.log(value);
}, [value]);
And here's the dilemma: even though our onClick is memoized, it still
changes every time someone types in our input. So our performance
optimization is useless.
Okay, fair enough, let's look for other solutions. React.memo has a thing
called comparison function[10]. It allows us more granular control over props
comparison in React.memo . Normally, React compares all "before" props
with all "after" props by itself. If we provide this function, it will rely on its
return result instead. If it returns true , then React will know that props are
the same, and the component shouldn't be re-rendered. Sounds like exactly what we need.
We only have one prop that we care about updating there, our title , so it's
not going to be that complicated:
const HeavyComponentMemo = React.memo(
  HeavyComponent,
  (before, after) => {
    return before.title === after.title;
  },
);
The code for the entire form will then look something like this:
const HeavyComponentMemo = React.memo(
  HeavyComponent,
  (before, after) => {
    return before.title === after.title;
  },
);

const Form = () => {
  const [value, setValue] = useState();

  const onClick = () => {
    // submit our form data here
    console.log(value);
  };

  return (
    <>
      <input
        type="text"
        value={value}
        onChange={(e) => setValue(e.target.value)}
      />
      <HeavyComponentMemo
        title="Welcome to the form"
        onClick={onClick}
      />
    </>
  );
};
Except for one tiny problem: it doesn't actually work. If you type something
in the input and then press that button, the value that we log in onClick is
undefined . But it can't really be undefined: the input works as expected, and if I add a console.log outside of onClick , it logs the value correctly. Just not inside onClick .
This is known as the "stale closure" problem. And in order to fix it, we first
need to dig a bit into probably the most feared topic in JavaScript: closures
and how they work.
function something() {
//
}
const something = () => {};
By doing that, we created a local scope: an area in our code where variables
declared inside won't be visible from the outside.
const something = () => {
  const inside = () => {
    const value = 'text';
  };

  console.log(value); // not going to work, "value" is local to the "inside" function
};

And if, instead, we declare a function that accepts a value, creates an inside function that uses it, and returns that function, we get a closure:

const something = (value) => {
  const inside = () => {
    console.log(value);
  };

  return inside;
};
We call our something function with the value "first" and assign the result
to a variable. The result is a reference to a function declared inside. A
closure is formed. From now on, as long as the first variable that holds
that reference exists, the value "first" that we passed to it is frozen, and the
inside function will have access to it.
The same story with the second call: we pass a different value, a closure is
formed, and the function returned will forever have access to that variable.
This is true for any variable declared locally inside the something
function:
const first = something('first');
const second = something('second');
It's like taking a photograph of some dynamic scene: as soon as you press
the button, the entire scene is "frozen" in the picture forever. The next press
of the button will not change anything in the previously taken picture.
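The photograph analogy maps directly onto code. Here is a self-contained sketch of the something function described above, returning the value instead of logging it so the result is easy to inspect:

```javascript
// Each call to `something` creates a brand new `inside` function
// that closes over the `value` passed to that particular call.
const something = (value) => {
  const inside = () => value; // the closure "photographs" value

  return inside;
};

const first = something('first');
const second = something('second');

// Each closure keeps the value it was created with, forever:
first(); // 'first'
second(); // 'second'
```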
In React, we're creating closures all the time without even realizing it.
Every single callback function declared inside a component is a closure:
const Component = () => {
  useEffect(() => {
    // closure!
  });
};
All of them will have access to state, props, and local variables declared in
the component:
const Component = () => {
  const [state, setState] = useState();

  useEffect(() => {
    // perfectly fine
    console.log(state);
  });
};
So what is the problem, then? Why are closures one of the most terrifying
things in JavaScript and a source of pain for so many developers?
It's because closures live for as long as a reference to the function that
caused them exists. And the reference to a function is just a value that can
be assigned to anything. Let's twist our brains a bit. Here's our function
from above, that returns a perfectly innocent closure:
const something = (value) => {
  const inside = () => {
    console.log(value);
  };

  return inside;
};
But the inside function is re-created there with every something call.
What will happen if I decide to fight it and cache it? Something like this:
const cache = {};

const something = (value) => {
  if (!cache.current) {
    cache.current = () => {
      console.log(value);
    };
  }

  return cache.current;
};

const first = something('first');
const second = something('second');
const third = something('third');

first(); // logs "first"
second(); // logs "first"
third(); // logs "first"
No matter how many times we call the something function with different
arguments, the logged value is always the first one!
When we call the something function the next time, instead of creating a
new function with a new closure, we return the one that we created before.
The one that was frozen with the "first" variable forever.
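To see the staleness concretely, here is that naive cache as a runnable check (again returning the value rather than logging it):

```javascript
// The closure is created on the very first call only...
const cache = {};

const something = (value) => {
  if (!cache.current) {
    cache.current = () => value;
  }

  // ...and every later call returns that same frozen closure
  return cache.current;
};

const first = something('first');
const second = something('second');
const third = something('third');

third(); // 'first' — the cached closure never saw the later values
```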
In order to fix this behavior, we'd want to re-create the function and its
closure every time the value changes. Something like this:
let prevValue;

const something = (value) => {
  if (!cache.current || value !== prevValue) {
    // refresh it
    prevValue = value;
    cache.current = () => {
      console.log(value);
    };
  }

  return cache.current;
};
Save the value in a variable so that we can compare the next value with the
previous one. And then refresh the cache.current closure if the variable
has changed.
const first = something('first');
const anotherFirst = something('first');
const second = something('second');
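Here is that refresh logic as a runnable sketch (returning the value instead of logging it, so the identities and results can be checked):

```javascript
// Re-create the closure only when the value actually changes —
// exactly what the useCallback dependencies array does for us.
const cache = {};
let prevValue;

const something = (value) => {
  if (!cache.current || value !== prevValue) {
    prevValue = value;
    cache.current = () => value;
  }

  return cache.current;
};

const first = something('first');
const anotherFirst = something('first'); // value unchanged: same reference
const second = something('second'); // value changed: a fresh closure

first === anotherFirst; // true
first === second; // false
```

Note that the first closure is still alive and still returns 'first': a new closure replaced it in the cache, but anything holding the old reference keeps the old "photograph".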
If we need access to state or props inside this function, we need to add them
to the dependencies array:
This dependencies array is what makes React refresh that cached closure,
exactly as we did when we compared value !== prevValue . If I forget
about that array, our closure becomes stale:
And every time I trigger that callback, all that will be logged is undefined .
What will happen if I try to use Ref for that onClick callback instead of
useCallback hook? It's sometimes what the articles on the internet
recommend doing to memoize props on components. On the surface, it does
look simpler: just pass a function to useRef and access it through
ref.current . No dependencies, no worries.
return ref.current;
};
So, in this case, the closure that was formed at the very beginning, when the
component was just mounted, will be preserved and never refreshed. When
we try to access the state or props inside that function stored in Ref, we'll
only get their initial values:
const Component = ({ someProp }) => {
  const [state, setState] = useState();
To fix this, we need to ensure that we update that ref value every time
something that we try to access inside changes. Essentially, we need to
implement what the dependencies array functionality does for the
useCallback hook.
useEffect(() => {
  // update the closure when state or props change
  ref.current = () => {
    console.log(someProp);
    console.log(state);
  };
}, [state, someProp]);
};
return (
  <>
    <input
      type="text"
      value={value}
      onChange={(e) => setValue(e.target.value)}
    />
    <HeavyComponentMemo
      title="Welcome to the form"
      onClick={onClick}
    />
  </>
);
};
Every time we click on the button, we log "undefined". Our value inside
onClick is never updated. Can you tell why now?
It's a stale closure again, of course. When we create onClick , the closure is
first formed with the default state value, i.e., "undefined". We pass that
closure to our memoized component, along with the title prop. Inside the
comparison function, we compare only the title . It never changes, it's just
a string. The comparison function always returns true , HeavyComponent
is never updated, and as a result, it holds the reference to the very first
onClick closure, with the frozen "undefined" value.
Now that we know the problem, how do we fix it? Easier said than done
here…
However, in this case, it would mean we're just reimplementing the React
default behavior and doing exactly what React.memo without the
comparison function does. So we can just ditch it and leave it only as
React.memo(HeavyComponent) .
But doing that means that we need to wrap our onClick in useCallback .
But it depends on the state, so it will change with every keystroke. We're
back to square one: our heavy component will re-render on every state
change, exactly what we tried to avoid.
We could play around with composition and try to extract and isolate either
state or HeavyComponent . The techniques we explored in the first few
chapters. But it won't be easy: input and HeavyComponent both depend on
that state.
We can try many other things. But we don't have to do any heavy refactoring to escape that closure trap. There is one cool trick that can help us here.
Let's get rid of the comparison function in our React.memo and onClick
implementation for now. Just a pure component with state and memoized
HeavyComponent :
const HeavyComponentMemo = React.memo(HeavyComponent);

const Form = () => {
  const [value, setValue] = useState();

  return (
    <>
      <input
        type="text"
        value={value}
        onChange={(e) => setValue(e.target.value)}
      />
      <HeavyComponentMemo
        title="Welcome to the form"
        onClick={...}
      />
    </>
  );
}
Now we need to add an onClick function that is stable between re-renders
but also has access to the latest state without re-creating itself.
We're going to store it in a Ref, so let's add one. Empty for now:

const ref = useRef();
In order for the function to have access to the latest state, it needs to be re-
created with every re-render. There is no getting away from it, it's the nature
of closures, nothing to do with React. We're supposed to modify Refs inside
useEffect , not directly in render, so let's do that.
useEffect(() => {
  // our callback that we want to trigger with state
  ref.current = () => {
    console.log(value);
  };

  // no dependencies array!
});
};
But we can't just pass that ref.current to the memoized component. That
value will differ with every re-render, so memoization just won't work.
useEffect(() => {
  ref.current = () => {
    console.log(value);
  };
});

return (
  <>
    {/* Can't do that, will break memoization */}
    <HeavyComponentMemo onClick={ref.current} />
  </>
);
};
useEffect(() => {
  ref.current = () => {
    console.log(value);
  };
});

return (
  <>
    {/* Now memoization will work, onClick never changes */}
    <HeavyComponentMemo onClick={onClick} />
  </>
);
};
And here's the magic trick: all we need to make it work is to call
ref.current inside that memoized callback:
useEffect(() => {
  ref.current = () => {
    console.log(value);
  };
});

const onClick = useCallback(() => {
  // call the ref here
  ref.current();
}, []);
But when a closure freezes everything around it, it doesn't make objects
immutable or frozen. Objects are stored in a different part of the memory,
and multiple variables can contain references to exactly the same object.
If I mutate the object through one of the references and then access it
through another, the changes will be there:
const a = { value: 'one' };
const b = a;

a.value = 'two';
console.log(b.value); // will be "two"
In our case, even that doesn't happen: we have exactly the same reference
inside useCallback and inside useEffect . So when we mutate the
current property of the ref object inside useEffect , we can access that
exact property inside our useCallback . This property happens to be a
closure that captured the latest state data.
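Stripped of React, the whole trick fits in a few lines of plain JavaScript. The names stableCallback and render below are mine, used only to model a memoized callback and re-renders:

```javascript
// One shared mutable object, like useRef's return value.
const ref = { current: () => undefined };

// Created once and never re-created, like the useCallback with [] deps.
// It closes over `ref` itself, and `ref` never changes.
const stableCallback = () => ref.current();

// Every "re-render" swaps ref.current for a fresh closure over the
// latest value, the same way the useEffect in the chapter does.
const render = (value) => {
  ref.current = () => value;
};

render('initial');
render('latest state');

stableCallback(); // 'latest state' — stable identity, fresh data
```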
useEffect(() => {
  ref.current = () => {
    // will be latest
    console.log(value);
  };
});

return (
  <>
    <input
      type="text"
      value={value}
      onChange={(e) => setValue(e.target.value)}
    />
    <HeavyComponentMemo
      title="Welcome closures"
      onClick={onClick}
    />
  </>
);
};
Now, we have the best of both worlds: the heavy component is properly
memoized and doesn't re-render with every state change. And the onClick
callback on it has access to the latest data in the component without ruining
memoization. We can safely send everything we need to the backend now!
In the previous chapters, we covered in detail what Ref is, how to use it,
and how not to use it. There is, however, one very important and quite
common use case for Refs that we haven't covered yet. It's storing various
timers and timeout ids when dealing with functions like setInterval or
debounce . It's a very common scenario for various form elements. We
usually would want to debounce/throttle inputs' onChange callbacks, for
example, so that the form is not re-rendered with every keystroke.
What debouncing and throttling are, and what the difference between
them is (a very quick knowledge refresh).
Why we can't just use debounce directly on our event handlers.
How to use useMemo or useCallback for that, and what the
downsides of those are.
How to use Refs for debouncing, and what the difference between Refs
and using useMemo and useCallback is.
How to use the closure trap escape trick for implementing debouncing.
Instead of sending that request on every keypress, we can wait a little bit
until the user stops typing, and then send the entire value in one go. This is
what debouncing does. If I apply debounce to my onChange function, it
will detect every attempt I make to call it, and if the waiting interval hasn't
passed yet, it will drop the previous call and restart the "waiting" clock.
const debouncedOnChange = debounce(onChange, 500);
Before, if I was typing "React" in the search field, the requests to the
backend would be on every keypress instantaneously, with the values "R",
"Re", "Rea", "Reac", "React". Now, after I debounced it, it will wait 500 ms
after I stopped typing "React" and then send only one request with the value
"React".
const debounce = (callback, wait) => {
  let timer;

  const debouncedFunc = (...args) => {
    // every new call drops the previous one and restarts the clock
    clearTimeout(timer);
    timer = setTimeout(() => {
      callback(...args);
    }, wait);
  };

  return debouncedFunc;
};
The actual implementation is, of course, a bit more complicated. You can
check out the lodash debounce code[13] to get a sense of it.
Throttle is very similar, and the idea of keeping the internal tracker and a
function that returns a function is the same. The difference is that
throttle guarantees to call the callback function regularly, every wait
interval, whereas debounce will constantly reset the timer and wait until
the end.
The difference will be obvious if we use not an async search example, but
an editing field with auto-save functionality: if a user types something in
the field, we want to send requests to the backend to save whatever they
type "on the fly", without them pressing the "save" button explicitly. If a
user is writing a poem in a field like that really, really fast, the "debounced"
onChange callback will be triggered only once. And if something breaks
while typing, the entire poem will be lost. The "throttled" callback will be
triggered periodically, the poem will be regularly saved, and if a disaster
occurs, only the last few milliseconds of the poem will be lost. A much safer approach.
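Throttle itself can be written in the same return-a-function style. This is a leading-edge-only sketch for illustration (lodash's throttle also handles the trailing call, which this version skips):

```javascript
const throttle = (callback, wait) => {
  let lastCall = 0; // the internal tracker, kept in the closure

  return (...args) => {
    const now = Date.now();

    // fire at most once per `wait` interval, dropping the rest
    if (now - lastCall >= wait) {
      lastCall = now;
      callback(...args);
    }
  };
};

const saves = [];
const throttledSave = throttle((text) => saves.push(text), 1000);

// a poet typing really, really fast:
throttledSave('roses');
throttledSave('roses are');
throttledSave('roses are red');

saves; // ['roses'] — only one call went through within the interval
```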
Interactive example and full code
https://advanced-react.com/examples/11/01
First of all, let's take a closer look at the Input implementation that has a debounced onChange callback (from now on, I'll only use debounce in all examples; every concept described is also relevant for throttle).
const Input = () => {
  const onChange = (e) => {
    // send request to the backend here
  };

  const debouncedOnChange = debounce(onChange, 500);

  return <input onChange={debouncedOnChange} />;
};
While the example works perfectly and seems like regular React code with
no caveats, it unfortunately has nothing to do with real life. In real life,
more likely than not, you'd want to do something with the value from the
input other than sending it to the backend. Maybe this input will be part of a
large form. Or you'd want to introduce a "clear" button there. Or maybe the input tag is actually a component from some external library that requires the value field.
What I'm trying to say here is that at some point, you'd want to save that
value into the state, either in the Input component itself or pass it to
parent/external state management to manage it instead. Let's do it in the
Input component for simplicity.
I added a state value via the useState hook and passed that value to the
input field. One thing left to do is for the input to update that state on
typing. Otherwise, the input won't work. Normally, without debounce, it
would be done in the onChange callback:
return (
  <input onChange={debouncedOnChange} value={value} />
);
};
I have to call setValue immediately when input calls its own onChange .
This means I can't debounce our onChange function anymore in its entirety
and can only debounce the part that I actually need to slow down: sending
requests to the backend.
Seems logical. Only... it doesn't work either! Now the request is not
debounced at all, just delayed a bit. If I type "React" in this field, I will still
send all "R", "Re", "Rea", "Reac", "React" requests instead of just one
"React," as a properly debounced function should, only delayed by half a
second.
const onChange = (e) => {
  const value = e.target.value;
  setValue(value);

  // debouncedSendRequest is created once,
  // so state-caused re-renders won't affect it anymore
  debouncedSendRequest(value);
};
Until it doesn't...
const sendRequest = useCallback((value: string) => {
  console.log('Changed value:', value);
}, []);
But we have this value in state as well. Can't I just use it from there? Maybe
I have a chain of those callbacks, and it's really hard to pass this value over
and over through it. Maybe I want to have access to another state variable.
It wouldn't make sense to pass it through a callback like this. Or maybe I
just hate callbacks and arguments and want to use state just because. Should be simple enough, shouldn't it?
And of course, yet again, nothing is as simple as it seems. If I just get rid of
the argument and use the value from the state, I would have to add it to the
dependencies of the useCallback hook:
const Input = () => {
  const [value, setValue] = useState('initial');

  const sendRequest = useCallback(() => {
    // value is now coming from state
    console.log('Changed value:', value);

    // adding it to dependencies
  }, [value]);
};
Because of that, the sendRequest function will change with every value
change. This is how memoization works. The value is the same throughout
the re-renders until the dependency changes. This means our memoized
debounce call will now change constantly as well: it has sendRequest as a
dependency, which now changes with every state update.
Is there anything that can be done here? Of course! It's a perfect use case for Refs. If you search for articles about debouncing and React, half of them
will mention useRef as a way to avoid re-creating the debounced function
on every re-render.
Unfortunately, it will only work for the previous use-case: when we didn't
have state inside the callback. Remember the previous chapter and the
closures problem? A Ref's initial value is cached and never updated. It's
"frozen" at the time when the component is mounted and ref is initialized.
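The staleness is easier to see without React. In this plain-JS model, every "render" produces a fresh closure over the current value (as React does with state), but the ref is only assigned once, on "mount":

```javascript
// A render in React creates fresh bindings: modeled here by a
// factory that captures its argument in a new closure each time.
const makeOnChange = (value) => () => value;

// useRef(initial): assigned on mount and never again
const ref = { current: makeOnChange('initial') };

// later "renders" create new closures over new state...
makeOnChange('Re');
makeOnChange('React');

// ...but the ref still holds the very first one
ref.current(); // 'initial'
```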
useEffect(() => {
  // updating ref when state changes
  ref.current = debounce(() => {
    // send request to the backend here
  }, 500);
}, [value]);
useEffect(() => {
  // updating ref when state changes
  ref.current = debounce(() => {}, 500);

  // cancel the pending debounced callback before the next update
  return () => ref.current.cancel();
}, [value]);
In this case, with every update, we're getting rid of the "old" debounced closure and starting a new one. A good solution for debouncing. But it won't work for throttling, unfortunately: if I keep canceling it, it will never get a chance to fire after its interval, the way throttle is supposed to. I want something more universal.
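For reference, a simplified debounce with the cancel method used above might look like this. It's a sketch, not lodash's real implementation, and I've also added a flush method (lodash has one too) that fires the pending call immediately:

```javascript
const debounce = (callback, wait) => {
  let timer = null;
  let lastArgs = null;

  const debounced = (...args) => {
    lastArgs = args;
    // every new call drops the previous one and restarts the clock
    clearTimeout(timer);
    timer = setTimeout(() => {
      timer = null;
      callback(...lastArgs);
    }, wait);
  };

  // drop the pending call entirely
  debounced.cancel = () => {
    clearTimeout(timer);
    timer = null;
  };

  // fire the pending call right now instead of waiting
  debounced.flush = () => {
    if (timer !== null) {
      clearTimeout(timer);
      timer = null;
      callback(...lastArgs);
    }
  };

  return debounced;
};
```

Calling debounced('R'), debounced('Re'), debounced('React') in quick succession results in a single callback invocation with 'React'.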
Another good use case for the solution to escape the closures trap, which
we looked into in detail in the previous chapter! All we need to do is assign
our sendRequest to Ref, update that Ref in useEffect to get access to the
latest closure, and then trigger ref.current inside of our closure.
Remember: Refs are mutable, and closures don't perform deep cloning. Only the reference to that mutable object is "frozen"; we're still free to mutate the object it points to at any time.
Thinking in closures breaks my brain, but it actually works, and it's easier to
follow that train of thought in code:
useEffect(() => {
  // updating ref when state changes
  // now, ref.current will have the latest sendRequest with access to the latest state
  ref.current = sendRequest;
}, [value]);

// creating debounced callback only once - on mount
const debouncedCallback = useMemo(() => {
  // func will be created only once - on mount
  const func = () => {
    // ref is mutable! ref.current is a reference to the latest sendRequest
    ref.current?.();
  };

  // debounce the func that was created once, but has access to the latest sendRequest
  return debounce(func, 1000);
  // no dependencies! never gets updated
}, []);
Extracted into a reusable hook, it looks like this:

const useDebounce = (callback) => {
  const ref = useRef();

  useEffect(() => {
    ref.current = callback;
  }, [callback]);

  const debouncedCallback = useMemo(() => {
    const func = () => {
      ref.current?.();
    };

    return debounce(func, 1000);
  }, []);

  return debouncedCallback;
};
Then our production code can just use it without the eye-bleeding chain of
useMemo and useCallback , without worrying about dependencies, and
with access to the latest state and props inside!
const Input = () => {
  const [value, setValue] = useState();

  const debouncedRequest = useDebounce(() => {
    // send request to the backend
    // access to the latest state here
    console.log(value);
  });

  const onChange = (e) => {
    setValue(e.target.value);
    debouncedRequest();
  };

  return <input onChange={onChange} value={value} />;
};
Key takeaways
That was fun, wasn't it? JavaScript's closures have to be the most loved
feature on the internet. In the next chapter, we'll try to recover from dealing
with them and play around with some UI improvements instead. More
specifically, we're going to learn how to get rid of the "flickering" effect for
positioned elements. But before that, let's quickly recap this chapter:
Let's talk a bit more about DOM access in React. In previous chapters, we
covered how to do it with Ref and learned everything about Ref as a bonus.
There is, however, another very important, although quite rare, topic when
it comes to dealing with the DOM: changing elements based on real DOM
measurements like the size or position of an element.
So, what is the problem with it, exactly, and why are "normal" tactics not
good enough? Let's do some coding and figure it out. In the process, we'll
learn:
Now, the component itself. It's going to be just a component that accepts an
array of data and renders proper links:
The only way to get the actual sizes is to make the browser render the items
and then extract the sizes via a native JavaScript API, like
getBoundingClientRect .
We'd have to do it in a few steps. First, get access to the elements. We can
create a Ref and assign it to the div that wraps those items:
Second, in useEffect , grab the div element and get its size.
useEffect(() => {
  const div = ref.current;
  const { width } = div.getBoundingClientRect();
}, [ref]);

return ...
}
Third, iterate over the div's children and extract their widths into an array.
const Component = ({ items }) => {
  useEffect(() => {
    // same code as before
  }, [ref]);

  return ...
}
Now, all we need to do is iterate over that array, sum the widths of the
children, compare those sums with the parent div, and find the last visible
item as a result.
But wait, there is one thing we forgot: the "more" button. We need to take
its width into account as well. Otherwise, we might find ourselves in a
situation where a few items fit, but the "more" button doesn't.
Again, we can only get its width if we render it in the browser. So we have
to add the button explicitly during the initial render:
If we abstract all the logic of calculating widths into a function, we'll end up
with something like this in our useEffect :
useEffect(() => {
  const itemIndex = getLastVisibleItem(ref.current);
}, [ref]);
The important thing here is that we've got that number. What should we do
next from the React perspective? If we leave it as is, all links and the
"more" button will be visible. And there's only one solution here - we need
to trigger an update of the component and make it remove all those items
that are not supposed to be there.
And there is pretty much only one way to do it: we need to save that number in the state when we get it:
useEffect(() => {
  const itemIndex = getLastVisibleItem(ref.current);

  // update state with the actual number
  setLastVisibleMenuItem(itemIndex);
}, [ref]);
};
And then, when rendering the menu, take that into account:
return (
  <div className="navigation">
    {/* render only visible items */}
    {filteredItems.map((item) => (
      <a href={item.href}>{item.name}</a>
    ))}

    {/* render "more" conditionally */}
    {isMoreVisible && <button id="more">...</button>}
  </div>
);
}
That's about it! Now, after the state is updated with the actual number, it
will trigger a re-render of the navigation, and React will re-render items and
remove those that aren't visible. For a "proper" responsive experience, we
would also need to listen for the resize event and re-calculate the number,
but I'll leave that for you to implement.
You can find the full working example in the link below. With resize. Only
don't get too excited just yet: there is one major flaw in the user experience
here.
In React versions from ~16.8 (the ones with hooks), however, all we need to do is replace our useEffect hook with useLayoutEffect .
To answer those questions, we need to step aside from React for a moment
and talk about browsers and good old JavaScript instead.
Instead, it's more like showing slides to people: you show one slide, wait
for them to comprehend the genius idea on it, then transition to the next
slide, and so on.
const app = document.getElementById('app');
const child = document.createElement('div');

child.innerHTML = '<h1>Heyo!</h1>';
app.appendChild(child);
What will happen if a "task" takes longer than 13ms? Well, that's
unfortunate. The browser can't stop it or split it. It will continue with it until
it's done, and then paint the final result. If I add 1-second synchronous
delays between those border updates:
Now, although React is just JavaScript, it's not executed as one single task,
of course. The internet would be unbearable if it was. We all would be
forced to play outside and interact in person, and who wants that, really?
The way to "break" a giant task like rendering an entire app into smaller
ones is by using various "asynchronous" methods: callbacks, event
handlers, promises, and so on.
setTimeout(() => {
  child.style = 'border: 10px solid red';
  wait(1000);

  setTimeout(() => {
    child.style = 'border: 20px solid green';
    wait(1000);

    setTimeout(() => {
      child.style = 'border: 30px solid black';
      wait(1000);
    }, 0);
  }, 0);
}, 0);
Then every one of those timeouts will be considered a new "task." So the
browser will be able to re-paint the screen after finishing one and before
starting the next one. And we'll be able to see the slow but glorious
transition from red to green to black, rather than meditating on a white screen for three seconds.
This is what React does for us. Essentially, it's a crazy complicated and very
efficient engine that splits our giant, giant blobs of hundreds of npm
dependencies combined with our own coding into the smallest possible
chunks that browsers are able to process in under 13 ms (ideally).
Back to useEffect vs
useLayoutEffect
Now, finally, back to useEffect vs useLayoutEffect and how to answer
the questions we had at the beginning.
return ...
}
The flow with useEffect , on the other hand, will be split into two tasks:
The first one renders the "initial" pass of navigation with all the buttons.
The second one removes those children that we don't need. With screen re-
painting in between! Exactly the same situation as with borders inside
timeouts.
Use useLayoutEffect only when you need to get rid of the visual
"glitches" caused by the need to adjust the UI according to the real sizes of
elements. For everything else, useEffect is the way to go. And you might
not even need that one either[15].
However, when we try to do that, the first thing we'll notice is that it
doesn't freaking work. Like at all. The glitching is still there, there is no
magic anymore. To replicate it, just copy-paste our previously fixed
navigation into your Next.js app, if you have one.
What's happening?
You see, when we have SSR enabled, the very first pass at rendering React
components and calling all the lifecycle events is done on the server before
the code reaches the browser. If you're not familiar with how SSR works, all
it means is that somewhere on the backend, some method calls something
like React.renderToString(<App />) . React then goes through all the
components in the app, "renders" them (i.e., just calls their functions), and
produces the HTML that these components represent.
Then, this HTML is injected into the page that is going to be sent to the
browser, and off it goes. Just like in the good old times when everything
was generated on the server, and we used JavaScript only to open menus.
After that, the browser downloads the page, shows it to us, downloads all
the scripts (including React), runs them (including React again), React goes
through that pre-generated HTML, sprinkles some interactivity on it, and
our page is now alive again.
The problem here is: there is no browser yet when we generate that initial
HTML. So anything that would involve calculating actual sizes of elements
(like we do in our useLayoutEffect ) will simply not work on the server:
there are no elements with dimensions yet, just strings. And since the whole
purpose of useLayoutEffect is to get access to the element's sizes, there is
not much point in running it on the server. And React doesn't.
As a result, what we see during the very first load when the browser shows
us the page that is not interactive yet is what we rendered during the "first
pass" stage in our component: the row of all the buttons, including the
"more" button. After the browser has a chance to execute everything and
React comes alive, it finally can run useLayoutEffect , and the buttons are
finally hidden. But the visual glitch is there.
useEffect(() => {
  setShouldRender(true);
}, []);

if (!shouldRender) return <SomeNavigationSubstitute />;
Key takeaways
That's all for the "flickering" for now. In the next chapter, we'll continue our
conversation about the UI and learn how to deal with Portals and why. In
the meantime, a few things to remember:
Let's talk about UI some more. In the previous chapter, we solved the
annoying "flickering" problem. Now, let's take a look at another fun UI bug:
content clipping.
You might have heard that we need Portals in React to escape it when
rendering elements inside elements with overflow: hidden . Every second
article on the internet about Portals has this example. This is actually not
true: we can escape content "clipping" with just pure CSS. We need Portals
for other reasons. This "overflow problem" might also give a false sense of security: if we simply avoid overflow: hidden in the app, it feels like we can safely position anything anywhere. That's also not true.
Just in case: this is a CSS-heavy chapter. The first half of it covers CSS-
only concepts in detail, since not every React developer is proficient in
CSS.
const App = () => {
  const [isVisible, setIsVisible] = useState(false);

  return (
    <>
      <SomeComponent />
      <button onClick={() => setIsVisible(true)}>
        show more
      </button>
      {isVisible && <ModalDialog />}
      <AnotherComponent />
    </>
  );
};
The position property supports two values that allow us to break away
from the document flow: absolute and fixed . Let's start with absolute
and try to implement the dialog using it. All we need to do is apply the
position: absolute CSS to the div in the ModalDialog component:
.modal {
position: absolute;
width: 300px;
top: 100px;
left: 50%;
margin-left: -150px;
}
This dialog will appear in the middle of the screen, with a 100px gap at the
top.
So, technically, this works. But if you look at the existing dialogs in your
app or any of the UI libraries, it's highly unlikely that they use position:
absolute there. Or even tooltips, dropdown menus, or any UI element that
pops up, really.
Understanding Stacking Context
The Stacking Context[18] is a nightmare for anyone who has ever tried to use
z-index on positioned elements. The Stacking Context is a three-
dimensional way of looking at our HTML elements. It's like a Z-axis, in
addition to our normal X and Y dimensions (window width and height), that
defines what sits on top of what when an element is rendered on the screen.
If an element has a shadow, for example, that overlaps with surrounding
elements, should the shadow be rendered on top of them or underneath
them? This is determined by the Stacking Context.
<div>grey</div>
<div>red</div>
<div>green</div>
The green div is after the red, so it will be "in front" from the Stacking
Context rules point of view, and the red will be in front of the grey. If I add
a small negative margin to them, we'll see this picture:
To fix this situation, we have the z-index CSS property. This property
allows us to manipulate that Z-axis within the same Stacking Context. By
default, it's zero. So if I set the z-index of the dialog to a negative value, it
will appear behind all the divs. If set to positive, then it will appear on top
of all the divs.
Within the same Stacking Context is the key here. If something creates a
new Stacking Context, that z-index will be relative to the new context. It's
a completely isolated bubble. The new Stacking Context will be controlled
as its own isolated black box by the rules of the parent Context, and what
happens inside stays inside.
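In CSS terms, the trap can be sketched in a few lines (the class names here are made up for illustration):

```css
/* position + z-index => the grey div forms its own Stacking Context */
.grey {
    position: relative;
    z-index: 1;
}

/* this z-index now only competes with siblings INSIDE .grey;
   against the outside world, the whole .grey box still acts as z-index: 1 */
.grey .modal {
    position: absolute;
    z-index: 9999;
}
```

No matter how high the inner z-index goes, anything outside that beats the grey div's own z-index: 1 will sit on top of the modal.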
Play around with the z-index on the grey div in the code example below;
it's truly fascinating. If I remove it, the new Stacking Context disappears,
and the dialog is now part of the global context and its rules and starts
appearing on top of the red div. As soon as I add a z-index to the grey div
that is less than the red div, it moves underneath.
And it's not only the combination of position and z-index that triggers
it, by the way. The transform property will do it. So any of your leftover
CSS animations have the potential to mess the positioned elements up. Or
z-index on Flex or Grid children. Or a bunch of other different
properties[19].
And, of course, finally, the elements with overflow . By the way, just
setting overflow on an element won't clip the absolutely positioned div
inside; it needs to be in combination with position: relative . But yeah,
if an absolutely positioned dialog is rendered inside the div with overflow
and position , then it will be clipped.
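A minimal sketch of that combination (made-up class names):

```css
.container {
    overflow: hidden;
    /* without this line, the absolutely positioned child
       escapes the clipping entirely */
    position: relative;
}

.container .dialog {
    position: absolute; /* clipped by .container's bounds */
}
```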
Also, since position: fixed is positioned relative to the screen, it
actually allows us to escape the overflow trap. So, in theory, we could
have used it for our dialogs and tooltips.
However, even position: fixed cannot escape the rules of the Stacking
Context. Nothing can. It's like a black hole: as soon as it forms, everything
within its gravitational reach is gone. No one gets out.
If the grey div has z-index: 1 and the red div is with z-index: 2 - it's
game over for modals. They will appear underneath.
Interactive example and full code
https://advanced-react.com/examples/13/05
Another issue with position: fixed is that it's not always positioned
relative to the viewport. It's actually positioned relative to what is known as
the Containing Block. It just happens to be the viewport most of the time.
Unless some of the parents have certain properties set, then it will be
positioned relative to that parent. And we'll have the same situation we had
at the very beginning with position: absolute .
Properties that trigger the forming of the new Containing Block[20] for
position: fixed are relatively rare, but they include transform , and that
one is widely used for animation.
The prime candidates are all sorts of animations or "sticky" blocks like
headers or columns. Those are the most likely places where we'd be forced
to set either position with z-index , or translate . And those will form
a new Stacking Context.
Just open a few of your favorite popular websites that have "sticky"
elements or animations, open Chrome Dev Tools, find some block deep in
the DOM tree, set its position to fixed with a high z-index , and move it
around a bit. Just for the fun of it, I checked Facebook, Airbnb, Gmail,
OpenAI, and LinkedIn. On three of those, the main area is a trap: any block
with position: fixed and z-index: 9999 within it will appear
underneath the sticky header.
There is only one way to escape that trap: to make sure that the modal is
not rendered inside the DOM elements that form the Stacking Context. In the
world without React, we'd just append that modal to the body or some div at
the root of the app with something like document.body.appendChild(modalElement) .
In React, we can escape that Stacking Context trap with the tool called
Portal. Finally, time to do React!
How React Portal can solve this
Let's recreate the trap in something more interesting than a bunch of
colorful divs just to make our code more realistic and to see how easily it
can happen. And then fix it for good.
return (
    <>
        <div className="header"></div>
        <div className="layout">
            <div className="sidebar">// some links here</div>
            <div className="main">
                <button onClick={() => setIsVisible(true)}>show more</button>
                {isVisible && <ModalDialog />}
            </div>
        </div>
    </>
);
};
Our header is going to be sticky, so I'll set the sticky position for it:
.header {
position: sticky;
}
And I want our navigation to move into the "collapsed" state smoothly,
without any jumping or disappearing blocks. So I'll set the transition
property on it and the main area:
.main {
transition: all 0.3s ease-in;
}
.sidebar {
transition: all 0.3s ease-in;
}
And translate them to the left when the navigation is collapsed and back
when it's expanded:
return (
    <>
        <div className="header"></div>
        <div className="layout">
            {/* translate the nav to the left if collapsed, and back */}
            <div
                className="sidebar"
                style={{
                    transform: isNavExpanded
                        ? 'translate(0, 0)'
                        : 'translate(-300px, 0)',
                }}
            >
                ...
            </div>
            {/* translate the main to the left if nav is collapsed, and back */}
            <div
                className="main"
                style={{
                    transform: isNavExpanded
                        ? 'translate(0, 0)'
                        : 'translate(-300px, 0)',
                }}
            >
                {/*main here*/}
            </div>
        </div>
    </>
);
};
That already works beautifully, except for one thing: when I scroll, the
header disappears under the sidebar and the main area. That's no problem, I
already know how to deal with it: just need to set z-index: 2 for the
header. Done, and now the header is always on top, and expand/collapse
works like a charm!
Except for one thing: the modal dialog in the main area is now completely
busted. It used to be positioned in the middle of the screen, but not
anymore. And when I scroll with it open, it appears under the header.
Everything in the code is reasonable, there are no random position:
relative , and still, that happened. The Stacking Context trap.
In order to fix it, we need to render the modal dialog outside of our main
area. In our simple app, we could just move it to the bottom, of course: the
button, state, and dialog are within the same component. In the real world,
it's not going to be that simple. More likely than not, the button will be
buried deep inside the render tree, and propagating state up will be a
massive pain and performance killer. Context could help, but it has its own
caveats.
Instead, we can use the createPortal function that React gives us. Well,
technically, the react-dom library, but it only matters for the import path
in our case. It accepts two arguments: what we want to teleport and where
to teleport it (a DOM element). The tail of our render then looks like this:

{isVisible &&
    createPortal(
        <ModalDialog />,
        document.getElementById('root'),
    )}
</>
);
};
That's it, the trap is no more! We still "render" the dialog together with the
button from our developer experience perspective. But it ends up inside the
element with id="root" . If you open Chrome Developer Tools, you'll see
it right at the bottom of it.
Interactive example and full code
https://advanced-react.com/examples/13/07
And the dialog is now centered, as it's supposed to be, and appears on top of
the header, as it should.
But what are the consequences of doing that? What about re-renders, React
lifecycle, events, access to Context, etc.? Easy. The rules of teleportation in
React are:
If our App has access to Context, the dialog will have access to exactly the
same Context.
If the part of the app where the dialog is created unmounts, the dialog will
also disappear.
If I want to intercept a click event that happens in the modal, the onClick
handler on the "main" div will be able to do that. "Click" here is part of
synthetic events, so they "bubble" through the React tree, not the regular
DOM tree. Same story with any synthetic events that React manages[21].
If you rely on CSS inheritance and cascading to style the dialog in the
"main" part, it won't work anymore.
If you rely on "native" events propagation, it also won't work. If, instead of
the onClick callback on the "main" div, you try to catch events that
originated in the modal via element.addEventListener , it won't work.
const App = () => {
    const ref = useRef(null);

    useEffect(() => {
        const el = ref.current;
        el.addEventListener('click', () => {
            // trying to catch events, originated in the portalled modal
            // not going to work!!
        });
    }, []);
If you try to grab the parent of the modal via parentElement , it will return
the root div, not the main app. And the same story with any native
JavaScript functions that operate on the DOM elements.
And finally, onSubmit on <form> elements. This is the least obvious thing
about this. It feels the same as onClick , but in reality, the submit event is
not managed by React[22]. It's a native API and DOM elements thing. If I
wrap the main part of the app in <form> , then clicking on the buttons
inside the dialog won't trigger the "submit" event! From the DOM
perspective, those buttons are outside of the form. If you want to have a
form inside the dialog and want to rely on the onSubmit callback, then the
form tag should be inside the dialog as well.
Key takeaways
That's enough about CSS and portalling for the book, I think. Things to
remember next time you're trying to position elements:
Have you recently tried to wrap your head around what's the latest in data
fetching? The chaos of endless data management libraries, GraphQL or
not GraphQL, useEffect is evil since it causes waterfalls, Suspense is
supposed to save the world, but at the moment of publishing this book, it is
still not officially ready for data fetching. And then the patterns like
fetch-on-render, fetch-then-render, and render-as-you-fetch that confuse
even people who write about them sometimes. What on Earth is going on?
Why do I suddenly need a PhD to just make a simple GET request?
And what is the actual "right way" to fetch data in React? In this chapter,
you'll learn:
Data on demand is something that you fetch after a user interacts with a
page in order to update their experience. All the various autocompletes,
dynamic forms, and search experiences fall under this category. In React,
the fetch of this data is usually triggered in callbacks.
Initial data is the data you'd expect to see on a page right away when you
open it. It's the data we need to fetch before a component ends up on the
screen, so that we can show users a meaningful experience as soon as
possible. In React, if no SSR is involved, fetching data like this usually
happens in useEffect (or in componentDidMount for class components).
useEffect(() => {
    // fetch data
    const dataFetch = async () => {
        const data = await (
            await fetch(
                'https://run.mocky.io/v3/b3bcb9d2-d8e9-43c5-bfb7-0062c85be6f9',
            )
        ).json();

        // set state when the data is received
        setState(data);
    };

    dataFetch();
}, []);

return <>...</>;
};
But as soon as your use case exceeds "fetch once and forget," you're going
to face tough questions. What about error handling? What if multiple
components want to fetch data from this exact endpoint? Should I cache
that data? For how long? What about race conditions? What if I want to
remove the component from the screen? Should I cancel this request? What
about memory leaks? And so on and so forth.
Not a single question from that list is even React-specific; it's a general
problem of fetching data over the network. To solve these problems (and
more!), there are only two paths: either reinvent the wheel and write a lot
of code to solve them, or rely on an existing library that has been doing
this for years.
Some libraries, like Axios[23], will abstract some concerns, like canceling
requests, but will have no opinion on React-specific APIs. Others, like
swr[24], will handle pretty much everything for you, including caching. But
essentially, the choice of technology doesn't matter much here. No library or
Suspense in the world can improve the performance of your app just by
itself. They just make some things easier at the cost of making some things
harder. You always need to understand the fundamentals of data fetching
and data orchestration patterns and techniques in order to write performant
apps.
With async operations, which data fetching typically is, in the context of
large apps, and from the user experience point of view, it's not that obvious.
1. Shows a loading state until all the data is loaded, then renders the
   entire app at once. It takes ~3 seconds overall.
2. Shows a loading state until the sidebar data is loaded first, renders
   the sidebar, and keeps the loading state until the data is finished in
   the main part. The sidebar takes ~1 second to appear, and the rest of
   the app appears in ~3 seconds. Overall, it takes ~4 seconds.
3. Shows a loading state until the main issue data is loaded, then renders
   it and keeps the loading state for the sidebar and comments. When the
   sidebar loads, it is rendered, with the comments still in the loading
   state. The main part appears in ~2 seconds, the sidebar ~1 second after
   that, and it takes another ~2 seconds for the comments to appear.
   Overall, it takes ~5 seconds.
The first app loads in just 3 seconds - the fastest of them all. From a pure
numbers perspective, it's a clear winner. But it doesn't show anything to
users for 3 seconds - the longest of them all. Clear loser?
The second app loads something on the screen (Sidebar) in just 1 second.
From the perspective of showing at least something as fast as possible, it's a
clear winner. But it's the longest of them all to show the main part of the
issue. Clear loser?
The third app loads the Issue information first. From the perspective of
showing the main piece of the app first, it's a clear winner. But the
"natural" flow of information for left-to-right languages is from the
top-left to the bottom-right. This is how we usually read. This app
violates it, and that makes the experience the most "janky" one here. Not
to mention, it's the longest of them all to load. Clear loser?
When, and only when, you have an idea of what your story should look
like, then it will be time to assemble the app and optimize the story to be as
fast as possible. And the true power comes here not from various libraries,
GraphQL, or Suspense, but from the knowledge of:
and knowing a few techniques that allow you to control all three stages of
the data fetching requests.
But before jumping into actual techniques, we need to understand two more
very fundamental things: the React lifecycle and browser resources and
their influence on our goal.
if (isLoading) return 'loading';

return child;
};
When we write const child = <Child /> , we don't "render" the Child
component. <Child /> is nothing more than syntax sugar for a function
that creates a description of a future element. It is only rendered when this
description ends up in the actual visible render tree - i.e., returned from the
component. Until then, it just sits there idly as an object and does nothing.
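The mechanics are easy to demonstrate in plain JavaScript with a toy stand-in (this is an illustration of the idea, not React's actual implementation):

```javascript
// A toy stand-in for React.createElement: it only builds a description.
const createElement = (type, props) => ({ type, props });

let childRendered = false;
const Child = () => {
    childRendered = true;
    return 'child content';
};

// roughly what `const child = <Child />` compiles to:
const child = createElement(Child, {});

// nothing has happened yet - Child has not been called
console.log(childRendered); // false

// "rendering" is when something actually walks the description
const render = (element) => element.type(element.props);
render(child);
console.log(childRendered); // true
```

Creating the description is essentially free; the cost is only paid when the description actually ends up in the rendered tree.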
There are more things to know about the React lifecycle, of course: the
order in which all of this is triggered, what is triggered before or after
painting, what slows down what and how, the useLayoutEffect hook, etc.
But all of this becomes relevant much later, when you have orchestrated
everything perfectly and are now fighting for milliseconds in a very big,
complicated app.
Did you know that browsers have a limit on how many requests in parallel
to the same host they can handle? Assuming the server is HTTP1 (which is
still 70% of the internet), the number is not that big. In Chrome, it's just
6[25]. 6 requests in parallel! If you fire more at the same time, all the rest of
them will have to queue and wait for the first available "slot."
And 6 requests for initial data fetching in a large app is not unreasonable.
Our very simple "issue tracker" already has three, and we haven't even
implemented anything of value yet. Imagine all the angry looks you'll get if
you just add a somewhat slow analytics request that literally does nothing at
the very beginning of the app, and it ends up slowing down the entire
experience.
if (!data) return 'loading...';
Assume that the fetch request there is super fast, taking just ~50ms. If,
before that app, I fire just six requests that take 10 seconds each,
without waiting for them or resolving them, the whole app load will take
those 10 seconds (in Chrome, of course).
// no waiting, no resolving, just fetch and drop it
fetch('https://some-url.com/url1');
fetch('https://some-url.com/url2');
fetch('https://some-url.com/url3');
fetch('https://some-url.com/url4');
fetch('https://some-url.com/url5');
fetch('https://some-url.com/url6');
Let's start by laying out components first, then wire the data fetching
afterward. We'll have the app component itself, which will render Sidebar
and Issue, and Issue will render Comments.
Now to the data fetching. Let's first extract the actual fetch and useEffect
and state management into a nice hook to simplify the examples:
useEffect(() => {
    const dataFetch = async () => {
        const data = await (await fetch(url)).json();
        setState(data);
    };

    dataFetch();
}, [url]);
And exactly the same code for Issue , only it will render the Comments
component after loading:
return (
<>
<Sidebar data={data} />
<Issue />
</>
);
};
Boom, done!
There is only one small problem here. The app is terribly slow. Slower than
all our examples from above!
Waterfalls like that are not the best solution when you need to show the app
as quickly as possible. Luckily, there are a few ways to deal with them (but
not Suspense, about that one later).
useEffect(async () => {
    const sidebar = await fetch('/get-sidebar');
    const issue = await fetch('/get-issue');
    const comments = await fetch('/get-comments');
}, []);
useEffect(async () => {
    const [sidebar, issue, comments] = await Promise.all([
        fetch('/get-sidebar'),
        fetch('/get-issue'),
        fetch('/get-comments'),
    ]);
}, []);
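The difference between the two is easy to measure in plain JavaScript, with setTimeout standing in for network latency (the timings here are made up for illustration):

```javascript
// a plain Promise stands in for a fetch call that takes `ms` milliseconds
const fakeFetch = (name, ms) =>
    new Promise((resolve) => setTimeout(() => resolve(name), ms));

// a waterfall: each request waits for the previous one
const sequential = async () => {
    const start = Date.now();
    await fakeFetch('sidebar', 100);
    await fakeFetch('issue', 100);
    await fakeFetch('comments', 100);
    return Date.now() - start; // ~300ms
};

// Promise.all: all three are in flight at the same time
const parallel = async () => {
    const start = Date.now();
    await Promise.all([
        fakeFetch('sidebar', 100),
        fakeFetch('issue', 100),
        fakeFetch('comments', 100),
    ]);
    return Date.now() - start; // ~100ms
};

parallel().then((p) => sequential().then((s) => console.log(p < s))); // true
```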
and then save all of them to state in the parent component and pass them
down to the children components as props:
useEffect(() => {
    const dataFetch = async () => {
        // waiting for all the things in parallel
        const result = (
            await Promise.all([
                fetch(sidebarUrl),
                fetch(issueUrl),
                fetch(commentsUrl),
            ])
        ).map((r) => r.json());

        // and waiting a bit more - the fetch API is cumbersome
        const [sidebarResult, issueResult, commentsResult] =
            await Promise.all(result);

        // when the data is ready, save it to state
        setSidebar(sidebarResult);
        setIssue(issueResult);
        setComments(commentsResult);
    };

    dataFetch();
}, []);
This is how the very first app from the test at the beginning is implemented.
Parallel promises solution
But what if we don't want to wait for them all? Our comments are the
slowest and the least important part of the page. It doesn't make much sense
to block the rendering of the sidebar while we're waiting for them. Can we
fire all requests in parallel, but wait for them independently?
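We can: fire each request and attach its own .then handler, so each one updates state as soon as it lands. The mechanics look like this in plain JavaScript (Promises stand in for fetch, a mutable object stands in for the three state variables; timings are made up):

```javascript
const state = { sidebar: null, issue: null, comments: null };

const fakeFetch = (name, ms) =>
    new Promise((resolve) => setTimeout(() => resolve(`${name} data`), ms));

// all three fire immediately; each one updates "state" as soon as it
// lands, without waiting for the others
fakeFetch('sidebar', 10).then((d) => {
    state.sidebar = d;
});
fakeFetch('issue', 30).then((d) => {
    state.issue = d;
});
fakeFetch('comments', 50).then((d) => {
    state.comments = d;
});

setTimeout(() => {
    // the sidebar is already on the "screen" while comments are in flight
    console.log(state.sidebar, state.comments); // sidebar data null
}, 25);
```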
One thing to note here is that in this solution, we're triggering state change
three times independently, which will cause three re-renders of the parent
component. And considering that it's happening at the top of the app,
unnecessary re-render like this might cause half of the app to re-render
unnecessarily. The performance impact really depends on the order of your
components, of course, and how big they are, but it's something to keep in
mind.
Data providers to abstract away fetching
Lifting data loading up like in the examples above, although good for
performance, is terrible for app architecture and code readability. Suddenly,
instead of nicely co-located data fetching requests and their components, we
have one giant component that fetches everything and massive props
drilling throughout the entire app.
const Context = React.createContext();

export const CommentsDataProvider = ({ children }) => {
    const [comments, setComments] = useState();

    useEffect(() => {
        fetch('/get-comments')
            .then((data) => data.json())
            .then((data) => setComments(data));
    }, []);

    return (
        <Context.Provider value={comments}>
            {children}
        </Context.Provider>
    );
};
Exactly the same logic for all three of our requests. And then our monster
App component turns into something as simple as this:
Our three providers will wrap the App component and will fire fetching
requests as soon as they are mounted in parallel:
And then in something like Comments (i.e., far, far deep into the render tree
from the root app), we'll just access that data from "data provider":
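Pieced together, the pattern looks roughly like this; this is a sketch, and names like VeryRootApp and useComments are assumptions for illustration:

```jsx
// the monster App becomes a simple layout again
const App = () => (
    <>
        <Sidebar />
        <Issue />
    </>
);

// the providers wrap the App and fire their requests on mount, in parallel
const VeryRootApp = () => (
    <SidebarDataProvider>
        <IssueDataProvider>
            <CommentsDataProvider>
                <App />
            </CommentsDataProvider>
        </IssueDataProvider>
    </SidebarDataProvider>
);

// and deep in the render tree, Comments just reads the data from Context
const useComments = () => useContext(Context);

const Comments = () => {
    const comments = useComments();
    if (!comments) return 'loading';
    // render comments here
};
```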
If you're not a huge fan of Context - not to worry, exactly the same concept
will work with any state management solution of your choosing.
useEffect(() => {
    const dataFetch = async () => {
        const data = await (
            await fetch('/get-comments')
        ).json();

        setData(data);
    };

    dataFetch();
}, [url]);

if (!data) return 'loading';
const commentsPromise = fetch('/get-comments');

const Comments = () => {
    useEffect(() => {
        const dataFetch = async () => {
            // just await the already-created promise here
            const data = await (await commentsPromise).json();

            setState(data);
        };

        dataFetch();
    }, [url]);
};
A really fancy thing: our fetch call essentially "escapes" all React lifecycle
and will be fired as soon as JavaScript is loaded on the page, before any of
the useEffect anywhere is called. Even before the very first request in the
root App component is called. It will be fired, JavaScript will move on to
other things to process, and the data will just sit there quietly until someone
actually resolves it. Which is what we're doing in our useEffect in
Comments .
Just moving the fetch call outside of Comments resulted in this instead:
Interactive example and full code
https://advanced-react.com/examples/14/09
So why didn't we? And why isn't it a very common pattern?
There are only two "legit" use cases that I can think of for this pattern: pre-
fetching of some critical resources at the router level and pre-fetching data
in lazy-loaded components.
In the first case, you actually need to fetch data as soon as possible, and you
know for sure that the data is critical and required immediately. And lazy-
loaded components' JavaScript will be downloaded and executed only when
they end up in the render tree, so by definition, after all the critical data is
fetched and rendered. So, it's safe.
useEffect(() => {
    const dataFetch = async () => {
        const data = await (
            await fetch('/get-comments')
        ).json();

        setState(data);
    };

    dataFetch();
}, [url]);
Underneath, all of them will use useEffect or equivalent to fetch the data
and state to update the data and trigger a re-render of the host component.
But let's imagine that it's available to the general public tomorrow. Will it
fundamentally solve data fetching, and will it make everything above
obsolete? Not at all.
Suspense is just a really fancy and clever way to replace fiddling with
loading states. Instead of this:
// render comments
};
Everything else, like browser limitations, React lifecycle, and the nature of
request waterfalls, stays the same.
Key takeaways
Data fetching on the frontend is a complicated topic. Probably a whole book
can be written just about it. In the next chapter, we'll continue the
conversation about data fetching and talk about race conditions. But before
that, a few things to take away from this chapter:
We can separate the client's data fetching into two broad categories:
initial and on demand.
We can use plain fetch instead of data fetching libraries, but we'd
have to implement a lot of concerns manually.
A "performant" app is always subjective and depends on the message
we're trying to convey to the users.
When fetching data, especially initially, we need to be aware of
browser limitations on parallel requests.
Waterfalls appear when we trigger data fetching not in parallel, but
conditionally or in sequence.
We can use techniques such as Promise.all , parallel promises, or
data providers with Context to avoid waterfalls.
We can pre-fetch critical resources even before React is initialized, but
we need to remember browser limitations while doing so.
Chapter 15. Data fetching and race conditions
Another big topic when it comes to data fetching on the frontend that
deserves its own chapter is race conditions. Those are relatively rare in our
normal life, and it's possible to develop quite complicated apps without ever
having to deal with them. But when they happen, investigating and fixing
them can be a real challenge. And since fetch or any async operation in
JavaScript is just a glorified Promise most of the time, the main focus of
this chapter is Promises.
Let's investigate an app with a race condition, fix it, and in the process
learn:
What Promises are and how very innocent code can create a race
condition without us noticing it.
Why race conditions appear.
How to fix them in at least four different ways.
What is a Promise?
Before jumping into race conditions themselves, let's remember what a
Promise[31] is and why we need them.
One of the most important and widely used Promise situations is data
fetching. It doesn't matter whether it's the actual fetch call or some
abstraction on top of it like Axios[32], the Promise behavior is the same.
console.log('first step'); // will log FIRST

fetch('/some-url') // create promise here
    .then(() => {
        // wait for Promise to be done
        // log stuff after the promise is done
        console.log('second step'); // will log THIRD (if successful)
    })
    .catch(() => {
        console.log('something bad happened'); // will log THIRD (if error happens)
    });

console.log('third step'); // will log SECOND
It has a tabs column on the left, navigating between tabs sends a fetch
request, and the data from the request is rendered on the right. If we try to
navigate between tabs quickly, the experience is bad: the content blinks
and data appears seemingly at random: sometimes the content of the first
tab appears and is then quickly replaced by the second tab; sometimes they
create some sort of carousel. The whole thing just behaves weirdly.
The implementation of that app looks something like this. We have two
components. One is the root App component, which manages the state of
the active "page" and renders the navigation buttons and the actual Page
component.
return (
    <>
        {/*left column buttons*/}
        <button onClick={() => setPage('1')}>Issue 1</button>
        <button onClick={() => setPage('2')}>Issue 2</button>
The Page component accepts the id of the active page as a prop, sends a
fetch request to get the data, and then renders it. The simplified
implementation (without the loading state) looks like this:
useEffect(() => {
    fetch(url)
        .then((r) => r.json())
        .then((r) => {
            // save data from fetch request to state
            setData(r);
        });
}, [url]);

// render data
return (
    <>
        <h2>{data.title}</h2>
        <p>{data.description}</p>
    </>
);
};
With id , we determine the url from where to fetch data. Then we're
sending the fetch request in useEffect , and storing the result data in the
state - everything is pretty standard. So, where does the race condition and
that weird behavior come from?
Then the nature of Promises comes into effect: fetch within useEffect is
a promise, an asynchronous operation. It sends the actual request, and then
React just moves on with its life without waiting for the result. After ~2
seconds, the request is done, .then of the promise kicks in, within it we
call setData to preserve the data in the state, the Page component is
updated with the new data, and we see it on the screen.
If, after everything is rendered and done, I click on the navigation button,
we'll have this flow of events:
But what will happen if I click on a navigation button and the id changes
while the first fetch is in progress and hasn't finished yet? A really cool
thing!
Boom, race condition! After navigating to the new page, we see a flash of
content: the content from the first finished fetch is rendered, then it's
replaced by the content from the second finished fetch.
This effect is even more interesting if the second fetch finishes before the
first fetch. Then we'll see the correct content of the next page first, and then
it will be replaced by the incorrect content of the previous page.
You can see this behavior in the example below. Wait until everything is
loaded for the first time, then navigate to the second page, and quickly
navigate back to the first page.
This is just evil: the code looks innocent, but the app is broken. How to
solve it?
return (
    <>
        {page === 'issue' && <Issue />}
        {page === 'about' && <About />}
    </>
);
};
No passing down props, Issue and About components have their own
unique URLs from which they fetch the data. And the data fetching happens
in the useEffect hook, exactly the same as before:
This time there is no race condition in the app while navigating. Navigate as
many times and as fast as you want: it behaves normally.
Why?
The answer is here: {page === 'issue' && <Issue />} . Issue and
About pages are not re-rendered when the page value changes, they are
re-mounted. When the value changes from issue to about , the Issue
component unmounts itself, and the About component is mounted in its
place.
The App component renders first, mounts the Issue component, data
fetching there kicks in.
When I navigate to the next page while the fetch is still in progress, the
App component unmounts the Issue page and mounts the About
component instead, it kicks off its own data fetching.
So, back to the unmounting situation. When the Issue 's fetch request
finishes while I'm on the About page, the .then callback of the Issue
component will try to call its setIssue state. But the component is gone.
From React's perspective, it doesn't exist anymore. So the promise will just
die out, and the data it got will just disappear into the void.
By the way, do you remember that scary warning "Can't perform a React
state update on an unmounted component"? It used to appear in exactly
these situations: when an asynchronous operation like data fetching finishes
after the component is already gone. "Used to", since it's gone as well. It
was removed quite recently[34].
Anyway. In theory, this behavior can be applied to solve the race condition
in the original app: all we need is to force the Page component to re-mount
on navigation. We can use the "key" attribute for this:
However, this is not a solution I would recommend for the general race
conditions problem. There are too many caveats: performance might suffer,
unexpected bugs with focus and state, unexpected triggering of useEffect
down the render tree. It's more like sweeping the problem under the rug.
There are better ways to deal with race conditions (see below). But it can be
a tool in your arsenal in certain cases if used carefully.
If the result returns the id that was used to generate the url , we can just
compare them. And if they don't match, ignore them. The trick here is to
escape the React lifecycle and locally scoped data in functions and get
access to the "latest" id inside all iterations of useEffect , even the "stale"
ones. Yet another use case for Refs, which we discussed in Chapter 9. Refs:
from storing data to imperative API.
fetch(`/some-data-url/${id}`)
    .then((r) => r.json())
    .then((r) => {
        // compare the latest id with the result
        // only update state if the result actually belongs to that id
        if (ref.current === r.id) {
            setData(r);
        }
    });
}, [id]);
};
useEffect(() => {
    // update ref value with the latest url
    ref.current = url;

    fetch(`/some-data-url/${id}`).then((result) => {
        // compare the latest url with the result's url
        // only update state if the result actually belongs to that url
        if (result.url === ref.current) {
            result.json().then((r) => {
                setData(r);
            });
        }
    });
}, [url]);
};
// normal useEffect
useEffect(() => {
  // "cleanup" function - a function that is returned from useEffect
  return () => {
    // clean something up here
  };
  // dependency - useEffect will be triggered every time url has changed
}, [url]);
When url changes, the "cleanup" function is triggered first, and only then is
the actual content of useEffect triggered.
This, along with the nature of JavaScript's functions and closures[36], allows
us to do this:
useEffect(() => {
  // local variable for this useEffect's run
  let isActive = true;
  // do fetch here
  return () => {
    // the same local variable from above
    isActive = false;
  };
}, [url]);
The fetch Promise, although async, still exists only within that closure
and has access only to the local variables of the useEffect run that started
it. So when we check the isActive boolean in the .then callback, only
the latest run, the one that hasn't been cleaned up yet, will have the variable
set to true. So all we need now is to check whether we're in the active
closure, and if yes - set state. If no - do nothing. The data will simply
disappear into the void again.
useEffect(() => {
  // set this closure to "active"
  let isActive = true;
  fetch(`/some-data-url/${id}`)
    .then((r) => r.json())
    .then((r) => {
      // if the closure is active - update state
      if (isActive) {
        setData(r);
      }
    });
  return () => {
    // set this closure to "not active" before the next re-render
    isActive = false;
  };
}, [id]);
Another option is to cancel the previous request entirely with AbortController:

useEffect(() => {
  // create the controller here
  const controller = new AbortController();
  return () => {
    // abort the request here
    controller.abort();
  };
}, [url]);
So, on every url change (and on unmount), the request in progress will be
cancelled, and the new one will be the only one allowed to resolve and set state.
In the fetch call itself, we pass the controller's signal and filter out the
errors caused by aborting:

fetch(url, { signal: controller.signal })
  .then((r) => r.json())
  .then((r) => {
    setData(r);
  })
  .catch((error) => {
    // error because of AbortController
    if (error.name === 'AbortError') {
      // do nothing
    } else {
      // do something, it's a real error!
    }
  });
Interactive example and full code
https://advanced-react.com/examples/15/07
With async/await, instead of writing:

fetch('/some-url')
  .then((r) => r.json())
  .then((r) => setData(r));
We'd write:
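The await version isn't shown in the text here; presumably it would look something like this sketch (loadData is a made-up wrapper so the snippet is self-contained; in the component, the returned data would then go into setData):

```javascript
// the same request, with await unwrapping each promise
// instead of chaining .then callbacks
async function loadData(url) {
  const response = await fetch(url);
  const data = await response.json();
  return data; // in the component: setData(data)
}
```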
And all the solutions and reasons from above apply, just the syntax will be
slightly different.
Key takeaways
Hope you're impressed by how cool and innocent-looking race conditions
can be, and that you're now able to detect and avoid them with ease. In the
final chapter, we'll close the conversation about advanced React patterns with
the topic of "what to do if something goes terribly wrong?". But before that, a
few things to remember about Promises and race conditions:
A race condition can happen when we update state multiple times after
a promise is resolved in the same React component.
useEffect(() => {
  fetch(url)
    .then((r) => r.json())
    .then((r) => {
      // this is vulnerable to race conditions
      setData(r);
    });
}, [url]);
We all want our apps to be stable, work perfectly, and cater to every edge
case imaginable, don't we? But the sad reality is that we are all humans (at
least that is my assumption), we all make mistakes, and there is no such
thing as bug-free code. No matter how careful we are or how many
automated tests we write, there will always be a situation when something
goes terribly wrong. The important thing, when it comes to user experience,
is to predict that terrible thing, localize it as much as possible, and deal with
it in a graceful way until it can be actually fixed.
The answer is simple: starting from version 16, an error thrown during the
React lifecycle will cause the entire app to unmount itself if not stopped.
Before that, components would be preserved on the screen, even if
malformed and misbehaved. Now, an unfortunate uncaught error in some
insignificant part of the UI, or even some external library that you have no
control over, can destroy the entire page and render an empty screen for
everyone.
We have our good old try/catch[39] statement, which is more or less self-
explanatory: try to do something, and if it fails - catch the mistake and do
something to mitigate it:
try {
  // if we're doing something wrong, this might throw an error
  doSomething();
} catch (e) {
  // if an error happened, catch it and do something with it without stopping the app
  // like sending this error to some logging service
}
This also works with async functions with the same syntax:
try {
  await fetch('/bla-bla');
} catch (e) {
  // oh no, the fetch failed! We should do something about it!
}
Or, for promises specifically, we can use their native catch method:

fetch('/bla-bla')
  .then((result) => {
    // if a promise is successful, the result will be here
    // we can do something useful with it
  })
  .catch((e) => {
    // oh no, the fetch failed! We should do something about it!
  });
It's the same concept, just a bit different implementation, so for the rest of
the chapter, I'm just going to use try/catch syntax for all errors.
The most obvious and intuitive answer would be to render something while
we wait for the fix. Luckily, we can do whatever we want in that catch
statement, including setting the state. So we can do something like this:
useEffect(() => {
  try {
    // do something like fetching some data
  } catch (e) {
    // oh no! the fetch failed, we have no data to render!
    setHasError(true);
  }
});
We try to send a fetch request; if it fails, we set the error state, and if the
error state is true, we render an error screen with some additional info for
users, like a support contact number.
This approach is pretty straightforward and works great for simple,
predictable, and narrow use cases like catching a failed fetch request.
But if you want to catch all errors that can happen in a component, you'll
face some challenges and serious limitations.
For example, try/catch won't catch anything if we wrap the useEffect hook
with it:

try {
  useEffect(() => {
    throw new Error('Hulk smash!');
  }, []);
} catch (e) {
  // useEffect throws, but this will never be called
}

This is because useEffect is called asynchronously, after the render, so from
the try/catch perspective everything has already executed successfully by the
time the effect actually throws.
The same is true for trying to catch errors that happen during the render of a
child component:

const Component = () => {
  let child;
  try {
    child = <Child />;
  } catch (e) {
    // useless for catching errors inside the Child component, won't be triggered
  }
  return child;
};
or even this:
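The code block for this variant is missing from the text; presumably it's the same attempt with the return wrapped directly, something like this sketch:

```jsx
const Component = () => {
  try {
    // still useless: <Child /> only creates an element description here,
    // the actual render of Child happens later, outside this try/catch
    return <Child />;
  } catch (e) {
    // won't be triggered
  }
};
```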
This is because writing <Child /> doesn't actually render the component; it
merely creates an element, a description of what should be rendered. The
actual render will happen after the try/catch block has already executed
successfully - exactly the same story as with promises and the useEffect hook.
Simple code like this, for example, will cause an infinite loop of re-renders
if an error happens:
const Component = () => {
  const [hasError, setHasError] = useState(false);
  try {
    doSomethingComplicated();
  } catch (e) {
    // don't do that! will cause an infinite loop in case of an error
    // see the codesandbox below with a live example
    setHasError(true);
  }
};
We could, of course, just return the error screen here instead of setting state:
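The code block is missing here; presumably it's something like this sketch, with SomeErrorScreen being the same placeholder error component used elsewhere in this chapter:

```jsx
const SomeComponent = () => {
  try {
    // do something during render
  } catch (e) {
    // just return the error screen directly instead of setting state
    return <SomeErrorScreen />;
  }
  // ... normal return otherwise
};
```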
But this, as you can imagine, is a bit cumbersome and forces us to handle
errors in the same component differently: state for useEffect and
callbacks, and direct return for everything else.
// while it will work, it's super cumbersome and hard to maintain, don't do that
const SomeComponent = () => {
  const [hasError, setHasError] = useState(false);
  useEffect(() => {
    try {
      // do something like fetching some data
    } catch (e) {
      // can't just return in case of errors in useEffect or callbacks
      // so we have to use state
      setHasError(true);
    }
  });
  try {
    // do something during render
  } catch (e) {
    // but here we can't use state, so we have to return directly in case of an error
    return <SomeErrorScreen />;
  }
  return <SomeComponentContent {...datasomething} />;
};
React ErrorBoundary component
To mitigate the limitations from above, React gives us what is known as
"Error Boundaries"[40]: a special API that turns a regular component into
something like a try/catch statement, but only for React declarative code.
Typical usage that you can see in every example out there, including the
React docs, will be something like this:
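The usage example is missing from the text here; presumably it looks something like this sketch, with SomeChildComponent as a placeholder for anything that might throw:

```jsx
const Component = () => {
  return (
    // everything rendered inside the boundary is "protected" by it
    <ErrorBoundary>
      <SomeChildComponent />
    </ErrorBoundary>
  );
};
```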
But React doesn't give us the component per se; it just gives us a tool to
implement it. The simplest implementation would look something like this:
class ErrorBoundary extends React.Component {
  constructor(props) {
    super(props);
    // initialize the error state
    this.state = { hasError: false };
  }
  // flip the error state to true when an error happens
  static getDerivedStateFromError() {
    return { hasError: true };
  }
  render() {
    // if an error happened, return a fallback component
    if (this.state.hasError) {
      return <>Oh no! Epic fail!</>;
    }
    return this.props.children;
  }
}
And if we want to do something with the caught error, like sending it to a
logging service, there is the componentDidCatch lifecycle method:

class ErrorBoundary extends React.Component {
  // everything else stays the same
  componentDidCatch(error, errorInfo) {
    // send the error to somewhere here
    log(error, errorInfo);
  }
}
After the error boundary is set up, we can do whatever we want with it,
same as any other component. We can, for example, make it more reusable
and pass the fallback as a prop:
render() {
  // if an error happened, return a fallback component
  if (this.state.hasError) {
    return this.props.fallback;
  }
  return this.props.children;
}
Or anything else that we might need, like resetting state on a button click,
differentiating between types of errors, or pushing that error to a context
somewhere.
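For example, a hedged sketch of resetting on a button click could look like this (only the render method is shown; the button and its wording are assumptions):

```jsx
render() {
  if (this.state.hasError) {
    return (
      <>
        {this.props.fallback}
        {/* clearing the error state will attempt to render children again */}
        <button onClick={() => this.setState({ hasError: false })}>
          Try again
        </button>
      </>
    );
  }
  return this.props.children;
}
```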
ErrorBoundary component:
limitations
Error boundaries only catch errors that happen during the React lifecycle.
Things that happen outside of it, like resolved promises, async code with
setTimeout, various callbacks, and event handlers, will disappear if not
dealt with explicitly.
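For example, an error thrown inside an event handler will never reach the boundary. A sketch (the Component name matches the usage that follows):

```jsx
const Component = () => {
  const onClick = () => {
    // this error is thrown outside of the React lifecycle,
    // so the ErrorBoundary around this component won't catch it
    throw new Error('Gotcha!');
  };
  return <button onClick={onClick}>click me!</button>;
};
```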
const ComponentWithBoundary = () => {
  return (
    <ErrorBoundary>
      <Component />
    </ErrorBoundary>
  );
};
if (hasError) return 'something went wrong';
But. We're back to square one: every component needs to maintain its
"error" state and, more importantly, make a decision on what to do with it.
const ComponentWithBoundary = () => {
  const [hasError, setHasError] = useState();
  const fallback = 'Oh no! Something went wrong';
  return (
    <ErrorBoundary fallback={fallback}>
      <Component onError={() => setHasError(true)} />
    </ErrorBoundary>
  );
};
But it's so much additional code! We'd have to do it for every child
component in the render tree. Not to mention that we're basically
maintaining two error states now: one in the parent component and another
in ErrorBoundary itself. And ErrorBoundary already has all the
mechanisms in place to propagate the errors up the tree, so we're doing
double work here.
Can't we just catch those errors from async code and event handlers with
ErrorBoundary instead?
The trick here is to catch those errors first with try/catch. Then, inside the
catch statement, trigger a normal React re-render, and then re-throw those
errors back into the re-render lifecycle. That way, ErrorBoundary can
catch them like any other error. And since a state update is the way to
trigger a re-render, and the state set function can actually accept an updater
function[42] as an argument, the solution is pure magic:
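The hook itself is missing from the text here; based on the description above, it would presumably look something like this sketch (the useThrowAsyncError name matches the throwAsyncError usage that follows):

```jsx
const useThrowAsyncError = () => {
  const [state, setState] = useState();
  return (error) => {
    // throwing from inside the state updater function re-throws the error
    // during the re-render, where an ErrorBoundary can catch it
    setState(() => {
      throw error;
    });
  };
};
```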
const Component = () => {
  const throwAsyncError = useThrowAsyncError();
  useEffect(() => {
    fetch('/bla')
      .then()
      .catch((e) => {
        // throw the async error here!
        throwAsyncError(e);
      });
  });
};
const useCallbackWithErrorHandling = (callback) => {
  const [state, setState] = useState();
  return (...args) => {
    try {
      // try to execute the callback as normal
      callback(...args);
    } catch (e) {
      // re-throw the caught error from inside the state updater function,
      // so that the nearest ErrorBoundary can pick it up
      setState(() => {
        throw e;
      });
    }
  };
};

const Component = ({ onClick }) => {
  const onClickWithErrorHandler = useCallbackWithErrorHandling(onClick);
  return (
    <button onClick={onClickWithErrorHandler}>click me!</button>
  );
};
Or anything else that your heart desires and the app requires. There are no
limits! And no errors will get away anymore.
Key takeaways
That is it for errors, for this chapter, and in fact for the whole book! Hope
this was an enjoyable experience for you. And don't forget the key points
about dealing with errors in React.
That's a wrap! Congratulations, you've made it! Hope it was worth the time
spent, you learned plenty of new things, and most importantly, had fun
while doing so.
Read the author's blog for more content like in this book:
https://www.developerway.com
Subscribe to the newsletter for updates on new product releases. There may
or may not be a video course coming on the topic of this book ;):
https://www.advanced-react.com
Nadia
Footnotes