Redux FAQ: Performance
Table of Contents
- How well does Redux “scale” in terms of performance and architecture?
- Won't calling “all my reducers” for each action be slow?
- Do I have to deep-clone my state in a reducer? Isn't copying my state going to be slow?
- How can I reduce the number of store update events?
- Will having “one state tree” cause memory problems? Will dispatching many actions take up memory?
- Will caching remote data cause memory problems?
Performance
How well does Redux “scale” in terms of performance and architecture?
While there's no single definitive answer to this, most of the time this should not be a concern in either case.
The work done by Redux generally falls into a few areas: processing actions in middleware and reducers (including object duplication for immutable updates), notifying subscribers after actions are dispatched, and updating UI components based on the state changes. While it's certainly possible for each of these to become a performance concern in sufficiently complex situations, there's nothing inherently slow or inefficient about how Redux is implemented. In fact, React Redux in particular is heavily optimized to cut down on unnecessary re-renders, and React-Redux v5 shows noticeable improvements over earlier versions.
That said, Redux is not automatically the most efficient option straight out of the box; rendering performance depends heavily on how you structure and connect your state. For maximum rendering performance in a React application, state should be stored in a normalized shape, many individual components should be connected to the store instead of just a few, and connected list components should pass item IDs to their connected child list items (allowing the list items to look up their own data by ID). This minimizes the overall amount of rendering to be done. Use of memoized selector functions is also an important performance consideration.
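As a rough sketch of the connected-list pattern described above (the TodoList/TodoItem names and the state.todos slice with byId/allIds fields are assumptions for illustration, not a required shape), the list selects only the array of item IDs, and each item connects separately to look up its own data:

```js
import React from 'react'
import { connect } from 'react-redux'

// Each list item connects on its own and looks up its data by ID,
// so only the items whose data actually changed will re-render.
const TodoItem = connect((state, ownProps) => ({
  todo: state.todos.byId[ownProps.id]
}))(({ todo }) => <li>{todo.text}</li>)

// The list only selects the array of IDs. As long as that array
// reference stays the same, the list component itself does not re-render.
const TodoList = connect(state => ({
  ids: state.todos.allIds
}))(({ ids }) => (
  <ul>
    {ids.map(id => (
      <TodoItem key={id} id={id} />
    ))}
  </ul>
))

export default TodoList
```

In a larger app, the mapState functions here would typically be replaced with memoized selectors (for example, ones created with Reselect) so that derived data is only recalculated when its inputs change.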
As for architecture, anecdotal evidence is that Redux works well for varying project and team sizes. Redux is currently used by hundreds of companies and thousands of developers, with several hundred thousand monthly installations from NPM. One developer reported:
for scale, we have ~500 action types, ~400 reducer cases, ~150 components, 5 middlewares, ~200 actions, ~2300 tests
Further information
Articles
- How to Scale React Applications (accompanying talk: Scaling React Applications)
- High-Performance Redux
- Improving React and Redux Perf with Reselect
- Encapsulating the Redux State Tree
- React/Redux Links: Performance - Redux
Discussions
- #310: Who uses Redux?
- #1751: Performance issues with large collections
- React Redux #269: Connect could be used with a custom subscribe method
- React Redux #407: Rewrite connect to offer an advanced API
- React Redux #416: Rewrite connect for better performance and extensibility
- Redux vs MobX TodoMVC Benchmark: #1
- Reddit: What's the best place to keep the initial state?
- Reddit: Help designing Redux state for a single page app
- Reddit: Redux performance issues with a large state object?
- Reddit: React/Redux for Ultra Large Scale apps
- Twitter: Redux scaling
- Twitter: Redux vs MobX benchmark graph - Redux state shape matters
- Stack Overflow: How to optimize small updates to props of nested components?
- Chat log: React/Redux perf - updating a 10K-item Todo list
- Chat log: React/Redux perf - single connection vs many connections
Won't calling “all my reducers” for each action be slow?
It's important to note that a Redux store really only has a single reducer function. The store passes the current state and dispatched action to that one reducer function, and lets the reducer handle things appropriately.
Obviously, trying to handle every possible action in a single function does not scale well, simply in terms of function size and readability, so it makes sense to split the actual work into separate functions that can be called by the top-level reducer. In particular, the commonly suggested pattern is to have a separate sub-reducer function that is responsible for managing updates to a particular slice of state at a specific key. The combineReducers() utility that comes with Redux is one of many possible ways to achieve this. It's also highly suggested to keep your store state as flat and as normalized as possible. Ultimately, though, you are in charge of organizing your reducer logic any way you want.
However, even if you happen to have many different reducer functions composed together, and even with deeply nested state, reducer speed is unlikely to be a problem. JavaScript engines are capable of running a very large number of function calls per second, and most of your reducers are probably just using a switch statement and returning the existing state by default in response to most actions.
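For example, a typical setup along these lines (the visibilityFilter and todos slice names are placeholders) composes slice reducers with combineReducers(), and each slice reducer simply returns its existing state for actions it doesn't handle:

```js
import { combineReducers, createStore } from 'redux'

// Slice reducer: ignores any action it doesn't recognize by
// returning the existing state from the default case.
function visibilityFilter(state = 'SHOW_ALL', action) {
  switch (action.type) {
    case 'SET_VISIBILITY_FILTER':
      return action.filter
    default:
      return state
  }
}

function todos(state = [], action) {
  switch (action.type) {
    case 'ADD_TODO':
      return [...state, { id: action.id, text: action.text, completed: false }]
    default:
      return state
  }
}

// combineReducers() builds the single top-level reducer that the store calls,
// delegating each state key to its slice reducer.
const rootReducer = combineReducers({ visibilityFilter, todos })
const store = createStore(rootReducer)
```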
If you actually are concerned about reducer performance, you can use a utility such as redux-ignore or reduxr-scoped-reducer to ensure that only certain reducers listen to specific actions. You can also use redux-log-slow-reducers to do some performance benchmarking.
Further information
Discussions
- #912: Proposal: action filter utility
- #1303: Redux Performance with Large Store and frequent updates
- Stack Overflow: State in Redux app has the name of the reducer
- Stack Overflow: How does Redux deal with deeply nested models?
Do I have to deep-clone my state in a reducer? Isn't copying my state going to be slow?
Immutably updating state generally means making shallow copies, not deep copies. Shallow copies are much faster than deep copies, because fewer objects and fields have to be copied, and it effectively comes down to moving some pointers around.
However, you do need to create a copied and updated object for each level of nesting that is affected. Although that shouldn't be particularly expensive, it's another good reason why you should keep your state normalized and shallow if possible.
Common Redux misconception: you need to deeply clone the state. Reality: if something inside doesn't change, keep its reference the same!
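Here is a minimal sketch of that idea, assuming an illustrative state shape with user and posts slices: only the objects along the path that changes are copied, and everything else keeps its existing reference.

```js
const initialState = {
  user: { profile: { name: 'Ada', email: 'ada@example.com' } },
  posts: { byId: {}, allIds: [] }
}

function rootReducer(state = initialState, action) {
  switch (action.type) {
    case 'UPDATE_EMAIL':
      // Shallow-copy only the levels on the changed path:
      // root -> user -> profile. The posts slice is untouched,
      // so its reference (and its contents) stay exactly the same.
      return {
        ...state,
        user: {
          ...state.user,
          profile: {
            ...state.user.profile,
            email: action.email
          }
        }
      }
    default:
      return state
  }
}
```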
Further information
Documentation
- Recipes: Structuring Reducers - Prerequisite Concepts
- Recipes: Structuring Reducers - Immutable Update Patterns
Discussions
- #454: Handling big states in reducer
- #758: Why can't state be mutated?
- #994: How to cut the boilerplate when updating nested entities?
- Twitter: common misconception - deep cloning
- Cloning Objects in JavaScript
How can I reduce the number of store update events?
Redux notifies subscribers after each successfully dispatched action (i.e. an action reached the store and was handled by reducers). In some cases, it may be useful to cut down on the number of times subscribers are called, particularly if an action creator dispatches multiple distinct actions in a row.
If you use React, note that you can improve performance of multiple synchronous dispatches by wrapping them in ReactDOM.unstable_batchedUpdates(), but this API is experimental and may be removed in any React release, so don't rely on it too heavily. Take a look at redux-batched-actions (a higher-order reducer that lets you dispatch several actions as if they were one and “unpack” them in the reducer), redux-batched-subscribe (a store enhancer that lets you debounce subscriber calls for multiple dispatches), or redux-batch (a store enhancer that handles dispatching an array of actions with a single subscriber notification).
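As a small sketch of the batching approach (the store import path and action types here are made up for illustration), several synchronous dispatches can be wrapped so that React processes the resulting updates in a single render pass:

```js
import { unstable_batchedUpdates } from 'react-dom'
import { store } from './store' // hypothetical module exporting your store instance

function handleSave(formData) {
  // Without batching, each dispatch notifies subscribers separately and
  // may trigger a separate re-render of connected components.
  unstable_batchedUpdates(() => {
    store.dispatch({ type: 'FORM_SAVED', payload: formData })
    store.dispatch({ type: 'CLEAR_DRAFT' })
    store.dispatch({ type: 'SHOW_NOTIFICATION', message: 'Saved!' })
  })
}
```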
Further information
Discussions
- #125: Strategy for avoiding cascading renders
- #542: Idea: batching actions
- #911: Batching actions
- #1813: Use a loop to support dispatching arrays
- React Redux #263: Huge performance issue when dispatching hundreds of actions
Will having “one state tree” cause memory problems? Will dispatching many actions take up memory?
First, in terms of raw memory usage, Redux is no different than any other JavaScript library. The only difference is that all the various object references are nested together into one tree, instead of being saved in separate, independent model instances as in Backbone. Second, a typical Redux app would probably have somewhat less memory usage than an equivalent Backbone app, because Redux encourages use of plain JavaScript objects and arrays rather than creating instances of Models and Collections. Finally, Redux only holds onto a single state tree reference at a time. Objects that are no longer referenced in that tree will be garbage collected, as usual.
Redux itself does not store a history of actions. The Redux DevTools do store actions so they can be replayed, but those are generally only enabled during development and are not used in production.
Further information
Discussions
- Stack Overflow: Is there any way to "commit" the state in Redux to free memory?
- Stack Overflow: Can a Redux store lead to a memory leak?
- Stack Overflow: Redux and ALL the application state
- Stack Overflow: Memory Usage Concern with Controlled Components
- Reddit: What's the best place to keep initial state?
Will caching remote data cause memory problems?
The amount of memory available to JavaScript applications running in a browser is finite. Caching data will cause performance problems when the size of the cache approaches the amount of available memory. This tends to be a problem when the cached data is exceptionally large or the session is exceptionally long-running. And while it is good to be aware of the potential for these problems, this awareness should not discourage you from efficiently caching reasonable amounts of data.
Here are a few approaches to caching remote data efficiently:
First, only cache as much data as the user needs. If your application displays a paginated list of records, you don't necessarily need to cache the entire collection. Instead, cache what is visible to the user and add to that cache when the user has (or will soon have) an immediate need for more data.
Second, cache an abbreviated form of a record when possible. Sometimes a record includes data that is not relevant to the user. If the application does not depend on this data, it can be omitted from the cache.
Third, only cache a single copy of a record. This is especially important when records contain copies of other records. Cache a unique copy for each record and replace each nested copy with a reference. This is called normalization. Normalization is the preferred approach to storing relational data for several reasons, including efficient memory consumption.
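As a rough sketch combining the first and third points (the action type, field names, and state shape are assumptions, not a prescribed API), a reducer can merge each fetched page into a normalized cache, keeping a single copy of every record keyed by ID and remembering only which IDs belong to each page:

```js
const initialState = { byId: {}, pages: {} }

function recordsCache(state = initialState, action) {
  switch (action.type) {
    case 'RECORDS_PAGE_LOADED': {
      const { pageNumber, records } = action
      // Store one copy of each record, keyed by ID (normalization)...
      const byId = { ...state.byId }
      records.forEach(record => {
        byId[record.id] = record
      })
      // ...and cache only the page that was actually fetched, as a list
      // of IDs, rather than holding the entire collection up front.
      return {
        ...state,
        byId,
        pages: { ...state.pages, [pageNumber]: records.map(r => r.id) }
      }
    }
    default:
      return state
  }
}
```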