Immutability and React

Immutability through persistent data structures

Immutable data offers many advantages: it makes programs simpler, easier to reason about, and in some cases faster. Unfortunately, JavaScript was not designed with immutability in mind. In the past, processors were much slower and efficient persistent data structures had not yet been implemented, so immutable data was considered too slow to be practical. As we will see, a naive implementation of an immutable update is indeed horribly inefficient, because it copies the whole data structure into a newly created one, which is slow and wastes memory.

Thankfully, people have implemented less naive data structures. The rough idea behind those implementations is that they keep references to all the unchanged data, and only allocate new space for the parts of the data structure that actually changed.
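To build an intuition for this structural sharing, here is a tiny one-level sketch in plain JavaScript (not a real persistent data structure, just an illustration of the idea):

// The new object gets its own top-level record, but the untouched
// branch is shared by reference instead of being copied.
const tree = {
  left: { value: 1 },
  right: { value: 2 },
};

// "Update" the left branch without touching the right one.
const newTree = { ...tree, left: { value: 42 } };

console.log(newTree.right === tree.right); // true: the unchanged part is shared
console.log(newTree.left === tree.left);   // false: only the changed part was reallocated

Real persistent data structures apply the same trick recursively, using trees with a high branching factor, so an update only reallocates a handful of small nodes.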

In the end, we might still take a performance hit by using persistent data structures, but we'll also learn about the nice features they provide, which allow us to regain speed in some areas of our application logic.

A naive implementation of immutability

Let's start by creating our own naive implementation of an immutable update. When they need to avoid mutating data structures in ES6, for example when working with Redux, people often use the object spread operator to keep the state store immutable. Let's declare a function that does it for us:

function isObject(val) {
  if (val === null) {
    return false;
  }
  return typeof val === 'function' || typeof val === 'object';
}

// Shallow-copies an array or a plain object; primitives are returned as-is.
function cloneDs(ds) {
  if (Array.isArray(ds)) {
    return ds.slice(0);
  }
  if (isObject(ds)) {
    const newObj = {};
    for (const key in ds) {
      newObj[key] = ds[key];
    }
    return newObj;
  }
  return ds;
}

// Returns a new object with the value at keysPath replaced by newVal,
// cloning every object along the path. Note that it consumes keysPath.
function updateObject(ds, keysPath, newVal) {
  if (!keysPath || !keysPath.length) {
    return newVal;
  }
  const newObj = cloneDs(ds);
  const key = keysPath.shift();
  newObj[key] = updateObject(ds[key], keysPath, newVal);
  return newObj;
}

Let's also declare a helper function to test the shallow equality of our data structures, which, in fact, consists of checking whether two variables contain references to the same data structure in memory.

function hasItChanged(oldDs, newDs) {
  return oldDs !== newDs;
}

Now, let's create an object, then update it, using our shiny new immutability helpers!

const state = { immutable: { wait: 'what?' } };
const updatedState = updateObject(state, ['immutable', 'wait'], 'what?');

hasItChanged(state, updatedState); // true

As we can see, hasItChanged returns true even though the object's contents haven't really changed, because updatedState was recreated from scratch by copying the contents of state. From JavaScript's point of view, these look like two completely different objects.

If we always use updateObject to update objects, we can assume that, from now on, two different references will point to two objects containing different data. So we could implement a cheap equality check and only clone an object when the new value is actually different from the old one.

// Clones ds, then sets key k to value v on the clone.
function cloneAndUpdate(ds, k, v) {
  const newDs = cloneDs(ds);
  newDs[k] = v;
  return newDs;
}

// Like updateObject, but returns the original object untouched when the
// new value is identical to the current one, so references are preserved.
function setIn(obj, keysPath, newVal) {
  if (keysPath && keysPath.length) {
    const key = keysPath.shift();
    const valueUpdated = setIn(obj[key], keysPath, newVal);
    return obj[key] === valueUpdated ? obj : cloneAndUpdate(obj, key, valueUpdated);
  }
  return newVal === obj ? obj : newVal;
}

const state2 = { immutable: { wait: 'what?' } };
const updatedState2 = setIn(state2, ['immutable', 'wait'], 'what?');

hasItChanged(state2, updatedState2); // false: nothing changed, so we kept the same reference

Now we have a function that lets us work with objects as if they were immutable. This is nice, but it has several flaws. As we discussed earlier, the first one is that copying the whole object to recreate it is very expensive. Another problem is that our immutable data structures are not native, so we can make mistakes and sometimes forget to update an object the "right way". This makes our data structures difficult to debug, and errors will follow.
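For instance, nothing in the language stops us from mutating our "immutable" state by accident (a hypothetical slip):

const state3 = { immutable: { wait: 'what?' } };

// One careless line silently defeats every reference check downstream:
state3.immutable.wait = 'oops';

Object.freeze can catch some of these mistakes at runtime, but only shallowly, and it only throws in strict mode.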

Implementing immutability the right way, with persistent data structures

Immutable.js and beyond

People at Facebook created Immutable.js because they realized that immutability made React's reconciliation process more efficient. If you use words like "complect" and know the difference between simple and easy, chances are you believe the interest in persistent data structures comes from the interactions of Facebook developers with the ClojureScript community. Legend has it that a Facebook developer thanked David Nolen for "saving Facebook a lot of money" after seeing David demonstrate how ClojureScript + React was faster than vanilla JS just because of immutability. This is impressive because ClojureScript carries all the overhead of the cljs runtime, plus the additional build time of compiling it to JS.

Here is how our previous code would look using Immutable.js. It is easy to check whether our data structures are the same reference or have been updated. And using Immutable.js's Map.prototype.equals method, we can check whether the contents of two data structures are deeply equal:

(function usingImmutable() {
  const iState = Immutable.Map({
    immutable: Immutable.Map({ oh: 'yes' }),
  });
  const updatedIstate = iState.setIn(['immutable', 'oh'], 'yes');
  return {
    hasChanged: hasItChanged(iState, updatedIstate),
    hasSameContents: iState.equals(updatedIstate),
  };
})();
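As a side note, when a value actually changes, Immutable.js still shares the untouched branches between the old and the new map. A small sketch (the variable names are ours):

const a = Immutable.Map({
  left: Immutable.Map({ v: 1 }),
  right: Immutable.Map({ v: 2 }),
});
const b = a.setIn(['left', 'v'], 42);

// The untouched branch is reused, not copied:
a.get('right') === b.get('right'); // true
a.equals(b);                       // false: the contents now differ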

And here is the same thing in ClojureScript:

(def state {:immutable {:oh "yes"}})
(def updatedState (assoc-in state [:immutable :oh] "yes"))

(js/hasItChanged state updatedState)

We personally think that using a library instead of a native implementation of persistent data structures brings a lot of additional complexity to the table. That being said, until asm.js becomes mainstream, it is the best option available to people who absolutely want to stick with native JS.

Why is React faster with persistent data structures?

React is faster with these data structures because, with immutable persistent data, comparing big data structures to see whether they have changed suddenly becomes cheap. React components have a lifecycle method called shouldComponentUpdate that tests... what its name implies: it lets us control whether React should re-render the component when it detects a change in props or state. We can use this test to discard a whole subtree update during reconciliation, and improve performance a lot.
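As a minimal sketch of the idea (the TodoList component and its todos prop are made up for illustration; we assume todos is an Immutable.js collection):

class TodoList extends React.Component {
  shouldComponentUpdate(nextProps) {
    // With persistent data structures, a reference check is enough:
    // the same reference means the contents cannot have changed.
    return this.props.todos !== nextProps.todos;
  }

  render() {
    // Imagine an arbitrarily deep and expensive subtree here.
    return (
      <ul>
        {this.props.todos.map(todo => (
          <li key={todo.get('id')}>{todo.get('text')}</li>
        ))}
      </ul>
    );
  }
}

When the references are equal, shouldComponentUpdate returns false and React skips re-rendering the component for that update, no matter how big the list is.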

Unfortunately, we think it is not yet easy to take advantage of this. There are a lot of gotchas for beginners, like what happens when you declare an arrow function inside a component. Also, the official React documentation mentions that using shouldComponentUpdate inside a stateful component won't exclude the whole subtree, and that this optimization only takes place if we use a React.PureComponent.
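Here is a sketch of the arrow function gotcha mentioned above (Parent, Child and onClick are made-up names):

class Child extends React.PureComponent {
  render() {
    return <button onClick={this.props.onClick}>Click me</button>;
  }
}

class Parent extends React.Component {
  render() {
    // A brand new function is allocated on every render, so Child's
    // onClick prop is never reference-equal to the previous one, and
    // PureComponent's shallow comparison can never skip the update.
    return <Child onClick={() => console.log('clicked')} />;
  }
}

Hoisting the handler to a class method that is bound once restores reference equality between renders.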

Conclusion

Mixing immutable persistent data structures with React promises performance gains and a way to start writing more functional code. However, in practice, there are a lot of gotchas and quirks that make using immutability libraries with JavaScript painful. We would love to see people contributing to functional languages that compile to JS instead of adding complexity to JavaScript just to get something that is not ideal. In the meantime, there are ways to get started and wrap your mind around the way of thinking that immutability introduces. Do you use immutability? Has this article convinced you to try it in your apps? We would love to read your input in the comments section below!
