The magic memoization for the State management. ✨🧠
"Caching (aka memoization) is a very powerful optimization technique; however, it only makes sense when maintaining the cache and looking up cached results is cheaper than performing the computation itself again." (You don't need WASM to speed up JS)
A blazing fast, usage-tracking based selection and memoization library which always works....
Read me: How I wrote the world's fastest memoization library
Reselect? Memoize-one? Most memoization libraries remember the parameters you provided, not what you did inside. Sometimes it is not easy to achieve a high cache hit ratio. Sometimes you have to think about how to properly dissolve a computation into memoizable parts.
I don't want to think how to use memoization, I want to use memoization!
Memoize-state is built to memoize more complex situations, even ones which are faster to recompute than to decide that a recalculation is not needed, simply because one cheap computation can cause a redraw/reflow/recomputation cascade for the whole application.
Let's imagine some complex function:
```js
const fn = memoize((number, state, string) => ({ result: state[string] + number }));

let firstValue = fn(1, { value: 1, otherValue: 1 }, 'value');  // first call
firstValue === fn(1, { value: 1, otherValue: 2 }, 'value');    // "nothing" changed
firstValue === fn(1, { value: 1, somethingElse: 3 }, 'value'); // "nothing" changed
firstValue !== fn(2, { value: 1, somethingElse: 3 }, 'value'); // something important changed
```
All ordinary memoization libraries will drop the cache each time, as long as the state is different each time. Worse, they will return a unique object each time, as long as the function returns a new object each time. But not today!
Memoize-state tracks the used state parts, using the same magic you can find in MobX or immer. It will know that it should react only to a change of some state.value1, but not value2. Perfect.
Now you are able to write functions AS YOU WANT. Memoize-state will detect all the really used arguments, variables and keys, and then react only to the right changes.
- React-memoize - magic memoization for React, componentWillReceiveProps optimization, selection from context, whole SFC memoization.
- beautiful-react-redux - instant memoization for React-Redux
- why-did-you-update-redux - selector quality checker
- react-tracked - React Context API made using the same principles.
- your project!
memoizeState(function, options) - creates a memoized variant of a function.
- Name, length (argument count), and any other own key will be transferred to the memoized result.
- If an argument is an object, memoize will perform a proxyequal comparison, which results in true if you did not access any changed object member.
- If an argument is not an object, memoize will compare values.
- The resulting function will have a cacheStatistics method. JFYI.
- cacheSize, default 1. The size of the cache.
- shallowCheck, default true. Perform a shallow-equal check between arguments.
- equalCheck, default true. Perform a deep proxyequal comparison.
- strictArity, default false. Limit the argument count to the function's default.
- nestedEquality, default true. Keep object equality for sub-proxies.
- safe, default false. Activate the safe memoization mode. See below.
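For illustration, here is a minimal sketch of passing these options; the selector and the state shape are hypothetical:

```js
import memoize from 'memoize-state';

// hypothetical selector, shown only to illustrate the options listed above
const selectTodosByStatus = memoize(
  (state, status) => state.todos.filter((todo) => todo.status === status),
  {
    cacheSize: 2, // remember the two most recent argument sets instead of one
    safe: true,   // double-check against internal memoization (see the "safe" section below)
  }
);

// the memoized function also exposes cacheStatistics(), as noted above
```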
You know the rule: it should be a pure function, returning the same results for the same arguments. mapStateToProps should be strictly equal across different calls, mapStateToProps(state) === mapStateToProps(state), or at least shallow equal, shallowEqual(mapStateToProps(state), mapStateToProps(state)).
Creating a good memoization function with reselect while avoiding side effects can be hard. I know.
Memoize-state was created to solve this case, especially this case.
Memoize-state will track the way you USE the state.
```js
const state = {
  branch1: { /* ... */ },
  branch2: { someKey1: 1, someKey2: 2 },
};

const aFunction = (state) => state.branch2.someKey2 && Math.random();
const fastFunction = memoize(aFunction);
```
After the first call memoize-state will detect the used parts of the state, and then react only to changes inside them:
```js
const result1 = fastFunction(state);
// result1 is some random value. 42, for example

const result2 = fastFunction({ branch2: { someKey2: 2 } });
// result2 is the same value! The new state is `proxyequal` to the old one

const result3 = fastFunction({ branch2: { someKey2: 3 } });
// result3 is a NEW value, at last
```
- Wrap mapStateToProps with memoize.
- Choose the memoization options (unsafe by default).
```js
import memoize from 'memoize-state';

const mapStateToProps = memoize((state, props) => {
  //....
});
```
You can use compose (flow, flowRight) to pipe the result from one memoized function to another, but it is better to use the bundled memoizedFlow.
! All functions accept an Object as input and return an Object as output.
```js
import { memoizedFlow, memoizedFlowRight, memoizedPipe, memoizedCompose } from 'memoize-state';
import flow from 'lodash.flow';

// memoizedFlow will merge each result with the current input,
// thus you don't have to accept and re-return all the keys,
// and memoization will work
const sequence = memoizedFlow([
  ({ a, b }) => ({ sumAB: a + b }),
  ({ a, c }) => ({ sumAC: a + c }),
  ({ sumAB, sumAC }) => ({ result: sumAB + sumAC }),
]);
sequence({ a: 1, b: 1, c: 1 }); // => { a: 1, b: 1, c: 1, sumAB: 2, sumAC: 2, result: 4 }

// ----------------

// With plain flow you have to re-throw all the variables you might need in the future,
// and memoization will not work properly: step 2 will be recomputed when you change b,
// as it depends on sumAB from step 1
const plainSequence = flow([
  ({ a, b, c }) => ({ sumAB: a + b, a, c }),
  ({ a, c, sumAB }) => ({ sumAC: a + c, sumAB }),
  ({ sumAB, sumAC }) => ({ result: sumAB + sumAC }),
]);
plainSequence({ a: 1, b: 1, c: 1 }); // => { result: 4 }
```
memoizedFlow is equal to memoizedPipe and applies functions from first to last. memoizedFlowRight is equal to memoizedCompose and applies functions from last to first (right to left).
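For illustration, a minimal sketch of memoizedFlowRight, assuming it merges results with the input the same way memoizedFlow does; the steps mirror the example above:

```js
import { memoizedFlowRight } from 'memoize-state';

// same steps as the memoizedFlow example, listed in reverse order,
// since memoizedFlowRight applies them from last to first
const rightSequence = memoizedFlowRight([
  ({ sumAB, sumAC }) => ({ result: sumAB + sumAC }), // applied third
  ({ a, c }) => ({ sumAC: a + c }),                  // applied second
  ({ a, b }) => ({ sumAB: a + b }),                  // applied first
]);

rightSequence({ a: 1, b: 1, c: 1 }); // => { a: 1, b: 1, c: 1, sumAB: 2, sumAC: 2, result: 4 }
```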
You can also use memoize-state to double-check your selectors.
```js
import { shouldBePure } from 'memoize-state';

const mapStateToProps = shouldBePure((state, props) => {
  //....
});
// shouldBePure will log every situation when the result was not
// shallow equal to the previous one, but should have been
```
shouldBePure will deactivate itself in a production env. Use shallBePure if you need it always enabled.
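A minimal sketch of the always-on variant; the selector here is hypothetical:

```js
import { shallBePure } from 'memoize-state';

// hypothetical selector: behaves like shouldBePure, but stays enabled
// even when NODE_ENV === 'production'
const checkedSelector = shallBePure((state) => ({ user: state.user }));
```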
Not all functions can be safely memoized. Just not all of them. The wrapped function has to be pure.
```js
let cache = 0;
const func = (state) => cache || (cache = state.a);
const memoizedState = memoize(func);

memoizedState({ a: 1 }); // will return 1 AND fill up the cache
memoizedState({ a: 2 }); // will return 1 FROM the cache, and not read anything from the state
memoizedState({ a: 3 }); // memoize-state saw that you did not read anything from the state,
                         // and will ignore __ANY__ changes. __FOREVER__!
```
PS: this would not have happened if state.a were an object. Memoize-state understands the case when you are returning a part of the state.
It's easy to fix with memoize(func, { safe: true }), but then func will be called twice to detect internal memoization.
If internal memoization is detected, safe memoization will deactivate itself.
The check is performed only twice: once on execution, and once on the first cached result. In both cases the wrapped function should return the "same" result.
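For illustration, a minimal sketch of the safe mode applied to the impure function from the example above:

```js
import memoize from 'memoize-state';

let cache = 0;
const func = (state) => cache || (cache = state.a);

// with safe: true, func is called twice to detect its internal memoization;
// if internal memoization is detected, memoize-state deactivates itself
// instead of silently serving stale results
const safeMemoized = memoize(func, { safe: true });
```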
Yes, you could. But memoize-state could disable other underlying memoization libraries.
Not everything is simple. Memoize-state works on copies of the original object, returning the original object if you have returned a copy.
That means: if you take an array, sort it, and return the result, you will return an unsorted result.
The input has to be treated as immutable: don't sort it, don't mutate it, don't forget to Array.slice() first. You are the right person to watch over it.
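For illustration, a minimal sketch of the pitfall and the fix; `state.items` is a hypothetical array inside the state:

```js
import memoize from 'memoize-state';

// BAD: sorts the tracked input in place; you may get the original, unsorted array back
const badSorted = memoize((state) => state.items.sort());

// GOOD: copy with Array.slice() first, then sort the copy
const goodSorted = memoize((state) => state.items.slice().sort());
```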
Uses ES6 Proxy underneath to detect the used branches of a state (as MobX does). Removes all the magic from the result value. Should be slower than "manual" reselect selectors, but faster than anything else.
We have a performance test; according to its results:
- memoize-state is not slower than the major competitors, and is 10-100x faster for the "state" cases.
- lodash.memoize and fast-memoize could not handle big states as input.
- memoize-one should be super fast, but it is not.

But the major difference is:

- memoize-state has the highest hit ratio, which means it was able to "memoize" most of the cases.
```
function of 3 arguments, all unchanged
  base           x 10230 ops/sec ±2.63% (5 runs sampled)    hitratio 0%   5700/5700
  memoize-one    x 24150462 ops/sec ±3.02% (6 runs sampled) hitratio 100% 1/14019795
  lodash.memoize x 2954428 ops/sec ±4.02% (6 runs sampled)  hitratio 100% 1/15818699
  fast-memoize   x 1065755 ops/sec ±3.22% (6 runs sampled)  hitratio 100% 1/16243313
  memoize-state  x 4910783 ops/sec ±2.55% (5 runs sampled)  hitratio 100% 1/18929141
  Fastest is memoize-one

function of 1 argument, object unchanged
  base           x 408704195 ops/sec ±0.55% (5 runs sampled) hitratio 100% 0/188881067
  memoize-one    x 77024718 ops/sec ±1.78% (6 runs sampled)  hitratio 100% 0/221442642
  lodash.memoize x 3776797 ops/sec ±1.55% (6 runs sampled)   hitratio 100% 0/223654022
  fast-memoize   x 75375793 ops/sec ±3.08% (6 runs sampled)  hitratio 100% 0/267664702
  memoize-state  x 5690401 ops/sec ±3.77% (5 runs sampled)   hitratio 100% 0/271589669
  Fastest is base

function of 1 argument, object unchanged
  base           x 398167311 ops/sec ±0.50% (6 runs sampled) hitratio 100% 0/190155405
  memoize-one    x 76062398 ops/sec ±3.71% (6 runs sampled)  hitratio 100% 0/231172341
  lodash.memoize x 3734556 ops/sec ±6.70% (6 runs sampled)   hitratio 100% 0/233243184
  fast-memoize   x 37234595 ops/sec ±2.30% (6 runs sampled)  hitratio 100% 0/250419641
  memoize-state  x 639290 ops/sec ±6.09% (6 runs sampled)    hitratio 100% 0/250718787
  Fastest is base

function of 2 arguments, providing 3, all unchanged
  base           x 10426 ops/sec ±3.01% (6 runs sampled)    hitratio 0%   3712/3712
  memoize-one    x 24164455 ops/sec ±6.67% (6 runs sampled) hitratio 100% 1/15190474
  lodash.memoize x 2826340 ops/sec ±3.44% (6 runs sampled)  hitratio 100% 1/16624930
  fast-memoize   x 1070852 ops/sec ±2.70% (6 runs sampled)  hitratio 100% 1/17155394
  memoize-state  x 4966459 ops/sec ±1.13% (5 runs sampled)  hitratio 100% 1/19324311
  Fastest is memoize-one

function of 3 arguments, all changed / 10
  base           x 10189 ops/sec ±3.13% (6 runs sampled)  hitratio 0%  3657/3657
  memoize-one    x 19842 ops/sec ±2.73% (6 runs sampled)  hitratio 63% 5316/14288
  lodash.memoize x 33160 ops/sec ±1.45% (5 runs sampled)  hitratio 83% 5782/33561
  fast-memoize   x 19029 ops/sec ±6.04% (5 runs sampled)  hitratio 86% 6731/47024
  memoize-state  x 18527 ops/sec ±10.56% (5 runs sampled) hitratio 93% 3868/54760
  Fastest is lodash.memoize

function with an object as argument, returning a part
  base           x 10095 ops/sec ±3.49% (5 runs sampled)   hitratio 0%   4107/4107
  memoize-one    x 10054 ops/sec ±3.14% (6 runs sampled)   hitratio 50%  4141/8249
  lodash.memoize x 1695449 ops/sec ±3.68% (6 runs sampled) hitratio 100% 1/950379
  fast-memoize   x 1287216 ops/sec ±1.29% (6 runs sampled) hitratio 100% 1/1590863
  memoize-state  x 1574688 ops/sec ±2.24% (6 runs sampled) hitratio 100% 1/2469327
  Fastest is lodash.memoize

function with an object as argument, changing value, returning a part
  base           x 10187 ops/sec ±1.66% (6 runs sampled)  hitratio 0%  4179/4179
  memoize-one    x 10205 ops/sec ±3.96% (6 runs sampled)  hitratio 50% 4174/8354
  lodash.memoize x 87943 ops/sec ±12.70% (5 runs sampled) hitratio 92% 4138/49727
  fast-memoize   x 90510 ops/sec ±1.05% (6 runs sampled)  hitratio 96% 3972/89439
  memoize-state  x 76372 ops/sec ±6.67% (6 runs sampled)  hitratio 97% 3612/125554
  Fastest is fast-memoize, lodash.memoize

function with an object as argument, changing other value, returning a part
  base           x 9867 ops/sec ±7.72% (5 runs sampled)    hitratio 0%   4537/4537
  memoize-one    x 10066 ops/sec ±4.24% (5 runs sampled)   hitratio 47%  5059/9597
  lodash.memoize x 92596 ops/sec ±0.61% (6 runs sampled)   hitratio 92%  4515/54745
  fast-memoize   x 89224 ops/sec ±1.24% (5 runs sampled)   hitratio 96%  3445/89181
  memoize-state  x 1469865 ops/sec ±2.95% (5 runs sampled) hitratio 100% 1/805990
  Fastest is memoize-state

function with 2 objects as argument, changing both values
  base           x 10127 ops/sec ±2.21% (5 runs sampled) hitratio 0%  5489/5489
  memoize-one    x 10030 ops/sec ±3.97% (6 runs sampled) hitratio 60% 3702/9192
  lodash.memoize x 9745 ops/sec ±4.69% (6 runs sampled)  hitratio 70% 3997/13190
  fast-memoize   x 9268 ops/sec ±5.04% (5 runs sampled)  hitratio 77% 3855/17046
  memoize-state  x 63493 ops/sec ±6.49% (6 runs sampled) hitratio 94% 2736/44395
  Fastest is memoize-state

when anything changes, except what the function is going to consume
  base           x 9901 ops/sec ±3.78% (6 runs sampled)   hitratio 0%   5121/5121
  memoize-one    x 10087 ops/sec ±2.59% (6 runs sampled)  hitratio 57%  3914/9036
  lodash.memoize x 9643 ops/sec ±1.25% (6 runs sampled)   hitratio 67%  4361/13398
  fast-memoize   x 9554 ops/sec ±1.13% (6 runs sampled)   hitratio 76%  4228/17627
  memoize-state  x 520442 ops/sec ±1.54% (5 runs sampled) hitratio 100% 1/270727
  Fastest is memoize-state

when the state is very big, and you need a small part
  base           x 10097 ops/sec ±1.63% (6 runs sampled) hitratio 0%   4428/4428
  memoize-one    x 9262 ops/sec ±6.27% (5 runs sampled)  hitratio 53%  3974/8403
  lodash.memoize x 276 ops/sec ±3.31% (6 runs sampled)   hitratio 100% 12/8516
  fast-memoize   x 280 ops/sec ±4.77% (6 runs sampled)   hitratio 100% 10/8615
  memoize-state  x 83005 ops/sec ±6.47% (6 runs sampled) hitratio 92%  4042/49019
  Fastest is memoize-state
```

```js
function fn1(object) { return object.value }
// ^^ memoize-state will react to any change of .value

function fn2(object) { return { ...object.value } }
// ^^ memoize-state will react to any change of the values inside .value
// for example, if value contains booleans X and Y, they form 4 possible pairs

const superMemoize = memoize(fn2, { cacheSize: 4 });
// ^^ you just got an uber function, which will return 4 exactly the same objects
```
Executing each library against an EMPTY function, but triggering most of the internal mechanics:
```
base           x 244.000.431
memoize-one    x  18.150.966
lodash.memoize x   3.941.183
fast-memoize   x  34.699.858
memoize-state  x   4.615.104
```

Is 4 million operations per second enough? A bit more than enough.
Memoize-state is not the best fit for the common case. It is designed to handle:
- complex objects
- a limited count of stored cache lines (default: 1)
This is the fibonacci test from fast-memoize. The test uses a different performance-measuring tool, and the numbers differ.
| Name | Ops/sec | Margin of error | Samples |
|---|---|---|---|
| fast-memoize@current | 204,819,529 | ± 0.85% | 88 |
| lru-memoize (single cache) | 84,862,416 | ± 0.59% | 93 |
| iMemoized | 35,008,566 | ± 1.29% | 90 |
| lodash | 24,197,907 | ± 3.70% | 82 |
| underscore | 17,308,464 | ± 2.79% | 87 |
| memoize-state <<---- | 17,175,290 | ± 0.80% | 87 |
| memoizee | 12,908,819 | ± 2.60% | 78 |
| lru-memoize (with limit) | 9,357,237 | ± 0.47% | 91 |
| ramda | 1,323,820 | ± 0.54% | 92 |
| vanilla | 122,835 | ± 0.72% | 89 |

memoize-state is comparable with lodash and underscore, even in this example.
memoize-state: object spread detected in XXX. Consider refactoring.
Memoize-state cannot work properly if you "spread" the state:
```js
const mapStateToProps = ({ prop, i, need, ...rest }) => ....
// or
const mapStateToProps = (state, props) => ({ ...state, ...props });
// or
const mapState = ({ page, direction, ...state }) => ({
  page,
  direction,
  isLoading: isLoading(state),
});
```
It will assume that you need ALL the keys, while actually you might not.
Workaround: refactor the code.
```js
const mapState = (state) => ({
  page: state.page,
  direction: state.direction,
  isLoading: isLoading(state),
});
```
See the issue for more details.
IE11/Android compatible. Contains proxy-polyfill inside.
MIT