Introduction
Staying current in the Javascript ecosystem is not for the faint of heart. It’s challenging for those entering the industry to follow what’s happening amongst the new libraries, frameworks, concepts, and strong opinions.
It’s a good reminder that if you’re on the bleeding edge, you are usually the one bleeding. Defaulting to “boring” technologies, ones you are familiar with, and being a late adopter is often a great choice.
With that said, this post will get us up to speed on the bleeding edge of frameworks in the Javascript ecosystem.
We’ll make sense of the current landscape by looking at the past pain points when building large-scale web applications.
Rather than focus on the proliferation of solutions, we’ll dive into the underlying problems, where each framework gives different answers and makes different trade-offs.
By the end, we’ll have a high-level model of how popular frameworks like React, Svelte, Vue, Solid, Astro, Marko, Fresh, Next, Remix, Qwik, and the “meta frameworks” fit into today’s landscape.
It’s helpful to understand the past to make sense of the present. We’ll start with a trip down memory lane to see the path behind us.
This story’s been told before. This time we’ll focus on the problems that arose on larger projects and sparked alternative approaches and ways of thinking.
A handwavy history of web pages
The web began as static documents linked together. Someone could prepare a document ahead of time, and put it on a computer.
The cool thing now was that somebody else could access it — without having to move their physical body to the same geographic location. Pretty neat.
At some point, we thought it would be cool to make these documents dynamic.
We got technologies like CGI that allowed us to serve different content based on the request.
We then got expressive languages like Perl to write these scripts, influencing the first language explicitly built for the web - PHP.
The nice innovation with PHP was connecting HTML directly to this backend code. It made it easy to programmatically create documents that embedded dynamic values.
One of the most significant breakthroughs for the web was going from this:
```html
<html>
  <body>
    This document has been prepared ahead of time.
    Regards.
  </body>
</html>
```
To having easily embedded dynamic values:
```html
<html>
  <body>
    Y2K? <?php echo time(); ?>
  </body>
</html>
```
Pandora’s box opened
These dynamic pages were a hit. We could now easily customize what we sent to users, including cookies that enabled sessions.
Server-based templating frameworks emerged across the language ecosystems that were now talking to databases. These frameworks made it easy to start with static pages and scale up to dynamic ones.
The web was evolving quickly, and we wanted more interactive experiences. For this we used browser plugins like Flash. For everything else, we would “sprinkle” Javascript fragments over the HTML served from the backend.
Tools like jQuery and Prototype cropped up and smoothed over the rough edges of web APIs and the quirks between the competing browsers.
Fast forwarding and hand waving. Tech companies were getting bigger, and as projects and teams grew, it was common for more business logic to creep into these templates.
Server code was being written to massage data into the server templating language. Templates often evolved into a mishmash of business logic that accessed global variables. Security was becoming a concern, with attacks like SQL injection commonplace.
Eventually we got “Ajax: A New Approach to Web Applications”.
The new thing you could do now was update the page asynchronously, instead of a synchronous refresh.
This pattern was popularized by the first big client-side applications like Google Maps and Google Docs.
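To make the shift concrete, here’s a minimal sketch of the pattern in the XMLHttpRequest style of the era (the endpoint and element id are hypothetical): fetch data in the background and patch one part of the page, rather than reloading the whole document.

```js
// A minimal Ajax-style request (the "/api/stock-price" endpoint and the
// #price element are hypothetical). Only part of the page updates; there is
// no full page refresh.
const xhr = new XMLHttpRequest();
xhr.open('GET', '/api/stock-price');
xhr.onload = () => {
  document.getElementById('price').textContent = xhr.responseText;
};
xhr.send();
```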
We were starting to see the power of the web’s distribution for desktop-style software. It was a significant step forward compared to buying software on CDs down at the shops.
Javascript gets big
When Node came around, the new thing it enabled was writing your backend in the same language as the frontend, all in an async-first model developers were already familiar with.
This was (and is) compelling. With more businesses coming online, the competitive advantage was being able to ship and iterate fast.
Node’s ecosystem emphasized reusing small single-purpose packages you could grab off the shelf to get stuff done.
The frontend backend split
Our appetite for a web that could rival desktop and mobile continued to grow. We now had a collection of reusable “widget” libraries and utilities like jQuery UI, Dojo, MooTools, ExtJS, YUI, etc.
We were getting heavy on those sprinkles and doing more in the frontend. This often led to duplicating templates across the frontend and backend.
Frameworks like Backbone and Knockout and many others popped up. They added separation of concerns to the frontend via the MVC, MVVM, et al. architectures, and were compatible with all the widgets and jQuery plugins we had collected.
Adding structure helped scale all this frontend code. And accelerated moving templates over from the backend.
We were still writing fine-tuned DOM manipulations to update the page and keep components in sync. This problem was non-trivial, and bugs related to data synchronization were common.
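As a rough illustration of what this looked like (the cart state, selectors, and helpers here are hypothetical), every state change had to remember to imperatively update each widget that depended on it:

```js
// Hypothetical "sprinkles" era code: adding an item to the cart means
// remembering to update every widget that displays cart state.
let cart = [];

function addItem(item) {
  cart.push(item);
  $('#cart-count').text(cart.length);             // header badge
  $('#cart-list').append(renderItemHtml(item));   // sidebar list (hypothetical helper)
  $('#checkout-button').prop('disabled', false);  // checkout button
  // Miss one of these updates and the widgets silently fall out of sync.
}
```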
Angular, backed by Google, stepped onto the scene. It promoted a productivity boost by powering up HTML to be dynamic. It came with two-way data binding, with a reactivity system inspired by spreadsheets.
These declarative two-way bindings removed much of the boilerplate in updating things imperatively. This was nice and made us more productive.
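As a rough AngularJS-flavored sketch (the module and component names are made up), a single ng-model declaration kept the input and the model in sync in both directions, with no imperative wiring in between:

```js
// A rough AngularJS-flavored sketch (module and component names are
// hypothetical). ng-model binds the input to the component's `name` in both
// directions: typing updates the model, and model changes update the greeting.
angular.module('app', []).component('greeting', {
  template: `
    <input type="text" ng-model="$ctrl.name" placeholder="Your name">
    <p>Hello {{ $ctrl.name }}!</p>
  `,
  controller: function GreetingController() {
    this.name = '';
  },
});
```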
As things scaled, it became hard to track down what was changing, and performance often suffered as cycles of updates occupied the main thread (today, libraries like Svelte keep two-way bindings while mitigating their downsides).
Alongside the rise of mobile, these productivity-boosting frameworks accelerated the frontend backend split.
This paved the way for exploring different architectures that emphasized this decoupling.
This was a major part of the JAMstack philosophy, which emphasizes pre-baking HTML ahead of time and serving it from a CDN.
At the time, this was a throwback to serving static documents.
But now we had a git-based workflow, robust CDN infrastructure that didn’t rely on a centralized server far away, and decoupled frontends talking to independent APIs. Chucking static assets on a CDN had much lower operational cost than operating a server.
Today, tools like Gatsby, Next, and many others leverage these ideas.
React rises
Hand waving and fast forwarding into the era of big tech. We’re trying to move fast and break things.
For those entering the industry, Javascript was big, and building a decoupled SPA backed by a separate backend was becoming the status quo.
There were a couple of challenges that React was born from at Facebook:
Consistency when data changes frequently: Keeping many widgets in sync with each other was still a significant challenge. A lack of predictability in the data flow made this problematic at scale.
Scaling organizationally: Time to market and speed were prioritized. Onboarding new developers so they could get up to speed quickly and be productive was essential.
React was born and the cool new thing you could do was write frontend code declaratively.
Separation of concerns on the frontend was famously re-thought, where previous MVC frameworks didn’t scale.
Moving up from templates to Javascript-driven JSX was initially hated. But most of us came around.
The component model allowed for decoupling separate frontend teams, who could more easily work on independent components in parallel.
As an architecture, it allowed the layering of components. From shared primitives, to “organisms” composed up to the page’s root.
A unidirectional data flow made state changes easier to understand, trace, and debug. It added the predictability that was hard to find previously.
The virtual DOM meant we could write functions that returned descriptions of the UI and let React figure out the hard bits.
This solved the consistency issues when data changed frequently and made the composition of templates much more ergonomic.
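A minimal sketch of what this looks like (the component and props are hypothetical): we describe the UI for a given state, and React works out which real DOM updates to make.

```jsx
// A hypothetical component: a plain function from props to a description of
// the UI. React diffs this output against the previous render and applies
// the minimal set of real DOM updates.
function Cart({ items, onRemove }) {
  return (
    <ul>
      {items.map((item) => (
        <li key={item.id}>
          {item.name}
          <button onClick={() => onRemove(item.id)}>Remove</button>
        </li>
      ))}
    </ul>
  );
}
```

Data flows one way: a parent owns `items`, passes it down as props, and children report changes back up through callbacks like `onRemove`.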
React at scale - hitting CPU and network limits
More fast forwarding. React’s a hit and has become an industry standard, often even for sites that don’t need its power. At the far end of scale, we start to see some limits.
Running up against the CPU
The DOM was a problem with React’s model. Browsers weren’t built to constantly create and destroy DOM nodes in a continuous rendering cycle.
Like any problem that can be solved by introducing a new level of indirection, React abstracted it behind the virtual DOM.
People need to perceive feedback in under roughly 100ms for things to feel smooth, and much lower when doing things like scrolling.
Combined with a single-threaded environment, this virtual DOM diffing can become the new bottleneck in highly interactive applications.
Large interactive apps were becoming unresponsive to user input while the reconciliation between the virtual DOM and the real DOM happened. Terms like long tasks started popping up.
This led to an entire rewrite of React in 2017 that contained the foundations for concurrent mode.
Runtime costs adding up
Meanwhile moving faster meant shipping more code down the wire. Slow start-up times were an issue as browsers chewed through Javascript.
We started noticing all the implicit runtime costs, not only with HTML and the virtual DOM, but with how we wrote CSS.
The component model smoothed over our experience with CSS. We could colocate styles with components, which improved deletability. A fantastic attribute for anyone who’s been scared to delete CSS code before.
The cascade and all its specificity issues we’d been fiddling with were being abstracted away by CSS in JS libraries.
The first wave of these libraries often came with implicit runtime costs. We needed to wait until the components were rendered before injecting those styles onto the page. This led to styling concerns being baked into Javascript bundles.
At scale, poor performance is often death by a thousand cuts, and we were noticing these costs. This has since led to new CSS in JS libraries that focus on having no runtime cost by using an intelligent pre-compiler to extract stylesheets.
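Linaria is one example of this newer wave; as a rough sketch, styles stay colocated and authored in JS, but a build step extracts them to a static stylesheet, so nothing style-related ships in the runtime bundle:

```jsx
// A rough sketch in the style of Linaria's css tag. Styles are colocated
// with the component, but a build step extracts them into a static .css
// file, and `title` compiles down to a plain class name string.
import { css } from '@linaria/core';

const title = css`
  font-size: 2rem;
  color: rebeccapurple;
`;

export function Title({ children }) {
  return <h2 className={title}>{children}</h2>;
}
```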
Network inefficiency and render-blocking components
When the browser renders HTML, a render-blocking resource like a stylesheet or a script prevents the rest of the HTML from displaying.
Parents often become render-blocking for child components in a component hierarchy. In practice, many components depend on data from a database and code from a CDN (via code-splitting).
This often leads to a waterfall of sequential blocking network requests, where components fetch data after they render, unlocking async child components that then fetch the data they need, repeating the process.
It’s common to see “spinner hell” or cumulative layout shifts where bits of UI pop into the screen as they load.
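Sketched in React terms (the components and fetch helpers are hypothetical), the fetch-on-render version of this looks like the following, where the second request can’t even start until the first component has rendered and resolved its data:

```jsx
import { useEffect, useState } from 'react';

// Fetch-on-render: each component starts its request only after it renders
// (fetchUser, fetchPosts, Spinner and PostList are hypothetical).
function Profile({ id }) {
  const [user, setUser] = useState(null);
  useEffect(() => {
    fetchUser(id).then(setUser); // request 1 starts after Profile renders
  }, [id]);

  if (!user) return <Spinner />;
  return <Timeline userId={user.id} />; // Timeline only mounts now...
}

function Timeline({ userId }) {
  const [posts, setPosts] = useState(null);
  useEffect(() => {
    fetchPosts(userId).then(setPosts); // ...so request 2 starts even later
  }, [userId]);

  if (!posts) return <Spinner />;
  return <PostList posts={posts} />;
}
```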
React has since released Suspense to help smooth over the loading phases of a page. But by default, it does not prevent sequential network waterfalls. Suspense for data fetching allows the pattern of “render as you fetch”.
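A rough sketch of that pattern (fetchProfileData and the read() resource wrapper are hypothetical helpers, not React built-ins): the requests are started before rendering begins, so they run in parallel, and Suspense boundaries handle the loading states while the data streams in.

```jsx
import { Suspense } from 'react';

// Hypothetical helper: fetchProfileData kicks off the user and posts requests
// immediately and returns resources whose read() suspends until data arrives.
// Both requests run in parallel, before rendering starts.
const resource = fetchProfileData(userId);

function ProfilePage() {
  return (
    <Suspense fallback={<Spinner />}>
      <ProfileDetails resource={resource} />
      <Suspense fallback={<Spinner />}>
        <Timeline resource={resource} />
      </Suspense>
    </Suspense>
  );
}

function ProfileDetails({ resource }) {
  const user = resource.user.read(); // suspends until the user request resolves
  return <h1>{user.name}</h1>;
}
```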
How does Facebook address these issues?
We’ll continue our detour to understand how some of React’s tradeoffs are mitigated at scale. This will help frame the patterns in the new frameworks.
Optimizing runtime costs
In React there’s no getting around the runtime cost of the virtual DOM. Concurrent mode is the answer to keeping things responsive in highly interactive experiences.
In the realm of CSS in JS, an internal library called Stylex is used. This keeps the ergonomic developer experience without the runtime cost when thousands of components are rendered.
Optimizing the network
Facebook avoids the sequential network waterfall problem with Relay.
For a given entry point, static analysis determines exactly what code and data need to load.
This means both code and data can be loaded in parallel in an optimized GraphQL query.
This is significantly faster than sequential network waterfalls for initial loads and SPA transitions.
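The rough shape of this (the schema fields are made up): each component declares its data requirements as a GraphQL fragment, and Relay’s compiler statically composes every fragment under a route into a single query that can be fired alongside the code.

```jsx
import { graphql, useFragment } from 'react-relay';

// A hypothetical component declaring exactly the fields it needs as a
// GraphQL fragment. The compiler rolls all fragments for a route into one
// query, so code and data load in parallel instead of in a waterfall.
function Avatar({ userRef }) {
  const user = useFragment(
    graphql`
      fragment Avatar_user on User {
        name
        profilePicture {
          url
        }
      }
    `,
    userRef
  );
  return <img src={user.profilePicture.url} alt={user.name} />;
}
```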
Optimizing Javascript bundles
A fundamental problem here is shipping Javascript that isn’t relevant to specific users.
This is hard when there are A/B tests, feature-flagged experiences, code for particular types and cohorts of users, and language and locale settings.
When there are many forking branches of code, a static dependency graph can’t see the modules that get used together in practice for specific cohorts of users.
Facebook uses an AI-powered dynamic bundling system. This leverages its tight client-server integration to calculate the optimal dependency graph based on the request at runtime.
This is combined with a framework for loading bundles in phased stages, based on priority.
What about the rest of the ecosystem?
Facebook has complex infrastructure and in-house libraries built up over many years. If you’re a big tech company, you can throw an incredible amount of money and resources at optimizing these trade-offs at the far end of scale.
This creates a pit of success for frontend product developers to get stuff done while maintaining performance.
Most of us are not building a suite of applications at Facebook’s scale. Still, at a lot of large organizations, performance is a hot topic. We can learn from these patterns - things like fetching data as high up as possible, parallelizing network requests, and using inline requires, etc.
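For example, inline requires defer the cost of a module to the code path that actually uses it; a minimal sketch, with a hypothetical heavy module:

```js
// A sketch of inline requires: the heavy module (hypothetical path) is only
// loaded and parsed when this code path actually runs, not at startup.
function exportReport(data) {
  const pdfGenerator = require('./heavy-pdf-generator');
  return pdfGenerator.render(data);
}
```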
Big tech companies often roll their own application frameworks internally, leaving many solutions scattered across various userland libraries.
This has led to many having Javascript ecosystem fatigue and framework burnout.