I’ve long harbored the idea of writing an article about frontend rendering patterns. Initially, my plan was to compare the different patterns side by side, providing a comprehensive overview of the various approaches and enabling a good comparison of them, the all-time favourite "you are starting a new project and need to compare your options" kind of blog article.
However, while developing this concept, I realised that implementing my idea was somewhat more challenging than I had initially thought. It is relatively difficult to differentiate between some of these rendering patterns, or rather, I have the feeling that the boundaries are becoming blurred. Many patterns and strategies intertwine and sometimes overlap, and strategies like hydration are used so diversely that they appear across different approaches.
So, in order to better understand how we got to where we are today with these patterns, I decided to proceed in a somewhat chronological manner, going through the patterns one by one and examining them in detail. This should make it quite clear what problems and challenges the individual strategies faced and why other approaches emerged. The article can therefore also be read as an evolution of frontend rendering.
So if you are interested in what current frontend patterns are, how they work and how they differ from others, this article could be interesting for you.
If you find yourself thinking "What the hell has happened in frontend development?" while reading this article, I would encourage you to stick with it until the end. As I said, I'm going chronologically, and there really has been a lot of change in the frontend space over the last 10-15 years. Many things have been conceptualised, tested, discarded and evolved to get to where we are today. I think the wild times around 2018-2020, when new technologies were released almost daily and new de-facto standards were (unofficially) proclaimed every time, are over. Today, we are at a point where patterns have proven themselves and become established.
As we delve into the details of these patterns, it's worth noting that much of the complexity lies beneath the surface, under the hood of the frameworks that implement them. However, this doesn't necessarily translate into complexity for developers at the end of the day. In most cases, developers don't have to implement these strategies and patterns from scratch. Modern meta frameworks make these methods available through streamlined APIs, allowing developers to focus more on deciding when to use them rather than how to implement them. Despite the increase in the complexity of frontend applications, the developer experience has continued to improve drastically.
Client-Side Rendering
Client-Side Rendering (CSR) is a pattern where the rendering of a web page is handled primarily by the client's browser, using JavaScript. Unlike its counterpart, Server-Side Rendering (SSR), CSR shifts the workload from the server to the client. This method involves sending a minimal HTML document to the client, with the bulk of the page's content being dynamically generated in the browser using JavaScript.
- Initial Request: When the user accesses the website, the server sends a minimal HTML page with links to JavaScript files.
- JavaScript Loading: The browser downloads the JavaScript files, which include the logic for generating the page's content dynamically.
- Rendering: The JavaScript executes in the browser, making API calls to fetch data, and then renders the page's content directly in the user's browser.
- Interactivity: Once the JavaScript has rendered the initial page, all subsequent navigation and interaction are handled client-side, with the browser updating the UI dynamically without further full-page requests to the server.
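To make the flow above concrete, here is a minimal, framework-free sketch of client-side rendering. The HTML shell, the `/api/articles` endpoint and the element IDs are hypothetical placeholders, not part of any specific framework.

```ts
// app.ts - loaded from a near-empty HTML shell: <div id="root"></div><script src="app.js"></script>
interface Article {
  id: number;
  title: string;
}

async function renderApp(): Promise<void> {
  const root = document.getElementById("root");
  if (!root) return;

  // Data is fetched only after the JavaScript has been downloaded and executed.
  const response = await fetch("/api/articles");
  const articles: Article[] = await response.json();

  // The markup is generated entirely in the browser.
  root.innerHTML = `
    <ul>
      ${articles.map((a) => `<li data-id="${a.id}">${a.title}</li>`).join("")}
    </ul>
  `;

  // All further interaction is handled client-side, without full page reloads.
  root.addEventListener("click", (event) => {
    const li = (event.target as HTMLElement).closest("li");
    if (li) console.log("navigate to article", li.dataset.id);
  });
}

renderApp();
```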
Objectives
App-Like Interactivity: After the initial load, users enjoy faster page transitions and a smoother browsing experience, as content is updated dynamically without full page reloads. It is ideal for building highly interactive web applications, where user actions prompt immediate changes in the UI without server round trips.
Reduced Server Load: Since the server's role is minimized after the initial page load, there is less strain on server resources. The reduced server load makes CSR apps much easier to scale, and often the hosting requirements are much simpler compared to methods like Streaming SSR, or similar. This also opens access to significantly more hosting providers, and the reduced server load can also lower costs.
Development: Many modern JavaScript frameworks and libraries (React, Angular, Vue.js) are designed with CSR in mind, offering developers tools out of the box for building complex client-side applications with relative ease. Often, the standard methods for using these frameworks and libraries are geared towards client-side rendering, and customization is usually only required for more complicated use cases.
Considerations
Initial Load Time & JS Dependency: CSR can lead to slower initial page loads, as the browser must download, parse, and execute JavaScript before rendering the page content. To be able to render the content at all, the user must have JS enabled in their browser; if they do not, they cannot use the app.
SEO Challenges: Search engines have improved at indexing JavaScript-driven content, but CSR pages can still present SEO challenges, as the initial HTML is minimal and content is loaded dynamically.
Resource Intensive: Complex applications can become resource-intensive on the client side, potentially degrading performance on older devices or browsers.
Code Splitting: In CSR applications, code splitting is often leveraged to improve application startup time. Since the browser is responsible for loading and executing all the code, a large JavaScript bundle size can significantly delay the Time to Interactive (TTI). Splitting the code into smaller chunks ensures that only the code necessary for the initial page load is fetched, while additional code can be loaded as needed, such as when navigating to a new page. This can markedly enhance performance and user experience, but it also comes with its own challenges.
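As a hedged illustration of code splitting in a CSR app, the sketch below uses dynamic `import()` so that each page module is only fetched when the user navigates to it; bundlers such as webpack or Vite turn each dynamic import into its own chunk. The router, the `./pages/*` modules and their `render` export are hypothetical, and in React one would typically reach for `React.lazy` instead.

```ts
// router.ts - a simplified, hypothetical client-side router
const routes: Record<string, () => Promise<{ render: (el: HTMLElement) => void }>> = {
  // Each dynamic import becomes a separate chunk that is fetched on navigation.
  "/": () => import("./pages/home"),
  "/settings": () => import("./pages/settings"),
};

export async function navigate(path: string): Promise<void> {
  const outlet = document.getElementById("outlet");
  const loader = routes[path];
  if (!outlet || !loader) return;

  // Only now is the chunk for this page downloaded and executed.
  const page = await loader();
  page.render(outlet);
}
```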
While CSR offers significant advantages for certain types of applications, nowadays it is often combined with SSR or Static Site Generation (SSG) techniques to overcome its limitations. Let's have a look at SSR.
Server-Side Rendering
SSR is the counterpart to Client-Side Rendering: HTML content is generated on the server and sent to the client.
- Initial Request: The client requests a webpage.
- Server Processing: The server receives the request and prepares the HTML content. This involves fetching data, executing any server-side logic, and rendering the page into HTML.
- Response: The server sends the fully rendered HTML page to the client's browser, ensuring that the page is viewable immediately upon arrival.
- Hydration: For dynamic interactivity, JavaScript sent along with the HTML "hydrates" the page, attaching event handlers and enabling interactive components without requiring a full page reload for navigation or actions.
- Subsequent Requests: Navigation or data fetching on the client-side might still require calls to the server or APIs.
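A minimal sketch of the request/response half of this flow, assuming an Express server and React's `renderToString`; the `App` component and the `loadDataFor` helper are hypothetical placeholders, and real setups add error handling, caching and asset manifests on top.

```tsx
// server.tsx - simplified SSR handler
import express from "express";
import { renderToString } from "react-dom/server";
import { App } from "./App";            // hypothetical root component
import { loadDataFor } from "./data";   // hypothetical data-loading helper

const app = express();

app.get("*", async (req, res) => {
  // 1. Fetch whatever data the page needs, on the server.
  const data = await loadDataFor(req.url);

  // 2. Render the component tree to an HTML string.
  const html = renderToString(<App url={req.url} data={data} />);

  // 3. Send fully rendered HTML plus the client bundle that will later hydrate it.
  res.send(`<!doctype html>
    <html>
      <body>
        <div id="root">${html}</div>
        <script>window.__DATA__ = ${JSON.stringify(data)}</script>
        <script src="/client.js"></script>
      </body>
    </html>`);
});

app.listen(3000);
```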
Objectives
SEO Benefits: SSR significantly improves the initial loading by pre-rendering HTML content on the server, which has a profound impact on key performance metrics such as Time to First Paint and Time to Interactive, both critical for user experience and search engine optimization. Search engines can also more effectively crawl and index SSR pages, as the content is fully rendered upon delivery. This can lead to better search rankings and visibility.
Consistent Performance: SSR offloads rendering work to the server, providing a consistent experience across different devices and reducing the client-side computation required.
Considerations
Server Load: Each request requires server-side processing to render the page, which will increase the load on the server. As traffic increases, the server may struggle to keep up with the demand. Scalability can become a critical concern, requiring efficient load balancing and server optimization. To mitigate these challenges, strategies such as caching rendered pages, using CDNs, optimising server-side code for performance, and adopting serverless architectures or microservices to distribute the load more effectively, can be integrated.
Complexity: Developing applications with SSR introduces a level of complexity not found in purely client-side applications. This arises from the need to synchronize state and logic between the server and the client. When a page is initially rendered on the server, it delivers a complete HTML response to the client. However, to maintain a dynamic and responsive user experience, the client-side JavaScript needs to take over for subsequent interactions without reloading the page. This transition is known as hydration. You have to ensure that the client-side app can seamlessly pick up where the server left off. Additionally, integrating client-side interactivity into SSR applications means that developers have to deal with challenges such as event handling, data fetching on client actions, and updating the DOM in response to user interactions, all while keeping the initial server-rendered markup intact.
Caching: Caching dynamically generated pages in SSR applications poses significant challenges. Since the content of these pages can change based on user interactions, data updates, or personalised content, it becomes difficult to use traditional caching strategies effectively. For static content, caching is straightforward—once a page is generated, it can be stored and served to multiple users without the need for regeneration. However, for dynamic content, developers must implement more sophisticated caching mechanisms, such as invalidating cache entries when data changes or using techniques like edge caching, which stores cache closer to the user to reduce latency. These strategies require a delicate balance to ensure that users receive up-to-date content without putting excessive load on the server or causing delays in content delivery.
Initial Load vs. Interactivity Trade-off: SSR can significantly improve the Time to First Paint (TTFP), as it allows the browser to display a fully rendered page upon initial load. However, for the application to become interactive (allowing users to navigate, interact with page elements, and trigger dynamic updates without full page reloads), the client-side JavaScript needs to be loaded, parsed, and executed. This process, while necessary for a dynamic user experience, can add overhead and potentially delay the Time to Interactive (TTI). Developers must carefully manage the size and complexity of the client-side JavaScript to avoid negating the benefits of SSR's fast initial load. This involves optimizing JavaScript bundles, implementing code-splitting to load only the necessary code for the initial view, and prefetching or lazy-loading resources to ensure that user interactions are responsive and smooth.
JavaScript Dependency: Although users can see the initially rendered page right away, JavaScript is still required for most further use of the page. So in this respect it is the same as CSR: without JavaScript enabled in the browser, the experience will be limited.
Code Splitting: Since the initial rendering process occurs on the server, code splitting helps reduce the amount of JavaScript that needs to be sent to the browser for page hydration. This is crucial because it decreases the volume of code transferred, shortening the time to become interactive. Additionally, code splitting in SSR can reduce server processing time as less code needs to be executed at once. However, you must ensure that the code necessary for rendering the initial page is correctly identified and loaded to ensure complete and accurate page rendering. It’s vital to handle the split code correctly both on the server and client to avoid inconsistencies and rendering errors.
Static Site Generation
Static Site Generation (SSG) represents an approach offering a blend of performance, scalability, and security benefits. Distinct from its counterparts (CSR and SSR), SSG involves generating static HTML files for each page of a website at build time. This process happens during the build/deployment phase and results in a set of static assets ready to be served directly to the user without further server-side computation.
- Build Time: The static site generator transforms content, often stored in files or fetched from APIs, into HTML, CSS, and JavaScript files during the build process. This step is completed before the website goes live.
- Deployment: The generated static files are deployed to a CDN or static web hosting service. This ensures that the content is delivered to users with low latency and high reliability.
- Serving Content: When a user requests a page, the server delivers the pre-generated static file, ensuring rapid content delivery without the need for dynamic generation or database queries.
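Conceptually, a static site generator is just a build step that turns content into HTML files before deployment. The sketch below is a deliberately naive illustration, assuming React's `renderToStaticMarkup` and hypothetical `getPages()` / `Page` modules; real generators (Next.js, Astro, Eleventy, ...) add routing, asset handling and the optimizations discussed below on top.

```tsx
// build.tsx - run once at build time, e.g. as part of a CI pipeline
import { mkdir, writeFile } from "node:fs/promises";
import { renderToStaticMarkup } from "react-dom/server";
import { getPages } from "./content"; // hypothetical: returns slug + props per page
import { Page } from "./Page";        // hypothetical page component

async function build(): Promise<void> {
  for (const { slug, props } of await getPages()) {
    // Render each page to plain HTML ahead of time.
    const html = renderToStaticMarkup(<Page {...props} />);
    const dir = `dist/${slug}`;
    await mkdir(dir, { recursive: true });
    // Each page becomes a static file that a CDN can serve as-is.
    await writeFile(`${dir}/index.html`, `<!doctype html>${html}`);
  }
}

build();
```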
Objectives
Performance: With pages pre-rendered at build time, SSG sites load exceptionally fast for end-users. This is because the server simply serves static files, eliminating the need for runtime rendering or database queries.
Scalability: Static sites can easily scale to handle high traffic volumes without complex infrastructure or additional server resources, as serving static files places minimal load on servers.
SEO: Static sites inherently offer excellent SEO performance due to their fast load times and straightforward indexing by search engines.
Security: The absence of dynamic server-side processes or database interactions significantly reduces the attack surface, making static sites inherently more secure against common web vulnerabilities.
Efficiency: Static sites can be hosted on a wide array of services, often at lower costs compared to dynamic sites. The simplicity of serving static files means less complexity and reduced hosting requirements.
Simplicity: Modern static site generators offer convenient tooling and integrations, simplifying the development workflow and enabling continuous integration and deployment pipelines for efficient updates and maintenance.
Won't the bundle be huge and the initial load extremely slow? In fact, this is a common misconception about static websites. Or rather, it would only be the case if no optimizations were applied to the code, optimizations that modern static site generators take care of. Here are a few things that are done to reduce the bundle:
- Optimization and minimization: Modern static site generators offer extensive optimization options, including minimization and tree shaking to remove unused code. This effectively reduces the size of the final bundle.
- Code splitting: Many static site generators support code splitting “out of the box”. This means that only the JavaScript code required for the current page is loaded, which speeds up the initial load.
- Efficient caching: As the content is static, it can be cached effectively, both on the browser side and by CDNs. This significantly improves loading times for returning visitors.
It is therefore possible that the initial load of a statically generated page is even faster than pages that use other rendering methods, especially when the above techniques are applied.
Considerations
Freshness: While SSG excels for sites with content that does not change frequently, incorporating dynamic content or user interactivity requires additional client-side JavaScript or third-party services.
Build Time: For very large sites, the build process can become time-consuming, as each page must be generated at build time. Incremental builds can mitigate this.
Personalization: Can pose a challenge due to the static nature of the content, yet there are several strategies to enable personalized experiences even on static websites. A key approach involves using client-side JavaScript to dynamically modify the website’s content in the browser based on user interactions or data. This method facilitates personalization without the need for server-side processing, making it particularly suitable for static sites.
Moreover, leveraging edge computing presents an innovative way to execute personalization algorithms closer to the user. Some CDN providers offer edge computing capabilities that can be utilised to serve dynamic content or personalised experiences based on the user’s context, such as location or device type, without compromising the benefits of Static Site Generation (SSG).
Additionally, certain static site generators support the integration of dynamic elements through serverless functions or APIs. These can be employed to fetch personalized content or user-specific data at runtime, blending the advantages of static sites with the delivery of dynamic content. Through these approaches, the challenge of personalization on static websites can be addressed, offering an experience tailored to individual users while retaining the core benefits of static site generation. It would therefore be desirable to be able to enrich static content with dynamic content. This led to a concept that has had a major influence on the development of web frameworks. Hydration.
Hydration
The need for hydration arose with the growing popularity of frameworks like React, Vue, and Angular once they adopted SSR, which was developed to overcome the disadvantages of CSR, especially in terms of SEO-friendliness and performance during the initial page load. SSR allows a page to be pre-rendered on the server and quickly sent to the client, so that it can be displayed before the full JavaScript library or framework has finished initializing. However, this led to a new problem: how can we retain the interactive features associated with client-side frameworks in an SSR environment without having to reload the page? This is where hydration comes in. It serves as the bridge between the statically pre-rendered page sent from the server and the dynamic, reactive application running on the client.
What is Hydration? Hydration refers to the process where a static HTML document generated through SSR is transformed into a dynamic application once JavaScript is executed on the client. This means that the static markup sent from the server is “hydrated” by adding event listeners and other interactive components, turning it into a fully functional, dynamic web application. Hydration enables the page to become interactive without the need to be fully re-requested from the server.
- Initial Load: The user receives a fully rendered HTML page from the server, which is quickly viewable.
- Hydration: The client-side JavaScript takes over, enhancing the static page with dynamic features and interactivity.
- Interactivity: After hydration, the page responds to user interactions dynamically, leveraging client-side JavaScript for a seamless experience.
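On the client, hydration in React 18 boils down to a single call to `hydrateRoot`, which attaches event listeners to the markup the server already produced rather than re-creating the DOM. The `App` component and the serialized `window.__DATA__` payload are assumptions carried over from the SSR sketch above.

```tsx
// client.tsx - entry point shipped alongside the server-rendered HTML
import { hydrateRoot } from "react-dom/client";
import { App } from "./App"; // must render the same tree the server rendered

// Reuse the data the server serialized into the page, so the client render
// matches the server markup and no second fetch is needed.
const data = (window as any).__DATA__;

hydrateRoot(
  document.getElementById("root")!,
  <App url={location.pathname} data={data} />
);
```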
React is often regarded as the framework that popularised this technique, especially with the introduction of React Fiber, which offered better support for Server-Side Rendering (SSR) and Client-Side Hydration.
React Fiber is a reimplementation of React's core rendering algorithm that shipped with version 16. This update aimed to make the way React processes and renders updates more efficient. Fiber allows React to break rendering work into chunks and distribute it over multiple frames, instead of doing everything in a single, long update run.
However, the idea of hydration in JavaScript frameworks was present before React, even if perhaps not explicitly under that name; it was known more as progressive enhancement in the context of Ajax. React's approach to hydration enabled markup rendered on the server to be picked up in the browser and "brought to life" with event handlers, rather than simply replaced. Besides React, many other frameworks of today utilize the hydration approach.
Objectives
Fast Initial Loading: Hydration combines the fast load times of SSR with the rich interactivity of CSR. This positively impacts the Largest Contentful Paint (LCP), a Core Web Vitals metric for measuring perceived load speed.
SEO & UX: The initially server-rendered content is more easily indexed by search engines, improving the website's visibility and ranking. This is especially vital for apps that rely on organic search traffic.
Enhanced User Experience: The immediate interactivity upon page load, without additional wait times for JavaScript frameworks or libraries to initialize, provides a seamless experience. Dynamic features become available through hydration, elevating the overall usability.
Progressive Enhancement: Hydration supports progressive enhancement, where basic page functionality remains accessible even without JavaScript. This improves accessibility and ensures the website is usable even with limited JavaScript functionality.
Flexibility in User Interaction: Hydration enables more dynamic and richer interaction with the page as interactive elements are activated post-initial load. This opens up new possibilities for sophisticated user interactions and animations, potentially increasing user engagement further.
Considerations
Complexity in Implementation: Managing the transition from static to dynamic content can introduce complexity. This complexity arises mainly during the recovery phase, where the application has to download and execute component code and then attach the appropriate event handlers based on application and framework state. This process can be intricate and resource-intensive, especially on mobile devices, where it can take significant time.
Performance Adjustments Required: Hydration can lead to significant overhead due to the duplication of work. The server builds up and then discards critical state information during SSR or SSG, which the client must recover by downloading and executing the application code. To address this, advanced techniques such as progressive and partial hydration have been developed to optimize resource loading and improve time to interactive. Employing code-splitting and lazy-loading to manage JavaScript execution can also enhance performance.
Dependency on JavaScript: Full functionality requires that the client’s browser supports and enables JavaScript. This dependency introduces a potential point of failure, as any issues with JavaScript execution could prevent the proper hydration of the webpage, thus affecting the user experience adversely.
Hydration is more than just a single technique; it is a multi-faceted concept that can be adapted to the specific requirements of different use cases. Modern frameworks have not only adopted the basic idea of hydration, but have also developed it further. This has led to concepts such as progressive, partial and selective hydration. Each of these methods addresses specific concerns.
Progressive Hydration
Progressive hydration is an approach where the hydration process is done gradually and according to priority. Rather than hydrating the entire DOM at once, which can be resource intensive and delay interactivity, progressive hydration hydrates only the visible part of the page (or the part with the highest priority) first. Other parts of the application are then hydrated as they come into focus or are needed. This approach improves Time to Interactive (TTI) as users can interact with the part of the application that is loaded and hydrated first, while the rest of the page finishes in the background.
For a news site, progressive hydration could be used to hydrate the visible article text first, while interactive elements such as comment sections or social media buttons are hydrated as the user scrolls to these areas. This method not only allows for a faster First Contentful Paint, but also reduces the initial non-interactive period - often referred to as the “uncanny valley” of hydration - where the UI appears ready but isn’t responsive.
This approach is particularly beneficial in environments where bandwidth or processing power is limited. By hydrating DOM nodes incrementally and only as needed, the minimum amount of JavaScript is requested, significantly reducing load times and resource consumption. This method avoids the pitfalls of rehydration issues commonly seen in server-rendered applications, where the DOM tree must be destroyed and rebuilt, leading to inefficient resource usage and potential user frustration due to perceived UI freezes.
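One common, framework-agnostic way to approximate progressive hydration is to defer hydrating below-the-fold sections until they scroll into view. The sketch below assumes the section is server-rendered inside an element marked with a hypothetical `data-hydrate` attribute and that `Comments` is a placeholder component; meta frameworks usually expose this behaviour as a directive (for example Astro's `client:visible`) instead of hand-rolled code.

```tsx
// progressive-hydration.tsx - hydrate a server-rendered section only when it becomes visible
import { hydrateRoot } from "react-dom/client";
import { Comments } from "./Comments"; // hypothetical interactive component

const section = document.querySelector<HTMLElement>('[data-hydrate="comments"]');

if (section) {
  const observer = new IntersectionObserver((entries) => {
    if (entries.some((entry) => entry.isIntersecting)) {
      // The markup already exists; we only attach behavior once it is needed.
      hydrateRoot(section, <Comments />);
      observer.disconnect();
    }
  });
  observer.observe(section);
}
```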
React’s concurrent mode enhances progressive hydration by allowing rendering tasks to be prioritised based on their urgency. This prioritisation is critical to maintaining responsiveness and smooth user interactions. Vue and other frameworks offer similar capabilities with features such as Suspense and Async Components.
Progressive Hydration can often be seen as a component of Streaming Server-Side Rendering (covered later), where content is delivered and hydrated in stages as it becomes necessary.
Partial Hydration
Refers to a technique where only certain parts of a webpage are hydrated, while other parts remain static. This is particularly useful in scenarios where only specific areas of the page need to be interactive (e.g., a comment field or a search bar), while the rest of the page does not require JavaScript interactivity. By applying partial hydration, the overhead associated with executing JavaScript can be reduced by only transitioning the necessary parts of the application to a dynamic state. This leads to faster load times and improved performance, especially on mobile devices or in low-bandwidth networks.
On a product detail page in an e-commerce shop, only the interactive elements such as the size selector and the “Add to cart” button are hydrated, while static content such as product descriptions or customer reviews remain not hydrated. This selective hydration reduces the initial JavaScript payload.
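A hedged sketch of what this can look like for the product page example: only the interactive widgets get a hydration root, while the surrounding description and reviews stay as plain server-rendered HTML with no client-side JavaScript attached. The element IDs and components are illustrative assumptions, not a specific framework's API.

```tsx
// product-page.client.tsx - the only client bundle shipped for this page
import { hydrateRoot } from "react-dom/client";
import { SizeSelector } from "./SizeSelector"; // hypothetical interactive widget
import { AddToCart } from "./AddToCart";       // hypothetical interactive widget

// Description, images and reviews outside these elements remain static HTML.
const sizeMount = document.getElementById("size-selector");
if (sizeMount) hydrateRoot(sizeMount, <SizeSelector />);

const cartMount = document.getElementById("add-to-cart");
if (cartMount) {
  hydrateRoot(cartMount, <AddToCart productId={cartMount.dataset.productId!} />);
}
```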
Partial hydration and the Islands Architecture (covered later) are very similar. However, the Islands Architecture is more specific in the way it looks at the structure of a website. It organises the site into well-defined, interactive units (islands) that are independent of each other and whose loading and hydration can be based on user interactions. Partial hydration is a broader term for the concept of making only certain components, rather than the whole site, interactive.
Partial Hydration is closely aligned with the principles of Incremental Static Regeneration (covered later), where static pages are regenerated on demand. This selective dynamic rendering makes ISR particularly suitable for use with partial hydration strategies, ensuring that only components that need to be updated or interactive are regenerated and hydrated.
Selective Hydration
Selective Hydration takes a slightly different approach. In this method, specific components or areas of an application are targeted for hydration based on user interactions or other events. Unlike partial hydration, which determines from the start which parts of the page will be hydrated, selective hydration allows the hydration process to be dynamically triggered. For example, a component might only be hydrated when the user hovers over it or clicks on it. This approach minimizes the initial JavaScript download and execution time, as only the parts of the application that actually need to be interactive are burdened with additional JavaScript.
In a social media dashboard, a comment component displayed in a sidebar could be selectively hydrated when a user starts to write a comment or a comment form is opened.
React 18 introduces this feature as a major improvement in SSR performance, allowing for the rendering of HTML in segments without waiting for all data to be ready before starting hydration. The new Suspense component is instrumental in this, allowing developers to specify fallback content (like spinners) while waiting for content to load, thus improving the perceived performance by the end-user.
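A hedged sketch of how this looks in React 18: wrapping non-critical parts of the tree in `Suspense`, often combined with `lazy`, gives React boundaries it can stream and hydrate independently, prioritizing whichever boundary the user interacts with first. `Comments` and `Sidebar` are hypothetical components split into their own chunks.

```tsx
// App.tsx - Suspense boundaries define independently hydratable regions (React 18)
import { lazy, Suspense } from "react";

const Sidebar = lazy(() => import("./Sidebar"));   // hypothetical, own chunk
const Comments = lazy(() => import("./Comments")); // hypothetical, own chunk

export function App() {
  return (
    <main>
      <article>{/* critical, server-rendered content */}</article>
      <Suspense fallback={<p>Loading sidebar…</p>}>
        <Sidebar />
      </Suspense>
      <Suspense fallback={<p>Loading comments…</p>}>
        <Comments />
      </Suspense>
    </main>
  );
}
```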
Suspense also integrates seamlessly with code-splitting and lazy-loading, further enhancing SSR's effectiveness by reducing the amount of code that needs to be loaded and executed upfront. This can lead to significant performance gains, especially on lower-end devices or slow network conditions. Streaming HTML content and selectively hydrating parts of the application on-demand ensures that users see interactive content sooner, which is crucial for maintaining engagement and reducing bounce rates on web applications.
The advancements in selective hydration are complemented by React’s ability to handle incremental hydration without re-rendering the entire application. This means that the interactive portions of the application can be made available to the user without waiting for less critical parts to load and hydrate, providing a smoother and more responsive user experience.
Selective Hydration is a dynamic approach that can be effectively combined with Streaming Server-Side Rendering (covered later) to optimize resource loading and interactivity. By hydrating components based on user interactions, this method synergizes well with the streamed delivery of content, ensuring that only the required parts of the application are loaded and executed as needed.
Incremental Static Regeneration
Incremental Static Regeneration (ISR) allows for the regeneration of static pages on a per-page basis at runtime. This means that after a page has been created and deployed for a user, the server can update the page in the background once new data becomes available. This approach combines the performance benefits of static generation with the freshness of dynamic content by allowing static pages to become dynamic.
ISR is based on the idea of invalidating cached pages at the edge and regenerating them as needed – but only after the currently cached version has been delivered. This ensures that users receive content quickly, while the page is updated in the background for future requests.
- Build Time: During the website's build, the framework generates static versions of the pages. These are stored on a CDN. At this point, the pages are not dynamic and display the content that was available at the time of the build.
- Revalidation: Pages & routes intended for ISR must be appropriately marked, depending on the framework. This specifies how often these pages should be revalidated, i.e., at what interval the content should be updated.
- Request: When a user requests a page, the CDN first delivers the statically generated version of that page from the cache. This delivery is quick for the user since the page has already been generated and is delivered from a nearby location.
- Background Update: If the page is marked for ISR and the set interval for revalidation has passed, a new version of the page is generated in the background. Importantly, users continue to see the cached version of the page during this process, meaning there is no delay in page delivery.
- Replacement: Once the new version of the page is generated, the old version is replaced in the cache with the new one. Future requests for this page then deliver the updated version.
- Repeat: This process repeats at regular intervals determined by the revalidation interval, keeping the website's content dynamic and fresh without compromising performance.
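In Next.js (Pages Router), where ISR was popularised, marking a page for revalidation is essentially a one-line addition to `getStaticProps`. The sketch below uses that real API, but the route, the `fetchPost` helper and the data shape are hypothetical; the App Router offers the equivalent via `export const revalidate = 60`.

```tsx
// pages/posts/[slug].tsx - minimal Next.js ISR sketch
import type { GetStaticPaths, GetStaticProps } from "next";
import { fetchPost } from "../../lib/posts"; // hypothetical data source

export const getStaticPaths: GetStaticPaths = async () => ({
  paths: [],            // generate pages on first request...
  fallback: "blocking", // ...instead of listing every slug at build time
});

export const getStaticProps: GetStaticProps = async ({ params }) => {
  const post = await fetchPost(String(params?.slug));
  return {
    props: { post },
    // Regenerate this page in the background at most once every 60 seconds.
    revalidate: 60,
  };
};

export default function Post({ post }: { post: { title: string; body: string } }) {
  return (
    <article>
      <h1>{post.title}</h1>
      <p>{post.body}</p>
    </article>
  );
}
```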
Wouldn't this mean that the first user sees an 'outdated' page? Indeed, the first user who requests the page after the revalidation interval has passed receives the cached, and thus outdated, version. The logic behind this is that the page's update happens in the background after this first request has been made, so the update does not compromise loading speed. Essentially, you prioritize all subsequent users, while the user whose request triggers the regeneration still receives the stale version. In many cases, this compromise is considered acceptable, especially for content that does not change every second. To counteract this, one can:
- Choose a shorter revalidation interval if it’s important for content to be frequently updated
- Implement strategies that update certain dynamic content on the client side after the page has loaded
As so often, it's a matter of balance: weighing performance against the freshness of content.
Objectives
Performance: Since ISR generates pages statically and delivers them via a CDN, loading times for users are often significantly faster than traditional SSR or CSR.
Scalability: Static pages delivered through a CDN can more easily scale to handle high user numbers without needing additional server capacity. This reduces server strain and can lower costs.
Freshness: By setting a revalidation interval, content can be regularly updated without needing to rebuild the entire app.
Reliability: You still have your static build as a fallback should the server fail to revalidate.
SEO Benefits: You benefit greatly from the static build.
Encourages Code Splitting: Properly encapsulating your components not only facilitates the implementation of ISR but also optimizes opportunities for effective code splitting.
One Size Fits All: Uncertain about where the project is headed at the start? Well, this decision can be ‘theoretically’ made anew with each new page.
Reduced Server Load: This can lead to cost savings on infrastructure.
Security: Static pages are inherently safer than dynamically generated pages, as they offer fewer attack vectors. Without direct database queries or server-side logic executed on each page request, the risk of security vulnerabilities is reduced.
I think the advantages of ISR read particularly well because this concept emerged precisely from combining the benefits of Static Generation and Server Rendering.
Considerations
Hosting and Support Requirements: A critical aspect to consider when opting for ISR is the need for the hosting service to support this technology. ISR heavily relies on specific features for caching and regenerating pages, which are not offered by all hosting services. The ability to invalidate pages at the edge and efficiently regenerate them requires tight integration between the web application and the hosting provider’s infrastructure. This can significantly reduce the choice of hosting providers and can of course also lead to vendor lock-ins.
Dependency on Specific Technologies: The implementation of ISR is also often closely tied to specific frameworks or libraries, such as Next.js. This can create a certain dependency on these technologies and limit flexibility in choosing tools or frameworks.
Challenges in Troubleshooting: Since content updates occur in the background, it can be more challenging to diagnose and resolve issues that arise during regeneration, especially if they only become visible under certain conditions.
Complexity: Setting up ISR can be considerably more complex than using traditional rendering methods. You have to understand how and when pages are regenerated and correctly configure the logic for the revalidation interval. How complex it is probably also depends heavily on what the API of the framework used offers and how much customization is required.
Freshness: Potentially outdated information, as we already discussed above.
Resource Usage During Revalidation: Although page regeneration occurs in the background, it can still consume resources on the server, especially if many pages need to be regenerated simultaneously. This could become a burden for very large websites with thousands of pages that need to be regularly updated.
Cache Inconsistencies: Occur when different versions are stored at different geographic locations or in different cache layers. This can lead to users, depending on their location or the specific cache they access, seeing different content. This can be caused by different CDNs, due to geographic distribution, as well as by the different configurations of the CDNs.
Limited Real-time Capability: For applications requiring high real-time capability (e.g., real-time notifications or live updates), ISR alone may not be sufficient. Additional client-side logic or other strategies might be necessary to meet these requirements, for example Streaming SSR, another hybrid approach. Let's take a look at it.
How does ISR differ from hydration? ISR is a strategy that combines the benefits of static generation with on-demand revalidation, allowing pages to be updated after deployment based on specified intervals. ISR primarily deals with how pages are generated and cached, serving up-to-date static content without needing to rebuild the entire site. Hydration, in contrast, is about making these statically served pages interactive on the client-side. ISR is a method for managing and updating content, while hydration is about enhancing user experience by enabling interactivity.
Streaming Server-Side Rendering
Unlike traditional SSR, where the entire HTML content of a page is sent from the server to the client at once, Streaming SSR allows for content to be sent in pieces as it becomes available. This approach optimizes the TTFB and significantly enhances the perceived loading speed from a user’s perspective.
Streaming SSR is particularly effective for dynamic applications where content needs to be updated in real-time. It serves as a powerful solution leveraging the benefits of server-side rendering to provide fast initial load times while handling dynamic updates in a way that enhances user experience.
- Request Time: Upon a user's request, the server generates the initial HTML content of the page and sends it immediately to the client, instead of waiting until the entire content is generated. This allows the browser to start rendering the page while more data is being loaded.
- Data Streaming: While delivering the initial content, the server simultaneously works on generating the rest of the page. As new blocks of content are ready, they are gradually sent to the client and integrated into the partially rendered page.
- Interactivity: Users can interact with the already loaded parts of the page while the rest is being loaded in the background. This significantly improves user experience as the page appears to be more responsive.
- Optimization: Developers can set priorities for content blocks to ensure that critical information is loaded and displayed first. This optimizes the performance and efficiency of the loading process.
- Complete Loading: Once all content has been generated by the server and sent to the client, the page is fully interactive. Users have access to all functionalities of the application at this point.
- Repetition: This process ensures continuous optimization of loading speed and user experience, irrespective of the content's complexity or application.
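A sketch of this flow with React 18's `renderToPipeableStream` on an Express server: the shell is flushed as soon as it is ready, and content inside `Suspense` boundaries streams in afterwards. `App` is the hypothetical root component from earlier; production setups also handle aborts, timeouts and more robust error pages.

```tsx
// server-streaming.tsx - simplified streaming SSR handler (React 18 + Express)
import express from "express";
import { renderToPipeableStream } from "react-dom/server";
import { App } from "./App"; // hypothetical root component with Suspense boundaries

const app = express();

app.get("*", (req, res) => {
  const { pipe } = renderToPipeableStream(<App url={req.url} />, {
    bootstrapScripts: ["/client.js"],
    onShellReady() {
      // The shell (everything outside Suspense boundaries) is ready:
      // start streaming HTML immediately instead of waiting for all data.
      res.setHeader("Content-Type", "text/html");
      pipe(res);
    },
    onShellError() {
      res.statusCode = 500;
      res.send("<!doctype html><p>Something went wrong</p>");
    },
  });
});

app.listen(3000);
```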
Does this mean users interact with incomplete pages? It’s possible for users to encounter parts of the page that are still loading. However, Streaming SSR is designed to prioritise critical content and functionalities, minimising the impact. The strategy aims to improve user experience through faster interaction times, even if not all content is immediately available.
Objectives
Improved Initial Load Time: By sending the initial content to the browser as soon as it’s ready, Streaming SSR significantly reduces the Time to First Byte. This leads to faster perceived loading times compared to classic SSR.
Enhanced User Experience: Users can interact with the parts of the page that have loaded, making the site feel quicker and more responsive. This interactivity, even before the entire page has fully loaded, can reduce bounce rates and improve user satisfaction.
Optimized Content Delivery: Developers have the option to prioritize the loading of critical content over less important elements. This ensures that users see the most valuable information first, which can be crucial for engagement and conversion rates.
Handling of Backpressure: Streaming SSR is adept at managing network backpressure or congestion, ensuring that websites remain responsive even under challenging conditions. This capability allows for adaptive content delivery, where the server dynamically adjusts the rate of content transmission based on the client's ability to receive and process the data. This means that even in situations of slow internet connections or high network traffic, the user experience is not compromised. The website continues to function smoothly, with critical content being prioritized for loading and display. This responsive adaptation to varying network conditions not only enhances user satisfaction but also conserves bandwidth and optimizes server resource use. It's particularly beneficial for maintaining the performance of dynamic, content-rich applications and real-time interactions without overwhelming the client or causing significant delays.
Flexibility in Content Updates: Streaming SSR is ideal for applications where content needs to be updated in real-time or very frequently. This method allows for dynamic content to be seamlessly integrated into the page as it becomes available, without the need for a page refresh.
Reduced Server Load: Since content is streamed as it’s generated, the server can distribute the load more evenly over time, rather than experiencing spikes of load with traditional SSR. This can lead to more efficient use of server resources and potentially lower costs.
SEO Benefits: Streaming SSR still allows for the server-side rendering of HTML content, which is beneficial for SEO. Search engines can crawl the initial content more effectively, improving the visibility of dynamic, content-rich applications.
Considerations
Complexity in Implementation: Setting up Streaming SSR can be more complex than traditional SSR due to the need to manage the streaming of content and ensure that the page remains interactive and responsive as more content loads.
Handling Incomplete Pages: Developers need to design the user experience carefully to ensure that users understand when content is loading. This may involve creating loading states or placeholders for content that has not yet been streamed.
Browser Compatibility: Ensuring that all target browsers support the technologies used for streaming content can be a challenge. Developers must test across a range of devices and browsers to guarantee a consistent user experience.
Server Infrastructure: Streaming SSR may require specific server capabilities or configurations to efficiently handle the streaming of content. This might necessitate updates or changes to existing server infrastructure.
Potential for Increased Bandwidth Use: As content is streamed in pieces, there’s a possibility that overall bandwidth usage could increase, especially if users leave pages before they have fully loaded. This could impact hosting costs and user data consumption.
Debugging and Troubleshooting: Debugging issues related to the streaming of content can be more complex than with traditional rendering methods. Developers need tools and processes to monitor and troubleshoot the streaming process effectively.
While streaming SSR offers significant benefits for user experience and content flexibility, the technical complexity and infrastructure requirements are important factors to consider.
How does Streaming SSR differ from hydration? The primary difference lies in the delivery and interaction model. Streaming SSR progressively sends HTML content to the browser as it's generated, allowing users to see and interact with content faster, even before the entire page is loaded. Hydration, on the other hand, requires the entire page to be loaded and then made interactive through JavaScript. Streaming SSR focuses on improving initial load performance and user perception by streaming content, whereas hydration focuses on enhancing a fully loaded static page with interactivity.
Distinguishing Between ISR and Streaming SSR
While ISR and Streaming SSR share some advantages, they cater to different needs:
ISR: is ideal for websites where content changes are predictable and can be regenerated at specified intervals. It suits blogs, e-commerce sites, and news platforms where the balance between static performance and content freshness is key. ISR could be the better choice when building sites that benefit from static generation but require periodic updates to keep content fresh without the need for a full rebuild.
Streaming SSR: excels in scenarios demanding real-time content updates and high interactivity, such as social media platforms, live dashboards, and applications with frequent data changes. Consider opting for it in scenarios where user experience hinges on immediate content availability and interactive features, necessitating dynamic rendering capabilities.
Islands Architecture
The Islands Architecture is emerging as a pattern that suggests treating each piece of dynamic functionality as an island of interactivity within a sea of static content. Like many other patterns, it addresses application performance, but it also tackles growing complexity by promoting local reasoning.
Local reasoning is a concept often found in functional programming that refers to the ability to understand and reason about the behavior of a part of a program without knowing the entire context.
This approach aims to strategically hydrate only those parts of the page that require interactivity, while leaving the rest as fast-loading static content.
- Selective Hydration: In Islands Architecture, not the entire webpage but only the required interactive components (islands) are hydrated with JavaScript. This selective hydration significantly reduces the amount of JavaScript that needs to be loaded, parsed, and executed, leading to faster initial load times and improved performance.
- Incremental: The architecture embraces progressive enhancement by allowing you to start with a static site and add interactivity where needed. It encourages you to reconsider how and when to use JavaScript.
- Modularity: Each island can be developed, tested, and deployed independently, promoting reusability and scalability. This modularity also facilitates the use of different frameworks or libraries for each island, tailoring the choice of technology to the specific needs of the functionality.
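Frameworks like Astro expose islands through directives such as `client:load` or `client:visible`, but the underlying idea can be sketched framework-agnostically: scan the static HTML for island placeholders and lazily mount only those. The `data-island` attribute, the registry and the island modules here are hypothetical, meant only to illustrate the mechanism.

```tsx
// islands-runtime.tsx - a tiny, hypothetical islands loader over static HTML
import type { ComponentType } from "react";
import { hydrateRoot } from "react-dom/client";

// Each island maps to a lazily imported component, so only used islands ship JS.
const registry: Record<string, () => Promise<{ default: ComponentType<any> }>> = {
  search: () => import("./islands/Search"),
  cart: () => import("./islands/Cart"),
};

document.querySelectorAll<HTMLElement>("[data-island]").forEach(async (el) => {
  const load = registry[el.dataset.island ?? ""];
  if (!load) return;

  const { default: Island } = await load();
  // The rest of the page stays static; only this element becomes interactive.
  hydrateRoot(el, <Island {...JSON.parse(el.dataset.props ?? "{}")} />);
});
```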
Objectives
Performance & UX: By minimizing the amount of JavaScript used and focusing on static content, Islands Architecture significantly enhances page load times and interactivity metrics. Static content is rendered quickly, while still providing the dynamic features users expect. Users benefit from faster perceived loading times and smoother interactions, as the critical parts of the webpage are interactive sooner. The architecture also improves accessibility and SEO, as the core content of the page is available as static HTML.
Flexibility and Scalability: The modular nature of Islands Architecture allows for easy scaling of applications. Developers can add new features as independent islands without affecting the existing structure, making it easier to maintain and update the application.
Resource Utilization: By loading only the necessary JavaScript for interactive elements, this architecture reduces bandwidth usage and improves the overall efficiency of web applications. This selective loading is particularly beneficial for users on limited data plans or slower network connections.
Multiple framework support: In Astro, the "all-in-one web framework" that popularised the Islands pattern, you can even mix different frameworks (React, Vue, etc.) on a single page.
Considerations
Bleeding Edge: The concept of Islands is still very new at the moment and not really battle-tested. The options available on the market for frameworks are still very limited.
Content-Focused Apps: As the required interactivity of a site increases, the number of islands that need to be implemented naturally increases as well. It is questionable at what point adding more and more islands becomes a hindrance or too complex.
Complexity in Architecture: While Islands Architecture offers benefits, it introduces complexity in the design and implementation phases. You must plan which parts of the application should be dynamic and how they will manage state and data flow between islands. In the context of React, for example, you are managing individual islands that each have their own context and React tree, so you need custom logic to connect those islands if you want to share context or propagate events between them.
Resumability Pattern
Islands Architecture pursues an approach that reduces hydration in the app. There are now approaches that attempt to eliminate hydration altogether, specifically the resumability pattern pursued by the Qwik framework. In the following, I will always refer to Qwik, as I am not yet aware of any other framework that follows this rendering pattern.
Objectives
Reduction of Redundant Work: Traditional hydration involves downloading and executing component code to attach event handlers, essentially duplicating the efforts already completed by the server. This not only increases the time taken to become interactive but also consumes additional bandwidth and processing power. The Resumability pattern tries to eliminate this redundancy by enabling the application to "resume" where the server rendering left off, without re-executing the already performed tasks.
Efficiency in Event Handling: Resumability focuses on lazily creating event handlers only in response to user interactions. This on-demand approach ensures that the system does not waste resources setting up event handlers that may never be used, unlike traditional hydration which sets up all possible event handlers upfront.
Resource Utilization: By avoiding the pre-loading and initialization of unnecessary JavaScript, Resumability aims to reduce the initial load time and system resource consumption.
Time to Interactive (TTI): Since Resumability does not require the entire application code to be downloaded and executed immediately upon page load, it can markedly improve the Time to Interactive.
Simplification of Client-Side Processing: The pattern simplifies client-side processing by transferring all necessary state and framework information from the server in a serialized form. This allows the client to remain lightweight and only fetches additional code as required by user interactions, thereby adhering to the principle of loading the minimum necessary code.
How it works
- Initial download: When the site is first loaded, only the necessary HTML and a minimal amount of JavaScript is downloaded. This JavaScript contains the basic functionality required to make the application interactive.
- Server-side processing and serialisation: While the server is rendering the page, the state of the application and the required data structures are serialised. This includes listeners for user interaction and internal framework data structures. Serialisation converts this information into a format that can be embedded in the HTML output.
- Embedding: The serialised data is inserted directly into the HTML, typically as part of the initial server response. This embedding is done in such a way that the browser can read and use this data to seamlessly restore the application state.
- Client-side application resumption: Once the browser loads the HTML, the minimum JavaScript required is executed to unpack the serialised state and activate the application without loading any additional resources. This resumes the state of the application exactly where the server processing left off.
- Progressive loading of additional resources: After the initial resume, the framework begins the progressive loading of additional resources.
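For completeness, here is roughly what a component looks like in Qwik, the framework discussed in this section. This is a minimal sketch based on Qwik's public API (`component$`, `useSignal`, `onClick$`); the `$` suffixes mark boundaries that Qwik can serialize and load lazily, so the click handler is only downloaded when the button is actually used rather than during an upfront hydration pass.

```tsx
// counter.tsx - minimal Qwik component sketch
import { component$, useSignal } from "@builder.io/qwik";

export const Counter = component$(() => {
  const count = useSignal(0);

  return (
    // The handler behind onClick$ is a lazy-loadable boundary: Qwik resumes
    // from the serialized HTML and fetches this code on first interaction.
    <button onClick$={() => count.value++}>Clicked {count.value} times</button>
  );
});
```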
Considerations
It’s worth noting that my experience with Qwik and the resumability pattern is relatively limited, and my understanding of its nuances is still evolving. This lack of direct project experience makes me cautious about the depth of my analysis on this topic.
In this article, by adopting a historical and chronological approach, it has generally been straightforward to identify which improvements and optimizations have been made over time, and to understand the trade-offs they entail, particularly with hydration methods. Resumability, on the other hand, introduces a radically different approach to tackling the challenges of JavaScript-heavy web applications. It prioritizes reducing load times and introduces a delay in interactivity, which challenges the traditional paradigm of instant interaction.
Resumability seeks to streamline the conventional hydration process by embedding only the minimal necessary JavaScript for immediate interactions. This approach can significantly shorten the steps involved, potentially accelerating the Time to Interactive (TTI), especially during the initial page load when JavaScript is not yet cached. However, when users return to the site and the JavaScript is already cached, the initial load times for subsequent visits are considerably reduced anyway. This leads to a reconsideration of whether the actual time savings afforded by resumability are as significant as one might initially expect.
Concerning the actual improvement in interaction time, from the first page load to the first user interaction, I find myself questioning its real-world significance. Although Qwik offers an innovative approach, it is crucial to assess the incremental benefits it provides compared to hybrid hydration models that ship very little JavaScript upfront and still achieve a rapid TTI. These developments could reduce the relevance of resumability as hydration methods continue to advance and gain efficiency.
Additionally, the bleeding-edge nature of the resumability concept must be emphasized. As a new approach, its robustness and effectiveness in real-world applications are not yet fully proven. This introduces a level of risk and warrants careful consideration for its adoption, particularly in terms of its long-term viability and its potential to enhance web application performance effectively.
So ultimately I find myself questioning the necessity of adopting such an approach. Do we truly need the resumability pattern, or could we achieve similar or even better outcomes by continuing to refine and enhance existing hybrid hydration models?
Is JavaScript bad?
Here we are, we’ve come full circle. From the early days of publishing HTML pages with accompanying JavaScript, through the era where JavaScript dynamically generated entire HTML content at runtime, to server-side generation of HTML that is then hydrated upon reaching the user’s browser, and finally back to delivering pre-generated bare HTML with JavaScript shipped separately—we’ve seen it all. And one thing stands out. We are pushing JavaScript back and forth, trying to make it smaller, trying to outsource it, using it only when we need it - which begs the question: is JavaScript bad?
JavaScript has been an integral part of the web for nearly three decades, as fundamental as HTML and CSS. The idea of eliminating JavaScript entirely might seem appealing for certain applications, yet in reality, JavaScript is essential for creating a good, accessible, and performant user experience. The criticism often directed at JavaScript usually stems from its overuse, leading to bloated and slow applications. However, the complete removal of JavaScript from web contexts is not the solution, in my opinion. Instead, developers should aim to eliminate unnecessary JavaScript and ensure that every script loaded enhances the user experience.
Discussions around React Server Components and the “Islands Architecture” have demonstrated ways to reduce the amount of JavaScript sent without compromising user experience. These approaches leverage the benefits of JavaScript judiciously, transmitting minimal code without sacrificing functionality.
Avoiding the use of JavaScript should perhaps not be the primary goal. Rather, the focus should be on efficiency in how JavaScript is used to improve performance while maintaining the accessibility and interactivity that modern web applications require. The pursuit of 0kb of JavaScript overlooks the real benefits this powerful tool offers. It's akin to suggesting building a website without CSS: technically possible, but hardly practical if you aim to provide an engaging and functional user interface.
Indeed, there are use cases where you might not need JavaScript—for example, simple landing pages with a form might not require complex form validation libraries or a full-blown framework. Absolutely. But there are also many benefits and features that depend not just on the individual project but on broader requirements such as analytics, crash reporting, or more specific features like client-side routing, smoother transitions, faster loading of dynamic content, better interactivity, and even improved accessibility.
The mindset should not be about using as little JavaScript as possible but about creating the best possible user experience, even though performance is a significant contribution to this (and of course, if relevant to the use case, enabling the best possible SEO).
The challenge lies not in eliminating JavaScript entirely but in making wise decisions about when and how it is used. By employing techniques like code-splitting and lazy loading, developers can reduce the initial load size while still enabling dynamic and rich interactions. Modern frameworks take a lot of this work off our hands and provide us with amazing tools to deal with this challenge. As is so often the case, our task is to choose the right tool for the job.
I hope this article provides an initial overview of the range of tools available and lays the foundation for further research.
Further Reading:
Is 0kb of JavaScript in your Future?