
The Superpowers of JavaScript Proxies

One of the most powerful and underused features of JavaScript has to be Proxies. Although they already power critical parts of web frameworks, such as reactivity systems, Proxies remain a feature that many JavaScript developers are unfamiliar with. They are also often considered hard to grasp and prone to footguns.

In this article, I would like to highlight their features and showcase some of their potential use cases. But first!

What are Proxies?

Objects play a central role in the JavaScript language. Besides primitive types like booleans, numbers and strings, everything you manipulate in JavaScript is an object. That includes arrays, functions, classes, constructors… This is why functions in JavaScript are sometimes referred to as first-class citizens: they share the same characteristics as other objects, like the ability to be passed as arguments to other functions, returned by other functions, or created and assigned dynamically. This fact makes the first contact with JavaScript a bit surprising, but over time it feels very consistent and elegant.

Not only are objects omnipresent in JavaScript codebases, they are also very feature-rich. Objects are collections of properties, each property having descriptors that define whether it is enumerable or writable. Each property can also have custom getter or setter logic, described in a function that runs at every read or write of the property value. Finally, objects have a prototype, which is how JavaScript establishes inheritance relationships between objects; or more precisely, delegation of properties.
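As a quick refresher, here is a sketch of those features at work; the user object and its properties are made up for the example:

```javascript
// a made-up object illustrating descriptors, a custom getter and prototypes
const user = {};

Object.defineProperty(user, "id", {
    value: 42,
    writable: false,   // assignments to user.id are ignored (or throw in strict mode)
    enumerable: false  // hidden from Object.keys and for...in
});

Object.defineProperty(user, "greeting", {
    get() { return `Hello, user #${this.id}`; }, // recomputed on every read
    enumerable: true
});

console.log(user.greeting);     // "Hello, user #42"
console.log(Object.keys(user)); // ["greeting"] — "id" is non-enumerable
console.log(Object.getPrototypeOf(user) === Object.prototype); // true
```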

That list of features already makes objects a very convenient and powerful tool to hold all kinds of data and behavior. But their power didn't stop there. In 2015, a major new version of the JavaScript language was released, ES6, which introduced many new features for JavaScript objects. A new primitive type, the Symbol, was introduced to create unique property keys and to expose built-in object behavior for developers to override at their leisure. For example, Symbol.iterator paved the way for new iterable object structures like linked lists, binary trees and graphs, while Symbol.toPrimitive let developers define how an object should be coerced to a primitive type like a string or a number, which proved quite useful in some data manipulation libraries.
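To illustrate, here is a sketch of both symbols on a made-up range object:

```javascript
const range = {
    from: 1,
    to: 3,
    // Symbol.iterator makes the object usable with for...of and the spread operator
    *[Symbol.iterator]() {
        for (let i = this.from; i <= this.to; i++) yield i;
    },
    // Symbol.toPrimitive controls coercion to strings and numbers
    [Symbol.toPrimitive](hint) {
        return hint === "number" ? this.to - this.from + 1 : `${this.from}..${this.to}`;
    }
};

console.log([...range]); // [1, 2, 3]
console.log(`${range}`); // "1..3"
console.log(+range);     // 3
```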

But the most powerful feature introduced in ES6 for objects, and the one that interests us today, is the Proxy object. Proxies allow you to create a wrapper for an object, which can intercept and redefine all the operations done on that object. And I need to insist on the all part, because it is what makes Proxies so powerful and opens their scope of use to many domains.

Here is an example of a basic Proxy object that intercepts all the read and write operations on a target object:

const target = {
    name: 'Sylvain',
    age: 30
};

const handler = {
    get: function(target, prop, receiver) {
        console.log(`Getting property ${prop}`);
        return Reflect.get(target, prop, receiver);
    },
    set: function(target, prop, value, receiver) {
        console.log(`Setting property ${prop} to ${value}`);
        return Reflect.set(target, prop, value, receiver);
    }
};

const proxy = new Proxy(target, handler);

proxy.name; // logs "Getting property name"
proxy.name = 'John'; // logs "Setting property name to John"

The handler contains a list of traps: predefined methods that match the various operations that can be performed on an object. You have seen get and set, but there are several others like has, deleteProperty, apply, construct, getPrototypeOf, setPrototypeOf, ownKeys, defineProperty, getOwnPropertyDescriptor… Each of these traps can be defined in the handler object to intercept the corresponding operation on the target object.

By default, a Proxy that does not define a given trap forwards the operation to the target object. That makes a Proxy with no traps completely indistinguishable from its target, so you can use it as a drop-in replacement. They will have different references, so proxy !== target, but they will behave exactly the same way.
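A quick way to convince yourself of this transparency:

```javascript
const target = { name: 'Sylvain' };
const transparent = new Proxy(target, {}); // empty handler: every operation is forwarded

console.log(transparent.name);       // "Sylvain" — reads reach the target
transparent.age = 30;                // writes do too
console.log(target.age);             // 30
console.log(transparent === target); // false — still two distinct references
```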

Notice the Reflect API used in the handler methods, and how it mimics the trap signatures. The Reflect API is a set of methods that contain and expose the default behavior of JavaScript operations on objects. You can think of it as the default implementation of every Proxy trap. As developers, we use the Reflect API as a safety belt, something to hold on to as we transform the behavior of our objects. Indeed, it is quite easy to get lost tweaking Proxy traps and end up breaking the expected behavior of your objects, or causing infinite loops. When dealing with such powerful features, a safety net like this is very welcome.
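The receiver argument is a good illustration of why Reflect matters. In this sketch (with a made-up target), passing receiver to Reflect.get keeps this bound to the proxy inside the target's own getters, so nested property reads stay intercepted:

```javascript
const target = {
    _name: 'Sylvain',
    get name() { return this._name; } // `this` depends on how the getter is invoked
};

const proxy = new Proxy(target, {
    get(target, prop, receiver) {
        if (prop === '_name') return 'intercepted';
        // with `receiver`, the `name` getter runs with `this` === proxy,
        // so its read of `this._name` goes through this trap again
        return Reflect.get(target, prop, receiver);
    }
});

console.log(proxy.name);  // "intercepted"
console.log(target.name); // "Sylvain" — the target itself is untouched
```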

Now that you have had a proper introduction to Proxies, let's answer the question on everyone's mind: what can we do with them?

Defensive programming

A common complaint about JavaScript is how permissive the language is. You can assign any arbitrary property to any object, even objects you don't own like the standard built-in APIs, and overriding existing behavior and breaking everything is surprisingly easy. But Proxies can help you enforce rules on your objects and prevent unwanted interference. Here are some examples:

Readonly “deep-freeze” objects

The Object.freeze method can be used to make an object's properties immutable, but it only works on the first level of properties. The writable property descriptor can make a property read-only, but only on a per-property basis. So developers came up with custom deepFreeze functions that traverse the object deeply to set those descriptors everywhere, but that is tedious and still does not apply to properties added afterwards.

It turns out that deep-freezing an object with a Proxy is very easy:

const readonly = obj => new Proxy(obj, {
    get(target, prop) {
        const value = Reflect.get(target, prop);
        return value instanceof Object ? readonly(value) : value;
    },
    set(target, prop, value) {
        throw new Error(`Cannot set property ${prop} on readonly object`);
    }
});

const obj = readonly({
    name: 'Sylvain',
    age: 30,
    address: {
        street: '123 Main St',
        city: 'Anytown'
    }
});

obj.address.street = '456 Elm'; // Uncaught Error: Cannot set property street on readonly object

The readonly function is called recursively only on property access, so you don't have to deep-traverse the entire object up front, and the existing property descriptors are preserved.

Preventing reading or writing undefined properties

This Proxy prevents reading or assigning undefined properties on an object, throwing an exception instead:

const noUndefined = obj => new Proxy(obj, {
    get(target, prop) {
        if (Reflect.has(target, prop)) {
            return Reflect.get(target, prop);
        } else {
            throw new Error(`Property ${prop} is not defined`);
        }
    },
    set(target, prop, value) {
        if (Reflect.has(target, prop)) {
            return Reflect.set(target, prop, value);
        } else {
            throw new Error(`Cannot set undefined property ${prop}`);
        }
    }
});

const obj = noUndefined({
    name: 'Sylvain',
    age: 30
});

console.log(obj.address); // Uncaught Error: Property address is not defined
obj.address = '123 Main St'; // Uncaught Error: Cannot set undefined property address

Some may consider this behavior too strict, but it can help you to spot potential bugs in your code early, and prevent the creation of unexpected properties that could lead to hard-to-debug issues.

True private properties

JavaScript recently introduced private properties in classes, but there is still no way to create private properties in plain objects. That's why many JavaScript developers rely on naming conventions like _myPrivateProp, prefixed with an underscore, to indicate that a property should be considered private and left alone. We all know how well that works in practice…
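For reference, here is what true privacy looks like in classes with the # syntax; nothing equivalent exists for plain object literals:

```javascript
class User {
    #password; // a real private field, invisible from outside the class body
    constructor(password) { this.#password = password; }
    matchesPassword(candidate) { return candidate === this.#password; }
}

const user = new User('carrotsoup');
console.log(user.matchesPassword('carrotsoup')); // true
console.log('password' in user);                 // false — no such property exists
// user.#password => SyntaxError when accessed outside the class
```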

Another trick used to hide properties in a scope is closures: the ability of a function to hold a reference to its outer scope without exposing that scope to the function's caller. But that's not very convenient, and closures have a reputation for leaking memory and causing unexpected side effects.
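For comparison, the closure version of the idea, with a made-up factory function: the password lives only in the factory's scope and is never a property at all:

```javascript
const makeUser = (name, password) => ({
    name,
    // `password` is captured by the closure, not stored on the object
    matchesPassword: candidate => candidate === password
});

const user = makeUser('Sylvain', 'carrotsoup');
console.log(user.matchesPassword('carrotsoup')); // true
console.log(user.password);                      // undefined — nothing to leak
```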

With Proxies, you can create a wrapper that only exposes a subset of an object's properties and hides the rest. For example, still following the _underscorePrefixedProps convention, but this time hiding and protecting these fields for real:

const isPrivate = prop => typeof prop === "string" && prop.startsWith("_");

const privateProps = obj => {
  // writes to "private" props from the outside land here, never on the target
  const overriddenPrivates = {};
  return new Proxy(obj, {
    has(target, prop) {
        return Reflect.has(isPrivate(prop) ? overriddenPrivates : target, prop);
    },
    get(target, prop) {
        return Reflect.get(isPrivate(prop) ? overriddenPrivates : target, prop);
    },
    set(target, prop, value) {
        return Reflect.set(isPrivate(prop) ? overriddenPrivates : target, prop, value);
    },
    ownKeys(target) {
        return Reflect.ownKeys(target).filter(key => !isPrivate(key));
    }
    // simplified example: should also cover the deleteProperty,
    // defineProperty and getOwnPropertyDescriptor traps
  });
}

const internalObj = {
    name: 'Sylvain',
    _password: "carrotsoup",
    matchesPassword: function(password) {
        return internalObj._password === password;
    }
}
const obj = privateProps(internalObj);

console.log(obj.name); // Sylvain
console.log(obj._password); // undefined
obj._password = "somethingelse"; // won't break
console.log(obj.matchesPassword("carrotsoup")); // true
console.log(Object.keys(obj)); // ["name", "matchesPassword"]

Exposing a Proxy with such defensive behaviors can be a way to simplify your job as a library author, by providing a clean and safe API to your users, and ensure they can’t break it by accident.

Data validation and dynamic type-checking

Another common criticism of JavaScript is its weak typing. You can assign any type of value to any property of an object, and the language won’t complain until you try to use the value in a way that is incompatible with its type.

The solution that JavaScript developers came up with is to globally adopt TypeScript, a superset of JavaScript that adds static typing to the language. TypeScript is a great tool and has convinced almost everyone of the importance of static type analysis. But it's important to remember that TypeScript is a compile-time tool: it doesn't check the types of your objects at runtime. Therefore, runtime errors may still occur for data that is only determined at runtime. That includes server responses, data stored locally in the browser, user input, third-party libraries and APIs, browser and system specifics…

Currently, developers tackle this issue by writing a lot of validation code, either manually or with the help of libraries like ArkType or Zod, which have great interoperability with TypeScript. These data validation libraries check the data and throw an error if it doesn't match the expected schema. But this is a one-time check, and you have to remember to call the validation function every time you receive or manipulate the data. You can picture them as customs officers checking the data at the border: once the data is in, you lose control.

Proxies can help you to enforce automatic and persistent type-checking at runtime, by intercepting all the read and write operations on an object and checking the data against a schema. Here is an example of a basic Proxy that checks the data against a schema using only the typeof operator:

const schema = {
    name: 'string',
    age: 'number'
};

const typechecked = schema => obj => {
    const proxy = new Proxy(obj, {
        get(target, prop) {
            const expectedType = schema[prop];
            const value = Reflect.get(target, prop);
            // only check properties declared in the schema
            if (expectedType && typeof value !== expectedType) {
                throw new TypeError(`Expected type '${expectedType}' for property '${prop}', but got '${typeof value}'`);
            }
            return value;
        },
        set(target, prop, value) {
            const expectedType = schema[prop];
            if (expectedType && typeof value !== expectedType) {
                throw new TypeError(`Expected type '${expectedType}' for property '${prop}', but got '${typeof value}'`);
            }
            return Reflect.set(target, prop, value);
        }
    });
    Object.assign(proxy, obj); // immediately checks the existing data through the set trap
    return proxy;
}

const user = typechecked(schema)({
    name: 'Sylvain',
    age: 30
});

// then later, at any moment and place in your code
user.age = '30'; // Uncaught TypeError: Expected type 'number' for property 'age', but got 'string'

This is a very basic example, but you can imagine extending this Proxy to check more complex types like arrays, nested objects, dates, functions, or custom types. You could also add richer validation logic, like testing a string against a regular expression or checking that a number is an integer, or allow custom validation functions in the schema. The possibilities are endless, and we quickly move beyond simple type-checking.
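As a sketch of that direction, and under the assumption that schema entries become predicate functions rather than typeof strings, the set trap barely changes:

```javascript
const validated = schema => obj => new Proxy(obj, {
    set(target, prop, value) {
        const check = schema[prop]; // validation predicate for this property, if any
        if (check && !check(value)) {
            throw new TypeError(`Invalid value for property '${prop}': ${value}`);
        }
        return Reflect.set(target, prop, value);
    }
});

const userSchema = {
    name: v => typeof v === 'string' && v.length > 0,
    age: v => Number.isInteger(v) && v >= 0
};

const user = validated(userSchema)({ name: 'Sylvain', age: 30 });
user.age = 31;    // passes the predicate
// user.age = -1; // Uncaught TypeError: Invalid value for property 'age': -1
```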

This is what I did with my open source project ObjectModel, an attempt to use Proxies to add strong dynamic type-checking to JavaScript. It can be an interesting alternative to TypeScript if you work on a highly dynamic application that mostly deals with data known only at runtime.

Monitoring & debugging

Let's move on to another use case. As a developer, you may be familiar with this situation: there's a bug in your application and some object ends up with an unexpected value. You have no idea how that value got there, as the object is manipulated in many places in your codebase. So you could add console.log statements at every possible origin, or set a breakpoint and inspect the value and call stack every time, until you find the culprit.

Or you can replace your object with a Proxy that logs every operation performed on it, and even starts the debugger when a certain condition is met:

const monitor = (obj, debugPredicate = () => false) => new Proxy(obj, {
    get(target, prop) {
        console.trace(`Getting property ${prop}`);
        return Reflect.get(target, prop);
    },
    set(target, prop, value) {
        console.trace(`Setting property ${prop} to ${value}`);
        if(debugPredicate(prop, value)) {
            debugger;
        }
        return Reflect.set(target, prop, value);
    }
    // add more traps to cover all the operations you want to monitor
});

Performance optimization

Now let's talk about functions. As stated in the intro, functions are first-class citizens in JavaScript, so they can be proxied too. This opens the door to performance optimizations like memoization: a Proxy can intercept function calls and cache previously computed results:

const memoize = fn => {
    const cache = new Map();
    return new Proxy(fn, {
        apply(target, thisArg, args) {
            const key = args.join();
            if (cache.has(key)) {
                return cache.get(key);
            } else {
                const result = Reflect.apply(target, thisArg, args);
                cache.set(key, result);
                return result;
            }
        }
    });
};
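One caveat with this sketch: args.join() is a naive cache key. It coerces every argument to a string, so calls like fn(1) and fn('1') collide. Restating memoize in condensed form to show the pitfall:

```javascript
const memoize = fn => {
    const cache = new Map();
    return new Proxy(fn, {
        apply(target, thisArg, args) {
            const key = args.join(); // naive: 1 and '1' produce the same key
            if (!cache.has(key)) cache.set(key, Reflect.apply(target, thisArg, args));
            return cache.get(key);
        }
    });
};

const typeOf = memoize(x => typeof x);
console.log(typeOf(1));   // "number"
console.log(typeOf('1')); // "number" — stale result served for a different argument
```

A sturdier key could be built with JSON.stringify(args), at the cost of excluding non-serializable arguments.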

It's important to remember that Proxies also incur a performance cost. So before applying memoization everywhere, you should check whether the performance gain is worth the overhead of the Proxy. And to measure that, how about using a Proxy again?


const collectMemoizationGain = fn => {
  const memoized = memoize(fn)
  let calls = 0
  return new Proxy(fn, {
    apply(fn, thisArg, args) {
      calls++
      let t0 = performance.now()
      const out = Reflect.apply(fn, thisArg, args)
      let t1 = performance.now()
      let dt = t1 - t0

      t0 = performance.now()
      const memoizedOut = Reflect.apply(memoized, thisArg, args)
      t1 = performance.now()
      let memoizedDt = t1 - t0

      if(memoizedDt < dt) {
        console.log(`Memoization gain for ${fn.name}: ${dt - memoizedDt} ms after ${calls} calls`)
      }
      
      return out
    }
  })
}

const memoizedIsPrime = collectMemoizationGain(function isPrime(n){
  if(n < 2) return false
  for(let i = 2; i <= Math.sqrt(n); i++){
    if(n % i === 0) return false
  }
  return true
})

memoizedIsPrime(100000000003)
memoizedIsPrime(100000000003) // Memoization gain for isPrime: 3.6 ms after 2 calls

Reactivity systems

Finally, let's talk about reactivity systems. This is currently one of the most prevalent applications of Proxies in the JavaScript ecosystem, even though many developers are not aware of it. Reactivity systems are used in web frameworks to automatically update the UI of an application when its data changes. They are a key feature of modern web frameworks like Vue.js, Svelte, or Angular. The most notable adopter of Proxies among web frameworks has been Vue.js, which has used them at the core of its reactivity system since Vue.js 3.0. Previously, it relied on getter/setter property descriptors, which had the major drawback of not being able to detect property addition or deletion. Proxies are also used in other libraries like MobX or Alpine.js.

The main concept binding Proxies and reactivity systems together is the Observer pattern. Let's call the part that needs to be updated the view, and the part that holds the data the state. The view is an observer of the state, and the state is an observable that can notify its observers when it changes. We could update the entire view whenever any part of the state changes, but that would be very suboptimal. Instead, we only update the part of the view that relies on the part of the state that changed. This is called fine-grained reactivity.

illustration of a reactivity system

To identify which part of the view relies on which part of the state, and build that mapping between observers and observed properties, another concept comes into play: dependency tracking. It consists of recording which data properties are read while the view is computed. This is where the get trap of the Proxy comes in:

const tracker = {
  observers: new Map(),
  trackedProps: new Set(),
  track(compute, observer) {
    this.trackedProps.clear();
    compute()
    this.trackedProps.forEach(prop => {
      this.observers.set(prop, [...(this.observers.get(prop) ?? []), observer]);
    })
  },
  notify(propChanged) {
    this.observers.get(propChanged)?.forEach(observer => observer());
  }
}

const observable = initialState => {
  return new Proxy(initialState, {
    get(target, prop) {
      tracker.trackedProps.add(prop);
      return Reflect.get(target, prop);
    },
    set(target, prop, value) {
      const returnValue = Reflect.set(target, prop, value);
      tracker.notify(prop);
      return returnValue
    }
  })
};

The observable function makes the object passed as argument observable, and will allow the tracker object to collect the list of properties that have been read. Now, we have everything needed to make a reactivity system. Let’s make a very simple one, where the view is just a computed projection of the state:

const computed = (getter) => {
  return {
    observe: observer => tracker.track(getter, observer),
    get value() { return getter() }
  };
}

const state = observable({ name: "Alice Cooper", age: 30 });
const initials = computed(() => state.name.split(" ").map(s => s[0]).join(""));

initials.observe(() => console.log("Initials changed to", initials.value));

state.name = "Bob Marley"; // logs "Initials changed to BM"
state.age = 31 // doesn't trigger the observer

If you replace the console log observer by something more sophisticated like rendering and updating DOM elements, and use the computed function in a template parser, you have the basis of a web framework reactivity system.

What's the next use case?

Proxies in JavaScript have opened the door to numerous possibilities, and I believe we have only scratched the surface. It's interesting to look at how people use Proxies in the wild. I want to conclude this article by showcasing a few ideas made possible with Proxies:

  • Chainable REST API calls: api.users[1].posts[2].comments[3].delete() calling DELETE /api/users/1/posts/2/comments/3
  • Real-time data synchronization: syncedData = sync(data, "ws://myserver.com") automatic data synchronization between server and client over WebSocket
  • Undo/Redo system: state = undoable(initialState); state.undo() // revert last modification allowing to undo and redo changes on a data object
  • Revocable references: const { proxy, revoke } = Proxy.revocable(obj, {}); revoke() allowing to revoke a reference to an object and prevent further access to it
  • Dynamic behavior based on method names: dbProxy.findByNameAndCity("John", "New York") calling db.find({name: "John", city: "New York"})
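To show these are not far-fetched, here is a rough sketch of the first idea, the chainable API call builder. The /api prefix and the returned request descriptor are made up for the example; a real version would pass them to fetch:

```javascript
const api = (segments = []) => new Proxy(() => {}, {
    // every property access extends the path; index access like [1]
    // arrives in the trap as the string "1"
    get: (_, prop) => api([...segments, prop]),
    // calling the chain treats the last segment as the HTTP verb
    apply(_, thisArg, args) {
        const method = segments[segments.length - 1].toUpperCase();
        const url = '/api/' + segments.slice(0, -1).join('/');
        return { method, url }; // a real version: fetch(url, { method })
    }
});

const client = api();
console.log(client.users[1].posts[2].comments[3].delete());
// { method: "DELETE", url: "/api/users/1/posts/2/comments/3" }
```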

A collection of projects around Proxies is available here: awesome-es2015-proxy

And you, what innovative ideas do you have for using Proxies?