In recent years, JavaScript has grown considerably in size. This blog post explores what’s still missing.

Notes:

  1. I’m only listing the missing features that I find most important. Many others are useful, but there is also a risk of adding too much.
  2. My choices are subjective.
  3. Almost everything mentioned in this blog post is on TC39’s radar. That is, it also serves as a preview of a possible future JavaScript.

For more thoughts on the first two issues, see the section on language design.

Values

Comparing objects by value

At the moment, JavaScript only compares primitive values such as strings by value (by looking at their contents):

> 'abc' === 'abc'
true

In contrast, objects are compared by identity (an object is only strictly equal to itself):

> {x: 1, y: 4} === {x: 1, y: 4}
false

It would be nice if there were a way to create objects that are compared by value:

> #{x: 1, y: 4} === #{x: 1, y: 4}
true

Another possibility is to introduce a new kind of class (with the exact details to be determined):

@[ValueType]
class Point {
  // ···
}

Aside: The decorator-like syntax for marking the class as a value type is based on a draft proposal.
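
If such value-type classes existed, instances would presumably be compared by their contents. The following assertion is only a sketch of that idea (hypothetical semantics, assuming a constructor that takes x and y; the exact behavior is still to be determined):

// Hypothetical: two Points with equal coordinates would be strictly equal
assert.equal(new Point(1, 4) === new Point(1, 4), true);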

Putting objects into data structures

As objects are compared by identity, it rarely makes sense to put them into (non-weak) ECMAScript data structures such as Maps:

const m = new Map();
m.set({x: 1, y: 4}, 1);
m.set({x: 1, y: 4}, 2);
assert.equal(m.size, 2);

This problem can be fixed via custom value types. Alternatively, the management of Set elements and Map keys could become customizable. For example:

  • Map via hash table: requires one operation for checking equality and another operation for creating hash codes. If you work with hash codes, you want your objects to be immutable. Otherwise, it’s too easy to break the data structure.
  • Map via sorted tree: requires an operation for comparing two values, to manage the values it stores.
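
A minimal sketch of what such customization could look like. The options parameter for the Map constructor is purely hypothetical and not part of any current proposal:

// Hypothetical API: supply hashing and equality operations for keys
const points = new Map([], {
  hash: (p) => (p.x * 31) + p.y,
  equals: (p1, p2) => p1.x === p2.x && p1.y === p2.y,
});
points.set({x: 1, y: 4}, 1);
points.set({x: 1, y: 4}, 2); // replaces the previous entry
assert.equal(points.size, 1);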

Large integers

JavaScript numbers are always 64-bit floating point numbers (doubles), which gives you 53 bits plus sign for integers. That means that beyond 53 bits, you can no longer represent every integer:

> 2 ** 53
9007199254740992
> (2 ** 53) + 1  // can’t be represented
9007199254740992
> (2 ** 53) + 2
9007199254740994

This is a considerable restriction for some use cases. There is now a proposal for BigInts, real integers whose precision grows as necessary:

> 2n ** 53n
9007199254740992n
> (2n ** 53n) + 1n
9007199254740993n

BigInts also support casting, which gives you values with a fixed number of bits:

const int64a = BigInt.asUintN(64, 12345n);
const int64b = BigInt.asUintN(64, 67890n);
const result = BigInt.asUintN(64, int64a * int64b);

Decimal computations

JavaScript’s numbers are 64-bit floating point numbers (doubles), based on the IEEE 754 standard. Given that their representation is base-2, you can get rounding errors when dealing with decimal fractions:

> 0.1 + 0.2
0.30000000000000004

That is especially a problem in scientific computing and financial technology (fintech). A proposal for base-10 numbers is currently at stage 0. They may end up being used like this (note the suffix m for decimal numbers):

> 0.1m + 0.2m
0.3m

Categorizing values

At the moment, categorizing values is quite cumbersome in JavaScript:

  • First, you have to decide whether to use typeof or instanceof.
  • Second, typeof has the well-known quirk of categorizing null as 'object'. I’d also consider functions being categorized as 'function' a quirk.
    > typeof null
    'object'
    > typeof function () {}
    'function'
    > typeof []
    'object'
    
  • Third, instanceof does not work for objects from other realms (frames etc.).

It may be possible to fix this via a library (I’ll create a proof of concept, once I have time).
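
Such a library might offer a single function that smooths over these quirks. This is a rough sketch (the name getTypeName() is made up for this post); it relies on Object.prototype.toString, which also works for built-ins that come from other realms:

function getTypeName(value) {
  if (value === null) return 'null'; // avoid the typeof quirk
  const t = typeof value;
  if (t !== 'object' && t !== 'function') return t; // primitive values
  // For objects (including functions and Arrays), use the built-in tag:
  return Object.prototype.toString.call(value).slice(8, -1);
}

assert.equal(getTypeName(null), 'null');
assert.equal(getTypeName([]), 'Array');
assert.equal(getTypeName(() => {}), 'Function');
assert.equal(getTypeName('abc'), 'string');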

Functional programming

More expressions

C-style languages make an unfortunate distinction between expressions and statements:

// Conditional expression
let str1 = someBool ? 'yes' : 'no';

// Conditional statement
let str2;
if (someBool) {
  str2 = 'yes';
} else {
  str2 = 'no';
}

Especially in functional languages, everything is an expression. Do-expressions let you use statements in all expression contexts:

let str3 = do {
  if (someBool) {
    'yes'
  } else {
    'no'
  }
};

The following code is a more realistic example. Without do-expressions, you need an immediately invoked arrow function to hide the variable result inside a scope:

const func = (() => {
  let result; // cache
  return () => {
    if (result === undefined) {
      result = someComputation();
    }
    return result;
  }
})();

With a do-expression, you can write this code more elegantly:

const func = do {
  let result;
  () => {
    if (result === undefined) {
      result = someComputation();
    }
    return result;
  };
};

Matching: a destructuring switch

JavaScript makes it easy to work directly with objects. However, there is no built-in way of switching over cases based on the structure of an object. That could look as follows (the example is taken from the proposal):

const resource = await fetch(jsonService);
case (resource) {
  when {status: 200, headers: {'Content-Length': s}} -> {
    console.log(`size is ${s}`);
  }
  when {status: 404} -> {
    console.log('JSON not found');
  }
  when {status} if (status >= 400) -> {
    throw new RequestError(resource);
  }
}

As you can see, the new case statement is similar to switch in some ways, but uses destructuring to pick cases. This kind of functionality is useful whenever one works with nested data structures (e.g., in compilers). The proposal for pattern matching is currently at stage 1.

Pipeline operator

There are currently two competing proposals for the pipeline operator. Here, we are looking at Smart Pipelines (the other proposal is called F# Pipelines).

The basic idea of the pipeline operator is as follows. Consider the following nested function calls:

const y = h(g(f(x)));

This notation usually does not reflect how we think about the computational steps. Intuitively, we’d describe them as follows:

  • Start with the value x.
  • Then apply f() to it.
  • Then apply g() to the result.
  • Then apply h() to the result.
  • Then assign the result to y.

The pipeline operator lets us express this intuition better:

const y = x |> f |> g |> h;

In other words, the following two expressions are equivalent.

f(123)
123 |> f

Additionally, the pipeline operator supports partial application (similar to the method .bind() of functions): The following two expressions are equivalent.

123 |> f(#)
123 |> (x => f(x))

One important benefit of the pipeline operator is that you can use functions as if they were methods – without changing any prototypes:

import {map} from 'array-tools';
const result = arr |> map(#, x => x * 2);

To conclude, let’s look at a longer example (taken from the proposal and slightly edited):

promise
|> await #
|> # || throw new TypeError(
  `Invalid value from ${promise}`)
|> capitalize // function call
|> # + '!'
|> new User.Message(#)
|> await stream.write(#)
|> console.log // method call
;

Concurrency

JavaScript has always had limited support for concurrency. The de-facto standard for concurrent processes is the Worker API, which is available in web browsers and Node.js (without a flag in v11.7 and later).

Using it from Node.js looks as follows.

const {
  Worker, isMainThread, parentPort, workerData
} = require('worker_threads');

if (isMainThread) {
  const worker = new Worker(__filename, {
    workerData: 'the-data.json'
  });
  worker.on('message', result => console.log(result));
  worker.on('error', err => console.error(err));
  worker.on('exit', code => {
    if (code !== 0) {
      console.error('ERROR: ' + code);
    }
  });
} else {
  const {readFileSync} = require('fs');
  const fileName = workerData;
  const text = readFileSync(fileName, {encoding: 'utf8'});
  const json = JSON.parse(text);
  parentPort.postMessage(json);
}

Alas, Workers are relatively heavyweight – each one comes with its own realm (global variables etc.). I’d like to see a more lightweight construct in the future.

Standard library

One area where JavaScript is still clearly behind other languages is its standard library. It does make sense to keep it minimal, as external libraries are easier to evolve and adapt. However, there are a few core features that would be useful.

Modules instead of namespace objects

JavaScript’s standard library was created before the language had modules. Therefore, functions were put into namespace objects such as Object, Reflect, Math and JSON:

  • Object.keys()
  • Reflect.ownKeys()
  • Math.sign()
  • JSON.parse()

It would be great if this functionality could be put in modules. It would have to be accessed via special URLs, e.g. with the pseudo-protocol std:

// Old:
assert.deepEqual(
  Object.keys({a: 1, b: 2}),
  ['a', 'b']);

// New:
import {keys} from 'std:object';
assert.deepEqual(
  keys({a: 1, b: 2}),
  ['a', 'b']);

The benefits are:

  • JavaScript would become more modular (which could speed up startup times and reduce memory consumption).
  • Calling an imported function is faster than calling a function stored in an object.

Helpers for iterables (sync and async)

Benefits of iterables include on-demand computation of values and support for many data sources. However, JavaScript currently comes with very few tools for working with iterables. For example, if you want to filter, map or reduce an iterable, you have to convert it to an Array:

const iterable = new Set([-1, 0, -2, 3]);
const filteredArray = [...iterable].filter(x => x >= 0);
assert.deepEqual(filteredArray, [0, 3]);

If JavaScript had tool functions for iterables, you could filter iterables directly:

const filteredIterable = filter(iterable, x => x >= 0);
assert.deepEqual(
  // We only convert the iterable to an Array, so we can
  // check what’s in it:
  [...filteredIterable], [0, 3]);
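
Such a filter() could be implemented as a generator function. The following is a minimal sketch (the function is an assumption of this post, not an existing API):

function* filter(iterable, callback) {
  for (const item of iterable) {
    if (callback(item)) {
      yield item; // values are produced on demand
    }
  }
}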

These are a few more examples of tool functions for iterables:

// Count elements in an iterable
assert.equal(count(iterable), 4);

// Create an iterable over a part of an existing iterable
assert.deepEqual(
  [...slice(iterable, 2)],
  [-1, 0]);

// Number the elements of an iterable
// (producing another – possibly infinite – iterable)
for (const [i,x] of zip(range(0), iterable)) {
  console.log(i, x);
}
// Output:
// 0, -1
// 1, 0
// 2, -2
// 3, 3

Notes:

  • Consult Python’s itertools for examples of tool functions for iterators.
  • For JavaScript, each tool function for iterables should come in two versions: one for synchronous iterables and one for asynchronous iterables.
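
To illustrate the second note: an asynchronous sibling of the filter() sketch shown earlier could be an async generator (again, an assumption, not an existing API):

async function* asyncFilter(asyncIterable, callback) {
  for await (const item of asyncIterable) {
    if (callback(item)) {
      yield item;
    }
  }
}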

Immutable data

It would be nice to have more support for non-destructively transforming data. Two relevant libraries are:

  • Immer is relatively lightweight and works with normal objects and Arrays.
  • Immutable.js is more powerful and heavyweight and comes with its own data structures.
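
As a brief illustration of the first library: with Immer, you derive an updated copy by “mutating” a draft. This is only a sketch; the exact import form depends on the Immer version you use:

import {produce} from 'immer';

const original = {counters: [1, 2, 3]};
const updated = produce(original, (draft) => {
  draft.counters.push(4); // only the draft is changed
});
assert.deepEqual(original, {counters: [1, 2, 3]}); // unchanged
assert.deepEqual(updated, {counters: [1, 2, 3, 4]});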

Better support for date times

JavaScript’s built-in support for date times has many quirks. That’s why the current recommendation is to use libraries for all but the most basic tasks.

Thankfully, work on Temporal, a better date time API, is ongoing:

const dateTime = new CivilDateTime(2000, 12, 31, 23, 59);
const instantInChicago = dateTime.withZone('America/Chicago');

Features that may not be needed

The pros and cons of optional chaining

One proposed feature that is relatively popular is optional chaining. The following two expressions are equivalent.

obj?.prop
(obj === undefined || obj === null) ? undefined : obj.prop

This feature is especially convenient for chains of properties:

obj?.foo?.bar?.baz

However, this feature has downsides:

  • Deeply nested structures are more difficult to manage.
  • Being so forgiving when accessing data hides problems that will surface much later and are then harder to debug.

An alternative to optional chaining is to extract the information once, in a single location:

  • You can either write a helper function that extracts the data.
  • Or you can write a function whose input is deeply nested data and whose output is simpler, normalized data.

With either approach, it is possible to perform checks and to fail early if there are problems.
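
A sketch of the first approach (the shape of the data and the helper name are made up for this example):

// Extract once, check once, fail early:
function getStreet(user) {
  const address = user.address;
  if (address === undefined || address === null) {
    throw new TypeError('Missing property: address');
  }
  return address.street;
}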

Do we need operator overloading?

Early work is currently being done for operator overloading, but infix function application may be enough (there currently is no proposal for it, though):

import {BigDecimal, plus} from 'big-decimal';
const bd1 = new BigDecimal('0.1');
const bd2 = new BigDecimal('0.2');
const bd3 = bd1 @plus bd2; // plus(bd1, bd2)

The benefits of infix function application are:

  • You can create operators other than those that are already supported by JavaScript.
  • Compared to normal function application, nested expressions remain readable.

This is an example of a nested expression:

a @plus b @minus c @times d
times(minus(plus(a, b), c), d)

Interestingly, the pipeline operator also helps with readability:

plus(a, b)
  |> minus(#, c)
  |> times(#, d)

Various smaller things

These are a few things that I’m occasionally missing, but that I don’t consider as essential as what I’ve mentioned previously:

  • Chained exceptions: enable you to catch an error, wrap additional information around it and throw it again.
    new ChainedError(msg, origError)
    
  • Composable regular expressions:
    re`/^${RE_YEAR}-${RE_MONTH}-${RE_DAY}$/u`
    
  • Escaping text for regular expressions (important for .replace()):
    > const re = new RegExp(RegExp.escape(':-)'), 'ug');
    > ':-) :-) :-)'.replace(re, '🙂')
    '🙂 🙂 🙂'
    
  • Array.prototype.get() that supports negative indices:
    > ['a', 'b'].get(-1)
    'b'
    
  • As-patterns for matching and destructuring (proposal by Kat Marchán):
    function f(...[x, y] as args) {
      if (args.length !== 2) {
        throw new Error();
      }
      // ···
    }
    
  • Checking deep equality for objects (maybe: optionally parameterize with a predicate, to support custom data structures):
    assert.equal(
      {foo: ['a', 'b']} === {foo: ['a', 'b']},
      false);
    assert.equal(
      deepEqual({foo: ['a', 'b']}, {foo: ['a', 'b']}),
      true);
    
  • Enums: One benefit of adding enums to JavaScript is that it would close a gap with TypeScript – which already has enums. There are currently two draft proposals (which aren’t at a formal stage, yet). One is by Rick Waldron, the other one is by Ron Buckton. In both proposals, the simplest syntax looks like this:
    enum WeekendDay {
      Saturday, Sunday
    }
    const day = WeekendDay.Sunday;
    
  • Tagged collection literals (proposed – and withdrawn – by Kat Marchán): allow you to create Maps and Sets as follows:
    const myMap = Map!{1: 2, three: 4, [[5]]: 6}
      // new Map([[1, 2], ['three', 4], [[5], 6]])
    
    const mySet = Set!['a', 'b', 'c'];
      // new Set(['a', 'b', 'c'])
    

FAQ: future JavaScript

Will JavaScript ever support static typing?

Not anytime soon! The current separation between static typing at development time (via TypeScript or Flow) and pure JavaScript at runtime works well. So there is no immediate reason to change anything.

Why can’t we clean up JavaScript, by removing quirks and outdated features?

A key requirement for the web is to never break backward compatibility:

  • The downside is that the language has many legacy features.
  • But the upsides outweigh this downside: Large code bases remain homogeneous; migrating to a new version is simple; engines remain smaller (no need to support multiple versions); etc.

It is still possible to fix some mistakes, by introducing better versions of existing features.

For more information on this topic, consult “JavaScript for impatient programmers”.

Thoughts on language design

As a language designer, no matter what you do, you will always make some people happy and some people sad. Therefore, the main challenge for designing future JavaScript features is not to make everyone happy, but to keep the language as consistent as possible.

However, there is also disagreement on what “consistent” means. So, the best we can probably do is to establish a consistent “style”, conceived and enforced by a small group of people (up to three). That does not preclude them being advised and helped by many others, but they should set the general tone.

Quoting Fred Brooks:

A little retrospection shows that although many fine, useful software systems have been designed by committees and built as part of multipart projects, those software systems that have excited passionate fans are those that are the products of one or a few designing minds, great designers.

An important duty of these core designers would be to say “no” to features, to prevent JavaScript from becoming too big.

They would also need a robust support system, as language designers tend to be exposed to considerable abuse (because people care and don’t like to hear “no”). One recent example is Guido van Rossum quitting his job as chief Python language designer, due to the abuse he received.

Other ideas

These are ideas that may also help design and document JavaScript:

  • Creating a roadmap that describes a vision for what’s ahead for JavaScript. Such a roadmap can tell a story and connect many separate pieces into a coherent whole. The last such roadmap that I’m aware of is “Harmony Of My Dreams” by Brendan Eich.
  • Documenting design rationales. Right now, the ECMAScript specification documents how things work, but not why. One example: What is the purpose of enumerability?
  • A canonical interpreter. The semi-formal parts of the specification are already almost executable. It’d be great if they could be treated and run like a programming language. (You’d probably need a convention to distinguish normative code from non-normative helper functions.)
