A couple of days ago I noted that a commit to the Node.js development tree had removed the isolates feature that had been expected for Node.js 0.8.x. Over on the node-users mailing list, Isaac Schlueter posted an explanation of "why".
Why was this feature planned?
The Isolates feature was intended to make it possible to run child_process.fork() in a thread, rather than a full process. The justification was to make it cheaper to spin up new child node instances, as well as allowing for fast message-passing using shared memory in binary addons, while retaining the semantics of node's child_process implementation by keeping them in completely isolated v8 instances.
Why was it removed?

[It] ultimately turned out to cause too much instability in node's internal functionality to justify continuing with it at this time. It requires a lot of complexity to be added to libuv and node, and isn't likely to yield enough gains to be worth the investment.
One of those disappointed pointed out that the justification for Isolates was that it was going to make Node more able to do intense CPU-bound operations without blocking everything else, a limitation that is one of Node's biggest criticisms.

Let's stop and explain this a bit, because it is something I cover in my book, Node Web Development. A few months ago there was a blog post using the Fibonacci calculation (as I do in Node Web Development) to demonstrate the problem. Basically, a long-running calculation blocks event execution, preventing the Node.js process from doing its event processing job. In my book I described two ways to get around this: a) refactoring the algorithm to dispatch sub-calculations via the event dispatch mechanism, or b) distributing the calculation to a back-end process.
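To make that concrete, here is a minimal sketch of the first approach. It is my own illustration rather than the code from the book, and it uses setImmediate to yield control back to the event loop between sub-calculations (the function names fibBlocking and fibAsync are mine).

    // Naive recursive Fibonacci: a CPU-bound calculation that occupies the
    // event loop for its entire duration, so no other events get serviced.
    function fibBlocking(n) {
      return n < 2 ? n : fibBlocking(n - 1) + fibBlocking(n - 2);
    }

    // Event-loop-friendly variant: each recursive step is deferred with
    // setImmediate, giving pending I/O callbacks a chance to run in between.
    function fibAsync(n, done) {
      if (n < 2) return done(n);
      setImmediate(function () {
        fibAsync(n - 1, function (a) {
          fibAsync(n - 2, function (b) {
            done(a + b);
          });
        });
      });
    }

    fibAsync(20, function (result) {
      console.log('fib(20) =', result); // 6765
    });

The asynchronous version does far more total work and is much slower end to end, but it never monopolizes the process, which is usually what matters for a server.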
The question at this point is whether Node should strive to be a do-everything, be-everything platform, or whether it should focus on the thing it does best: extremely fast event-driven I/O processing.

If everyone coming to Node.js knows that long-running calculations require special handling, then is it really a problem? For example, people generally don't use a hammer to brush their teeth, because everybody knows that hammers are for bashing things. In other words, you use the best tool for the job, and what Node strives to be is a tool for extremely fast event-driven I/O processing.
Isaac responded to the above criticism by saying you can still launch a child process to push the long-running calculation out of the main process. Nothing about child_process.fork has changed other than its implementation no longer being built on Isolates; the cost is that spinning up a child process this way is a bit more expensive.
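For reference, here is a minimal sketch of that pattern, assuming a hypothetical worker script named fib-worker.js; the message shapes are my own and not taken from the mailing list thread.

    // parent.js -- push the CPU-bound work into a separate Node process so
    // the parent's event loop stays responsive.
    var child_process = require('child_process');

    var worker = child_process.fork(__dirname + '/fib-worker.js');

    worker.on('message', function (msg) {
      console.log('fib(' + msg.n + ') = ' + msg.result);
      worker.kill();
    });

    worker.send({ n: 40 });

    // fib-worker.js -- the child process; it is free to block its own event
    // loop without stalling the parent.
    function fib(n) {
      return n < 2 ? n : fib(n - 1) + fib(n - 2);
    }

    process.on('message', function (msg) {
      process.send({ n: msg.n, result: fib(msg.n) });
    });

With Isolates, fork() would have created a thread running an isolated v8 instance rather than a whole operating-system process, but code like this would have looked exactly the same; that is what "retaining the semantics of node's child_process implementation" means.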
Ben Noordhuis suggested this:
Retrofitting thread safety onto a code base that wasn't designed for it leaves a very wide margin for obscure bugs. Offset against the potential benefits (which were questionable and probably not bottlenecks to most people*) the choice was not hard to make.

And several others piped in saying "stability and debugging first".