It’s time to fly with V8

“I’m not going to bed,” I told my girlfriend, to her great surprise. Children are never willing to go to bed, but adults like my girlfriend are dying to curl up with their pillows and quilts right after dinner. “I’m not going to bed!” I repeated with such force that my girlfriend understood I was very, very excited.

I had already written a small introduction to the internals of V8 a while back. Let’s say that some time ago I started a document to collect everything related to the internals of the V8 JavaScript engine, out of pure curiosity and eagerness to learn.

That’s why my girlfriend understood that a new V8 optimizing compiler was reason enough not to go to sleep so soon that night.


I do like to be in the company of explorers.

J.M. Barrie

TurboFan

TurboFan, that’s the name of the new V8 optimizing compiler. They could have called it “Saxophone” or even “Cocoon”, but they decided to call it TurboFan.

What does TurboFan bring us?

A new optimizing compiler that will (probably) replace the current Crankshaft. Besides that, it has built-in support for asm.js, which is what we call brutal out-of-the-box performance in my town (although @mraleph doesn’t like this).
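In case you haven’t seen it before, this is more or less what an asm.js module looks like (the names here are made up, just for illustration); the "use asm" pragma and the | 0 integer coercions are what mark the code for the asm.js path, which is what the turbo_asm flag further down is aimed at:

function AsmAdder(stdlib) {
    "use asm";
    function add(x, y) {
        x = x | 0; // x is a 32-bit integer
        y = y | 0; // y is a 32-bit integer
        return (x + y) | 0;
    }
    return { add: add };
}

print(AsmAdder(this).add(1, 2)); // prints 3 in the d8 shell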

This new optimizing compiler is still very green. As far as I could tell from the source code, there is no OSR implemented yet, nor deoptimization of the generated code, and the most basic optimization techniques are still missing from the new compiler.

And what do we have so far?

Well, a new optimizer that automatically discards dead code from the moment the AST is generated, no more heuristics in the core of the engine to find dead code, and a very large sea of doubts to investigate.

A sea of doubts


If I had 8 hours to chop down a tree, I would spend 6 sharpening the axe.

With the arrival of TurboFan we go from a compiler based on SSA (Hydrogen) to a compiler based on a “sea of nodes”; you can read a comparison of how each one works here.

The sea of nodes of this new compiler is nothing more than an intermediate language between the AST generated from the JavaScript code and the final code executed by the virtual machine.

In this ‘sea of nodes’ each node is represented as part of a graph. Every node produces a value, for instance 1 + 2 = 3. In addition, each node points to its operands (in this case 1 and 2), and there is no more information beyond that. Simple and concise.
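To get some intuition, here is a toy sketch in plain JavaScript (it has nothing to do with V8’s real data structures) of what the graph for 1 + 2 could look like:

// A node knows its operation and points to its operands, nothing else.
function Node(op, inputs) {
    this.op = op;
    this.inputs = inputs;
}

var one = new Node('NumberConstant(1)', []);
var two = new Node('NumberConstant(2)', []);
var add = new Node('Add', [one, two]); // 1 + 2: the Add node points to both operands

print(add.op + ' has ' + add.inputs.length + ' inputs'); // Add has 2 inputs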

What are the pros of the ‘sea of nodes’ vs. other intermediate languages?

• Quick lookup of constants: constants are global and each one has a single node that represents it, e.g. the number 2. Thanks to this feature we don’t duplicate nodes (there is a toy sketch of this after the list).

• Dead code detection: any node that nothing depends on is automatically marked as dead code (code whose result is never used), for example:

function test(num) {
    var a = 14 + num; // Dead code: the value of a is never used.
    return 20 + 20;
}

• Global placement of nodes. Each node is inserted in the most optimal place within the ‘sea of nodes’ graph, so during code generation it isn’t necessary to walk through closures to reach the value of a node, since its primitive is already in the right place.
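As a rough illustration of the first two points, here is another toy sketch (plain JavaScript, not V8 code): constants live in a shared table so they are never duplicated, and any node that ends up with zero uses can be marked as dead:

var constants = {}; // one shared node per constant value

function constant(value) {
    if (!(value in constants)) {
        constants[value] = { op: 'Constant(' + value + ')', uses: 0 };
    }
    return constants[value];
}

function addNode(a, b) {
    a.uses++;
    b.uses++;
    return { op: 'Add', uses: 0, inputs: [a, b] };
}

// Build the graph for the test() example above.
var num = { op: 'Parameter(num)', uses: 0 };
var deadNode = addNode(constant(14), num);        // the "var a = 14 + num": nothing ever uses it
var result = addNode(constant(20), constant(20)); // "20 + 20": the constant 20 is a single shared node
result.uses++;                                    // the return statement consumes it

print(constant(20) === constant(20)); // true: the constant node is never duplicated
print(deadNode.uses === 0);           // true: zero uses, so it can be marked as dead code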

But beware, the V8 team has not invented anything new: this ‘sea of nodes’ technique has been used for a while by HotSpot, the Java compiler.

What do I have to do to try it?

You have to walk before you run.

It didn’t take me long to get TurboFan running on my MacBook; these steps should help you if you want to try it.

Download TurboFan from the bleeding_edge branch:

$ git clone -b bleeding_edge https://github.com/v8/v8.git

Install the necessary dependencies to work with V8 (you’d better go make dinner while this runs):

$ make dependencies

Compile V8:

$ make native

And once this is done, you can use the shell in out/native/d8, although I personally like to use Xcode because that way I can set breakpoints in the internals and debug. To generate an Xcode project, execute:

$ build/gyp_v8 -Dtarget_arch=ia32

And in the build/ folder you will have an all.xcodeproj ready to be opened in Xcode.

If you want to investigate a little, these are some of the flags I have found for observing how TurboFan works:

turbo_filter - "optimization filter for TurboFan compiler"  
trace_turbo - "trace generated TurboFan IR"  
trace_turbo_types - "trace generated TurboFan types"  
trace_turbo_scheduler - "trace generated TurboFan scheduler"  
turbo_asm - "enable TurboFan for asm.js code"  
turbo_verify - "verify TurboFan graphs at each phase"  
turbo_stats - "print TurboFan statistics"  
turbo_types - "use typed lowering in TurboFan"  
turbo_source_positions - "track source code positions when building TurboFan IR"  
context_specialization - "enable context specialization in TurboFan"  
turbo_deoptimization - "enable deoptimization in TurboFan"  
turbo_inlining - "enable inlining in TurboFan"  
trace_turbo_inlining - "trace TurboFan inlining" 

And these are the flags that I used when I was investigating the TurboFan internals:

--always-opt --turbo-stats --trace-turbo --turbo-types --turbo-filter=function_name

Replace function_name with the name of a function that you declare yourself.

Remember to always use --always-opt so that all executed code is optimized by TurboFan; if you don’t, only the code identified by the runtime profiler will be optimized (loops and things like that).
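Putting it all together: suppose you save something like this (the function and file names are just an example) as test.js:

function add(a, b) {
    return a + b;
}

print(add(1, 2));

Then run it through TurboFan with:

$ out/native/d8 --always-opt --turbo-stats --trace-turbo --turbo-types --turbo-filter=add test.js

and, going by the flag descriptions above, d8 should print the TurboFan statistics and the generated IR for add to the console.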

Don’t go wasting your time.

You always have to try to improve a bit, and now is a good time to start reading about how JavaScript, or anything else that intrigues you, works. Dedicate time to yourself and your family. Learn once and for all why the body of a function in JavaScript can slow down your program if it has a comment inside it, read a recipe book that will do you some good, and go back and buy that toy you loved as a child, to spend time doing what you like.
