Exploring React’s internal scheduler: A simple guide

I feel I’m not that into JavaScript virtual machines anymore. I’ve been reading about and using them for more than ten years, and I think I need something else to learn about. Anyway, one of the last things I’ve been learning about is React’s internal scheduler.

As someone who’s spent years diving into how V8’s scheduler and Node.js’s worker pools work, and who has even talked about them at different events, I find the scheduler in React really worth a look.

React’s internal scheduler source code for your joy and happiness: https://github.com/facebook/react/blob/main/packages/scheduler/src/forks/Scheduler.js

Task prioritization: React’s secret sauce

This thing is like a finely tuned engine, sorting tasks across five priority levels: ImmediatePriority, UserBlockingPriority, NormalPriority, LowPriority, and IdlePriority. The important point here is that some things just have to be done now, while others can wait. React’s got this figured out to keep your UI smooth and responsive.

React uses two main queues: the taskQueue for the now-stuff and the timerQueue for the later-stuff. It’s a smart way to manage what needs to happen immediately and what can be put on the back burner. This reminds me a lot of how Node handles tasks in its worker pool – efficient and clever.
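To make the two-queue idea concrete, here’s a minimal sketch in plain JavaScript. The names mirror the ones in Scheduler.js, but this is a simplification for illustration, not React’s actual implementation (which uses a proper min-heap rather than re-sorting an array):

```javascript
// Simplified sketch of the scheduler's two queues (illustrative, not React's code).
const taskQueue = [];   // tasks ready to run now, ordered by expiration time
const timerQueue = [];  // delayed tasks, ordered by their start time

function push(queue, task, key) {
  queue.push(task);
  queue.sort((a, b) => a[key] - b[key]); // React uses a real min-heap here
}

function advanceTimers(currentTime) {
  // Move every timer whose start time has arrived into the task queue.
  while (timerQueue.length > 0 && timerQueue[0].startTime <= currentTime) {
    const task = timerQueue.shift();
    push(taskQueue, task, 'expirationTime');
  }
}
```

Delayed tasks sit in `timerQueue` until their start time arrives, at which point they migrate into `taskQueue` ordered by how soon they expire.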

To schedule tasks, React uses unstable_scheduleCallback for lining up tasks, considering their priority and timing. It’s all about balancing what needs to happen and when. This kind of strategic scheduling is super important, kind of like how V8 manages its tasks.
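Here’s a hedged sketch of how priority can map to timing. The timeout constants below match what I saw in Scheduler.js at the time of writing, but treat them as illustrative rather than a stable contract:

```javascript
// Illustrative priority-to-timeout mapping (values mirror Scheduler.js,
// but this is a sketch, not the real scheduler).
const ImmediatePriority = 1, UserBlockingPriority = 2, NormalPriority = 3,
      LowPriority = 4, IdlePriority = 5;

function timeoutForPriority(priorityLevel) {
  switch (priorityLevel) {
    case ImmediatePriority:    return -1;         // already expired: run now
    case UserBlockingPriority: return 250;
    case LowPriority:          return 10000;
    case IdlePriority:         return 1073741823; // effectively "never expires"
    default:                   return 5000;       // NormalPriority
  }
}

function scheduleCallback(priorityLevel, callback, currentTime) {
  // The sooner a task expires, the sooner the work loop must get to it.
  const expirationTime = currentTime + timeoutForPriority(priorityLevel);
  return { callback, priorityLevel, expirationTime };
}
```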

The work loop: React’s heartbeat
Now, let’s talk about the workLoop function. This function is where the magic happens. It processes tasks, knows when to take a break (yield to the browser), and keeps things running smoothly. This reminds me a lot of Node’s event loop – doing a lot but never stopping. Surprisingly, it uses a while statement to handle the loop.
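A stripped-down sketch of that loop might look like this; `shouldYield` here is a stand-in for the scheduler’s real deadline check, which asks whether the current time slice has been used up:

```javascript
// Minimal work-loop sketch: process tasks until it's time to hand control
// back to the browser. Not React's code, just the shape of the idea.
function workLoop(taskQueue, shouldYield) {
  let processed = 0;
  while (taskQueue.length > 0) {
    if (shouldYield()) break;       // give the browser a chance to paint
    const task = taskQueue.shift(); // take the most urgent task
    task.callback();
    processed++;
  }
  return processed; // React instead returns whether there is more work left
}
```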

React’s scheduler decides when to give the browser a moment, based on how things are running. This smart use of time is something I’ve always admired in JavaScript environments, and React nails it.

Here’s something cool for the tech nerds (like me!): React’s scheduler comes packed with tools for profiling and debugging. It’s crucial to have these in your toolkit when you’re dealing with a complex system.

The scheduler is open to integration with some neat APIs, giving us devs a peek under the hood and a way to work with it directly. It’s kind of like Node’s approach with its native API.

React’s scheduler, like V8 and Node, shows the awesome talent of the JavaScript ecosystem. It’s a reminder of how much I’ll never get to learn, because it’s just impossible to keep up.

See you at the GM Island.

🏝️

Deep research into JavaScript Virtual Machines: V8 and SpiderMonkey

Since 2013, I’ve been working a lot on V8 and Node.js native addons. This has allowed me to understand the ins and outs of JavaScript Virtual Machines (VMs). Both V8 by Google and SpiderMonkey by Mozilla have important features. Some are well-known, others not so much.

If we look at the typical ways these VMs optimize code, both use techniques like dead-code elimination and function inlining. These are key to cutting down unnecessary work and making code run faster.

V8 also has something special with its use of hidden classes. This trick allows for quicker access to JavaScript object properties. It’s unique to V8 and it really helps performance.
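A small illustration of why this matters in practice: constructing objects with a consistent property order lets V8 reuse a single hidden class, while varying the order forces it to track multiple shapes and makes call sites polymorphic. Both functions below are hypothetical examples, not V8 code:

```javascript
// One consistent shape: every object gets the same hidden class,
// so property access can be optimized.
function makePointFast(x, y) {
  return { x: x, y: y };
}

// Two different shapes ({x, y} vs {y, x}): code that receives these
// points becomes polymorphic and harder for the engine to optimize.
function makePointSlow(x, y, flip) {
  const p = {};
  if (flip) { p.y = y; p.x = x; } else { p.x = x; p.y = y; }
  return p;
}
```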

Both V8 and SpiderMonkey’s Just-In-Time (JIT) compilers have a few tricks up their sleeves too. They make often-used code sections run faster by recompiling them on the fly. It’s not talked about much, but this plays a big part in making things run smoothly.

When it comes to dealing with numbers, both V8 and SpiderMonkey have an advantage. They can work with integers directly instead of changing them into floating-point numbers. This leads to quicker calculations.
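A tiny example of keeping arithmetic on the integer fast path; `| 0` is a conventional hint, borrowed from asm.js style, that a value fits in a 32-bit integer:

```javascript
// Summing with `| 0` keeps the running total a 32-bit integer,
// which lets the VM avoid boxing it as a heap-allocated double.
function sumIntegers(arr) {
  let total = 0;
  for (let i = 0; i < arr.length; i++) {
    total = (total + arr[i]) | 0; // stays on the small-integer path
  }
  return total;
}
```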

In terms of handling the event loop, there are clear differences between V8 and SpiderMonkey. As one developer said, “V8 is slightly better at dealing with async functions and promises because of the way it uses a microtask queue.”
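Whatever the engine, the microtask ordering itself is observable from JavaScript: promise callbacks (microtasks) run before timer callbacks (macrotasks), even when the timer was queued first:

```javascript
// Microtasks drain before the next macrotask, so the final order is:
// synchronous work, then the promise callback, then the timeout callback.
const order = [];
setTimeout(() => order.push('timeout'), 0);
Promise.resolve().then(() => order.push('promise'));
order.push('sync');
// once the event loop turns: order === ['sync', 'promise', 'timeout']
```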

Personally, I’ve spent more time with V8, digging into its unique features like hidden classes and its efficient microtask queue. But that doesn’t mean SpiderMonkey isn’t just as good. They each have their own strong points, and you can choose which one to use based on your needs.

Using my experience with V8 and Node.js, I decided to make native bindings for Zydis. This is an open-source library for decoding and disassembling X86 & X86-64. It was tough but also a great chance to learn. Zydis is known for its accuracy, which comes from lots of manual checks and tests.

Plus, it supports all Intel and AMD’s ISA extensions, and it’s fast and doesn’t need any other libraries. It gives a lot of detail for every instruction and it’s small in size. All of these made Zydis a good fit to work with Node.js.

Thanks to my bindings, Zydis’ strengths can now be used in Node.js, allowing for greater possibilities in decoding and disassembling machine code from JavaScript applications.

The last ride to the Game Master Island

On June 8th, Exploration Reboot bid farewell to the fabled GM Island with a monumental 40+ man raid. This resulted in laughs, chatspam, and a whole lot of bans. Old faces and new came together to make exploration history. This movie is inspired by the amazing “The last wallwalk – The movie” made by Dopefish. “You may clip our wings, but we will always remember what it was like to fly”

Machinima Studio with pre-built binaries

Over the last years, I’ve been working on a tool called Machinima Studio, a tool that can be used to create in-game cinematics for Guild Wars 2.

It automatically updates offsets when a new version is found. It uses a binary pattern search inside the memory of the process to detect the position of the PTRs.
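The pattern-search idea can be sketched as a byte search with wildcards (here `-1` marks a “don’t care” byte). This is a hypothetical toy version: the real tool scans the live memory of the game process, which this sketch obviously doesn’t do:

```javascript
// Toy byte-pattern scan with wildcards (-1 matches any byte).
// Returns the offset of the first match, or -1 if none is found.
function findPattern(haystack, pattern) {
  for (let i = 0; i + pattern.length <= haystack.length; i++) {
    let match = true;
    for (let j = 0; j < pattern.length; j++) {
      if (pattern[j] !== -1 && haystack[i + j] !== pattern[j]) {
        match = false;
        break;
      }
    }
    if (match) return i;
  }
  return -1;
}
```

With a scan like this, the tool only needs a stable byte pattern around each pointer rather than a hard-coded offset, which is why it survives game updates.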

You can watch one of the videos that I made using this tool right here:

Grab the binaries here.

Swimmer in the Secret Sea

My favorite place in Cambrils

There are some that pack their suitcase for the beach thinking about which swimsuits they are going to wear, I—when I travel to Cambrils—pack my things thinking about what books I’m going to bring home from my favorite bookstore.

Isabel, from the Isabel de Bellart bookstore, is a free fairy. When you walk through the door of her house she already knows how you feel. If you are lucky, you will see her moving around her bookstore wearing the fairy wings she usually puts on when a child walks in, one of those children who still doesn’t spend time on grown-up things like paying bills, because he is still a child.

From the outside, her bookstore may look like a store, but once you’ve breathed its air a few times, read its books, and listened to the fairy Isabel talk about that place, you start to feel that it’s your house. It’s the place you want to be; there is ice cream outside, and the sun and the beach, but you want to be there, next to the fairy, looking at books that wrap around you. Then you understand that it’s not a store, it’s a very special place.

The Swimmer In The Secret Sea

There are books that have a very striking cover; with others, the author alone is already a guarantee that you will like them.

Then there are those books where you don’t know the author and the cover tells you nothing. The Swimmer In The Secret Sea is one of those books: if you look at its cover or its back, nothing will tell you what the book really is about.

I do not want to tell you what the book is about, so I will just say it’s a book that talks about life and death. In its 90 pages it tears you apart completely and leaves you marked forever. It may seem like a small book, but it’s so intense that many 500-page books do not even come close to leaving a mark like this one.

If you really enjoy reading as much as I do, I recommend this as your next book; buy two so you can give one to a friend, because it’s really worth it.

To finish, I would like to say that I only know one other place as special as Isabel’s bookstore, it’s called Disneyland, and it’s in Paris.

I’m looking for a Foodbuddy in Madrid

There are two very important hobbies in my life, programming and cooking. There are other hobbies besides those but they are somewhat irrelevant.

I love to eat, I enjoy looking at the ingredients on the plate. I try to spend a reasonable amount of time every week cooking something new, or different from what I usually do.

And so the time has come for me to look for a foodbuddy, someone who wants to join me eating out there, and who wants to enjoy Madrid’s terrific food scene.

I’m looking for a Foodbuddy

What is a foodbuddy? Well, basically, it’s someone who tags along with you to eat in magnificent restaurants and who spends the whole week thinking that on Friday you will finally be able to try the broken eggs (huevos rotos) of Casa Lucio, for example.

My goal is to find someone who loves food as much as I do and who wants to eat-out in Madrid occasionally.

“We have to eat to live, not live to eat.”

Henry Fielding

Would you like to be my Foodbuddy?

If you like my proposal, you should at least meet these minimums for this “relationship” to work:

• You LOVE to eat.
• You are willing to go out at least once a month to eat/dine out there.
• You don’t mind spending €60 to €120 every now and then at a Michelin-starred restaurant because the experience excites you.
• You are willing to eat Indian, Japanese, Spanish food, and anything surprising, with the flavors and nuances that are out there.
• You are a nice person and you don’t stare at infinity while you eat without saying a word.

If you think you meet the above mentioned requirements, and you want to be my foodbuddy, contact me now!

In return, I promise not to be one of those douchebags who ruins dinner (brother-in-law seal of approval) and I promise you that we will have great moments eating in Madrid.

If only I could.

A long time ago, there was a programmer who loved deeply what he was doing. Every day he would open Eclipse with the intention of improving whatever he had programmed the day before. His code lines were poems, the names of his methods were perfect. He did not have JShint installed. He did not need it.

He understood the types of languages that existed and the different purposes that each one pursued very well. He was so good that he did not lack customers, and everyone paid him on time at the end of the month. He was unique.

For a long time, he was very happy with what he had, and did not ask for anything else, since he felt complete and full of life.

One day, he was invited to an event about programming and innovation. AngularJS talks monopolized the tracks and attendees proudly wore their Android and Bower t-shirts.

Sitting in his seat, he watched with joy one of the talks about animations of cubes in CSS3. The speaker’s words were like magic to him; he was quickly fascinated by the use of CSS3 transformations. So much so that he wished with all his soul to be an expert in CSS3 transformations. “Oh, lord, if only I could be an expert in CSS3 transformations,” he thought to himself.

What this programmer did not know, is that the great Dennis Ritchie (Creator of C) was watching him from the sky, eager to make his dreams come true.

“Expert in CSS3 transformations you shall become,” said Dennis out loud.

And just like that, the humble Java programmer became an expert in front-end development.

The days passed, and the programmer invested his time in creating transformations of geometric primitives with CSS3. A cube, a sphere, a parallelogram, a triangle, the type of figure did not matter, he drew them all, with great mastery and control of the CSS prefixes.

A new event came to his city, JavaScript Fuckers it was called. Happy and with his head high he went to the event, thinking that his mastery of the CSS3 transformations would completely amaze the attendees of the event.

The first talk started; ‘WAT’ was its title. A talk on the ins and outs of JavaScript. He watched carefully as the speaker never mentioned anything about CSS3, or even used the browser to program. The speaker made continuous references to a new and striking technology, called Node.js.

“Oh lord!” He thought to himself. “If only I could be an expert on Node.js.”

And so it was. Dennis Ritchie listened again. “Expert in Node.js you will be, my son.”

Back at his home, the programmer artfully compiled the source code of Node.js. V8 offered no resistance, and he even implemented his own V8 debugger in JavaScript, called node-inspector.

He got so many stars on GitHub that he was invited to the event of the year, in New York. Its name was Fosdem, and some of the best developers on the planet were attending.

Sitting in his seat, one of the closest to the stage, he listened in detail to all the speakers who went up there. His head was filled with terms he did not know: Go, Spring, Hadoop, io.js, ionic. All these words were new. He was immersed in a sea of ignorance; neither his knowledge about transformations nor his good knowledge about Node.js helped him understand what they were talking about in that place, at first a comfortable place but ultimately hostile.

Sad, troubled, lost, he asked again with all his might for a wish.

“Oh lord!” He thought to himself, “If I only could be a wise, enlightened and intelligent programmer.”

And again Dennis granted his wish, “a wise programmer you will become.”

And so it was. As he returned home, he opened Eclipse with the intention of improving what he had programmed the day before. His lines of code were no longer poems, his method names were imperfect; he installed JShint and went back to programming what he was passionate about, aware that he still had a lot to learn.

End

This little story tries to show that it is impossible for one person to know everything. We must dedicate time to ourselves, not to pleasing other people or environments that give us nothing but comfort or discomfort and don’t allow us to evolve as people.

It’s time to fly with V8

“I’m not going to bed,” I told my girlfriend, to her great surprise. Children are never willing to go to bed, but adults like my girlfriend are dying to get together with their pillows and quilts right after dinner. “I’m not going to bed!” I repeated with such an impetus that my girlfriend understood that I was very, very excited.

I already wrote at the time a small introduction to the internals of V8. Let’s say that some time ago I started a document to collect everything that had to do with the internals of the JavaScript V8 engine. Out of pure curiosity and eagerness to learn.

That’s why my girlfriend understood that a new V8 optimizing compiler was enough reason not to go to sleep so soon that night.


I like well to be in the company of explorers.

J.M Barrie

TurboFan

TurboFan, that’s the name of the new V8 optimizing compiler. They could have called it “Saxophone” or even “Cocoon”, but they decided to call it TurboFan.

What does TurboFan bring us?

A new optimizing compiler to (probably) replace the current Crankshaft. Besides that, it has built-in support for asm.js; that’s what we call brutal out-of-the-box performance in my town (although @mraleph doesn’t like this).

This new optimizing compiler is still very green. As far as I could see in the source code, there is still no OSR (on-stack replacement) implemented, nor deoptimization of the generated bytecode, and the most basic optimization techniques are still not included in the new compiler.

And what do we have until now?

Well, a new optimizer that automatically discards dead code from the moment the AST is generated, no more heuristics in the core of the engine to find dead code, and a sea of very large doubts to investigate.

A sea of doubts


If I had 8 hours to cut a tree, I would use 6 to sharpen the axe.

With the arrival of TurboFan we went from a compiler (Hydrogen) based on SSA to a compiler based on a “sea of nodes”; you can read a comparison of the operation of each one here.

The sea of nodes of this new compiler is nothing more than an intermediate representation between the AST generated from JavaScript code and the final bytecode executed by the virtual machine.

In this ‘sea of nodes’ each node is represented as part of a graph. All nodes produce a value, for instance 1 + 2 = 3. In addition, each node points to its operands (in this case 1 and 2), and there is no more information beyond this: simple and concise.
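The 1 + 2 = 3 example can be modeled as a toy graph, where each node produces a value and points only at its operands. This is an illustration of the idea, not V8’s actual data structures:

```javascript
// Toy node graph: each node knows only its operation and its inputs.
function constant(value) {
  return { op: 'Constant', value, inputs: [] };
}

function add(left, right) {
  return { op: 'Add', inputs: [left, right] };
}

// Walk the graph to compute the value a node produces.
function evaluate(node) {
  if (node.op === 'Constant') return node.value;
  if (node.op === 'Add') return evaluate(node.inputs[0]) + evaluate(node.inputs[1]);
  throw new Error('unknown op: ' + node.op);
}

const three = add(constant(1), constant(2)); // the 1 + 2 = 3 graph
```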

What are the Pros of the ‘sea of nodes’ vs. other intermediate languages?

• Quick search for constants that are global or have a single reference to a node that represents a constant, e.g. number 2. Thanks to this feature we don’t duplicate nodes.

• Dead code search: any node that nothing else depends on is automatically marked as dead code (code whose result is never used), for example:

function test(num) {
    var a = 14 + num; // Dead code: `a` is never used, so this whole statement can be removed.
    return 20 + 20;
}

• Optimization of global nodes. Insert a node in the most optimal place within the graph of the ‘sea of nodes’, so when generating bytecode it’s not necessary to walk through closures to reach the value of a node, since its primitive is already in the right place.

But beware, the V8 team has not invented anything new: this ‘sea of nodes’ technique has already been used for a while by the Java HotSpot compiler.

What do I have to do to prove it?

You have to walk before you run.

It didn’t take me long to get TurboFan running on my MacBook; these steps should help you if you want to try it.

Download TurboFan from the bleeding_edge branch:

$ git clone -b bleeding_edge https://github.com/v8/v8.git

Install the necessary dependencies to work with V8 (you better go make dinner while doing this):

$ make dependencies

Compile V8:

$ make native

And once this is done, you can use the shell in out/native/d8, although I personally like to use Xcode because that way I can put breakpoints in the internals and debug. To generate an Xcode project execute:

$ build/gyp_v8 -Dtarget_arch=ia32

And in the build/folder you will have an all.xcodeproj ready to be used in Xcode.

If you want to investigate a little, these are some of the flags I have found to observe how TurboFan works:

turbo_filter - "optimization filter for TurboFan compiler"  
trace_turbo - "trace generated TurboFan IR"  
trace_turbo_types - "trace generated TurboFan types"  
trace_turbo_scheduler - "trace generated TurboFan scheduler"  
turbo_asm - "enable TurboFan for asm.js code"  
turbo_verify - "verify TurboFan graphs at each phase"  
turbo_stats - "print TurboFan statistics"  
turbo_types - "use typed lowering in TurboFan"  
turbo_source_positions - "track source code positions when building TurboFan IR"  
context_specialization - "enable context specialization in TurboFan"  
turbo_deoptimization - "enable deoptimization in TurboFan"  
turbo_inlining - "enable inlining in TurboFan"  
trace_turbo_inlining - "trace TurboFan inlining" 

And these are the flags that I used when I was investigating the TurboFan internals:

--always-opt --turbo-stats --trace-turbo --turbo-types --turbo-filter=function_name

Replace function_name with the name of a function that you declare yourself.

Remember to always use --always-opt so that all executed code is optimized by TurboFan; if you don’t do this, only the code identified by the runtime profiler will be optimized (loops and things like that).

Don’t go wasting your job.

You always have to try to improve a bit, now is a good time to start reading about how JavaScript or anything that disturbs you works. Dedicate time to yourself, and your family, learn at once why the body of a function in JavaScript can slow down your program if it has a comment inside, read a book on recipes that will do you some good, and go back to buy that toy you loved as a child to spend time doing what you like.

The blind programmer

A long time ago, in a world far from real life, there were six programmers who spent hours competing among themselves to see who the best software developer of all was.

To prove their knowledge, the programmers explained the most fantastic stories about algorithms they could think of and then decided among them who the best programmer was.

So, every afternoon they gathered around a table, and –very slowly—read the most voted debates of that week in StackOverflow while Jenkins’ last task compiled. The first of the programmers adopted a stern attitude and began to tell the story that, according to him, he had lived that day. Meanwhile, the others listened somewhere between incredulous and fascinated, trying to imagine the scenes that he described in great detail.

The story was about the way in which, finding himself free of occupations that morning, the programmer had decided to download the V8 source code and cross-compile it for all the devices he had. The programmer said that suddenly, to his great surprise, Linus Torvalds appeared to him, took a PC with Linux out of his backpack, and compiled the Linux kernel with great mastery at his side. Upon receiving praise from the programmer, Linus decided to grant him the power of Clean Code, which, according to him, made him one of the most talented programmers in existence.

When the first of the programmers finished his story, the second of the programmers stood up, and while moving his hands on a Leap Motion, he announced that he would talk about the day he had witnessed the famous presentation of the Swift language, Apple’s innovative and interactive language. According to him, this happened when Tim Cook himself called him from the Cupertino offices to invite him to the most anticipated Apple event of the year. He said all this laughing while caressing his coffee cup with the Java logo printed on it.

To be up to the previous stories, the third programmer turned on his computer and showed its interface full of Node.js terminals and servers running in real time. After having tried Node.js in production, the programmer spent countless hours talking about the great response time that Node.js had against other platforms such as Apache’s httpd.

Next, it was the turn of the fourth programmer, then the fifth, and finally the sixth programmer was immersed in his story. In this way the six programmers spent the most entertaining hours while showing their ingenuity and intelligence to others.

However, the day came when the calm atmosphere was disturbed and turned into a confrontation between the programmers. They could not reach an agreement on the exact way to make a release to production. The positions were opposite and since none of them had ever made a release to production, they decided to create a document in which each one would think about how the release to production process should be, thus clearing the doubts.

As soon as the Google Apps administrator created the document, the programmers began to outline their ideas about it. It had not been long when suddenly, they noticed a change in the twitter timeline of a list that everyone was following. A tweet appeared on how Amazon made its releases to production. They quickly clicked on the article, since they had the Canary version of Google Chrome compiled by themselves, the article loaded in milliseconds.

The six programmers were full of joy, and congratulated each other on their luck. Finally they could solve the dilemma and decide what was the real way to make a release to production.

The first one, the most determined, put his reading glasses on to start without further delay a slow and quiet reading of the article. However, the rush caused him to accidentally stick a leg of his glasses in his eye, preventing him from reading the entire article.

“Oh, my colleagues!”  He exclaimed, “I’ll tell you that a release to production can be done perfectly with Amazon S3’s Java API.”

It was the turn of the second of the programmers, who read more carefully, he printed the article on some sheets and proceeded to read it with a dim light on his head, while he was lying in his favorite puff, with such bad luck, that he fell asleep reading the second page.

“Oh, my brothers!” He exclaimed when he woke up, “I’ll tell you that the best way to make a release, is to create a task in Jenkins that uploads the binaries to S3!”

The rest of the programmers could not help mocking him in a low voice, since none of them could believe what the other programmers were saying. The third programmer started reading the article on his Kindle. After a long afternoon of reading, the device ran out of battery before the programmer could read the last 30 pages that were missing.

“Listen, my dear colleagues: the Amazon release process can be done with s3cmd.”

The other programmers silently disagreed, since nothing resembled the release process that each one had in mind. It was the turn of the fourth programmer, who decided to telephone his friend who worked at Google. The programmer asked his friend what release-to-production process they were following at Google. He listened carefully to all the steps and wrote them down in his notebook to keep a record of the effort he had invested in learning Google’s release process.

“I got it!” Said the programmer full of joy, “I’ll tell you the real way to make a release to production, you just have to relocate the servers and synchronize Jenkins tasks to deploy the latest commits of repos.”

The fifth of the programmers took over and decided not to read the article, since it seemed irrelevant to him. He recalled that 15 years ago an IBM engineer had published a book on good practices for deliverables in Java projects.

“None of you have found the way to make a release. The traditional method of making a release to production in serious companies, is to stop the services with a ‘maintenance’ message and deploy changes on all the necessary machines.”

The sixth programmer was the oldest of them all, founder of several consultancies in the country, he knew the ins and outs of production releases, or so he thought. He reviewed his personal email and found those documents on functional specifications that described every necessary step to make a release to production, even specifying the color of the buttons and the design of the error windows.

“Colleagues! Without a doubt now I have it clear. The correct process to make a release to production consists of bringing together the heads of the project and delegating the tasks to the programmers of each team.”

Now everyone had experienced the correct way to make a release, and they all believed the others were wrong. Having satisfied their curiosity, they shook hands again, wrote their points of view in the document, and sat back at their computer stations, proud of the time they had invested in writing that document.

Once again sitting under the same office light, they resumed the discussion about the real way to make a release to production, certain that what each one of them had experienced was the right way to do it.

Probably all programmers were partially right, since somehow all the things they had experienced were true, but without a doubt, they were all wrong about the real way to make a release to production.