The working of non-blocking I/O
Our non-blocking example shows how we'll actually be building our Node applications.
Let's break this code example down line by line. First up, things start much the same way as they did in the blocking example. We call the getUser function for user1, which is exactly what we did earlier:
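As a rough sketch of what that call could look like, assuming a getUser helper that fakes the database lookup with setTimeout and a three-second delay (the helper name, the delay, and the sample data are illustrative assumptions, not the exact code from the example):

```js
// A simulated asynchronous database lookup. The getUser name, the 3-second
// delay, and the fake user data are assumptions used for illustration only.
const getUser = (id, callback) => {
  setTimeout(() => {
    callback({ id: id, name: 'Sample user' });
  }, 3000);
};

// Kick off the request for user1. This call returns almost immediately;
// the callback fires roughly three seconds later when the "database" responds.
getUser(123, (user1) => {
  console.log('user1', user1);
});
```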
But we're not waiting; we're simply kicking off that event. This is all part of the event loop inside Node.js, which we'll be exploring in detail later.
Notice it takes only a little bit of time; we're just starting the request, not waiting for the data. The next thing we do might surprise you. We're not printing user1 to the screen, because that request hasn't come back yet; instead, we start fetching user2 with the ID of 321:
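Continuing the same sketch, and reusing the assumed getUser helper from above, kicking off the second request could look like this:

```js
// Start fetching user2 right away; we don't wait for user1's data to come back.
// getUser here is the same simulated helper sketched earlier.
getUser(321, (user2) => {
  console.log('user2', user2);
});
```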
In this part of the code, we're kicking off another event, which takes just a little bit of time; it is not an I/O operation. Behind the scenes, fetching from the database is I/O, but starting the event by calling this function is not, so it happens really quickly.
Next up, we print the sum. The sum doesn't care about either of the two user objects; they're basically unrelated, so there's no need to wait for the users to come back before we print that sum variable, as shown in the following screenshot:
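As a sketch, with the specific numbers chosen arbitrarily for illustration, the sum line might be as simple as:

```js
// Plain synchronous work that doesn't depend on either user; it runs
// immediately, long before the two simulated database calls come back.
const sum = 1 + 33;
console.log('The sum is ' + sum);
```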
What happens after we print the sum? Well, we have the dotted box, as shown in the following screenshot:
This box signifies the simulated time it takes for our event to get a response. Now, this box is exactly the same width as the box in the first part of the blocking example (waiting on user1), as shown here:
Using non-blocking I/O doesn't make our I/O operations any faster, but it does let us run more than one operation at the same time.
In the non-blocking example, we start both I/O operations before the half-second mark, and between the three-second and three-and-a-half-second marks both of them come back, as shown in the following screenshot:
Now, the result is that the entire application finishes much quicker. If you compare the time taken to execute both files, the non-blocking version finishes in just over three seconds, while the blocking version takes just over six, a difference of 50%. That 50% comes from the fact that both versions make two requests of three seconds each, but in the non-blocking version the two requests run at the same time.
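To make that timing concrete, here is a hedged sketch of both approaches. The getUserSync and getUser helpers are assumptions that simulate a three-second database response, with a busy-wait loop and a setTimeout respectively; run each half on its own and compare roughly six seconds against roughly three:

```js
// Blocking version: each simulated lookup busy-waits for 3 seconds,
// so two lookups take about 6 seconds in total.
const getUserSync = (id) => {
  const start = Date.now();
  while (Date.now() - start < 3000) {
    // Busy-wait to simulate a blocking database call.
  }
  return { id: id, name: 'Sample user' };
};

// Uncomment to time the blocking version on its own:
// const user1 = getUserSync(123);
// console.log('user1', user1);
// const user2 = getUserSync(321);
// console.log('user2', user2); // prints roughly 6 seconds after the script starts

// Non-blocking version: both simulated lookups start immediately,
// so both callbacks fire about 3 seconds after the script starts.
const getUser = (id, callback) => {
  setTimeout(() => {
    callback({ id: id, name: 'Sample user' });
  }, 3000);
};

getUser(123, (user1) => console.log('user1', user1));
getUser(321, (user2) => console.log('user2', user2));
console.log('The sum is ' + (1 + 33)); // prints right away
```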
Using the non-blocking model, we can still do things like printing the sum without having to wait for our database to respond. This is the big difference between the two: in blocking, everything happens in order, while in non-blocking, we start events, attach callbacks, and those callbacks get fired later. We're still printing out user1 and user2; we're just doing it when the data comes back, because the data doesn't come back right away.
Inside Node.js, the event loop attaches a listener for the event to finish, in this case for the database to respond. When it does, the callback we passed in gets called, and we print the user to the screen.
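You can see that ordering with a tiny standalone example (the messages and the two-second delay are arbitrary):

```js
console.log('Starting');

// setTimeout registers an event and a callback; the event loop fires the
// callback once the two seconds are up, after the rest of this script runs.
setTimeout(() => {
  console.log('Callback fired once the event loop gets the response');
}, 2000);

console.log('Finishing');

// Output order: Starting, Finishing, then the callback message two seconds later.
```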
In a blocking context, we could handle two requests on two separate threads, but that doesn't really scale well, because for each request we have to beef up the amount of CPU and RAM the application uses, and this sucks because those threads are still sitting idle. Just because we can spin up more threads doesn't mean we should; we'd be wasting resources that are doing nothing.
In the non-blocking case, instead of wasting resources by creating multiple threads, we're doing everything on one thread. When a request comes in, the I/O is non-blocking, so we're not taking up any more resources than we would if the request had never come in at all.
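As a rough illustration of that single-threaded model, the sketch below uses Node's built-in http module with a setTimeout standing in for a slow database call; the port number and the delay are arbitrary choices. Two requests made at the same time both come back in about three seconds, without our code spinning up any extra threads:

```js
const http = require('http');

// Every request is handled on the same single thread. The slow part is
// simulated with setTimeout, so while one "database call" is pending the
// thread is free to accept and start handling other requests.
const server = http.createServer((req, res) => {
  setTimeout(() => {
    res.end('Done after a simulated 3-second database call\n');
  }, 3000);
});

server.listen(3000, () => {
  console.log('Server is up on port 3000');
});

// Try hitting http://localhost:3000 from two terminals at once: both
// responses arrive roughly three seconds after their own request was made,
// not six seconds for the second one.
```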