Threading and async Hyperlambda programming
This tutorial covers the following parts of Magic and Hyperlambda:
- Creating multiple threads using Hyperlambda
- Waiting for multiple threads to finish
- Synchronizing access to shared resources
- Basic async theory and why it scales better than synchronous programming
Although Hyperlambda is a very high-level programming language, it has excellent support for threading, and because it is implicitly async in nature, it also scales well. In this article we walk you through some of the concepts related to threading, and explain how threading is simplified in Hyperlambda, eliminating an entire category of problems traditionally associated with multithreaded programming.
Slots related to threading
The following slots are the most common slots that are related to threading in Hyperlambda.
- [fork] - Creates a new thread, which by default will be a “fire and forget” thread.
- [join] - Joins multiple threads, effectively waiting for all “children” threads to finish before continuing execution.
- [semaphore] - Synchronizes access to a piece of lambda such that only one thread can enter at a time.
- [sleep] - Suspends the active thread for a specified number of milliseconds.
The most important slot of course is the one that creates a new thread and executes it. Consider the following Hyperlambda.
```
fork
   http.get:"https://servergardens.com"
fork
   http.get:"https://gaiasoul.com"
fork
   http.get:"https://dzone.com"
```
The above code creates 3 “fire and forget” threads. Each thread retrieves the document found at its specified URL, and the invocation returns without waiting for the threads to finish. Notice, if you execute the above code in Magic’s “Eval” menu item, it returns instantly, and you will not see the resulting HTML documents in your output. This is because by simply invoking [fork] the way we do above, we are creating “fire and forget” threads, where we don’t care about the result of our threads. Sometimes you want to create multiple threads executing in parallel, and need the result of all of your threads before continuing execution. This can be accomplished with the [join] slot, which waits for all children [fork] invocations to finish before continuing execution. Consider the following Hyperlambda.
```
join
   fork
      http.get:"https://servergardens.com"
   fork
      http.get:"https://gaiasoul.com"
   fork
      http.get:"https://dzone.com"
```
If you execute the above Hyperlambda in the “Eval” menu item, you will first of all see that it requires more time to finish, probably around a second, maybe even more. This is because Hyperlambda will wait for all 3 threads to finish before moving beyond the [join] invocation. However, if you measure its execution time, and then execute each of the above HTTP GET invocations synchronously, you’ll probably find the synchronous version is roughly 3 times as slow. This is because the above snippet executes all 3 GET invocations in parallel. Below is the slower version for comparison.
```
http.get:"https://servergardens.com"
http.get:"https://gaiasoul.com"
http.get:"https://dzone.com"
```
Hence, with the first example above, we can do multiple long-lasting jobs in parallel, on 3 separate threads, speeding up our application. Since our code doesn’t need to execute the 3 HTTP GET invocations consecutively, but can execute all of them in parallel on different threads, the [join]/[fork] version will on average be almost 3 times as fast as the synchronous version. This is typically useful when we’re waiting for IO, such as HTTP invocations, reading or writing to the file system, or executing SQL towards our database. Notice that multithreading does not make CPU-intensive tasks faster; quite the contrary, in fact, since it requires context switching at the CPU level, and often multiple synchronization objects, further reducing execution speed. Do not abuse multithreading.
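For readers coming from mainstream languages, the [join]/[fork] pattern above is conceptually a thread pool plus a barrier that waits for all jobs. Below is a minimal Python sketch of the same idea (not Hyperlambda); the `slow_fetch` function is a hypothetical stand-in for the HTTP GET invocations, using a sleep to simulate network latency.

```python
# Conceptual analogy of [join]/[fork]: run several slow IO-bound
# jobs in parallel, then wait for all of them before continuing.
from concurrent.futures import ThreadPoolExecutor
import time

def slow_fetch(name):
    """Stand-in for an HTTP GET; sleeps to simulate network latency."""
    time.sleep(0.2)
    return f"result from {name}"

start = time.time()
with ThreadPoolExecutor() as pool:
    # "fork" three jobs; collecting every result is the "join" step.
    results = list(pool.map(slow_fetch, ["a", "b", "c"]))
elapsed = time.time() - start

# The three jobs overlap, so the total time is close to one job's
# latency (~0.2s) rather than the sum of all three (~0.6s).
print(results)
```

As in the Hyperlambda version, the speedup comes entirely from overlapping the waits, not from doing any computation faster.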
Hyperlambda and async
A common problem with multithreading is “thread pool exhaustion”, sometimes called “thread pool starvation”. This occurs when your operating system is asked to create more threads than it has resources for. What Hyperlambda does, though, is release your threads while they are waiting for IO data. This significantly increases your application’s scalability, and the number of simultaneous users it can handle before your web server and/or operating system literally crashes because of thread pool starvation. This is referred to as “async programming”, and is a core feature of any modern framework and/or programming language, allowing your code to scale much better and handle more requests simultaneously. An async programming language is said to increase your application’s “throughput”.
What this implies for Hyperlambda specifically, is that after all 3 threads above are created, and we’re waiting for IO data from our URLs, there are actually zero threads being consumed by our application, since all 3 threads are released back to the thread pool. Only when the network driver has data for a specific [http.get] invocation is your code “re-animated”, brought back to life, and given a thread to continue its execution on.
From a scalability perspective, this means an async application is typically orders of magnitude better at scaling than a synchronous application. However, since async programming is extremely complex, a lot of things can go wrong when you try to implement it in your own code. Hyperlambda is async by default, and no “special syntax” is required to use these parts of it. This makes async programming much easier with Hyperlambda than with other, more low-level programming languages.
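The “zero threads consumed while waiting for IO” idea can be illustrated with Python’s asyncio (again only an analogy, not Hyperlambda); `fake_get` is a hypothetical stand-in for an async HTTP GET, where awaiting hands control back to the event loop instead of blocking a thread.

```python
# Conceptual analogy of async IO: while a task awaits IO, no thread
# is blocked, so other tasks are free to make progress.
import asyncio
import time

async def fake_get(url):
    """Stand-in for an async HTTP GET; the await releases the event loop."""
    await asyncio.sleep(0.2)   # simulated network wait, no thread blocked
    return f"response from {url}"

async def main():
    start = time.time()
    # Launch three "requests" concurrently and wait for all of them.
    results = await asyncio.gather(
        fake_get("https://a.example"),
        fake_get("https://b.example"),
        fake_get("https://c.example"),
    )
    return results, time.time() - start

results, elapsed = asyncio.run(main())
# All three waits overlap on a single thread, so elapsed is roughly 0.2s.
print(len(results))
```

Note that this runs on a single thread; concurrency here comes from interleaving waits, which is why async code can serve many simultaneous requests without exhausting the thread pool.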
Synchronizing access to lambda objects
Sometimes you need synchronized access to a shared resource. This could for instance be a file, or some other resource shared amongst multiple threads. For such scenarios you’ve got the [semaphore] slot. This slot takes a name as its value, in addition to a lambda object, ensuring that only one thread is able to execute the lambda object guarded by that name at the same time. To understand the concept, realize that such a semaphore is often explained with the “toilet” analogy, since typically only one person is allowed into the same toilet at the same time. A semaphore is like the lock on the toilet door, ensuring only one person gets access, and that this person must leave before the next person is granted access to the room. The “room” here is your lambda object. Consider the following Hyperlambda.
```
join
   fork
      semaphore:foo
         http.get:"https://servergardens.com"
   fork
      semaphore:foo
         http.get:"https://gaiasoul.com"
   fork
      semaphore:foo
         http.get:"https://dzone.com"
```
If you measure the above Hyperlambda’s execution speed, you will see that it’s at least as slow as the synchronous version, probably slower due to the threading overhead. This is because our [semaphore] invocations ensure that only one HTTP GET invocation is able to execute at the same time. The first thread to enter the semaphore becomes the first thread allowed to execute its HTTP GET invocation, while the 2 remaining threads need to wait for the first thread to finish before they’re allowed into their lambda objects. The above is not a very good example of using semaphores, since none of these threads actually need one. A better example can be found below.
```
io.file.save:/foo.md
   .:Initial data
join
   fork
      semaphore:foo
         io.file.load:/foo.md
         strings.concat
            get-value:x:@io.file.load
            .:"\r\nThread 1"
         io.file.save:/foo.md
            get-value:x:@strings.concat
   fork
      semaphore:foo
         io.file.load:/foo.md
         strings.concat
            get-value:x:@io.file.load
            .:"\r\nThread 2"
         io.file.save:/foo.md
            get-value:x:@strings.concat
   fork
      semaphore:foo
         io.file.load:/foo.md
         strings.concat
            get-value:x:@io.file.load
            .:"\r\nThread 3"
         io.file.save:/foo.md
            get-value:x:@strings.concat
io.file.load:/foo.md
```
The above Hyperlambda is more relevant, since multiple threads are accessing the same shared resource simultaneously, being our “foo.md” file. By using a [semaphore] above, we ensure that only one thread at a time is allowed to read from and write to the file. Without the [semaphore] invocations, we’d run the risk of multiple threads writing to the file simultaneously, resulting in what is commonly referred to as a “race condition”. Read more about Hyperlambda’s threading capabilities here.
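The read-modify-write pattern guarded by [semaphore] above maps to a named lock in most languages. Below is a minimal Python sketch of the same protection; the `lines` list is a hypothetical stand-in for the “foo.md” file, and `append_line` plays the role of each thread's lambda object.

```python
# Conceptual analogy of the [semaphore] slot: a lock that lets only
# one thread at a time read-modify-write a shared resource.
import threading

lock = threading.Lock()          # plays the role of semaphore:foo
lines = ["Initial data"]         # stands in for the foo.md file

def append_line(text):
    # Without the lock, two threads could interleave their
    # read-modify-write steps and overwrite each other's updates.
    with lock:
        current = list(lines)    # "io.file.load"
        current.append(text)     # "strings.concat"
        lines[:] = current       # "io.file.save"

threads = [threading.Thread(target=append_line, args=(f"Thread {i}",))
           for i in (1, 2, 3)]
for t in threads:
    t.start()
for t in threads:                # the "join" step
    t.join()

# All three updates survive; only their relative order is nondeterministic.
print(sorted(lines))
```

The lock serializes only the critical section, so unrelated work on each thread can still run in parallel, which is exactly why the earlier HTTP example gained nothing from its semaphore.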
- Continue with Interceptors and Exception handlers