magic.lambda.threading - Thread support in Hyperlambda
This project contains all thread related slots for Hyperlambda. Threading in software development implies doing multiple things concurrently, scheduling CPU time for each of your threads, creating the illusion of your computer doing multiple things simultaneously. This concept is often referred to as “multitasking” and is crucial for any modern operating system and/or programming language. Hyperlambda contains several multitasking related slots.
How to use [fork]
Forks the given lambda into a new thread of execution, using a thread from the thread pool. This slot is useful for creating “fire and forget” lambda objects, where you don’t need to wait for the result of the execution before continuing execution of the current scope.
```
fork
   info.log:I was invoked from another thread
```
To understand how [fork] works, imagine your computer’s CPU as a single river running downhill, which at some point divides into two equally large rivers. This is referred to as “a fork”. The river analogy is important for another reason: it illustrates that the total amount of water remains the same, it is only split into two smaller rivers. Implying you cannot “do more” with multitasking; you can only share the same amount of resources you had before between two different tasks. Multithreading does not make your CPU faster, it only schedules your CPU’s time across multiple things, doing these things concurrently. However, if you have multiple tasks where each individual task needs to wait for IO data, threading typically speeds up your application, since it can issue multiple IO requests simultaneously, and have other machines and/or processes working in parallel.
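The IO point above can be sketched as follows. This is a hedged example, assuming the [http.get] slot from magic.lambda.http is available in your installation; the URLs are purely illustrative.

```
// Both HTTP requests are issued concurrently, each on its own thread.
join
   fork
      http.get:"https://example.com/api/one"
   fork
      http.get:"https://example.com/api/two"
```

Since each thread spends most of its time waiting for the network, running the requests concurrently roughly halves the total wait time compared to issuing them sequentially.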
How to use [join]
Joins all child [fork] invocations, implying the slot will wait until all forks directly below it have finished executing, and automatically copy the result of each [fork] into the original node.
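A minimal sketch of [join] waiting for two forks; the log messages are illustrative.

```
// [join] blocks until both forks below it are done.
join
   fork
      info.log:Executed on one thread
   fork
      info.log:Executed on another thread
info.log:Both forks are done
```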
As an analogy for what occurs above, imagine the two rivers from our [fork] analogy that forked from one larger river into two smaller rivers, only to join up again and become one large river further down.
How to use [semaphore]
Creates a named semaphore, where only one thread will be allowed to evaluate the same semaphore at the same time. Notice, the semaphore to use is defined through its value, implying you can use the same semaphore in multiple places by using the same value in your [semaphore] invocations.
```
semaphore:foo-bar

   /*
    * Only one thread will be allowed entrance into this piece of
    * code at the same time, ensuring synchronized access, for cases
    * where you cannot allow more than one thread to enter at the
    * same time.
    */
```
In the above semaphore “foo-bar” becomes the name of your semaphore. If you invoke [semaphore] in any other parts of your Hyperlambda code, with “foo-bar” as the value, only one of your lambda objects will be allowed to execute at the same time. This allows you to “synchronize access” to shared resources, where only one thread should be allowed to access the shared resource at the same time. Such shared resources might be for instance files, or other things shared between multiple threads, where it’s crucial that only one thread is allowed to access the shared resource at the same time.
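A hedged sketch of sharing one semaphore across two concurrently executing lambda objects; the log message is illustrative.

```
// Both forks reference the same "foo-bar" semaphore,
// so only one of them can be inside it at any given time.
fork
   semaphore:foo-bar
      info.log:Only one thread at a time executes this
fork
   semaphore:foo-bar
      info.log:Only one thread at a time executes this
```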
How to use [sleep]
This slot will sleep the current thread for x number of milliseconds, where x is an integer value, expected to be passed in as its main value.
```
// Sleeps the main thread for 1 second, or 1000 milliseconds.
sleep:1000
```
Notice - This slot typically releases the thread back to the operating system, implying that while the current thread is “sleeping”, it is not a blocking call, and it requires ZERO physical operating system threads while it is sleeping. This is true because of Hyperlambda’s 100% async execution model.
Project website for magic.lambda.threading
The source code for this repository can be found at github.com/polterguy/magic.lambda.threading, and you can provide feedback, file bug reports, etc. at the same place.
Copyright and maintenance
The project is the copyright of Aista, Ltd 2021 - 2023