Threading

Creating a thread

The following example starts five threads:
  for (var Int i) 1 5
Variables ('i' in the sample) are automatically copied to the new thread. There are several possible ways to have some variables shared by the original thread and the new thread:
  var Int i := 1
  var Int i
In both samples, the program will display 'i = 12'. 'share' allows compact writing since 'share i' is the same as '{ share i ; i }', so that the previous sample could be written as:
  var Int i := 1

Protecting data access and low level synchronization

The previous sample is not guaranteed to work since, in case of incredible load on the server, it would theoretically be possible to have the 'console' instruction executed before the 'i := 12' one. We can guarantee proper execution through:
  var Int i := 12
A simpler writing would have been:
  var Int i := 12
but then we might have a problem if the initial thread destroys the 's' semaphore variable before the spawned thread has finished executing. Creating a real Pliant object accessed through a link guarantees that the 's' object will continue to exist as long as either of the two threads needs it, because the link will be copied to the new thread, incrementing the reference count as a result. A more compact, yet fully valid notation would be:
  var Int i := 12
Please keep in mind that 'ovar Sem s' is the equivalent of 'var Link:Sem s'.

Up to now, we have used semaphores only to synchronize at the end of a parallel execution. This is not their main usage, and it could have been achieved more easily through the 'parallel' high level control that we will see later. Their main usage is to protect data against concurrent access. Here is an invalid version:
  var Str txt
Here is the valid version:
  var Str txt

Semaphores

API

Request a semaphore: provides exclusive access to the protected resource.
  var Sem s
  s request
Release it:
  s release
Request the semaphore for read-only access: several threads can get read-only access to the protected resource at the same time, but no thread can get exclusive access while others hold read-only access. In other words, you should use 'request' when you will change the protected resource content, and 'rd_request' when you will just read it:
  s rd_request
  s rd_release
Please notice that if you acquired access through 'rd_request', you must release it through 'rd_release', not 'release'.

It is possible, while waiting for the semaphore, to provide a message, so that the execution monitor is able to display waiting threads in case of a deadlock:
  s request "Wait for my app semaphore n°3"
which is a short version of:
  part wait "Wait for my app semaphore n°3"
Same for read access:
  s rd_request "Wait for my app semaphore n°3"
It is also possible to try to acquire a semaphore without waiting in case the resource is already assigned:
  if s:nowait_request
Same for read access:
  if s:nowait_rd_request
It is also possible to request a semaphore for a limited amount of time (specified in seconds):
  if (s request 5)
Same for read access:
  if (s rd_request 5)

Various kinds of semaphores

Depending on the platform capabilities, the effective implementation of 'Sem' will be selected in module /pliant/language/schedule/sem.pli among several possible alternatives.
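Whichever implementation gets selected, the calling pattern remains the one described in the API section above. As a recap, here is a minimal sketch of exclusive versus read-only access to a piece of shared data; the 'data_sem' and 'shared_data' names are illustrative and not taken from the original samples:
  gvar Sem data_sem      # semaphore guarding 'shared_data'
  gvar Str shared_data   # data accessed by several threads

  # writer side: exclusive access while modifying the data
  data_sem request "Update shared_data"
  shared_data := "new content"
  data_sem release

  # reader side: several threads may hold read-only access at the same time
  data_sem rd_request "Read shared_data"
  console shared_data eol
  data_sem rd_release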
Please notice that providing a semaphore implementation that works optimally in any usage pattern is close to impossible, because performance can vary drastically with just any detail change, both at the Pliant semaphore implementation level and at the underlying kernel scheduler implementation level.

Fast semaphores are intended to provide shorter acquire and release times, at the expense of wasting more time through continuously restarting the waiting threads if the resource is held for a long time or many threads are fighting for the same resource.
  var FastSem s
Nested semaphores allow one thread to acquire the same resource several times without deadlocking with itself. Also, the number of 'release' calls must match the number of 'request' calls in the end. The current implementation has a quite long acquire time.
  var NestedSem s
Resource allocation semaphores are not intended to provide exclusive access, but rather 'no more than n at once' regulation. As a result, they use a slightly different API:
  var ResourceSem s
Here is a usage scenario. Imagine that your database application provides reports that take a long time to compute. If too many are requested at once, the server might start crawling; then users not receiving the result after a decent amount of time would start reissuing the same request, making the situation even worse, until the server gets completely unusable even for answering quite simple queries. A resource semaphore is a simple way to prevent it:
  (gvar ResourceSem big_queries) configure 4 # no more than 4 at once

High level parallel control

parallel

The general idea of the parallel control is to run several tasks in parallel, then sync at the end. A task queue is created, and while the parallel instruction body executes, each 'task' instruction will add an entry to the task queue. The system will execute several tasks at once using several threads that will last just for the 'parallel' block's lifetime. The parallel instruction accepts several options:
  parallel threads 4 mini 8 maxi 16 active true balance true
Most of the time, the best is to provide no option, or just 'threads' if, as an example, the tasks are performing network connections instead of heavy computations. Here is a real sample from /pliant/util/crypto/rsa.pli using parallel to speed up RSA ciphering.
Standard version:
  var Intn in # clear message
Parallel version:
  var Intn in # clear message
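As a smaller, self-contained illustration of the 'parallel' and 'task' pattern, here is a hedged sketch that sums the squares of the integers 1 to 100. It assumes that, as with 'thread', the local variable 'i' is copied into each task, and it uses a plain semaphore to protect the shared accumulator; all names are illustrative:
  gvar Sem total_sem   # protects 'total' against concurrent updates
  gvar Int total
  total := 0
  parallel
    for (var Int i) 1 100
      task
        # 'i' is assumed to be copied to the task, as with 'thread'
        var Int square := i*i
        total_sem request
        total := total+square
        total_sem release
  # all queued tasks have completed when the 'parallel' block ends
  console "total = " total eol
The 'threads' option shown earlier could be added right after 'parallel' to bound the number of worker threads.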
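In the same spirit, the resource allocation semaphore from the database scenario can be sketched as follows. Only the 'configure' call is confirmed by the sample above; the 'request' and 'release' calls are assumed to mirror the plain semaphore API, and the report computation is a stand-in:
  # no more than 4 expensive reports computed at once
  (gvar ResourceSem big_queries) configure 4

  # code executed for each incoming report request
  big_queries request        # assumed acquire call, may wait until one of the 4 slots is free
  var Str report := "..."    # stands for the long report computation
  big_queries release        # assumed release call, frees the slot
  console report eol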