Meta programming

The outstanding part of the Pliant language is that it does not answer the classical question:
What nice feature should my new language contain?
But rather:
What is the proper way to encode a program?
The good news is that when proper encodings of the program are provided, all the good properties (efficiency, high expressiveness, no nasty side effects between the various extensions) follow fairly straightforwardly.

In the end, the meta programming introduced below, not to be confused with the syntactical meta programming introduced long ago by the LISP language, is nothing less than a generalization of most good language features such as functional or object programming.

This article explains how Pliant meta programming works from a high level point of view, and its consequences on software development, then illustrates the presented concepts through an explained working sample.

Pliant's four-stage representation of the program

While compiling, a Pliant program exists under four different representations:

Stage 1: The program as a text file

Here is a sample:

function fact x -> y
  arg Int x y
  y := shunt x=0 1 x*(fact x-1)

Stage 2: The program as a tree

Here is a sample (this sample is oversimplified and does not match the sample provided at stage 1; see the demonstration link below to get real examples):

Each node contains either an identifier, or a constant value.
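Pliant's real Expression type is richer than this, but a minimal sketch of a stage 2 node can make the idea concrete. The following Python code (the `Node` class and its field names are my own, not Pliant's) encodes the rule that a node carries either an identifier or a constant value, plus its son nodes:

```python
# Minimal sketch of a stage 2 tree node: each node carries either an
# identifier or a constant value, plus a list of son (child) nodes.
class Node:
    def __init__(self, ident=None, value=None, sons=None):
        # exactly one of identifier / constant value must be present
        assert (ident is None) != (value is None)
        self.ident = ident      # identifier nodes, e.g. 'fact', '-'
        self.value = value      # constant nodes, e.g. 1
        self.sons = sons or []

# 'fact x-1' as a tree: 'fact' applied to the subtraction of 'x' and 1
tree = Node(ident='fact', sons=[
    Node(ident='-', sons=[Node(ident='x'), Node(value=1)])])
```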

Stage 3: The program as a list of instructions

It is a bit like a C program, with no nested functions, and gotos instead of high level control structures. Here is a sample:

1: copy 1 y
2: = x 0 c1
3: jump_if c1 => 9
4: * x y y
7: - x 1 x
8: jump => 2
9: do_nothing

For users fluent in C, the above notation has the same meaning as this unusually written but (probably) valid C code:

int x,y,c1;
y = 1;
l2: c1 = x == 0;
if(c1) goto l9;
y = x * y;
x = x - 1;
goto l2;
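The stage 3 semantics are simple enough to check mechanically. Here is a toy interpreter for the instruction list above, written in Python; the tuple encoding `(number, operation, arguments, jump_target)` and the `run` function are my own simplification, not Pliant's actual representation:

```python
# Toy interpreter for the stage 3 sample. Variables live in 'env';
# instruction numbers may have gaps, so we map them to list positions.
def run(program, env):
    index = {num: pos for pos, (num, _, _, _) in enumerate(program)}
    pos = 0
    while pos < len(program):
        num, op, args, target = program[pos]
        val = lambda a: env[a] if isinstance(a, str) else a
        if op == 'copy':
            env[args[1]] = val(args[0])
        elif op == '=':
            env[args[2]] = val(args[0]) == val(args[1])
        elif op == '*':
            env[args[2]] = val(args[0]) * val(args[1])
        elif op == '-':
            env[args[2]] = val(args[0]) - val(args[1])
        elif op == 'jump_if':
            if env[args[0]]:
                pos = index[target]; continue
        elif op == 'jump':
            pos = index[target]; continue
        elif op == 'do_nothing':
            pass
        pos += 1
    return env

factorial = [
    (1, 'copy', (1, 'y'), None),
    (2, '=', ('x', 0, 'c1'), None),
    (3, 'jump_if', ('c1',), 9),
    (4, '*', ('x', 'y', 'y'), None),
    (7, '-', ('x', 1, 'x'), None),
    (8, 'jump', (), 2),
    (9, 'do_nothing', (), None),
]
```

Running it with `{'x': 5}` leaves 120 in `y`, confirming that the list computes the factorial.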

In the list of instructions, '0', '1', 'x', 'y' and 'c1' are what I call arguments. They are typed.
Each instruction consists of an instruction (line) number, a function reference, a set of arguments, and the number of the target instruction if the function is a (conditional) jump.

Stage 4: The program encoded as native processor instructions

The program is just a set of bytes encoding native processor instructions.



Parser

The transition from stage 1 to stage 2 is named "the parser".
The Pliant parser works with two kinds of entities. Token filters recognize tokens in the source code and turn them into a list of nodes. Then folding functions turn the list into the stage 2 tree.
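The two parser phases can be sketched as follows. This Python sketch (the `tokenize` and `fold` functions and their encodings are my own, grossly simplified compared to Pliant's real token filters and folding functions) tokenizes a tiny arithmetic expression and then folds the flat node list into a tree by operator precedence:

```python
import re

# Phase 1, token filters: turn source text into a flat list of nodes,
# where integers become constant nodes and everything else identifiers.
def tokenize(source):
    tokens = re.findall(r"\d+|[A-Za-z_]+|[-+*/=]", source)
    return [('const', int(t)) if t.isdigit() else ('ident', t) for t in tokens]

# Phase 2, folding functions: fold binary operator nodes into subtrees,
# highest precedence first, left to right.
def fold(nodes):
    for ops in (('*', '/'), ('+', '-')):
        i = 1
        while i < len(nodes) - 1:
            if nodes[i][0] == 'ident' and nodes[i][1] in ops:
                name = nodes[i][1]
                nodes[i-1:i+2] = [(name, nodes[i-1], nodes[i+1])]
            else:
                i += 1
    return nodes[0]
```

For instance, `fold(tokenize("2*3+4"))` yields the tree for `(2*3)+4`, with the multiplication folded first.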


Compiler

The transition from stage 2 to stage 3 is named "compiling", or "meta compiling".
A meta compiling function receives the stage 2 subtree it has to compile as a parameter, and is responsible for attaching two things to it: a list of instructions as specified in stage 3, and the argument that will hold the result.

Code generator

The transition from stage 3 to stage 4 is named code generation or code optimization.
A set of rewriting filters is called that modifies the instructions list until all instructions are processor specific ones and all arguments are atomic constants or processor registers.

Understanding the Pliant language's central concept

The last part of the Pliant compiler machinery is the global dictionary. It is just a dictionary associating definitions with identifiers. One identifier can have several definitions, each definition is associated with a module, and each program only sees the definitions associated with the modules it imported using the 'module' instruction.

Now that I have presented all the key pieces of the Pliant machinery, let me explain the stage 2 to stage 3 transition (the compiler) step by step.
Let's assume that the node we want to compile is the node with identifier 'C' in the stage 2 introduction sample. The definitions for the 'C' identifier (keyword) in the global dictionary will be scanned one after the other.
If the definition is a meta programming function, it is just run with the subtree with root node 'C' provided as the single parameter. Either the meta understands the subtree's meaning, and it attaches a set of instructions and a result argument to the subtree's root node, or the meta does not understand the subtree's meaning, and it just returns without attaching a result argument.
If the definition is a function and the parameters match, then the instructions list will be the concatenation of all the instructions lists attached to the parameter nodes, followed by one more instruction calling the function, and the result will be a local variable argument receiving the result of the function. If the parameters do not match, the definition is just skipped.
Now, if the node we want to compile does not contain an identifier but a constant, such as '34' in the initial example, then the value of the constant will just be encapsulated in a constant argument and attached as the result argument, with an empty instructions list.
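The dispatch just described can be sketched in a few lines of Python. Everything here is my own simplified encoding (nodes are dicts, a definition is either a 'meta' callable or a 'function' with an argument count; real Pliant matches on argument types, not just counts), but the control flow mirrors the description above:

```python
# Sketch of the stage 2 -> stage 3 step for one node: constants get an
# empty instructions list; identifiers scan their dictionary definitions.
def compile_node(node, dictionary):
    if 'value' in node:                       # constant node
        return [], ('const', node['value'])
    for kind, impl in dictionary.get(node['ident'], []):
        if kind == 'meta':
            attached = impl(node, dictionary) # meta inspects the subtree itself
            if attached is not None:          # None means "did not understand"
                return attached
        elif kind == 'function':
            nargs, fname = impl
            if nargs == len(node.get('sons', [])):
                instructions, args = [], []
                for son in node['sons']:      # concatenate the sons' lists
                    son_instr, son_result = compile_node(son, dictionary)
                    instructions += son_instr
                    args.append(son_result)
                result = ('local', fname)     # fresh local variable argument
                instructions.append((fname, args, result))
                return instructions, result
    raise Exception("cannot compile " + node['ident'])
```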

As a summary, successfully compiling a sub expression of the stage 2 tree means exactly attaching a stage 3 set of instructions and a result argument to it. Be sure to properly understand this, because it is the central concept of the Pliant language.
When this central concept is properly understood, you will notice that all the other properties of Pliant (dynamic compiling, extensible parser and code generator) are just logical consequences.

An interactive tool is provided that enables you to view a program at various stages in order to help understand how Pliant compiler machinery works.
It's available online at
For security reasons, the online version will not let you type in any code you like, just select among a few samples. On the other hand, if you install Pliant locally, you will be able to test it on any program you like.

In the demonstration tool, when you click on a stage 2 node, the stage 3 section below will be updated to display the instructions list and result argument attached to this node as a result of the compile transition. A particularly interesting extra piece of information is provided under the 'Successfully compiled by' label: it is a link, in the online source code browser, to the meta or function that did the attachment.
In the 'stage 3' section, instructions inherited through copying a son's instructions list are displayed in gray, and instructions added at this node's level are displayed in black.
Indirect arguments (an indirect argument means that the location of the data is provided by another argument value plus a constant offset) are noted with brackets.

Still in the demonstration tool, the final 'Stage 3 to 4 transition' section is intended to help you understand how the Pliant code generation transition is performed. Each of the rewriting filters is listed in small letters, and, if it changed something in the instructions list, it is followed by the instructions list with old instructions displayed in red and new ones in green. Some extra information reported directly by the filter is displayed in light brown.
The very last instructions list contains only native processor instructions, and the line numbers are the offsets of the instruction beginnings in the executable sequence of bytes. An instruction with just the name of a Pliant function is assumed to be a processor 'call' instruction. The 'i386_mov' native processor instruction and the others are furthermore assumed to follow the Pliant standard arguments convention, with the first argument being the source and the last one the destination.


Here is a small sample to help you better understand the three first representations of the program, and the stage 2 to stage 3 transition, which is the core of the Pliant compiler machinery.

Stage 1 : the source code

var Int i := 3
if i=2+1
  console "ok"

Stage 2 : the tree

Stage 3 : a list of instructions and a result argument have been attached to each node

Please notice that this drawing ignores Pliant's constants evaluation mechanism. I mean, at the '+' node, the real Pliant compiler would notice that '1' is a constant, '2' is a constant, and '+' is a function with no side effect, so it would immediately execute 1+2 and attach an empty instructions list and the constant '3' as the result of the node, instead of attaching the instruction '+ 1 2 t1' (which you can read as t1 := 1+2) as displayed on this drawing.
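That constants evaluation mechanism can be sketched as follows. In this Python sketch (my own encoding: a `PURE` table of side-effect free functions, results tagged as 'const' or 'local'), a call whose arguments are all constants is executed at compile time and replaced by a constant result with an empty instructions list:

```python
# Sketch of constants evaluation: if a side-effect free function receives
# only constant arguments, run it now and attach a constant result.
PURE = {'+': lambda a, b: a + b, '*': lambda a, b: a * b}

def compile_call(ident, son_results, son_instructions):
    if (ident in PURE and not son_instructions
            and all(kind == 'const' for kind, _ in son_results)):
        value = PURE[ident](*[v for _, v in son_results])
        return [], ('const', value)       # folded: no instructions at all
    result = ('local', 't1')              # otherwise emit the call
    return son_instructions + [(ident, son_results, result)], result
```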

More samples

Here is a link to a small interactive graphic tool that might also help you better understand the stage 2 to stage 3, then stage 3 to stage 4 transitions.

Meta programming illustration

When executing online, you can use the tool only on a small set of predefined programs (use the select box at the top). When executing Pliant locally, you can apply the tool to any small program you type in.
In the interactive graphic tool, you can click on any node of the stage 2 expression tree to display the stage 3 instructions list and result argument associated with this particular node.
The stage 3 to stage 4 transition is presented as the instructions list after several of the main code generator rewriting functions.

Design choices discussion

In very few words, the core of Pliant design is:


the program goes through four fixed stages (text file, expressions tree, instructions list, processor native code),


the transitions are fully customizable at application level.

Let's study how necessary each of the stages is, and how much design flexibility is available when defining it.

Stage 1: The program as a text file

Some programming systems exist that start directly at the stage 2 tree, because the program is edited using a graphic schema editor. The advantage of directly editing a graphic schema is that the programmer can be assisted more easily through a set of contextual menus, and the representation is cleaner. The advantage of plain text is that it's a lot more compact.
The conclusion is fairly obvious: direct schema editing can be seductive in the early days of the learning curve, but as soon as the project grows, the better compactness of a plain text file will make reading much easier.

About flexibility, the question is: should the source file be a standard plain text file just like 40 years ago, or should it have styling enhancements such as HTML or the old RTF (rich text format) ?
In many languages, the on disk file is a pure text file, and styling enhancements are just added on the fly by syntax aware text editors. This is convenient because the underlying file is untouched, but it might prove hard to maintain on a language with easy syntax expansion such as Pliant.
Pliant uses two extensions. One is UTF8 encoding, which is a now standard extension of ASCII that enables using international non ASCII characters through a standard (content independent) extension. Adopting this extension seemed obvious to me at some point because it's both simple and now widespread.
The other is the 'document' keyword introducing embedded desktop documents, enabling good looking embedded documentation. You can see the result using the source code browser on this web site. It is a sane choice because editing the document with a standard text editor is still possible: the document encoding is simple and plain text compatible, just like XML. I could have selected embedded XML but just preferred a more Pliant oriented syntax: the standard Pliant parser turns the embedded document tree into a valid and usable stage 2 tree with the 'document' identifier as its root node. Another reason for not using XML is that since I selected indentation for specifying instruction blocks in Pliant's default syntax, as opposed to using {} which requires an end tag as in the C language, it makes sense to also select indentation in embedded documents, as opposed to XML which also requires an explicit block close tag.
In the end, I just see no terrific bonus at the moment that could justify getting rid of the ability to edit the source code using any (preferably UTF8 aware) text editor; as a result, Pliant extensions are non disruptive.

My general conclusion about stage 1 is that it's necessary because it is the best encoding for the programmer to read the program, and there is no significant design choice to make here.

Stage 2: The program as a tree

It is the encoding that is convenient for implementing language features, because it's the easiest to traverse with a program. So, it exists in nearly every language as the output of the parser. In many languages such as C or Java, the application cannot access it, as opposed to LISP where it can.

About flexibility, even more than for stage 1, the lack of innovation comes from the lack of need. When you have a tree where each node can be either an identifier or a constant, all possible features fit nicely on the model, so why make it more complex?

Stage 4: The program encoded as native processor instructions

This stage also exists and is 100% constrained because the hardware says so, so there is not much to discuss here. (1)
Please just notice that the Pliant code generator filters responsible for the transition between stage 3 and stage 4 work on the stage 3 data encoding. The transition to the effective stage 4 encoding is performed by a final, mostly built in step.

Stage 3: The program as a list of instructions

Let's start with the easy part about the stage 3 encoding. What is it useful for? First of all, it is an encoding that is very convenient for algorithms that extract properties of the program. The reason is that the structure is simple to traverse (a list of instructions) and the semantics of each instruction are also simple, as opposed to stage 2 where the tree is simple to traverse but the semantics are mostly undefined.

Extracting properties of the program can have three different uses:


Providing information to help the meta functions that perform the stage 2 to stage 3 transition make decisions,


Applying automatic optimization rewrites,


Applying proof of program techniques (see works from Patrick and Radhia Cousot).

I'm now ready to express the second important notion required to understand the Pliant language design layout. Stage 2 (program as a tree) is the semantic view, whereas stage 3 (program as an instructions list) is the procedural (execution) view. What this means is that when you have the program as a tree, all freedom is possible through defining new meta functions. You can change anything, so you can get maximum expressivity, but the drawback is that you can say nothing about the program, because you cannot be aware of all the metas others have defined or might define. On the other hand, the semantics of the stage 3 instructions list are completely fixed, because it is just a high level abstraction of the real hardware, so they will not be disturbed by nice extensions that other people you are not aware of might have added through meta programming.

Now, we get the central point justifying the overall Pliant design.
Let's get back to the initial stage 2 sample tree:

At the time you write the code that defines a meta function for the 'C' identifier, the two important properties you expect are:


be able to define your extension without understanding the semantic extensions used in the parameters (the son nodes, I mean the node with 'D' identifier in the trivial sample),


have your extension 'C' raise no nasty side effect on parent nodes extensions (the node with 'A' identifier in the trivial sample).

This is achieved in Pliant by not defining a meta function as a function that turns a stage 2 tree into another stage 2 tree, as in LISP, but as a function that turns a stage 2 tree into a stage 3 instructions list.
So in the tiny sample, when the 'C' extension is at work, it can ask its son node that uses the 'D' extension to compile, check the result argument, and maybe scan the instructions list. None of this is affected by any semantic extension the 'D' definition might bring.
The same reason will prevent the 'C' extension from disturbing 'A', because 'A' will see the effect of 'C' only as an argument and an instructions list with the well defined, fixed stage 3 semantics.

In the end, by saying that all transitions (the parser between stage 1 and stage 2, meta programming between stage 2 and stage 3, the code generator between stage 3 and stage 4) can be customized, Pliant brings close to unlimited flexibility at the expressivity level.
This is possible because Pliant is a dynamic compiler, so just like an interpreter, it can execute code in the middle of the compiling process.

Then, avoiding nasty side effects between the various extensions, and keeping tight control on the final execution (predictable and mostly optimal execution speed), is achieved by introducing stage 3, which has well defined, fixed semantics, and by stating that any semantic extension is a custom transition from the stage 2 tree to the stage 3 instructions list.
In other words, a Pliant meta programming function does not only extend the language semantics; it also teaches the compiler how to generate efficient low level code for it, instead of relying only on the built in capabilities of the compiler.

As a final summary of the Pliant language core design: by introducing the stage 3 instructions list, which has fixed semantics, and defining meta programming as a transition from the stage 2 expressions tree to the stage 3 instructions list, Pliant enables meta programming to provide mostly unrestricted semantic extensions as usual, but also two more important properties:


efficient code generation for these extensions,


no nasty side effects between various extensions.

I hope to have convinced you of the necessity of the stage 3 instructions list, so that you are now able to understand why, in the history of languages, I position Pliant as the connection point between LISP and C:

What I have not covered yet in this discussion is: what design flexibility do we have for the stage 3 definition? Once again, the design choice in Pliant is minimality. However, I don't claim proven minimality. I just tried to design it so, and what is important in the end is not whether it is absolutely minimal, but that it's simple and that it brought the two extra properties just explained (efficiency and side effects control).

Just for information, the only feature I see that could be removed from the Pliant stage 3 instructions list is the offset in indirect arguments. In Pliant, an indirect argument is a piece of data whose address is provided by another argument's value, plus an offset applied to that address value. This extra 'offset' field can always be avoided by computing the effective address in a temporary variable. I mean:

1: foo [a+3]

is the same as:

1: + a 3 t1
2: foo [t1]

The choice here has been to say that when we have a variable containing the address of a record, we want to be able to access its fields without creating temporary address variables. Anyway, since I'm entering issues that are probably too specific for an introduction to Pliant meta programming, I'll now rather switch to practice than try to be exhaustive at the design level.
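A rewriting filter removing that offset field could be sketched as follows (Python, with my own encoding: an indirect argument is a `('indirect', base, offset)` tuple, and `tN` names are fresh temporaries). It performs exactly the `foo [a+3]` to `+ a 3 t1 ; foo [t1]` rewrite shown above:

```python
# Sketch of a rewrite removing offsets from indirect arguments: the
# effective address is computed into a fresh temporary first.
def drop_offsets(instructions):
    out, fresh = [], 0
    for op, args in instructions:
        new_args = []
        for arg in args:
            if isinstance(arg, tuple) and arg[0] == 'indirect' and arg[2] != 0:
                fresh += 1
                temp = 't%d' % fresh
                out.append(('+', [arg[1], arg[2], temp]))  # effective address
                new_args.append(('indirect', temp, 0))
            else:
                new_args.append(arg)
        out.append((op, new_args))
    return out
```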

Please just notice, before that, that the subtle details of the casting machinery are also left aside in this introductory article. All of it is hidden under the high level 'cast' method that we will see in the practical section.

Using meta programming

Let's study meta programming through a simple sample: the 'max' function.

First, in C, we would define it as a macro:

#define max(x,y) ((x)>(y) ? (x) : (y))

It has two limitations:


It does not work for an arbitrary number of arguments; only for two


It is an entry level extension, and it is not safe against nasty side effects.

Here is a well known side effect example. If we write:

int x,y;
y = max(x++,6);

x is sometimes incremented twice.

Here is a possible Pliant implementation:

module "/pliant/language/compiler.pli"

meta max e
  if e:size<3 # P1
  part find
    for (var Int i) 0 e:size-1
      e:i compile ?
      var Pointer:Type t :> e:i:result:type real_data_type # P2
      part try
        for (var Int j) 0 e:size-1
          if not (e:j cast t) # P3
            leave try
        # P4
        var Pointer:Arrow c :> e:module first "compare"
        while c<>null and { var Link:Function compare :> c map Function ; c=null or entry_type:c<>Function or compare:nb_args_with_result<>3 or (compare arg 0):type<>t or (compare arg 1):type<>t or (compare arg 2):type<>Int}
          c :> e:module next "compare" c
        if c<>null  
          leave find
  var Link:Argument adr :> argument local Address
  var Link:Argument result :> argument indirect t adr 0 # P5
  e suckup e:0 # P6
  e add (instruction (the_function 'address Universal' Universal -> Address) e:0:result adr) # P7
  var Link:Argument comp :> argument local Int
  var Link:Argument mode :> argument constant Int compare_superior
  var Link:Argument cond :> argument local CBool
  for (var Int i) 1 e:size-1 # P8
    e suckup e:i # P9
    e add (instruction compare e:i:result result comp) # P10
    e add (instruction (the_function 'compare apply mode' Int Int -> CBool) comp mode cond) # P11
    var Link:Instruction skip :> instruction the_function:'do nothing'
    e add (instruction (the_function 'jump if not' CBool) cond jump skip) # P12
    e add (instruction (the_function 'address Universal' Universal -> Address) e:i:result adr) # P13
    e add skip
  e set_result result access_read # P14

console (max 3 6.2 -8) eol

Let's now explain this listing step by step.

A Pliant meta function is a function that will receive an expressions tree as a parameter, and attach an instructions list and a result argument to it.

Defining a meta function just requires using the 'meta' keyword:

meta max e

'e' is used to provide the name of the variable that will be used to access the expressions tree.

Now the general explanation of the listing. First we test that we have at least three arguments (P1), because 'max' with two arguments is already defined and we don't want to conflict with it. Then, in the 'find' part, we try to find the type 't' of the result. We try the type of each of the arguments one after the other (P2), and check if the others can cast to it (P3). In the example at the end of the listing, the first argument is 3 and has type uInt, the second is 6.2 and has type Float, and the third is -8 and has type Int, so the final type can only be Float.
When we have found a candidate type 't', we check that a 'compare' function exists for that type (P4).
If we found such a function, then we are ready to create the instructions list and the result.
Creating the list and the result is everything after the 'find' block.
We create the result argument as a pointer 'result' to type 't' (P5),
then add the instructions to compute argument 0 (P6), followed by an instruction that sets the 'result' pointer to map it (P7).
Then, for each following argument (P8), we add the instructions to compute it (P9), then the instruction calling the 'compare' function (P10), then the instruction testing if the result of the compare has the superior flag set (P11). If it's not set, we just skip the argument (P12). If the superior flag was set, we set the 'result' pointer to map this argument (P13).
Lastly, we attach the result argument (P14), meaning that the meta succeeded in compiling the expression.
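At run time, the instructions emitted by the 'max' meta evaluate each argument exactly once and keep a pointer to the largest argument seen so far. This Python sketch (my own transcription; an index stands in for the address held in 'adr') mirrors that runtime behavior, with comments keyed to the P-labels of the listing:

```python
# Runtime behavior of the code the 'max' meta generates: keep the
# location of the best argument so far, compare once per argument.
def max_meta_runtime(values):
    adr = 0                               # point at argument 0 (P7)
    for i in range(1, len(values)):       # loop over the other arguments (P8)
        superior = values[i] > values[adr]  # 'compare' call (P10) + flag test (P11)
        if superior:                      # otherwise jump over the update (P12)
            adr = i                       # re-point 'result' at this argument (P13)
    return values[adr]
```

Note that the real meta first casts all arguments to the common type 't' (Float in the sample), so comparing mixed ints and floats as Python does here matches the intent.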

Here are more detailed explanations about various meta programming features involved:

Scanning the expressions tree

The number of sons of a node is obtained by calling the 'size' method:

if e:size<3

Accessing one son is done by calling the empty method with the index of the son as a parameter, with, as usual in computing, the first son having index 0:

var Pointer:Expression ei :> e 0

The address of the value associated with a node is provided by the 'value' field:

var Address a := ei value
console "First son value has type: " entry_type:a:name eol

Please note that if the node is an identifier, its value will have type 'Ident':

if (entry_type ei:value)=Ident
  console "Node identifier is '" (cast (ei:value map Ident) Str) "'" eol

Scanning definitions in the global dictionary

The following sample scans all the definitions associated with the 'compare' keyword that are seen by the current module and that are functions:

var Pointer:Arrow c :> e:module first "compare"
while c<>null
  if entry_type:c=Function
  c :> e:module next "compare" c

Scanning all definitions, without applying the current module's visibility, would be:

var Pointer:Arrow c :> pliant_general_dictionary "compare"
while c<>null
  if entry_type:c=Function
  c :> pliant_general_dictionary next "compare" c

So, in the sample meta function, the following not easy to read sequence:

var Pointer:Arrow c :> e:module first "compare"
while c<>null and { var Link:Function compare :> c map Function ; c=null or entry_type:c<>Function or compare:nb_args_with_result<>3 or (compare arg 0):type<>t or (compare arg 1):type<>t or (compare arg 2):type<>Int}
  c :> e:module next "compare" c
if c<>null  
  leave find

does roughly the same as:

var Link:Function compare :> the_function compare t t -> Int
if exists:compare
  leave find

with the significant difference that it works, whereas the simpler version does not, because the 'the_function' meta expects constant arguments whereas 't' is not a constant.
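The intent of that scan, stripped of the Pliant plumbing, is just a linear search through the overloads of 'compare' for one with the signature (t, t) -> Int, where 't' is only known at compile time. A Python sketch (my own encoding: the dictionary maps a name to a list of (argument types, result type) pairs):

```python
# Sketch of scanning the dictionary for a 'compare' definition with
# signature (t, t) -> Int, where t is a type chosen at compile time.
def find_compare(dictionary, t):
    for arg_types, result_type in dictionary.get('compare', []):
        if arg_types == (t, t) and result_type == 'Int':
            return arg_types, result_type
    return None  # no suitable definition: the meta gives up on this t
```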

Compiling arguments

Assuming that 'ei' is a pointer to a (sub)expression, we can try to compile it by calling the 'compile' method:

ei compile

What is strange about 'compile' is that it will either succeed or ... raise an error, instead of returning a success boolean. In the Pliant early days, I had trouble deciding between a style where each function or method returns a status result and a try/catch style of error handling. 'compile' is a remnant of this hesitation, so most of the time you will use:

ei compile ?

which is just the compact version of:

ei compile
if iserror

Anyway, most of the time you will not use the 'compile' method, but 'cast', which returns a boolean specifying whether the node could be compiled and casted to the specified data type:

if not (ei cast Str)

Building the instructions list

Before we can use any sub expression result, we have to copy to the instructions list all the instructions necessary to compute that sub expression result. This is achieved by calling the 'suckup' method:

suckup e:0

Here is the high level function to create a temporary variable with type Address:

var Link:Argument adr :> argument local Address

And the high level function to create a stage 3 instruction:

var Link:Instruction instr :> instruction (the_function 'address Universal' Universal -> Address) e:0:result adr

The number of arguments provided to the 'instruction' meta must be consistent with the number of arguments in the function prototype, including the result.

A jumping instruction (there are only three of them: 'jump anyway', 'jump if' and 'jump if not') takes two more parameters; the first one is the 'jump' keyword, and the second one is the target instruction:

var Link:Argument cond :> argument local CBool
var Link:Instruction skip :> instruction the_function:'do nothing'
var Link:Instruction instr :> instruction (the_function 'jump if' CBool) cond jump skip

Adding the instruction at the end of the instructions list of the node:

add instr

Attaching the result

var Link:Argument r
set_result r access_read

The second parameter specifies the kind of access the calling function is allowed on the result. Most of the time, it is 'access_read'. Sometimes it can be 'access_read+access_write', just like in the 'ovar' meta. The 'access_object' flag can also be added to specify that what is returned is a Pliant object, so that generic methods can be called directly.

When the function returns no result, just use:

set_void_result

Please remember that in Pliant, attaching the result argument to the expression is synonymous with successfully compiling it.

Advanced meta programming

With LISP, we would already get the ability to define 'max' accepting any number of arguments, and the ability to avoid nasty side effects through creating temporary variables, so this first example was not sufficient to demonstrate why the Pliant meta programming engine enables going beyond LISP.

A more advanced example is the 'thread' instruction defined in /pliant/language/schedule/thread.pli
Here is a simplified version of it:

function thread_must_succeed ok
  arg CBool ok
  if not ok
    error_fatal error_id_starvation null "Failed to run another thread" (null map List)

meta thread e
  if e:size<>1
  var Link:List expressions :> new List
  var Link:List byaddress :> new List
  expressions append (addressof e:0)
  var Pointer:Module module :> e module
  var Address mark := module mark
  module define "pliant shared" addressof:byaddress
  module define "share" addressof:(the_meta 'pliant share arguments')
  var List functions ; var Link:Type type
  e freeze expressions byaddress functions type
  module rewind mark ?
  var Link:Argument ok :> argument local CBool
  e add (instruction (the_function run_thread DelayedAction Int -> CBool ) e:0:result (argument constant Int 0) ok)
  e add (instruction (the_function thread_must_succeed CBool) ok)
  e set_void_result

Let's start by explaining how to use 'thread':

function test
  var Int a := 1
  var Int b := 2
  var Int c := 3
  thread
    share b
    console "a = " a eol
    b := 4
  sleep 1
  console "b = " b eol

would display (even if it's an oversimplified and buggy piece of code):

a = 1
b = 4

What is interesting about it is that the 'thread' instruction automatically detected that the 'a' and 'b' local variables are used by the thread code, so they must be, and will be, copied to the new thread. Moreover, 'share b' forces the 'b' variable to be passed by address rather than being copied, so that modifying it in the thread body changes the value in the main thread.

So, the resulting code will roughly be:

type TestEnv
  field Int a
  field Pointer:Int b

function test_thread env
  arg TestEnv env
  console "a = " env:a eol
  env b := 4

function test
  var Int a := 1
  var Int b := 2
  var Int c := 3
  var TestEnv env
  env a := a
  env b :> b
  run_thread test_thread env
  sleep 1
  console "b = " b eol

and this is basically what a C programmer would have to write.
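The same rewrite can be expressed in Python, which makes the copied-versus-shared distinction easy to run and check. In this sketch (my own analogue, using Python's threading module; a dict stands in for TestEnv, and 'b' is boxed in a one-element list to simulate passing by address):

```python
import threading

# Python analogue of the code the 'thread' meta generates: locals used
# by the body are copied into an environment; 'shared' ones are boxed so
# the thread can write back into the parent's variable.
def test():
    a = 1
    b = [2]                 # boxed, because 'share b' passes it by address
    c = 3                   # not used by the body, so not captured
    env = {'a': a, 'b': b}  # only 'a' and 'b' go into the environment

    def body(env):
        # the thread sees a copy of 'a', but writes through the shared box
        env['b'][0] = 4

    t = threading.Thread(target=body, args=(env,))
    t.start(); t.join()
    return b[0]             # the main thread observes the change: 4
```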

So, back to the proposed implementation of the 'thread' meta: the end of it uses the same set of features as introduced with the 'max' meta:

var Link:Argument ok :> argument local CBool
e add (instruction (the_function run_thread DelayedAction Int -> CBool ) e:0:result (argument constant Int 0) ok)
e add (instruction (the_function thread_must_succeed CBool) ok)
e set_void_result

The strange part of the 'thread' meta is:

var Link:List expressions :> new List
var Link:List byaddress :> new List
expressions append (addressof e:0)
var Pointer:Module module :> e module
var Address mark := module mark
module define "pliant shared" addressof:byaddress
module define "share" addressof:(the_meta 'pliant share arguments')
var List functions ; var Link:Type type
e freeze expressions byaddress functions type
module rewind mark ?

Let's start by explaining 'mark' and 'rewind':

var Address mark := module mark
module define "pliant shared" addressof:byaddress
module define "share" addressof:(the_meta 'pliant share arguments')
module rewind mark

'mark / rewind' is just a way to add temporary definitions to the general dictionary. We call 'mark' to record where the temporary definitions start, then 'define' to provide each new definition, and finally 'rewind' to discard them.
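The mark/rewind semantics can be sketched with a dictionary that keeps a journal of additions (a Python sketch under my own assumptions about the data layout; Pliant's real dictionary is more involved):

```python
# Sketch of 'mark'/'rewind': definitions are journaled, and 'rewind'
# undoes everything defined since the corresponding 'mark'.
class Module:
    def __init__(self):
        self.defs = {}       # identifier -> stack of definitions
        self.journal = []    # ordered record of defined identifiers
    def mark(self):
        return len(self.journal)
    def define(self, key, value):
        self.journal.append(key)
        self.defs.setdefault(key, []).append(value)
    def rewind(self, mark):
        while len(self.journal) > mark:
            key = self.journal.pop()
            self.defs[key].pop()
            if not self.defs[key]:
                del self.defs[key]
```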

Now, the few remaining unexplained lines of the 'thread' meta are:

var Link:List expressions :> new List
var Link:List byaddress :> new List
expressions append (addressof e:0)
var List functions ; var Link:Type type
freeze expressions byaddress functions type

From a very high level point of view, you can think of 'freeze' as a mechanism for introducing lazy evaluation.
At the implementation level now, a simple 'freeze' would receive a stage 2 subexpression as its argument (the body under the 'thread' instruction), and return a function ('test_thread' in the sample usage translation above) and a type ('TestEnv' in the sample usage translation above).
The real 'freeze' does the same, but is just a bit more general: it receives a set of expressions and returns a set of functions.
The real 'freeze' code is implemented in /pliant/language/compiler/expression/freeze.pli

The code is probably too complicated to explain fully in an introduction to meta programming, but it is still interesting to point out the following section of it:

var Pointer:Arrow c :> intern_locals first
while c<>null
  check (addressof entry_type:c)=addressof:LocalVariable
  var Link:LocalVariable l2 :> c map LocalVariable
  var Link:LocalVariable l :> (originals query addressof:l2 null) map LocalVariable
  if addressof:l<>null and (e uses l2:body)

Basically, the meaning is: for each local variable, we call the 'uses' method to test if the variable is used in the provided block. I mean, in the usage sample above, it is 'uses' that determines that 'a' and 'b' are used in the new thread code, whereas 'c' is not.
And the central and only real objective of this second, fairly complicated sample was to show that 'uses' scans the stage 3 instructions list, not the stage 2 expressions tree.

method e uses a -> answer
  arg Expression e ; arg Argument a ; arg CBool answer
  var Pointer:Arrow c :> e:instructions first 
  while c<>null
    var Pointer:Instruction instr :> c map Instruction
    for (var Int i) 0 instr:size-1
      var Pointer:Argument arg :> instr i
      if addressof:arg=addressof:a
        return true
      while arg:where=argument_indirect
        var Pointer:Argument arg :> arg pointer
        if addressof:arg=addressof:a
          return true
    c :> e:instructions next c
  answer := false

By scanning the stage 3 instructions list instead of the stage 2 expressions tree, we get the very important 'controlled extension side effects' property expressed at the end of the design choices discussion. Basically, 'thread' does not care about the extensions used in its body, because it scans the result as a stage 3 instructions list, where the semantics are fixed and simple.

Please notice that the expression freezing notion introduced by this simplified 'thread' implementation is used intensively by the Pliant UI to implement the 'button' and 'link' instructions.

Lastly, for programming experts, please also notice that the meta programming features presented in this second sample with 'freeze' show that Pliant meta programming is powerful enough to implement a feature like closures.


Did you find even the first 'max' sample meta function too complex? Ok, here is a more LISP like version:

meta max e
  if e:size>=3
    e compile_as (expression immediat (max first (max remain)) substitute first e:0 substitute remain (e 1 e:size-1))

It will just do a syntactical rewrite, turning:

max a b c d

into:
max a (max b (max c d))
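That purely syntactical rewrite is easy to express in any language with tree values. A Python sketch (my own encoding: nested tuples stand for the rewritten expression nodes):

```python
# Sketch of the LISP-style rewrite: 'max a b c d' becomes
# 'max a (max b (max c d))' through purely syntactical substitution.
def rewrite_max(args):
    if len(args) == 2:
        return ('max', args[0], args[1])    # base case: the built-in 2-ary max
    return ('max', args[0], rewrite_max(args[1:]))
```

This version is simpler, but it is exactly the LISP-style tree-to-tree approach: it does not get to decide the result type, inspect the compiled arguments, or control the generated instructions the way the first 'max' meta does.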

The goal of this article was not to list all the possibilities of Pliant meta programming, but just to explain precisely, then illustrate on a non trivial sample, how the Pliant engine design enables getting three interesting properties all at once, resulting in a language with a much larger decent usage scope:


high expressivity


avoid syntactical rewriting nasty side effects


efficient execution




(1) Some compilers don't implement it because they rely on an external 'assembly' tool.
As a result, the language compiler's output is a second text file, which will be the input of the assembly language compiler. From the language design point of view, it changes nothing; the impact is only at the compiler software design level.