Pliant language overview

One paragraph version

Pliant is competing in the 'select one single language and do everything with it' category.

In this category, C++ is the unchallenged leader. Pliant attempts to surpass it (1) by introducing a clean model for automatic code generation and language extension, named meta programming, and by cleaning up many other parts such as the syntax, the object model and memory allocation.
Pliant is also a dynamic compiler, which is good if you plan to write in-house applications or free software, but prevents releasing closed-source applications (2).

Let's explain all this step by step.

Programming early days

With very primitive computers, the program is entered directly by providing the numeric code of each instruction. The instruction set is the one supported by the hardware.

In the second generation, instead of using numbers for each processor instruction, an identifier is used (for registers as well). The program is easier to read; this is Assembly.
The problem with Assembly is that it is still tightly connected to the hardware, so it changes with each new generation of computers, and requires the programs to be rewritten.

So, the third step is to introduce a language less tightly connected to the processor instruction set. Enter Basic (3).

Then, at some point, the complexity of programs grows so much that the machine is no longer the only limiting factor: the programmer starts to have trouble maintaining the code. This is where, from my point of view, true programming languages start.
Said differently, a programming language is an attempt to optimize both computer hardware usage and human mind usage at once.

Procedural versus logical

There have been two completely different and often blindly opposed ways to deal with the complexity issue.

procedural programming

What I will call procedural programming (the correct name might be imperative) sticks with an instruction set which is independent of processor details, yet not too far from them. This makes translating the code to processor instructions a reasonably simple process, so the efficiency of the result is mostly guaranteed unless the programmer made some bad high level choices.

The problem with Basic (4) was that the many goto instructions in a long program made it look very much like a tangled string, and so really hard to read. Moreover, there was the variable collision issue: the same variable gets used for two different purposes in two different parts of the program without the programmer noticing it.

The first method introduced in procedural programming languages, by Pascal (3), in order to help cope with large programs was the function notion (and it is still by far the single invention that brought the largest gain). The program is no longer a flat set of instructions, but a set of functions. The programmer writes small functions, then uses them in bigger ones, and so on. Moreover, a function has local variables that are private, so they cannot conflict with any variable in another function.
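To illustrate (in Python rather than Pliant, with names of my own choosing), the function notion lets small functions compose into bigger ones, each keeping its local variables private:

```python
def average(values):
    # 'total' and 'count' are local: they cannot collide with
    # variables of the same name in any other function.
    total = sum(values)
    count = len(values)
    return total / count

def spread(values):
    # Another small building block.
    return max(values) - min(values)

def summarize(values):
    # Composition: the program is a set of functions, not a
    # flat list of instructions threaded with gotos.
    return {"average": average(values), "spread": spread(values)}

print(summarize([1, 2, 3]))   # {'average': 2.0, 'spread': 2}
```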

The second method is typing. Typing is the best example of properly coping with both main language issues at once. It helps the programmer because it prevents calling a function with an argument whose type differs from the expected one, which would silently produce an unpredictable result (the function's author nearly always implicitly assumes some type for the arguments, even in an untyped language). It also helps the compiler, because efficient implementation of tiny functions requires knowing the exact underlying encoding of the data (which is also defined by its type).
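The silent failure that typing prevents is easy to demonstrate in an untyped (dynamically typed) setting; here is a minimal Python sketch of mine:

```python
def double(n):
    # The author implicitly assumes n is a number.
    return n * 2

# With a number, the result is what the author meant.
print(double(21))      # 42

# With a string, the call still succeeds, but the result is
# silently of a different nature: repetition instead of arithmetic.
print(double("21"))    # prints 2121 (a string), not an error
```

A typed language rejects the second call at compile time instead of letting the wrong result propagate.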

Then the need for more genericity (an algorithm can be applied to different kinds of data provided they have the required properties) in order to avoid code duplication was first very badly solved with the preprocessor notion, a purely syntactical, non-scalable mechanism, then evolved into object programming with Smalltalk (3) and later C++.
The general idea of object programming is to focus on class interfaces (a class defines a kind of object that all share more or less the same logical structure, and the interface defines the properties of these objects), so that an algorithm can be applied to an object of any class that provides the required interface (5).
The promise was better code reuse.
The limit of object programming is that it tends to spread the effective code over many class methods so that, in extreme situations, it can bring back the tangled string syndrome, with Basic gotos merely replaced by nested method calls and class inheritance.
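The sorting example from note (5) can be sketched in Python (the Version class is a hypothetical illustration of mine): a generic algorithm works on an object of any class providing the comparison interface.

```python
import functools

@functools.total_ordering
class Version:
    # Any class providing the comparison interface (__eq__, __lt__)
    # can be fed to the generic sorted() algorithm.
    def __init__(self, major, minor):
        self.major, self.minor = major, minor
    def __eq__(self, other):
        return (self.major, self.minor) == (other.major, other.minor)
    def __lt__(self, other):
        return (self.major, self.minor) < (other.major, other.minor)

versions = [Version(2, 1), Version(1, 9), Version(2, 0)]
ordered = sorted(versions)   # generic algorithm, class-specific interface
print([(v.major, v.minor) for v in ordered])   # [(1, 9), (2, 0), (2, 1)]
```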

logical programming

Logical programming, on the other hand, tries to make the language instruction set higher level.

Logical programming is a generic name under which we find two very different approaches:


LISP (3) introduced the notion of meta programming (the program is data that can be processed) and was later refined into functional programming languages (OCAML, Erlang).


Prolog introduced a more abstract computational model through unification (oversimplifying a lot, unification means operating on math formulas as opposed to numbers), and I will call it logical programming because it also hides the sequential nature of procedural programming.
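As a rough illustration of what unification does (a toy Python sketch of mine, far simpler than a real Prolog engine), variables get bound so that two symbolic terms become equal:

```python
def unify(a, b, bindings):
    # Terms: strings starting with an uppercase letter are variables,
    # tuples are compound terms, anything else is a constant.
    def resolve(t):
        while isinstance(t, str) and t[:1].isupper() and t in bindings:
            t = bindings[t]
        return t
    a, b = resolve(a), resolve(b)
    if a == b:
        return bindings
    if isinstance(a, str) and a[:1].isupper():
        return {**bindings, a: b}
    if isinstance(b, str) and b[:1].isupper():
        return {**bindings, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            bindings = unify(x, y, bindings)
            if bindings is None:
                return None
        return bindings
    return None   # constants clash: no unifier exists

# unify f(X, 2) with f(1, Y): the engine solves for X and Y
print(unify(("f", "X", 2), ("f", 1, "Y"), {}))   # {'X': 1, 'Y': 2}
```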

I will ignore the meta programming branch for the moment, since I'll return to it when introducing functional programming, then Pliant design.

Languages using a more abstract computational model (Prolog) are very appealing in two areas, which might explain the strong positive opinion many researchers have of them:


They enable very elegant solutions to some problems.


The underlying computing model has more interesting theoretical properties.

Elegance can be misleading, since there is no relation between the fact that a language enables a nice looking solution to a simple, carefully chosen problem and the fact that it fits nicely for any non trivial project.

About the underlying computing model: procedural languages are based on what is called side effects. There are variables, and the code execution changes their value over time; the variables are sometimes said to be mutable. The underlying math model is very much disliked by researchers, and generally described as dirty or bad, just because nobody has yet found any interesting theory to base on it (6).

Logical programming rather works the other way round: it starts from a nice computational theory, and tries to implement it with as few purity versus efficiency compromises as possible. The common part of logical computing is the variable behavior. As opposed to the procedural programming model, in a logical programming model, just like in math, a variable is either undefined or holds its final value. In other words, a variable's value does not change over time. So, in a pure logical programming language (pure meaning strictly applying the computing model), the program ends when all variables have found their value.
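The difference in variable behavior can be made concrete with a small Python sketch of mine: the procedural version mutates an accumulator over time, while the single assignment version never overwrites a variable.

```python
def total_procedural(values):
    # Side effect style: 'acc' changes value over time.
    acc = 0
    for v in values:
        acc = acc + v
    return acc

def total_single_assignment(values):
    # Logical style: no variable is ever overwritten; each
    # recursive call introduces fresh names with final values.
    if not values:
        return 0
    head, tail = values[0], values[1:]
    return head + total_single_assignment(tail)

print(total_procedural([1, 2, 3, 4]))          # 10
print(total_single_assignment([1, 2, 3, 4]))   # 10
```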

If we get back to the initial introduction saying that a good language is one that optimizes both the programmer's mind and the computer's efficiency, logical programming falls down on both aspects at once!

On efficient human mind usage: in the general situation, the programmer has to be more clever to solve a problem using a logical computation model than a procedural one, because the model is more abstract, and abstraction capability (beyond fuzzy matching) is a very limited resource in any living creature. So, even if you personally have outstanding abstraction capabilities, and so have produced more elegant solutions using logical programming, when the problems get more complex, closer to your personal limits, the extra abstraction of logical models becomes a millstone around your neck.

Now, on the efficiency side, there have been for years public statements that some logical computing language has now reached the point where its raw computing efficiency is close to that of a procedural one. Good looking numbers are generally provided, and the trick is simply that working for a given program does not prove working for all programs.
The second populist statement in this area is that logical computing models are better for parallelism because variable values don't change over time. That's true, but it ignores the fact that a given problem has to be handled differently if the computing model is changed, so the complexity may well change significantly as a result. In other words, the real problem has been shifted, not solved.
From a high level point of view, since hardware is based on the side effect model, efficient implementation of any significantly different logical programming model means automatic translation of the logical computing model to the side effect model at the function, algorithm, or even application level instead of the single instruction level. From the research difficulty point of view, the general case is basically in the same range as automatic theorem proving, so assuming it will be solved any time soon looks to me more like a joke than anything else (7).

Anyway, let's end this section about logical programming by talking about the single very successful logical programming language: SQL.
The reason for its success is twofold:


database queries are mostly trivial, so the human brain will not be overwhelmed and the bonus of an elegant solution will show up in the end (8),


the resulting code will be efficient because the problem to solve is so specialized that translation from one model to the other enables the code generator to select the right transposition most of the time. Since the right execution path for the same query can change over time (because the database structure or main usage pattern changes), the ability to adapt transparently without requiring the developer to upgrade the code is the key bonus in the end (9).

functional programming

Depending on the point of view, functional programming languages (LISP, then OCAML, Haskell or Erlang) can be included in logical programming or not.

The core of functional programming is favoring a more abstract computational model named lambda calculus, also with better math properties and potentially better looking solutions, which focuses on functions as opposed to data.

Now, the big question about a functional programming language is: does it accept overwriting variable values (mutable variables)?

If not (Erlang and mostly Haskell), I would classify it with logical programming languages, and what I already wrote about logical programming languages will apply.

On the other hand, if the language enables overwriting variable values (LISP, then OCAML), nothing prevents the language, from the theoretical point of view (in practice, all functional programming languages use a garbage collector for memory allocation, with a serious performance impact), from being a superset of C, with functional programming as an advanced extra feature.

Back to the functional programming computational model: I will in fact not elaborate much on it either, because I just see it as one potential use of the Pliant meta programming that I will now briefly describe. As a result, in the end, I tend to classify functional programming as a language feature, just like object programming, rather than a language class, as procedural and logical programming are.

Pliant design: meta programming

Earlier in this document, as the conclusion of the programming early days section, I asserted that a modern programming language is an attempt to optimize both of the following at once:


computer hardware usage,


human mind usage.

Then I argued that logical programming does not meet these constraints when it forces one to write using only side-effect-free programming paradigms, because:


it is helpful only in special fields where the elegance it enables brings more than the abstraction burden it puts on the human brain,


there is no efficient translation possible to the hardware's side effect based model in the general case.

This lets me settle a first very important conclusion: a modern language has to be a superset of the C language. I mean, it has to:


enable direct use of the whole abstracted hardware instruction set (in order to grant efficient hardware usage, as partially introduced with Basic earlier in this document),


provide the very effective function notion (in order to enable basic modularity in large applications, as introduced with Pascal earlier in this document).

So, the next question is: what nice high level extensions are we going to add on top of C?

The answer is driven by the following remark: as soon as one gets to high level, the hardware design is no longer driving, and thereby narrowing, your possible choices, so the number of possible interesting extensions is very large and depends very much on what the application truly has to do. Logical programming is mostly motivated by connecting with interesting math properties, but fields like application user interfaces or easy customization of existing applications can also greatly benefit from extra language features, object programming being one of them. Moreover, efficient extensions tend to be fairly specialized (10), which increases the number of possibilities.

As a result, the greatest choice in Pliant design is to focus on a clean extension engine rather than on a nice set of extensions. It means enabling extensions to be provided as libraries rather than all in the core of the language, and avoiding nasty side effects between extensions.

That is where Pliant connects back to language history, and more particularly to LISP. Any language starts by parsing the source and turning it into a tree of keywords and scalar constant values (integers, strings, etc); there is nothing magic here. The entry level meta programming introduced by LISP enables the program to perform rewrites on the tree in order to turn it into something the core language understands. This is more powerful than C preprocessor macros, since the rewriting code can check the identifiers and constant values in all parts of the tree, and generate code accordingly.
On the other hand, two important features are still missing:


The rewriting code cannot test the type of arguments, and typing has proved to be a terrifically important property.


Rewriting ends up targeting some LISP built-in instruction set, which is not efficient since it is too high level.

Pliant takes meta programming one step further than LISP by correcting these issues. Here is how it's achieved:
The tree produced by the source code parser is named an 'Expression' in Pliant.
An expression can be an identifier or a constant of any type, with a set of subexpressions, just like in LISP.
On the other hand, meta compiling it means not rewriting it to another tree understood by the language's built-in features, but rather attaching to it a set of low level instructions, and a result argument.
A low level instruction can be either a call to a function, with some arguments provided, or an elementary instruction (one that translates directly to a single hardware instruction, I mean the ones of C).
An argument can be a constant, a local variable, or can be indirect (a local variable providing the address of the argument instead of its value).
In other words, the listing that is attached to an expression by some Pliant meta programming code looks very much like the Basic introduced in the language history part of this document, with function calls, but no nested calls (11).
Please notice that the Pliant meta programming model (as opposed to the LISP one) is typed, because the type of an expression is defined as the type of the result argument attached to it.
See the meta programming article for a more step by step explanation of the subject and of its consequences on software development.
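To make this concrete, here is a hedged Python sketch of mine (the class and function names are illustrative, not the actual Pliant internals) of flattening a nested expression tree into the kind of Basic-like listing described above, where each instruction is a single function call and every expression carries a result argument:

```python
import itertools

class Expression:
    # A parse tree node: a function name plus subexpressions,
    # or a constant leaf (args empty).
    def __init__(self, head, args=()):
        self.head, self.args = head, args

def compile_expr(expr, listing, temps):
    # Append flat instructions to 'listing' and return the
    # result argument (a temporary variable name or a constant).
    if not expr.args:                  # constant leaf
        return expr.head
    arg_results = [compile_expr(a, listing, temps) for a in expr.args]
    result = f"t{next(temps)}"
    # One function call per instruction: no nested calls.
    listing.append((result, expr.head, arg_results))
    return result

# (1 + 2) * 3 as a tree
tree = Expression("mul", (Expression("add", (Expression(1), Expression(2))),
                          Expression(3)))
listing = []
result = compile_expr(tree, listing, itertools.count(1))
print(listing)   # [('t1', 'add', [1, 2]), ('t2', 'mul', ['t1', 3])]
print(result)    # t2
```

Each tuple is one flat instruction (result variable, function, arguments), and the whole expression's type would be the type of the final result argument.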

So, the invention of Pliant is to say: a mechanism is provided enabling applications to add new features to the language, but the code adding these features is also responsible for providing the way they will be translated to low level code; this solves both the potential nasty side effects between extensions and the efficiency issues at once.

Of course, several parts of the language have been designed very precisely in order to make the whole thing consistent. Here is a short tour:


The parser is not based on grammars and automatons as usual, but on just a set of token filters and folding rules. This makes it slower (nobody cares) but easier to extend.


The default Pliant syntax has been selected to be very simple in order to make programs easier to read and enable easy tiny extensions for special applications (12).


The code generator is a set of rewriting rules over the instructions (as described earlier) that turn the hardware abstracted instructions into hardware supported ones (by splitting function calls into the corresponding assembly sequences, mapping variables to processor registers, and so on). This makes it very easy to add extra optimizing rules when providing an extension.


The compiler is a dynamic compiler, so it can test the execution environment at compile time (because compile time and execution time are the same) and produce fully customized code (with C or C++, you have to provide all execution platform details at compile time, so you would have to recompile your whole Linux distribution to get the optimum, and very few people are crazy enough to try such a thing).

Pliant built in high level features

Pliant provides a few built in high level (higher than C) features:


'build' and 'destroy' methods automatically called on local and temporary variables,


generic functions support (named virtual methods in C++),


reference count based memory allocation (as well as C like explicit allocation).

All of them could have been provided as libraries thanks to Pliant meta programming capabilities, but then I would have a chicken and egg problem, because the Pliant dynamic compiler meta programming engine uses them.

This is so true that I can even say that the features built into Pliant are just the ones needed by the meta programming capable dynamic compiler engine. As an example, floating point numbers are not built in.
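As a rough analogy (a Python sketch of mine, not the Pliant implementation), reference count based allocation means every copy of a handle increments a counter, and the destroy method runs deterministically as soon as the last handle is released:

```python
class RefCounted:
    # Toy analogue of build/destroy plus reference counting:
    # 'build' runs on creation, 'destroy' when the count drops to 0.
    def __init__(self, name, log):
        self.name, self.log, self.count = name, log, 1
        self.log.append(f"build {name}")
    def acquire(self):
        self.count += 1
        return self
    def release(self):
        self.count -= 1
        if self.count == 0:
            self.log.append(f"destroy {self.name}")

log = []
obj = RefCounted("buffer", log)   # count = 1
alias = obj.acquire()             # count = 2
obj.release()                     # count = 1, still alive
alias.release()                   # count = 0 -> destroyed deterministically
print(log)   # ['build buffer', 'destroy buffer']
```

Unlike a tracing GC, destruction happens at a predictable point, which is what makes the scheme compatible with low level code.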

The limits

The single serious limit (from my biased point of view) is that Pliant cannot have a garbage collector (GC) with the possibility of having all standard data types handled by it at will. The reason meta programming is not enough to enable it is that there is no way to implement a GC without serious constraints on pointer usage, so it cannot coexist peacefully with low level code parts dealing freely and dirtily with pointers (13).
We might support Pliant data allocated with a GC at some point, but they will have to be GC only classes, so that all methods dealing with them obey the extra rules required to keep the GC engine from disturbing partially executed functions at garbage collection (data moving) time.

The other limit is that mastering meta programming is not something an entry level programmer can do. There used to be two stages in the programming learning curve: at stage one we find using high level features, and at stage two there are pointers. Now there is a stage three with meta programming. Anyway, this is not a very serious limit, because you can spread the development effort among several programmers with different programming skills, and even entry level programmers will benefit from existing libraries that make use of meta programming features. In fact, it just means that reaching the limits of the language will take more time.

Using several languages

Many applications nowadays rely on two or even more languages. As an example, a web site might rely on low level libraries and a web engine written in C and C++, use PHP scripts on the server side and Javascript scripts on the client side, and even database queries written in SQL; or a desktop application might be written in Python with the underlying user interface libraries written in C or C++.
There are a lot of constraints with such a development model:


the length of the learning curve increases drastically, because each language provides its own set of tools, way of doing things, and constraints.


many application level features start out implemented in one of the languages, and soon hit its limits when growing, since each language (except C and C++) tends to have a tiny decent usage field.


sharing complex data among several languages is not straightforward.


the ability to adapt underlying components is mostly lost because the overall thing is too big.

So, in the end, the drawbacks of inconsistencies among the various languages reduce, and even often outweigh, the benefit of the extra features each of them can bring.

In very few words, improving C++ and correcting its design issues (14) by providing a better set of extensions beyond C is a better solution, from my experienced point of view, than adding yet another scripting language on top of it (a detailed explanation of this is beyond the scope of this article).


In the end, I have classified programming languages into only two categories:


procedural programming languages, where I include functional languages that enable overwriting variables' content,


logical programming languages

I stated that only procedural programming languages can be good general purpose languages, assuming that 'good' means optimizing both computer hardware and human mind usage at once.

Then, I classified the nice features a language can provide:


functions (the minimal any modern language will bring, and the only C brings),


functional (easier advanced use of functions),


object oriented (easier use of object oriented programming style),


meta programming (code generation capabilities).

Please notice that functional and object oriented are opposed, yet work at the same level: in object oriented programming, the object is passed as a parameter and brings the environment variables (object fields) and the functions (named methods), whereas in functional programming the function is passed as a parameter and brings the environment variables (closure) and the function.
The meta programming feature is more general than object or functional programming, since it enables implementing them at the application level, but it is also less straightforward to use, since it's an extension mechanism instead of a ready to use extension.
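The object/closure duality described above can be shown in Python (a sketch of mine): the same counter is written once with the environment carried by an object, and once carried by a closure.

```python
class Counter:
    # Object style: the object is passed (as 'self') and brings
    # both the state (fields) and the behavior (methods).
    def __init__(self):
        self.value = 0
    def step(self):
        self.value += 1
        return self.value

def make_counter():
    # Functional style: the function is passed around and brings
    # its state in the closure instead of in object fields.
    value = 0
    def step():
        nonlocal value
        value += 1
        return value
    return step

c = Counter()
f = make_counter()
print(c.step(), c.step())   # 1 2
print(f(), f())             # 1 2
```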

Finally, I concluded that my vision about a good general purpose language is:


maximum efficiency as the sane foundation that must not be lost at any price,


meta programming as the most general way to improve expressiveness, because it enables the best semantic consistency in the long run, hence smooth evolution at the application level.

This second conclusion might raise two more questions:


How does this point of view stand when tested in the real world?


Why is this vision not more widespread?

About tests in the real world: this is just what FullPliant is, and the good news is that if you compare the very first Pliant release with the latest one, which can run the whole FullPliant overall computing system, you will notice that the changes at the language level are mostly nonexistent. It is a very good sign that the initial concepts, which took 17 years to mature and have now been available for 10 years, were really strong.

Now, about this language vision not being widespread, I could provide the same answer as for the previous question: testing it in the real world by providing a complete computing system is an order of magnitude more work than releasing a half finished language, so, as we'll see at the end of the next section, it does not happen frequently; this means nothing less than that recent language evaluations and opinions are mostly unconnected to facts.
Anyway, I find the sociological answer more pertinent.
On the academic side first: if you read publications about computing, you might notice that they tend to contain a great deal of math, and relying on a clear math theory is seen as of great value. I tend to see this more as off topic, since the real subject in computing science is managing complexity, but it shows that, on the academic side, the emancipation of young computing science from old mathematics has not happened yet. The computing field could probably benefit from a tighter connection with fields such as town planning (18).
Now, on the private companies side, most skilled guys stop programming very soon in favor of managing teams, so they never get the many years of practice required to master the programming art (the programming learning curve is not that different from, let's say, learning a musical instrument). Then, the few remaining ones tend to focus on very specialized value added subjects, so that the overall system complexity is out of their scope, just like for the guys from the early days. As a result, the language is mostly irrelevant to them beyond being efficient.
So, my two cents sociological conclusion is just... who is really working on languages nowadays?
Please notice that these sociological arguments might also explain why the first attempt to connect C and LISP happened so late and so isolated in language history.

Languages panorama

Now that I have explained (well, not much more than enumerated, in fact) various concepts about programming languages, I can provide my personal (and, I hope, helpful) classification:

First, let's notice that the main branch in languages is assembly, C, C++.
Then, let's notice that among the hundreds of available languages, there have been only two really new ones: LISP and Prolog.

From the C++ end of the main branch start a lot of fairly successful languages.

First, we have Python, Java and C#, which mostly just replace C/C++ explicit memory allocation with a garbage collector. Most of them started with bytecode based execution, then evolved to provide true JIT compilers.

Then we have a set of languages such as Perl or Ruby that have been designed with only nice features in mind, but no consideration for the execution model, so they end up interpreted, hence slow. Perl favors compact dirty source code; Ruby favors a nice feature set. The problem is that when you reach the limits of the language, your application is generally already written...

To finish with the C branch alternatives, let me just enumerate PL/1 (3), ADA and Eiffel, which have now mostly disappeared. Compared to C, they were attempts to provide cleaner source code, but they provide no new computational model or extra advanced features.

The Prolog branch led to SQL, which is not a general purpose language.

From the LISP branch, we have Scheme, which is just a cleanup of the huge Common LISP.
Then, we have OCAML, which cleans up functional programming but mostly keeps the procedural execution model, and Haskell, which introduces lazy evaluation and mostly immutable variables, bringing it one step further towards logical programming while making a procedural programming style close to impossible.
Both OCAML and Haskell are written by skilled guys and researchers in an open environment, so they evolve a lot (or are the result of a lot of evolution; as an example: ML, CAML, OCAML). As a result, it is very interesting to see where they get stuck, because it's a sign that they have hit a hard limit.

For Haskell, the attempt to make IO (input output) operations fit the clean functional model led to using terrific tricks such as monads. From the theoretical point of view, they are not tricks at all, since they are based on advanced math theory, but from the practical point of view, they are. So Haskell is the perfect illustration of a language that does not optimize human mind usage. Some people will like it very much, because solving any non trivial problem with it is a challenge, so they might be happy in the end, but for dubious reasons from my point of view, until I see a Haskell overall computing system (15).

What is interesting to see is that, by reintroducing typing, OCAML manages to get back to better language performance (when compared with LISP), but by keeping a garbage collector, it suffers at the execution environment level: either you use the default fast garbage collector and it cannot do multithreading, or you use a multithread enabled garbage collector and the impact on low level operation performance is high. This is the perfect example illustrating my point of view about garbage collectors expressed in the storage introduction article, and it also applies to Java or C#.

The OCAML issue with garbage collection extends to a very general conclusion about most recent languages, let's say Java, Python, OCAML, Haskell, Ruby (some more than others). As soon as the application gets non trivial, the engineering cost of checking that it still carries the load (which implies using a lot of dirty tricks to get speed and decent memory use) just outweighs the initial benefit of the garbage collector or of the higher computational model. Moreover, the most computationally efficient implementation of the language most of the time only accepts a subset of it in the end.

Let's end this panorama at a very high level. Any mainstream computer nowadays is a C/C++ driven system, whatever the application language might be, because most of the system (the operating system kernel, the graphical user interface, the database engine, the huge desktop suite, etc) is written in C/C++. So all new language proposals, unless they provide some complete system replacement, are just, from the overall computing system point of view, scripting languages on top of a C/C++ core. I tend to believe that in such a situation, in the long run, languages with a syntax and semantics very close to C/C++, such as Java or C#, plus further C++ extensions, will be the only successful ones, with Cobol to Java as the only huge (and very slow) transition in the first half of the 21st century.

On the opposite side, the main problem to handle in modern computing systems remains managing complexity. The language is a small part of it, application design being the larger one (16). That said, among the large number of available languages, proposals for a single consistent language to help increase overall computing system consistency are few. Ignoring the assembly early days, we have C with Unix and Windows, we had LISP with LISP dedicated machines, and Pliant with FullPliant is just the third one in computing systems history (17).
As a result, it is not such a surprising coincidence in the end that the third one is based on the first language, which is the connection point of the two previous ones.


Pliant is a modest language in a sense, because it does not try to provide an ultimate built-in small set of high level features (or computing paradigm) that would work nicely in most situations.
On the other hand, when introducing the ability for features to be added as libraries on top of the very efficient C execution paradigm, in order to improve language expressiveness and so enable global software complexity reduction, I took great care not to provide it as just a few dirty hooks into the compiler internals, but rather as an overall very carefully designed and clean extension engine.

In just a few marketing words: provided releasing closed code is not targeted, Pliant is the language with the largest decent usage scope, so the best candidate to bring down development and maintenance costs by enabling everything to be done comfortably in one single language, on top of a not too huge and inconsistent pile of underlying tools. This is proved/illustrated by the FullPliant overall computing system.



From my point of view, the fact that I could write so many applications with it is the only serious justification, since they are mostly published material, so available for review. See note 10 below.


Unless Pliant is used as a C code generator, just like many high level languages. Also please notice that a Pliant -> C translator does exist, and is used to release closed source applications developed in Pliant as standard executables, but it is not available for free.


Please notice that I'm not a historian, nor old enough to have followed these events, so I only provide a general picture of how things evolved. Basic, Pascal or Smalltalk are probably not the languages that invented the concepts; they are just good illustrations of them.
If you expect a more precise historical report, look in Wikipedia.
My introduction is also biased the other way round: when I present Basic as introducing an abstract hardware instruction set, it's false because Basic does not provide pointer handling. I should have said C, but C also contains functions, so comes after Pascal. Basic just introduces an incomplete abstract hardware instruction set, and C introduces the complete one.
LISP is also older than C and Basic, and so is PL/1, contrary to the order of presentation in this document.
Lastly, I chose not to name Fortran and Cobol, which have been so important in practice. This is because they sit conceptually halfway between Basic and C (even if Fortran and Cobol appeared long before Basic and C, and Fortran still provides better optimization information to the compiler than C/C++ does).


I'm talking about the early days Basic language, not the dialects people are using nowadays, which have included Pascal style functions, and even more.


As an example, let's say sorting can be applied to any data set that provides a compare function.


Beyond Turing machine theory and proofs of programs.


For people more fluent in computational complexity: the complexity of a working computational model translator is basically the same as the complexity of an automatic theorem prover. Put the theorem as an input ... get the demonstration as an output. The good news is that we know a reasonably simple solution to it ... with the only tiny issue that it requires computing power growing exponentially with the size of the proof.


This is not a counter example to the claim that logical programming minimizes the load on human brain capabilities, since in the rare cases of very complex queries, the non procedural and hardly predictable nature of requests makes programmers try to get back to the procedural model by using cursors, which just brings the side effects model back.


Silently optimizing a request written using a procedural, side effect based model would in fact also be possible. It would just be harder, because the optimizing engine would have to recognize the general query pattern, and it would also lead to less elegant trivial queries, which are the most frequent ones.


A language designer's dream is always to find the single extension that will work nicely for nearly everything, just like others looked for the holy grail in other times, or physicists currently seek the single theory that would unify all existing ones.
When we add on top of that that evaluating a language is something that cannot be done easily on paper (it has to be field tested on several large projects, just like early days car reliability was tested by crossing Africa), so that language evaluation cannot be done easily at all, then we have all the ingredients for many language designers to overestimate the proper usage scope of the extension they designed, and slide to populist or marketing arguments.


Or if you prefer, like a block of C code that would use gotos instead of the 'while' and 'for' higher level controls, and would do only one operation per instruction. I mean implement a = b+c*foo(d) as t1 = foo(d); t2 = c*t1; a = b+t2;


This is probably the point that does the most to prevent Pliant from spreading.
Most people with existing programming skills expect a C like syntax, because it's what they already know. From my point of view, learning a new syntax is not a big deal, but resistance to it is far beyond what I would have expected. C syntax is awful (12a), and it goes against the principle of maximizing human mind usage, but people tend to see the effort that learning the new syntax requires more than the penalty that using a poor one brings to their capabilities.
The only people that like Pliant syntax are the ones with no existing programming or math skills, because it is easier to learn.


As an illustration, the fact that in C++ calling a method and reading a class field do not use the same syntax is a huge design mistake, because it prevents transparently turning a field into a method at a later point.


I know: plenty of people will say that modern GCs behave nicely, and maybe even better than reference counting. This is the same kind of argument as the raw efficiency of logical programming languages: the tests are performed in a special environment.


As an example, the whole object model of C++ is just crap:
. returned objects are copied instead of being provided by reference by the calling function,
. the constructor notion is wrong (it should use 'build' and 'destroy' methods at function startup and exit).
So, I look at C++ as both a valuable set of extensions over C, since it truly makes high level writing easier, and such a flawed one that it has been impossible for Pliant to stack on top of the C++ feature set as opposed to the C one.
This is something very frequent: when a tool such as C hits its limits, either you go the long proper design way, or you go the shorter tricks way. The market tends to favor the tricks way, with opposition to progress as a long term consequence, but this is another story.


A human being tends to be happy when solving problems that are neither too easy (not rewarding) nor too hard (no success). As a consequence, using a language with a lot of constraints can at some point be interesting, because it can raise some too easy problems, such as writing an entry level web server, to more interesting ones. In computing science, doing so is a double win, because naturally interesting problems tend to be mostly in the 'managing the complexity of huge software' category, so require years of effort.


That's only half true. The language is more of a limiting factor because it sits under the application. It means that if it starts to collapse under the load, the application will not be able to compensate.


If you are aware of another single language based overall computing system, please drop me a mail.
As far as I can understand it (not much in fact), LISP machines were AI dedicated, so probably did not implement basic functionalities such as a database. I also don't know if the operating system kernel was written in LISP or C.
About FullPliant, the only missing part is the kernel, and the reason not to try to provide a Pliant written kernel is twofold: first, the Unix API that evolved into Posix is very low level and reasonably sane, so a rigid system at that level is much easier to cope with than a rigid API on higher level layers;
then there is the drivers issue. Nowadays mainstream hardware is continuously changing, with a lot of subtle differences between various machines, so writing and maintaining the device driver layer that is necessary to make it appear consistent to applications would be a daunting task with a small bonus in the end.


In the '60s and '70s, when computer power was still low, so that the complexity issue was not yet the main one, academic research produced the great concepts most of today's computing systems are still based on (Unix, the relational database model, windowing user interfaces), but for the desktop applications that emerged in the '80s and '90s, the academic research contribution seems comparatively weak.
Moreover, academic computing publications also apply the very neutral and polite tone inherited from mathematics. This tone is well suited to exact sciences, but not so well suited to the social sciences computing is in fact more related to. That being so, the gap between academic publications and commercial arguments is just too huge, so academic publications fail to regulate marketing wordings in the end. This site tries to be more in the middle.