Pliant graphical stack machinery layout

General picture

The Pliant graphic stack has three layers:

Image

Vector drawing

Positioning

At the image layer, the document is a two-dimensional grid of pixels. PNG, JPEG and TIFF are some well known standards for encoding images.

At the vector drawing layer, the document is a set of drawing instructions. Windows GDI, X11, Postscript and PDF are some well known vector drawing standards.

At the positioning layer, the document is a set of logical elements such as paragraphs or tables. HTML and (La)TeX are some well known positioning standards. That said, most desktop applications such as word processors include their own positioning engine and access the vector layer directly instead of relying on a standard positioning layer.


On one hand, computer processing power used to double every 18 months. On the other hand, most features tend to require a comparatively fixed amount of power. As a result, each feature becomes available at a date mostly determined by processing power growth. In fact, it becomes available twice: once using specialized hardware, then later using none.

For desktop 2D drawing, the required processing power is around 1 GHz. So graphical desktops appeared around 1990, using and requiring hardware accelerated graphic cards, but can now be decently achieved through software-only solutions (1).
That said, in 2D desktop drawing, the consequences of history are still very strong: except for HTML, all mainstream graphical stacks are still modeled as a vector drawing hardware abstraction layer rather than as a complete graphical stack.

Application development dilemma: the positioning duality

On one hand, it is very convenient for an application to use a graphical stack with positioning capabilities, for two reasons:


it enables dynamic positioning according to the content, so it makes RAD more efficient, reduces the cost of adding or removing input fields, automatically handles text being longer in one language than in another, etc.


in a client-server model, it reduces the load on the server, since the client carries not only the vector drawing but also the very power-consuming positioning computations.

These two arguments are among the main ones that drive database-centric applications from historical graphical toolkits toward HTML/Javascript.

On the other hand, while designing the image layer prototype is really easy (assuming color coding and color conversions are excluded), and designing a good (simple yet fairly complete) vector layer is still possible, designing a reasonably complete positioning engine is a daunting challenge.

Providing document dilemma: the electronic paper question

If you want to send a document, should you encode it at the positioning level as a set of logical elements (HTML), at the vector drawing level as a set of drawing instructions (Postscript or PDF), or at the image level as a grid of pixels (PNG, JPEG or TIFF-IT)?

The answer depends on several factors:


how complex is the document


how important is it to get the exact same drawing in the end


how tight are your storage or bandwidth constraints


does the recipient need to modify the document

Pliant graphic stack design choices

The Pliant graphic stack has been designed with architectural consistency and printing in mind, as opposed to hardware abstraction. There are several consequences: the positive ones are listed now, the negative ones in the 'Limits' section below.

I would like to first point out how tightly related the image layer and the vector layer are in fact. Not only does the image receive the output of the vector drawing, but image and vector drawing also face opposite hard problems.
An image can encode any drawing: only the resolution is an issue. So the hard part of the image layer is to efficiently encode reasonably simple documents at very high resolution. On the other hand, there will never be a set of vector drawing functions able to encode any document without relying on embedded images. So the problem of the vector drawing layer is just the opposite of that of the image layer: it is encoding complex documents that is hard.
From these remarks, we can draw a very important conclusion: the obvious solution is to use the best of both worlds: provide only a reasonably simple set of vector drawing functions in order to ease reliable implementations, then rely on an efficient image layer to carry any complex drawing.

On hardware abstraction oriented stacks, the image layer does not really exist. The graphic card can handle images, and some operations can even be applied very fast using dedicated hardware, but images must be kept at reasonably small resolution because no compression is supported by the hardware.
On the other hand, the Pliant image layer has been designed as a complete one, with two results:


a complex document that will be printed in the end can be stored as an image. This means that images at 2400 dpi or more must be handled efficiently, which has been achieved by providing in-memory PACK4 compression (an advanced two-dimensional run length coding) support (2).


any unsupported vector drawing instruction (such as shading) will be processed externally, then handled internally as an image, so the image layer must support not only high resolution when necessary, but also full vector-drawing-compatible transparency, which means several alpha channels (3).
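To illustrate why run length coding makes very high resolution affordable: a scan line of line art at 2400 dpi is mostly long runs of identical pixels, so its encoded size depends on the number of color changes, not on the resolution. The sketch below is a minimal one-dimensional run length codec in Python; the actual PACK4 format is a more advanced two-dimensional scheme, so this illustrates the principle only, not the format.

```python
def rle_encode(pixels):
    """Encode a scan line as a list of (value, run_length) pairs."""
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1       # extend the current run
        else:
            runs.append([p, 1])    # start a new run
    return [(value, length) for value, length in runs]

def rle_decode(runs):
    """Expand (value, run_length) pairs back into pixels."""
    out = []
    for value, length in runs:
        out.extend([value] * length)
    return out

# A 2400 dpi scan line crossing one black rule: two color changes,
# hence three runs, whatever the resolution is.
line = [0] * 1000 + [1] * 50 + [0] * 1350
encoded = rle_encode(line)
assert encoded == [(0, 1000), (1, 50), (0, 1350)]
assert rle_decode(encoded) == line
```

Doubling the resolution doubles the run lengths but leaves the number of runs unchanged, which is why such images stay cheap to store and to stream.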

The next design choice has been to spend a lot of time designing the vector drawing layer to be minimal. The huge problem with well known standard vector drawing formats such as PDF is that they are so large (the specification exceeds 1000 pages) that nobody can write a fully working reader, so they do not help resolve the electronic paper dilemma. Pliant vector drawing, on the other hand, uses only four functions: image, fill, text and clip.
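As a sketch of how small such an instruction set is, here is a hypothetical Python rendering of a four-function drawing interface. The names follow the four functions named above, but the signatures are illustrative assumptions, not the actual Pliant interface from /pliant/graphic/draw/prototype.pli.

```python
from abc import ABC, abstractmethod

class DrawPrototype(ABC):
    """Hypothetical sketch of a minimal vector drawing interface:
    every document is expressed with image, fill, text and clip only,
    and anything more complex is carried by the image layer."""

    @abstractmethod
    def image(self, pixels, transform):
        """Paint a (possibly high resolution, compressed) image."""

    @abstractmethod
    def fill(self, path, color):
        """Fill the area enclosed by a vector path."""

    @abstractmethod
    def text(self, string, font, position):
        """Draw a run of text with a given font."""

    @abstractmethod
    def clip(self, path):
        """Restrict subsequent drawing to the inside of a path."""
```

A reader for such a format only has to implement four operations reliably, which is the whole point of the minimality argument above.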


Limits

The Pliant image and vector layers are top quality. Sure, but ...
They are publishing oriented; in CAD drawing, where the number of lines drawn per second is what matters, the Pliant graphical stack cannot compete with any hardware accelerated graphical stack (4).
Also, the PDF 1.4 transparency model (providing operators such as min and max, and a more consistent group-level transparency instead of per-instruction transparency) does not fit nicely with the Pliant vector drawing model, which is a superset of PDF 1.3 and separated Postscript. So these advanced transparency operations have been crudely bolted on as the 'flat_play' function in the display list support module /pliant/graphic/draw/displaylist.pli.

Moreover, the positioning layer is still very limited. Among the most sorely missing features, the possibility for text (and table cells) to flow through several boxes (and pages) is planned but not implemented yet.

Lastly, the power of main processors is still not enough to do proper anti-aliasing on the fly for desktop applications (it consumes 16 times more computing power), so the UI currently defaults to a partial quality-versus-speed anti-aliasing compromise (unless the right control key or right mouse button is pressed on the UI client).
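The factor of 16 comes from supersampling: computing coverage on a 4x4 subpixel grid does 16 inside/outside tests per pixel instead of one. A minimal Python illustration of that cost, assuming a simple box filter (the actual UI compromise is more subtle):

```python
def coverage(inside, x, y, factor=4):
    """Anti-aliased coverage of pixel (x, y): the fraction of a
    factor x factor subpixel grid for which 'inside' is true.
    This does factor**2 (here 16) times more work per pixel
    than a single aliased inside/outside test."""
    hits = 0
    for sy in range(factor):
        for sx in range(factor):
            # test the center of each subpixel
            if inside(x + (sx + 0.5) / factor, y + (sy + 0.5) / factor):
                hits += 1
    return hits / factor ** 2

# Half plane u >= 0.5: a pixel cut in half gets 50% gray coverage
# instead of an all-or-nothing aliased value.
half_plane = lambda u, v: u >= 0.5
assert coverage(half_plane, 0.0, 0.0) == 0.5
assert coverage(half_plane, 2.0, 0.0) == 1.0
```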


Back to the two dilemmas exposed earlier, the Pliant graphical stack answer is ... no choice. This is achieved by carefully designing and exposing all three layers, and letting the user or developer decide on a per-usage basis.

What currently prevents HTML/HTTP/Javascript browsers from becoming the universal clients is largely the lack of proper handling of the image layer (embedded VNC boxes) and of the vector drawing layer (embedded PDF boxes), as a result of focusing too much on the positioning layer (as well as relying on the HTTP protocol, which is completely unsuited to interactive applications), on top of an already unreasonable complexity.
Pliant UI has been designed at a time when the web was already very developed and its limits fairly obvious, so it can be seen as a from-scratch restart with the same target (5).

In the end, the Pliant graphic stack is the first one really suited to carrying the same document on screen and on paper (6). This has been achieved by introducing color models that properly handle all printing constraints, then successfully using the same engine as a RIP in the Helio printing industry (complex PDF documents) and as the foundation of the Pliant UI client (interactivity). However, it is currently implemented without hardware acceleration, which excludes some CAD-like (many elements) and 3D applications, as well as some fast (because hardware accelerated) image editing operations.



(1) On the other hand, 3D drawing requires an amount of computing power that still makes dedicated hardware mandatory as I write this (2008).


(2) High resolution bitmaps have long been supported in the printing industry, first as Scitex Handshake LW+CT, then as TIFF-IT. They generally consist of one file for the images at 300 dpi, plus a run length encoded overlay at 2400 dpi containing the vector drawings and text. The Pliant solution is much better because it does it with a single file, and does not end up with text at 300 dpi in complex situations.
The other huge problem is that many Postscript and PDF reading tools will not support high resolution run length encoded images, because they will try to uncompress them ... and give up.


(3) Professional printing documents, particularly in Helio printing, can use extra inks (for cost reasons when some color covers a large surface, in order to better cope with positioning issues, or to provide more vivid colors than CMYK allows).
So each drawing instruction applies to some of the inks but has to leave the other ones untouched, and this is achieved through one transparency channel per ink. The other solution is to do one drawing per ink, then group all the planes in the end: this is what has been done for many years using separated Postscript.
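The per-ink transparency idea can be sketched in a few lines of Python (ink names and data structures below are illustrative, not the Pliant representation): each drawing instruction carries one alpha value per ink, so it can repaint some inks while leaving the others strictly untouched.

```python
def apply_instruction(dst, src, alpha):
    """Blend a drawing instruction into 'dst', one alpha channel per ink.
    dst, src: mapping ink name -> ink amount in [0, 1]
    alpha:    mapping ink name -> coverage in [0, 1]; an ink absent
              from 'alpha' (coverage 0) is left strictly untouched."""
    return {ink: alpha.get(ink, 0.0) * src.get(ink, 0.0)
                 + (1.0 - alpha.get(ink, 0.0)) * value
            for ink, value in dst.items()}

# CMYK plus one extra 'orange' ink: paint solid cyan at 50% coverage,
# fully wipe the orange plane, leave all other inks untouched.
page = {'C': 0.2, 'M': 0.8, 'Y': 0.1, 'K': 0.0, 'orange': 0.3}
result = apply_instruction(page,
                           src={'C': 1.0, 'orange': 0.0},
                           alpha={'C': 0.5, 'orange': 1.0})
assert result['C'] == 0.6          # 0.5*1.0 + 0.5*0.2
assert result['orange'] == 0.0     # fully repainted
assert result['M'] == 0.8          # untouched
```

With a single shared alpha channel, the 'M' plane above could not have been preserved while repainting 'C' and 'orange' in the same instruction; that is exactly what separated Postscript worked around by drawing each plane independently.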


(4) The Pliant drawing prototype in /pliant/graphic/draw/prototype.pli does not prevent implementing an instance with hardware acceleration.
Line drawing would probably have to be added as a first class citizen, and maybe a second, lower level 'fill' instruction.
Then the big problem is: how compatible with the pure software version is the hardware accelerated version supposed to be, and how restricted should it be (clipping, more than four colors per pixel, multiple alpha channels)?
In other words, should it stand on top of some existing hardware accelerated graphic stack, either by restricting itself to RGB or by working with one image per dimension, or should it use a modified graphic card firmware to better cope with the Pliant drawing model?


(5) Well, to be fair, universal user interfaces were not the initial target of the web, so its initial designers cannot be blamed for the fact that achieving them through smooth evolution does not work well.
Whatever its limits, HTML/HTTP has been a real revolution in the history of user interfaces: I just could not have designed Pliant UI without the experience of its HTML/HTTP ancestor.


(6) This used to be a promise of Display Postscript (then maybe of PDF, and maybe also of Wysiwyg word processors), but it has never been achieved, due to the lack of proper color management, and to a mostly uncontrolled vector drawing instruction set that applications focusing on reliability can hardly avoid relying on, because high resolution images are so poorly handled.