
Using Pliant databases advanced features

Using PML encoded storage objects instead of XML like encoded files

In the gentle introduction to using Pliant databases article, we defined a database stored in file:/pliant_data/my_corp/shop/shop.pdb (and file:/pliant_data/my_corp/shop/shop.log if the infinite log is activated). These files are ASCII files (except if compression is requested) that use an XML like encoding.
We are now going to change the encoding to use Pliant PML encoding in Pliant storage objects.
In order to do so, we first have to change:

module "/pliant/storage/database.pli"

with:

module "/pliant/storage/database_pml.pli"

then change:

(gvar Database:Shop shop_database) load "data:/my_corp/shop/shop.pdb" log "data:/my_corp/shop/shop.log" mount "/my_corp/shop"

with:

module "/pliant/language/unsafe.pli"
module "/pliant/storage/ground/object.pli"
gvar (Link StorageDatabase:Shop) shop_database :> storage_link "/my_corp/shop" "" StorageDatabase:Shop

Now, the database content will be stored in file:/pliant_data/my_corp/shop/log
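
With this setup, the application reaches the database content through the storage object's 'data' method, just like 'db data' is used later in this article. Here is a minimal sketch; the 'customer' field of the 'Shop' type is an assumption borrowed from the gentle introduction article, so use whatever fields your 'Shop' type really defines:

# minimal sketch: 'shop_database' is the storage object declared above;
# its 'data' method returns the Data:Shop root of the database content
var Data:Shop shop :> shop_database data
# scan all customers ('customer' is an assumed field of the 'Shop' type)
var Int count := 0
each c shop:customer
  count += 1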

One of the advantages of storing the database in a PML encoded file is that loading and saving is much faster: I noticed a six fold speedup, but it probably depends a lot on the database content.

The single big disadvantage of storing the database in a PML encoded file is that you cannot edit it with a standard text editor anymore.
That's why I think the optimum is to store huge complex databases in PML encoded files (split as described below), and keep simple databases that store configuration information (such as file:/pliant_security/this_computer.pdb) as ASCII (XML like encoded) files.

Splitting the database over several objects

Let's assume that our shopping business has been terrifically successful, so that we don't want to keep all orders in memory because there are too many of them after a few years.

Each order will now be stored in an individual storage object. At the operating system level, a Pliant storage database object is mostly a directory with just a 'log' file inside.
In this example, we decide that an order with id '12345' will be stored in the file:/pliant_data/my_corp/shop/order/45/12345/ directory.
The fact that we dispatch orders over a two level set of directories is related to operating system constraints: Linux does not like a directory with thousands of subdirectories. If you plan for millions of orders, you might want to use three levels.

function shop_order_subpath id -> path
  arg Str id path
  path := "/my_corp/shop/order/"+(id (max id:len-2 0) id:len)+"/"+id

function shop_create_order id
  arg Str id
  file_tree_create "data:"+shop_order_subpath:id+"/" # this is assuming that data:/ is file:/pliant_data/

function shop_order id -> order
  arg Str id ; arg Data:Order order
  if id="" or (id search "/" -1)<>(-1)
    order :> "/null" pmap Order
    return
  var (Link StorageDatabase:Order) db :> storage_link shop_order_subpath:id "" StorageDatabase:Order
  if not exists:db
    order :> "/null" pmap Order
    return
  order :> db data
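
As a hedged usage sketch, here is how these functions might be chained when a new order comes in; it assumes that 'storage_link' is happy to initialize the object in the directory freshly created by 'shop_create_order', and the 'customer' field of 'Order' is hypothetical:

# create the two level directory for the new storage object,
# then reach it through 'shop_order' (which calls 'storage_link')
shop_create_order "12345"
var Data:Order order :> shop_order "12345"
# fill a field of the new order ('customer' is an assumed field of the 'Order' type)
order:customer := "john smith"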

The main line in this example is the one calling 'storage_link', which checks if the object is already in the Pliant global cache and, if not, loads it from disk. The details of how 'storage_link' works are explained in the 'The storage machinery layout' article.

When a database is split among several objects, you lose the ability to scan all records through a simple 'each' control. So, you have to create some database objects that will contain the ids of the orders.
Here is an example:

type ShopIndex
  field Set:Void order

function shop_index id -> index
  arg Str id ; arg Data:ShopIndex index
  if id="" or (id search "/" -1)<>(-1)
    index :> "/null" pmap ShopIndex
    return
  var (Link StorageDatabase:ShopIndex) db :> storage_link "/my_corp/shop/index/"+id "" StorageDatabase:ShopIndex
  if not exists:db
    index :> "/null" pmap ShopIndex
    return
  index :> db data
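
The content of such an index has to be maintained by the application itself. Here is a minimal sketch; the 'create' and 'delete' calls on the 'order' set are assumptions based on the gentle introduction article, so check them against your Pliant tree:

# register order '12345' in the 'running' index when it is created
# ('create' on a database set is assumed to work as in the gentle introduction article)
var Data:ShopIndex running :> shop_index "running"
running:order create "12345"
# and drop it from the index once the order is completed ('delete' is assumed as well)
running:order delete "12345"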

Now, you can create a shop index with all your running orders, one per customer with his order history, and so on, so that scanning a subset of the orders is possible with code like this:

var Data:ShopIndex running :> shop_index "running"
each r running:order
  var Data:Order order :> shop_order keyof:r
  ...

Of course, instead of 'Set:Void', you could use 'Set:Str' or 'Set:Summary' and store a bit of information about each order, so that you can implement a search that does not load each order.
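
As a hedged illustration of that idea, here is what a summary based index might look like; the 'OrderSummary' type, its 'customer' and 'total' fields, and the matching condition are all made up for the example, and this 'ShopIndex' definition would replace the 'Set:Void' based one given earlier:

# hypothetical summary record kept in the index instead of Void
type OrderSummary
  field Str customer
  field Int total

# the index now stores a small summary per order id
type ShopIndex
  field Set:OrderSummary order

# search on the summaries without loading the order objects,
# then load only the matching orders
var Data:ShopIndex running :> shop_index "running"
each r running:order
  if r:customer="john smith"
    var Data:Order order :> shop_order keyof:r
    ...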

What should be clear by now is that as soon as you split your database over several objects, you are working at a much lower level than with an SQL based relational database engine, so the end result will depend more on how well your data layout matches the application.
So, my two cents of advice is: take a lot of time to think about your data layout, because the huge mistakes (and the limits, if no huge mistakes have been made) will happen here. A relational model could make you think that you can start fast because it handles everything transparently, but this is an illusion: it's true that it will handle the implementation transparently, but it will not transparently handle what kind of data you decide to put in your database.
Said the other way round: if you carefully design your database at the content level, then you will have spent enough time on the data layout that its usage will be perfectly clear as well, and so will be the splitting over Pliant objects.

Sharing and replicating the database among several servers

From a high level point of view, there are three ways to do object replication:

• Automatic push replication: the object content is immediately forwarded to the replication servers.
• Automatic pull replication: the object content is automatically downloaded by the replication servers just before using it.
• Manual replication: the object content is manually downloaded through calling the 'sync' method at application level.

At the implementation level, the application has to define a new data type that provides, by implementing some of the virtual methods of the StorageControlShare set, the sharing rules it wants the storage engine to apply. In the sample provided below, the new data type is 'MyControlShare'.
Then the application implements a resolving function and records it by calling 'resolve_domain'. In the sample provided below, it is 'my_resolver'.
See the 'Resolving' paragraph in the 'The storage machinery layout' article for an overall explanation of the machinery involved.

type MyControlShare
  void
StorageControlShare maybe MyControlShare

method s path id -> path
  oarg_rw MyControlShare s ; arg Str id ; arg Str path
  if (id eparse "/my_corp/my_app/" any:(var Str sub))
    path := "file:/pliant_data/my_corp/my_app/"+(sub (max sub:len-2 0) 2)+"/"+sub+"/"
  else
    path := "data:"+id+"/"

method s master -> host
  oarg_rw MyControlShare s ; arg Str host
  host := "my_master.my_domain.org"

method s copy -> hosts
  oarg_rw MyControlShare s ; arg List:Str hosts
  hosts := var List:Str empty_list
  hosts += "my_backup1.my_domain.org"
  hosts += "my_backup2.my_domain.org"

method s auto_sync -> auto
  oarg_rw MyControlShare s ; arg CBool auto
  auto := computer_fullname="my_third.my_domain.org"

method s allowed class id write -> allowed
  oarg_rw MyControlShare s ; arg Str class id ; arg CBool write allowed
  allowed := computer_fullname="my_master.my_domain.org" and class="host" and (id="my_backup1.my_domain.org" or id="my_backup2.my_domain.org" or id="my_fourth.my_domain.org" and not write or id="my_third.my_domain.org" and not write)

function my_resolver class id adr t -> status
  arg Str class id ; arg Address adr ; arg Type t ; arg Status status
  if class="share" and t=Link:StorageControlShare and (id eparse "/my_corp/my_app/" any)
    var Link:MyControlShare share :> new MyControlShare
    adr map Link:MyControlShare :> share
    status := success
  else
    status := failure

resolve_domain 1 (the_function my_resolver Str Str Address Type -> Status)

Here is an explanation of the semantics of each method and function in the example:

• The 'path' method specifies that the storage objects concerned will not be stored in a single directory, but in a two level set of directories. If we have 1000000 objects, instead of a directory with 1000000 entries, we get a first level with something like 1000 entries (the last two characters of each object ID), then roughly 1000 entries per subdirectory. It might make things easier if you need to browse the filesystem.

• The 'master' method specifies that the master server will be 'my_master.my_domain.org'.

• The 'copy' method specifies that all changes will be automatically forwarded to 'my_backup1.my_domain.org' and 'my_backup2.my_domain.org'. This is called automatic push replication.

• The 'auto_sync' method specifies that each time the application accesses an object by calling the 'storage_object' method on the 'my_third.my_domain.org' computer, that machine will silently try to connect to 'my_master.my_domain.org' in order to update the local copy. This is called automatic pull replication.

• The 'allowed' method specifies which machines are allowed to read and modify the objects.
Here we specify that 'my_master.my_domain.org', 'my_backup1.my_domain.org' and 'my_backup2.my_domain.org' are allowed to modify the content, and that 'my_third.my_domain.org' and 'my_fourth.my_domain.org' are only allowed to read it.
As a result, 'my_fourth.my_domain.org' will be allowed to do manual replication by calling the 'sync' method as described in the 'The storage machinery layout' article.

• The 'my_resolver' function specifies that the sharing rules will apply to objects whose id starts with /my_corp/my_app/

Requiring you to write a function (several, in fact, as shown in the previous code) just to specify the name of the master and secondary servers might look like heavy, poor design. For simple configurations where the database is stored in a single object or a few objects, we could write a user interface tool to fill the PML content of the 'control' file associated with the object, then extend the 'update' method of the 'StorageControl' data type, so that no extra code would be required. Exercise left to more than brave readers.

Please don't forget that object replication requires the 'Storage objects server' and 'Network port multiplexer' Pliant services to be running (follow 'Dashboard' 'Service' from the FullPliant main menu). Please also notice that the servers must know how to securely connect to each other, so each must have the proper set of host definitions providing the IP address, TCP port and public key of the other servers (follow 'Dashboard' 'Accounts' 'Hosts' from the FullPliant main menu).

Pliant storage replication works both ways, so you can modify data on any of the three servers, and even modify some data on one and some other data on another at the same time, so that Pliant storage replication could be used as a load balancing mechanism.
Each server is expected to continue working if the connection with the others is down, and synchronization should resume nicely when the connection is back, with the changes performed while the connection was down properly forwarded to all servers at that point.
After synchronization, it is very likely that the data on all three servers will be consistent, but it's not guaranteed.
Anyway, multi server operation has not been seriously tested in production yet, so make your own tests before going to production.