In our previous article we looked at how to generate a CRUD API using Magic. In our next article we will analyse the result, but first we'll need to explain some theory, and why Hyperlambda exists in the first place.
For an automated process to intelligently generate code the way Magic does, you need programming constructs that fulfil three criteria:

1. The machine must understand the code, such that it can semantically traverse the code and reason about what it does.
2. The machine must be able to dynamically assemble code, by combining pre-existing snippets of code into larger snippets and programs.
3. The machine must be able to take pre-existing snippets of code and modify them according to its needs.
The above results in what we refer to as "meta programming", implying that instead of a human being directly creating the code, the computer itself assembles the code, from instructions given by the human. To achieve this we need a meta programming language, and as far as we know only one such meta programming language exists today, which of course is Hyperlambda. This might sound daunting if you've never encountered such ideas before, but it's really quite easy to understand.
Hyperlambda as a tree object
First of all, a snippet of Hyperlambda code is simply the text representation of a tree object. In that regard it is similar to HTML, XML, JSON, and YAML. Look at the following code, for instance.
.no:int:0
while
   lt
      get-value:x:@.no
      .:int:20
   .lambda
      log.info:Howdy from while
      math.increment:x:@.no
The above Hyperlambda snippet results in a graph object, or a tree object, when parsed. This tree object has two root nodes, [.no] and [while]. The [while] node again has two children of its own, [lt] and [.lambda], and so on. A colon separates a node's name from its value, and three spaces of indentation declare a child node.
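To make the format itself concrete, here is another, purely illustrative snippet. The [.person] node and its contents are made up for this example and mean nothing to Magic; they only demonstrate the syntax: a name, an optional type declaration such as int between two colons, a value, and three spaces of indentation per level of children.

.person
   name:John Doe
   age:int:42
   address
      city:Oslo

When parsed, this becomes a tree with [.person] as its root node, having [name], [age], and [address] as children, where [address] again has a [city] child. Since the root node's name starts with a period, the evaluator treats it as data and never tries to execute it, which is the same convention used for [.no] and [.lambda] above.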
As the Hyperlambda is parsed, it is transformed into a graph object, and this graph object just so happens to be executable; it is in fact "Turing complete". At Aista we call such objects "execution trees" or "lambda objects", because internally each is a tree structure where every node references a method in the underlying platform, such as [while], [lt], and [get-value], and the object as a whole can be executed.
Code DOM
What makes Hyperlambda unique is that the text representation of its code transforms into a tree structure as illustrated above, this tree structure is preserved as is, and the tree itself is what gets executed. If you've done any dynamic HTML rendering in your professional life, you might at this point already realise the similarity between Hyperlambda and HTML, and between the execution tree and the DOM resulting from having your browser parse the HTML.
Your browser doesn't render and display HTML; it renders and displays DOM objects. However, DOM objects are described using HTML. The same can be said about Hyperlambda: the machine doesn't execute Hyperlambda, it executes code DOM objects, where such code DOM objects are described using Hyperlambda.
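To make this relationship concrete, the snippet below is a small sketch that parses Hyperlambda text into a lambda object before executing it. It assumes Magic's [hyper2lambda] slot, which converts Hyperlambda text into a lambda object roughly the same way the browser converts HTML into a DOM; the text being parsed is just a made-up example.

// Hyperlambda as text, the equivalent of HTML.
.code:@"log.info:Howdy from a dynamically parsed snippet"

// Parsing the text into a lambda object, the equivalent of the DOM.
hyper2lambda:x:@.code

// Executing the resulting code DOM.
eval:x:@hyper2lambda

Magic also has a slot going the other way, [lambda2hyper], transforming a lambda object back into Hyperlambda text, which is how dynamically assembled code can later be persisted as files.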
This relationship between Hyperlambda as a text format describing a tree structure, and its resulting code DOM, implies that we can dynamically modify the code DOM the same way we can use jQuery to modify the browser's DOM. Basically, we can inject, modify, and prune existing graph objects, resulting in completely new graph objects after the process is finished.
The code is no longer a static, dead thing, but a living, changeable thing, mutating either directly by itself, or as a consequence of other Hyperlambda snippets mutating it. Consider the following snippet.
.lambda
   log.error:We have an error
set-name:x:@.lambda/**/log.error
   .:log.info
set-value:x:@.lambda/**/log.info
   .:We have success
eval:x:@.lambda
As the above snippet starts executing, its [.lambda] object contains an error log invocation. However, by the time the [.lambda] object is executed on the last line of code, it is no longer an error but an info log invocation, and its content has changed. To understand what happens, it might be beneficial to run it through the "Evaluator" in Magic and compare the before and after result. If you do, you will see that after execution the snippet has been transformed into the following.
.lambda
   log.info:We have success
set-name:x:@.lambda/**/log.error
   .:log.info
set-value:x:@.lambda/**/log.info
   .:We have success
eval:x:@.lambda
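Renaming nodes and changing their values is only half the story; the same mechanics let us inject new nodes and prune existing ones. The following is a minimal sketch built only from slots that appear elsewhere in this article ([add], [remove-nodes], and [eval]); the log messages are made up for the example.

.lambda
   log.error:This invocation will be pruned away
   log.info:This invocation will be kept

// Injecting a new node into the [.lambda] object.
add:x:@.lambda
   .
      log.info:This invocation was injected dynamically

// Pruning the [log.error] invocation from the [.lambda] object.
remove-nodes:x:@.lambda/*/log.error

// Executing the modified [.lambda] object, resulting in two info log entries.
eval:x:@.lambda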
The CRUD generator
At its core, the CRUD Generator in Magic is only a combination of constructs such as those illustrated above, where pre-existing snippets of Hyperlambda code dynamically assemble and modify other pre-existing snippets of Hyperlambda code, before transforming these back into Hyperlambda files that it saves to your disc. Let's look at one of these files to understand how it works.
/*
 * Template for HTTP GET CRUD requests.
 * This file was automatically generated using Magic's CRUDifier.
 */
.arguments
   limit:long
   offset:long
   order:string
   direction:string
   operator:string
   Album.AlbumId.eq:long
   Album.Title.like:string
   Album.Title.eq:string
   Album.ArtistId.eq:long
   ArtistId.Name.eq:string
   ArtistId.Name.like:string
.description:Returns items from your Album table in your [generic|chinook] database according to the specified arguments
.type:crud-read

// Sanity checking invocation.
validators.enum:x:@.arguments/*/operator
   .:or
   .:and

/*
 * Checking if user supplied an [operator] argument, and if so
 * changing the boolean operator for comparison operations.
 */
if
   exists:x:@.arguments/*/operator
   .lambda

      // User provided a boolean comparison [operator] argument.
      set-name:x:../*/data.connect/*/data.read/*/where/*
         get-value:x:@.arguments/*/operator

// Opening up our database connection.
data.connect:[generic|chinook]
   database-type:sqlite

   // Parametrising our read invocation.
   add:x:./*/data.read
      get-nodes:x:@.arguments/*/limit
      get-nodes:x:@.arguments/*/offset
      get-nodes:x:@.arguments/*/order
      get-nodes:x:@.arguments/*/direction
   remove-nodes:x:@.arguments/*/operator
   remove-nodes:x:@.arguments/*/limit
   remove-nodes:x:@.arguments/*/offset
   remove-nodes:x:@.arguments/*/order
   remove-nodes:x:@.arguments/*/direction
   add:x:./*/data.read/*/where/*
      get-nodes:x:@.arguments/*

   // Reading data from database.
   data.read
      database-type:sqlite
      table:Album
      join:Artist
         as:ArtistId
         type:left
         on
            and
               Album.ArtistId:ArtistId.ArtistId
      columns
         Album.AlbumId
         Album.Title
         Album.ArtistId
         ArtistId.Name
            as:ArtistId.Name
      where
         and

   // Returning result of above read invocation to caller.
   return-nodes:x:@data.read/*
The above file was in its entirety dynamically assembled and created by the CRUD generator itself, by taking pre-existing template files, reading metadata from your database, and applying the relevant metadata to the template file at the correct places, which is exactly the kind of dynamic code DOM manipulation described above. Or to use our slogan to explain it better ...
Where the machine creates the code
Whether or not you understand Hyperlambda is really quite irrelevant, since the point is that the computer understands Hyperlambda, and can assemble your code automatically for you. You only need to understand Hyperlambda if you want to modify the result of the CRUD Generator, create something that is not based upon CRUD, or create snippets of Hyperlambda that create other snippets of Hyperlambda. That last part sums up what we at Aista do on a daily basis: we create small snippets of code that you can automatically inject into your own code, without even needing to understand what they do, and then abstract these away as visual elements on your screen, resulting in checkboxes saying "Log", "reCAPTCHA", "SignalR", etc. There isn't even a theoretical upper limit to what can be achieved with such ideas.
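To give a rough idea of what "Hyperlambda creating Hyperlambda" can look like, below is a simplified, hypothetical sketch, and not the actual generator code. It assumes the [io.file.load], [hyper2lambda], [lambda2hyper], and [io.file.save] slots, a made-up template path and output path, and that [io.file.save] takes its content from the value produced by its evaluated children.

// Loading a hypothetical template file and parsing it into a lambda object.
io.file.load:/etc/templates/crud-template.get.hl
hyper2lambda:x:@io.file.load

// Injecting a column resolved from database metadata into the template's [data.read] invocation.
add:x:@hyper2lambda/**/data.read/*/columns
   .
      Album.Title

// Transforming the modified lambda object back into Hyperlambda text and saving it to disc.
io.file.save:/modules/chinook/albums.get.hl
   lambda2hyper:x:@hyper2lambda/*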
The first question everybody asks me is: "Why Hyperlambda? Why didn't you create this in PHP?" Well, my answer is that it's not possible to create without a meta programming language, and as far as I know Hyperlambda is the only meta programming language on the planet.
If you want to play around with Hyperlambda yourself, you can register at Aista and get a private Hyperlambda backend server in a couple of minutes, allowing you to play with Hyperlambda any way you see fit. Please notice that we're currently in BETA, so there might be some glitches here and there; please be patient as you try things out.