The Technology Works of the NuChwezi

by NuChwezi Hackers

Published by NuScribes (nuscribes.com) on Tue 04 Oct, 2016

Book Cover Art

This book chronicles the growing involvement and interests of the Nu Chwezi, in technology, especially those ideas bordering on bringing the future, quickly, into the present. Many of the ideas presented here are already being explored in our labs, and some are things we only plan or hope to tackle eventually. Ultimately, this is a repository of the ideas behind much of the technology research we desire to contribute to, as our solid contribution to the building of a Nu Africa, a Nu World, a Nu Humanity... for the better.

It is our responsibility as technologists and creators, to shape and manifest the world others only dream of.

APPIAT: a program that can produce ready-to-use apps, based on sample data

The first expression of this concept went something like this:

APPIAT: a compiler that takes in data, and produces a program for generating that kind of data.

Would Appiat not obsolete some programmers or at least make it possible for end-users to produce their desired apps without calling on techies to write code? Yes, and there's a good reason for that... As technologists, we should focus our attention on empowering users, and then use our time to do even better, more rewarding things beyond the mundane...

 


Background

  • Classical compilers generally take in source code (which is itself data, but of a special type: algorithms/instructions), and produce programs (binaries or otherwise - themselves data), which then execute actions, and those actions might include tasks for collecting other types of data.
  • We can generally say that all software is just data acting on data. The data which manipulates or prompts other data is what we call instructions or algorithms, and the data acted upon by those algorithms is what we might call "input data" or just "data".
  • There are also higher-level generators/compilers; some compilers can take grammars of languages and output compilers or interpreters for those languages - we might call these higher-order compilers. But these compilers output programs that generate other programs, which can then operate on arbitrary forms of data. We are not concerned with higher-order compilers here though.

The APPIAT Concept
 

Instead, what we are proposing here is a type of compiler that can take in what other programs would have used as "input data", and then work backwards to generate the kind of program that could produce such a *kind* of data - not necessarily a program that would spit out the exact same instance of data that was used to define it, but one that offers a method of outputting the same kind/structure of data - possibly preserving data types, field semantics and the syntax of the data. Definitely, for more advanced scenarios (like where the input sample is to be kept small, yet particular constraints on the structure need be enforced), we could directly jump to the use of such existing open standards as JSON Schema as the input for our compiler.

So, with this type of compiler (a "reverse compiler"?), we can for example start with a basic JSON input such as this:

        {
            "name": "John Dee",
            "birthdate": "25 Dec, 1900",
            "occupations": ["MAGICIAN", "ASTROLOGER", "MATHEMATICIAN"],
            "address": "33 Left, Dark Lane, NoWhere"
        }

and have this reverse compiler generate sufficient code for a command-line program or script that can prompt for these fields, and output similar JSON data as a result. We can even get more ambitious and make the output of this compiler more remarkable - with a plugin architecture that would allow it to be extended to take these same inputs and spit out code or binaries for mobile apps, web-apps, or apps for not-yet-known platforms of the future. Definitely, the one using the compiler should have control over what sort of output is produced, and to what level the generation goes - intermediate outputs might better meet the needs of power users (who merely need the bootstrapping), for example.
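To make this concrete, here is a minimal sketch of the inference half of the idea - our own Python illustration, not the actual APPIAT implementation - deriving a flat field-to-type schema from a sample JSON object like the one above:

```python
import json

def infer_schema(sample: str) -> dict:
    """Infer a flat field -> type schema from one sample JSON object."""
    record = json.loads(sample)
    schema = {}
    for field, value in record.items():
        if isinstance(value, list):
            schema[field] = "list"
        elif isinstance(value, bool):   # check bool before number: bool is an int subtype
            schema[field] = "bool"
        elif isinstance(value, (int, float)):
            schema[field] = "number"
        else:
            schema[field] = "string"
    return schema

sample = '''{
    "name": "John Dee",
    "birthdate": "25 Dec, 1900",
    "occupations": ["MAGICIAN", "ASTROLOGER", "MATHEMATICIAN"],
    "address": "33 Left, Dark Lane, NoWhere"
}'''

print(infer_schema(sample))
# {'name': 'string', 'birthdate': 'string', 'occupations': 'list', 'address': 'string'}
```

The second half - turning such a schema into a prompting program for some target platform - is exactly what the plugin architecture mentioned above would be responsible for.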

Many tools currently exist for bootstrapping apps - web, mobile and desktop. Some use Code-First approaches, others use Data/Model-First approaches. This is not entirely novel stuff at all. What is perhaps new or different from all these tools is that they are mostly tied to a particular framework, language and platform, and most aren't easily extensible to scenarios not originally intended by their makers. What we want here is something akin to a General Purpose compiler - just like good ol' GCC. A compiler that's extensible, and one that's not tied to a particular platform or framework, but at the same time, one that doesn't use source code as such, but data, as the input. So, think of a GCC that instead of expecting source files, takes in data files, and then produces software.

Enter Machine Learning... Yes, in the most general case, or thinking forward, instead of having someone hand-craft the input specifications for data from which the compiler should infer what software to produce, it might be better to use a learning approach, where it's possible to give the compiler a ton of sample data (possibly not only different instances of the same structure, but even slight, essential structural differences in some instances). The compiler would then have to learn over this data - perhaps producing an intermediate schema representative of all the cases in the training sample, from which the necessary output program can then be generated.

Writing such a compiler for any conceivable kind of training data is definitely near impossible (some forms of data, such as this writeup, are so arbitrary it might be very difficult for a program to learn their structure - or at worst, all the generated programs would be simple text editors, as a final catch-all!). But imagine the amount of hassle such a tool would take out of such basic, very common and yet crucial tasks as writing data collection tools or survey tools or form apps etc. These kinds of basic apps meet a real need in today's growing world of increasing process and business automation, data-surveys, e-governance, etc. Such an invention, even if a bit mechanistic at first, would immediately solve many use cases, and potentially disrupt or advance the apps-creation efforts of many current developers.

Taken to its logical conclusion, if we allow for an ability to use Machine Learning (or AI) to do this task, then it should be possible that such a program, a "Programmer AI", helps, augments, or in some cases entirely replaces the need for human computer programmers.

Should we dream on? So, for example, we can then conceive of a time when a client can just talk to a Programmer AI; simply describe to it the kind of data they want to collect from users - or for what purpose they need an app - and then this intelligent automaton should comply by taking that description (perhaps even as vague as what we currently have to deal with from business clients), and transforming that into a specification of sorts, from which it can then generate a program to capture the necessary data from users, and persist that data somewhere. Does it have to begin with code or data in this case? Perhaps not - or it won't matter anymore.

Let's even dream further... Perhaps, this can be extended by other intelligent automatons, that the client can then tell what kind of insights they wish to see generated from this data, at which point these AI-analysts can either generate analysis tools based on the data from earlier phases, or do this kind of analysis on the data their peers are generating, themselves.

Sounds like too much science-fiction? Not really; the power of software, and what we as humans can do with it, is not fully appreciated at the moment. But we are optimistic that such tools will soon become a necessity (if not already), and where there is a need, there is a call to action to satisfy it. Though, we need to start somewhere... a journey of 50 miles starts with 2 steps.


Getting Practical

Ok, enough of the dreaming, let's now see what we can do with these concepts already...

So, to quickly start prototyping these ideas of fully automating the software production process (beyond mere bootstrapping, that is), let's start by imposing some limits within which we can test these ideas practically, ahead of attempting more ambitious implementations. Let's assume that:

  • Our final app can be a basic commandline program, such as a script or simple binary the user can invoke via a shell
  • For ease of demonstration, let's assume our sample input is of the form similar to what we've seen in the earlier example - just a basic JSON dictionary without complex nesting or binary values.

With just these two constraints, let's see what algorithm we could use to realize our "reverse compiler":  

The Algorithm

  1. Take the input data
  2. Check for what form of data structure it is.
  3. Extract or infer all meta-data about the kind of data storable in the given data-structure
  4. Generate or obtain a skeleton program into which the instructions for prompting and persisting the given type of data can be injected.
  5. Based on the learned data structure and the meta-data from it, generate and inject instructions into the skeleton program, which instructions are for prompting from the user inputs corresponding to fields in the data structure.
  6. Generate and inject instructions for transforming the captured user inputs into a data-structure of kind similar to what was detected, using what is known of the fields and the meta-data about them
  7. If the skeleton program doesn't already contain instructions for persisting or outputing the encoded data, then add this too.
  8. Take the generated program, and output it somewhere it can then be used.
  9. [optional] make this generated program executable or ready for use by the end user.

Does it Work? Show me some Code Please... HERE is our first implementation of this concept: a Python Appiat that can spit out commandline Python apps based off of JSON input.

 


PROJECT OKOT (Project-O)

A suite of technologies orchestrated to make the building, evolution and distribution of data collection apps the easiest they can possibly be (given what we can achieve with technology at the moment). This project aims at taking the avoidable hassle out of programming, especially for typical data-collection apps, which have become an important aspect of the modern revolution in business and process automation, as well as the ever increasing need for conducting data surveys and such... The world is increasingly running on data, and the easier it is to build tools that can make the collection, processing and analysis of that data easier, the better for all humanity. The project aims at reducing or eliminating what are currently repetitious, boring, and often unnecessarily tedious tasks, turning them into simple, straight-forward and perhaps fun, opera-like experiences - creating should be fun once again, right?

Okay, we shall be, somewhat literally, introducing some art and play into programming, making building and using technology more fun and creative...

The project is named in honor of one of the greatest writers, poets and thinkers to ever come out of Africa - Okot p'Bitek, born in Uganda, and who died in 1982.

Enough of introductions, let's go to the opera, shall we?


 

TERMINOLOGY USED/INTRODUCED in this project

A Histrion: this is any client app (irrespective of platform) capable of changing not only its look, but also its functionality, based on a dynamic specification it is loaded with. Because these apps aren't like traditional apps that are programmed to be one thing throughout their lifetime (and no, updates typically don't count, as they replace the entire app), we can think of these special kinds of apps as "actors", in the typical sense of the word - thus the "histrion" metaphor - or as shape-shifters (yep, shape-shifting apps... more on this later). In a more conventional sense, the histrion is like a browser of sorts - in the same vein as web browsers, game consoles, classic sim-app browsers, etc., though it serves a new purpose, as we'll try to demonstrate.

A Persona: this is the specification (mostly declarative, and not necessarily programming language code), which when loaded into a Histrion, changes its looks and behavior. So, a histrion loaded with a POS persona will act as a POS app, and the same histrion, if loaded with a Fitness-Tracking persona instead, will become a fitness-tracking app!

A Studio: this is the environment in which a developer or other user defines the persona - sort of an IDE of personas, that is to be later utilized to configure how histrions should look and act. In the studio, personas can be designed, edited and can be published.

An Act: this is the data produced by a histrion in accordance with its persona. So, for example, a histrion loaded with the POS persona will produce sales acts, order acts, etc., depending on what the persona allows or demands of it.

A Theatre: this is the place where acts (data), submitted from one or more histrions (clients), are received for storage and/or further processing or analysis. The theatre would typically be a remote server (in the cloud perhaps?), but could be something else entirely... (another Histrion?). Wow, so we now have theatres in the clouds... :-)

A Diviner: this is a later addition to the original concept. It is where you would go not only to view the acts submitted from your histrions, but also where you can potentially analyse them and mine for insightful patterns in the collected data, in very advanced ways, using a very simple interface.

NOTE: in the rest of this document, and perhaps everywhere in Project-O, expect to see the above terminology generously intermixed with the conventional technical terms, as we allow everyone to get used to this slight, but joyful paradigm shift.


 

The Whole Concept in Brief

Think of the histrions as terminals or general-purpose apps, whose user-interface and functionality can change based on the currently loaded persona, and which can then be used to capture user-provided data in a manner agreeable with that persona, and then submit that data to a designated destination for aggregation and further processing or analysis. No rocket science, and no advanced arts here really...

The idea is to speed up the software development process, while making the apps we build as flexible as possible. It is desirable to give more power over what the apps can do and look like, dynamically, to power users and developers, so as to be able to deliver apps without getting immersed in boring, mundane tasks such as writing model-definition, validation, UI and data-persistence code with each and every new (but "similar") task we must grapple with. The power of machines to automate much of what can be automated should be leveraged as much as possible, so that we as humans can focus on doing more creative, original work - tackling the bits of each new and old problem that machines just aren't yet capable of solving.

So, these Histrions are essentially that - Histrions. They can do and look like whatever the persona says they should (within sensible limits definitely, so that the persona doesn't necessarily become another Turing-complete language - not necessarily a bad thing, but for now, we just feel it's best to keep these specifications as declarative contracts, specifying what the histrion should look like and behave like, without telling it how to do just that - the how is up to the particular implementation of the histrion). The personas are meant to be standards, so that those implementing histrions on any platform can render the same expected experience, irrespective of how they choose to implement the behind-the-scenes machinery that makes histrions do what they are expected to do. That way, the concept of personas beautifully solves the issue of writing truly cross-platform apps, without sacrificing native advantage and without writing multiple variants of the spec for each distinct platform.

On the other hand, there's the theatre, which could likewise be implemented as a general-purpose data aggregation server, so that it can accept any acts (data) sent to it (by authorized histrions), and offer, at a bare minimum, standard means to dynamically validate, analyze and export this data based on the metadata gleanable from the personas describing the acts. Think of the personas serving as a schema of sorts, just like JSON Schema can help data sinks validate incoming submissions against the specification in those schemas. And yes, personas have been much inspired by JSON Schema.
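As a rough illustration of personas playing that schema role, here is a toy validator in Python; the persona layout assumed here (a "fields" list with name/type/required attributes) is purely our guess at what a persona might contain, not a finalized spec:

```python
def validate_act(act: dict, persona: dict) -> list:
    """Return a list of validation errors; an empty list means
    the act conforms to its persona."""
    fields = {f["name"]: f for f in persona["fields"]}
    errors = []
    # Every required field declared by the persona must be present.
    for name, spec in fields.items():
        if spec.get("required") and name not in act:
            errors.append(f"missing required field: {name}")
    # Every submitted field must be declared, with the right type.
    for name, value in act.items():
        spec = fields.get(name)
        if spec is None:
            errors.append(f"unexpected field: {name}")
        elif spec["type"] == "number" and not isinstance(value, (int, float)):
            errors.append(f"field '{name}' should be a number")
    return errors

# A hypothetical POS persona fragment:
pos_persona = {"fields": [
    {"name": "item", "type": "string", "required": True},
    {"name": "quantity", "type": "number", "required": True},
]}

print(validate_act({"item": "tea", "quantity": 2}, pos_persona))  # []
print(validate_act({"quantity": "two"}, pos_persona))  # reports both problems
```

A production theatre would more likely delegate this to a mature standard such as JSON Schema, as suggested above.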

Automation everywhere!


 

Ok, Just How Can This Be Implemented?

Well, perhaps it's not trivial, but let's start simple, and see just how to make something close to this...

A: [The Studio] How is the Persona designed and published?

1. [Designer] The user/developer is provided a web-based interface, via which they can use a drag-n-drop UI, to declare the look and feel of their app.

  • This tool should support all typical input controls, and any logical basic customization of these - colors, sizes, etc.
  • Since we are in a data-driven and networked world, it makes sense to assume that any histrions using these personas must be able to send their generated data to some remote server somewhere. So, as part of the design process, the user will specify a URI or some other addressing mechanism supported, so the terminal knows where and how to send their acts.
  • Possible transportation mechanisms can be: email, sms, http (get,post) - user should be able to specify which of these are to be used, as well as specifying the target address.

2. [Spec Generator] Once the designing of the persona is completed, it should be possible to tell the studio to generate an artifact with which the persona can be exported or utilized elsewhere. We propose that these special specifications have a standard file extension if they are to be transported offline (something like *.persona). The actual body of this specification file will be JSON (with a schema that's verifiable and globally accessible, so validation can be done easily) - again, we are walking in the footsteps of JSON Schema here. This persona then captures all the metadata essential to declare the app's look and behavior as designed, all in valid JSON.

  • There should be a standard for what a typical persona should contain, but studio implementations might augment this specification with extra attributes that compatible or custom histrions and/or theatres might support. But, as with all other technologies, standardisation and conventions make things simpler and portable for everyone involved.
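To fix ideas, a studio's export step might look like the following; every attribute name in this persona, and the theatre URI, is illustrative only - the actual standard is yet to be designed:

```python
import json

# An illustrative persona -- the attribute names are guesses, not a standard.
persona = {
    "id": "example.pos.v1",
    "title": "Point of Sale",
    "fields": [
        {"name": "item", "type": "string", "required": True},
        {"name": "quantity", "type": "number", "required": True},
        {"name": "price", "type": "number", "required": True},
    ],
    "transport": {
        "mechanism": "http-post",
        "target": "https://theatre.example.com/acts",  # hypothetical theatre
    },
}

# Export it the way a studio might: a *.persona file holding plain JSON.
with open("pos.persona", "w") as f:
    json.dump(persona, f, indent=2)
```

Since the body is plain JSON, any histrion or theatre on any platform can parse it with its native JSON tooling.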

3. [Spec Publishing] The generated persona can then be published, so that Histrions can be pointed towards it, gain personality, and thus become useful actors in the opera ecosystem. Options for publishing could be:

  • The spec could be published online - to a persistent, globally unique URI, so that any histrions (and other tools) that can resolve or discover the URI, can read and use that persona.
  • For compatibility reasons, it might be advisable that each new version of a particular persona obtain a new URI. But then, for other cases, it might be preferable that one URI resolve to a particular persona, but that when any histrions using that persona need to be updated, only the contents of that persona change (and not the URI) - or there should be a means for the histrion to check and obtain new versions of the persona automatically? 
  • It might also be advisable that instead of always referencing the persona via a URI, it be possible for histrions to download and work against a cached version of the persona, so they can operate offline for example - also solves issues for how the histrion should behave across restarts, assuming the persona URI is not accessible in the meantime.
  • Additionally, instead of publishing to a URI, the studio should be able to allow the entire persona to be downloaded directly onto the device where the designing is being done, as a file for example (see above). This can allow private personas to be used without exposing them, or any sensitive data in them, to the arguably insecure Internet.
  • Yet another option for publishing could be via QRCODE. In this mode, the studio can encode the entire persona as a QRCODE, and then this can be downloaded as an image onto the designer's device, can be published online to a URI (similar to the earlier considerations), or compatible histrion implementations can be used to directly scan the generated persona QRCODE and automatically load the embedded spec! Well, where the persona is very huge, it might make sense to use the QRCODE as a mere pointer to the full persona spec - via an embedded URI for example - so that histrions merely scan the code to get a URI via which they can load the full spec.

    
Ultimately, the mechanisms of publishing and the discoverability of these published personas can make or break this entire ecosystem, and so need be designed and implemented well. Also, using this approach, I can see the plausibility of having a persona marketplace - instead of an apps marketplace for example, since users won't have the need to install multiple apps anymore, and only need to discover and load new personas into their general-purpose histrions. This might greatly change how apps and their distribution are done. But also, this means there might be a need for users to customize some aspects of the personas they install - say, altering the theatre pointer in the persona to their own private servers/theatres when they download a marketplace POS persona, for example. The persona spec could allow or restrict such customizations, and again, compatible histrions should be able to let users take advantage of such power when it's available.

Sounds exciting? Yes it is...

 

B: [The Histrion] How is the Persona to be used?

As for using a persona to give end-users some desired functionality that a traditional app would have offered, all that needs to be done, is to obtain the necessary Histrion app - this might be all that needs to be published in a traditional apps marketplace, or it can be downloaded directly off the web. Once a user has this general-purpose app, the histrion, installed on their device, all they need to do then, is find a persona they wish to work with (here is where persona markets might make sense, or where offline loading of personas - via file, QRCODE, etc would come in handy).

Once loaded with a persona, the histrion, in this case, much like a basic browser, should know how to change its look and behavior, in line with what the persona specifies it should. But, the rendering and adherence to the persona could vary from platform to platform, and from one implementation or version of the histrion to another - just like with web browsers, though, as we have already indicated, this is where standardization becomes very advisable and critical, especially once this technology becomes widely adopted, as we hope it should.

With the persona, the histrion would know how to prompt for inputs from the user, validate those inputs, and then format the resultant data as JSON, which it can then post to whatever end-point the persona specifies (or whatever the user overrides it with). With this approach to the histrion implementation, we can envision the histrion acting closer to an app-browser - a tool capable of rendering a native look and feel when given a declarative spec, much like classic web-browsers, before the addition of Javascript, were capable of. Shouldn't we be extending existing web-browsers to do this kind of thing? Yes and No. No, because we have a chance to do this from scratch, so as to take advantage of things current browser stacks might make unnecessarily complicated, or which they might not even allow. Yes, because the very reason technologies like JSON Schema exist is to solve this kind of problem already... but duh, we don't wish to rely on Javascript and the web-stack; it's just too bloated for what we are proposing here - though some ambitious innovators might prove us wrong. Ultimately, we desire to make this different, and much better, for its designated purpose.
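A toy "performance" loop for such a histrion might look like this (a Python sketch using the same guessed persona shape as elsewhere in this document; the reader parameter stands in for whatever native UI a real histrion would render):

```python
import json

def perform(persona: dict, reader=input) -> str:
    """Prompt for each field the persona declares, coerce basic types,
    and encode the resulting act as JSON ready for posting."""
    act = {}
    for field in persona["fields"]:
        raw = reader(f'{field["name"]}: ')
        # Minimal type coercion; a real histrion would also validate.
        act[field["name"]] = float(raw) if field["type"] == "number" else raw
    return json.dumps(act)

persona = {"fields": [{"name": "item", "type": "string"},
                      {"name": "quantity", "type": "number"}]}

# Feed canned answers instead of a live prompt:
answers = iter(["tea", "2"])
print(perform(persona, reader=lambda prompt: next(answers)))
# {"item": "tea", "quantity": 2.0}
```

The same loop, driven by the same persona, could just as well render a native mobile form or a web form - only the reader/renderer changes per platform.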

So then, just like with web browsers, we can envision histrions being implemented for any current and future platforms, but with the ability to deliver the same powerful functionality to users on any of these platforms; Mobile (Android, Windows Phone, iOS, J2ME?); Shell/CLI (*nix bash, win* batch, powershell); Desktop (Win Store Apps, Win-Forms, *nix QT apps, etc); Web-Apps (think, Javascript that can take a persona, and render, capture, validate and submit a form as per the persona) - this nearly exists already; Browser-Extensions (not very different from the web-apps thing, but offline-capable and more web-browser dependent) etc...

Basically, Histrions can be written in any language, for any UI, and they should all be able to work with the persona without hassle... They are the new general-purpose apps to rule them all.

The real power of this technology? Build one Histrion App, have a bazillion potential apps it can be instantly transformed into, just as long as you can give it the personas for those apps. And then, ensure to keep the definition of personas as dead simple as possible (for most purposes though). So? Making apps just became as simple as XYZ... once again.

 

C: [The Acts] How is the data from Histrions used or shared then?


Once a Histrion (regardless of implementation) obtains user input, it should encode it to JSON (we could relax this constraint, but again, convention and standardization are better; JSON is as good as most formats for this job, and is becoming more of a standard data-representation format than most - even HTTP Forms have a JSON format specified atm), and then submit the data to wherever the persona says it should go. This encoded data is the ACT.

  • Definitely, good Histrion implementations should allow offline saving of user submissions, and later posting, to allow for scenarios where there's no connectivity - unless the persona denies this sort of functionality? This makes much sense, given that we'd like to see personas making data-collection tasks as flexible as possible for those working with mobile technologies and yet operating in areas with unreliable or no connectivity at all - can web-browsers allow this currently? Perhaps (localStorage), but by their very nature, offline mode is not such a great experience, and we wish to solve that from the very start.
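One hypothetical way a histrion could implement this offline queue, sketched in Python with a local JSON-lines file (the file name and the `post` callback are our own illustration):

```python
import json
import os

QUEUE = "pending_acts.jsonl"   # hypothetical local queue file

def save_offline(act: dict) -> None:
    """Append an act to the local queue, one JSON object per line."""
    with open(QUEUE, "a") as f:
        f.write(json.dumps(act) + "\n")

def flush(post) -> None:
    """Try to submit every queued act via `post`; keep only the failures."""
    if not os.path.exists(QUEUE):
        return
    with open(QUEUE) as f:
        pending = [json.loads(line) for line in f]
    failed = [act for act in pending if not post(act)]
    with open(QUEUE, "w") as f:
        for act in failed:
            f.write(json.dumps(act) + "\n")

save_offline({"item": "tea", "quantity": 2})
flush(post=lambda act: True)   # pretend the theatre accepted everything
```

In a real histrion, `post` would be the persona-specified transport (HTTP, email, SMS), and `flush` would run whenever connectivity returns.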

    
Also, recall that as potential endpoints for the captured data, we propose the ability for the histrion to submit data directly to an email address or SMS address - this again opens up potential to build more distributed ecosystems of apps, using transport mechanisms that the current web doesn't make easy to exploit, but which native apps (especially on mobile) ought to be able to readily exploit. Future implementations might even post directly to other apps on the user's device (such as social networking apps, e-commerce apps, etc). The potential this kind of flexibility opens up is staggering when you come to think of it...

 

D: [The Theatre] Any ideas for how these Acts might be collected and made good use of?


Seriously, we shouldn't have to be very concerned about where the data goes - just like the web standards don't really give a damn about where and how your servers are implemented. But, recall, we are trying to solve this entire problem holistically - end-to-end, for those who are less capable or who want to get to work with as little hindrance as possible - giving power to the masses, right? Okay, so we then propose one potential plugin into our ecosystem, that is entirely optional, but which completes the composition nicely.

On the receiving, final end-point, there could be a compatible platform we are calling "The Theatre" (or we could have just called it "An Acts Server" or "The Server", but duh... better we stay creative, and embrace the metaphor all the way).

This theatre can be written in any language, and might run on any reliable server platform, but as for what its characteristics are relative to the rest of the ecosystem, we propose the following standard characteristics at a bare minimum:

  • It exposes an HTTP API endpoint to which (authorized) Histrions can submit data
  • For security reasons, the designer of the persona can include a secret in the persona - a secret specific to, or known by, the theatre - which secret the Histrions can use to authenticate themselves to the theatre. This should prevent the theatre from accepting arbitrary and/or malicious submissions - though clever bots and hackers could get hold of the persona, especially if it resides online, and impersonate genuine Histrions. But where do we draw the line anyway? Current web-servers suffer many similar problems as well.
  • It should be able to validate the incoming submissions against their corresponding persona (for ease of implementation, each end-point might support a single persona, but it could also be possible that a single end-point supports multiple personas, validating each incoming act relative to the persona it was generated against - the theatre might fetch these personas online for verification purposes, or could have been preloaded with the same by its administrator, so as to know which acts to accept via the end-point, and how to verify them.)
  • Yet, another, more flexible approach might be to just accept any incoming acts, and leave the verification and processing of the incoming acts to their ultimate consumers - other servers, plugins on the theatre, external tools, etc.
  • It might make sense to deserialize these acts into model instances on the theatre, which models can then be persisted in traditional relational databases. But, since we are thinking general-purpose, and have chosen JSON as the encoding format for our data, it might make much more sense that theatres, by default at least, directly store the acts in a NoSQL, schema-less database of sorts. The advantages of this approach would greatly outweigh the use of a traditional ORM or an RDBMS - or that's just what we think at the moment. Note that traditional DBs like Postgres do natively support JSON actually, and it might make sense to directly exploit this on top of their evident maturity and flexibility. But these are implementation specifics; for now, we are mostly focusing on high-level aspects of the theatre.

    
Once the theatre has acts to serve, it should make sense that it offers its users a dashboard for doing some or all of the following:

  • browsing of all saved acts
  • searching by any fields that make sense to search (automatically) - introspection from the persona or the acts themselves?
  • filtering of the same
  • basic analysis or visualization of slices of, or the entire collection of acts 
  • ability to export slices of or the entire collection of acts, as JSON or CSV.
  • [recommended] ability to offer most or all of the above functionality to external users/tools, via a data API: this would make this suite of technologies immediately relevant to existing infrastructure and to such endeavours as open-data initiatives, distributed computing, etc.
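A toy in-memory theatre tying these minimums together - schema-less storage, the persona-secret check, filtering and CSV export. The class and its method names are our own sketch, not a prescribed API; a real theatre would sit behind an HTTP endpoint and a proper datastore:

```python
import csv
import io
import json

class Theatre:
    """A toy acts server: accepts JSON acts from histrions that present
    the shared secret, stores them schema-less, and exports them."""

    def __init__(self, secret: str):
        self.secret = secret   # the persona-embedded secret, per above
        self.acts = []         # schema-less, in-memory "NoSQL" store

    def submit(self, payload: str, secret: str) -> bool:
        if secret != self.secret:
            return False       # reject unauthorized histrions
        self.acts.append(json.loads(payload))
        return True

    def filter(self, **criteria):
        """Browse/filter saved acts by exact field matches."""
        return [a for a in self.acts
                if all(a.get(k) == v for k, v in criteria.items())]

    def export_csv(self) -> str:
        """Export the whole collection of acts as CSV."""
        if not self.acts:
            return ""
        fields = sorted({k for a in self.acts for k in a})
        out = io.StringIO()
        writer = csv.DictWriter(out, fieldnames=fields)
        writer.writeheader()
        writer.writerows(self.acts)
        return out.getvalue()

theatre = Theatre(secret="s3cret")
theatre.submit('{"item": "tea", "quantity": 2}', secret="s3cret")
theatre.submit('{"item": "rolex", "quantity": 1}', secret="s3cret")
print(theatre.filter(item="tea"))   # [{'item': 'tea', 'quantity': 2}]
print(theatre.export_csv())
```

Note how the CSV header is computed from the union of fields across all stored acts - a small payoff of storing acts schema-less rather than forcing them into fixed models up front.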

 

ORCHESTRATION/IMPLEMENTATION: Ok, we need a proof-of-concept, to make sense of this entire opera of ideas...


First of all, this technology and the concepts around it started with APPIAT (also one of our experiments in this domain), but, though the core ideas in APPIAT aren't very different from those which have inspired Histrion and Theatre, these later concepts are built on better, more pragmatic and more generic principles. Also, this entire project is greatly influenced by the need to facilitate the quick realization of a more data-driven world, and not just the multiplication of disparate, incompatible technologies. So, if we are to choose between evolving APPIAT vs Project-O, the latter would definitely win. And So Mote It Be.

Fine, so where do we start? 

The logical implementation road-map ought to be something like this:

  1. Build the Studio.
  2. Design and Build a proper Persona specification.
  3. Build a compatible Histrion to consume that persona spec.
  4. Build a compatible Theatre.
  5. Review and iterate (anywhere along the previous phases).

So first, the studio. There are many existing projects trying to automate the creation of data-capturing apps - including the typical full-fledged native IDEs. But, we are biased towards web-based tools, and those which are opensource, for obvious reasons. Most of these are specifically for the generation of web forms (data capturing apps for the web), and of these, FormBuilder, which is opensource, seems to be the best starting point for our own experiments.

    https://github.com/dobtco/formbuilder
    https://dobtco.github.io/formbuilder/

So, if we use this FormBuilder as a baseline, we can for example start by extending its existing designer implementation to support the extra configurations we desire to support for our final Persona spec. Then, we can use their existing output JSON form definition as the starting point for building our own Persona spec. As a bonus, this impressive designer already supports the ability to bootstrap a new design session, based off of a pre-existing form spec - a JSON object, and this directly lends itself to the use case we desire to support as well.
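To make the extension idea concrete, a persona might look something like the following sketch. This is purely illustrative: the keys we add around the field list (`branding`, `theatre`, etc.) are our own assumptions about what a Histrion would need, not part of FormBuilder's actual output format:

```python
import json

# A hypothetical persona: FormBuilder-style field definitions,
# wrapped with the extra configuration a Histrion would need to
# render itself and to know which theatre receives its acts.
persona = {
    "name": "Lawino:John-Doe",
    "branding": {"title": "John Doe Field Survey", "accent_color": "#8B0000"},
    "theatre": {"endpoint": "https://theatre.host.com/acts"},
    "fields": [
        {"label": "Respondent Name", "type": "text", "required": True},
        {"label": "Age", "type": "number", "required": False},
    ],
}

# The persona travels to the Histrion as a plain JSON document,
# e.g. embedded in (or linked from) a scannable QR code.
encoded = json.dumps(persona)
print(len(encoded) < 3000)  # comfortably within a QR code payload → True
```

Because the persona is just data, the same document can bootstrap a new design session in the studio, configure a Histrion in the field, and tell a theatre what to expect.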

Rather than extend that project, we propose to clone it, and move in an entirely different direction - possibly contributing back to that parent project where it makes sense (it no longer has a maintainer, btw). Ultimately, we give much credit to them, but wish to stand on their shoulders, to make this even more useful than it currently is.

The rest is ours to do...


Ok, too much technical stuff here, time for...

THE GREAT STORY (of Project Okot)


So, when all this becomes a reality, we can then talk of such great stories as...

Once upon a time, a softwaresmith was approached by one of his business clients called "John Doe", who needed a data collection app, ready in no more than 24-hours. The client described to him the specifics of the kind of data he wanted to collect, and requested the craftsman immediately get to work building the damn thing.

The craftsman knew exactly what to do. He knew of an amazing, hassle-free way in which to compose any kind of data-collection app his clients would ever require, without writing any more code whatsoever! There was no need to rush, and so he suggested to the client, that they go to a tea-shop, and while sitting over a few cups of pleasant herbal tea, compose the requirements into an instant, usable "work of art", and so the client agreed, and so it was...

As they talked with his client, going over his requirements, he visited a place called "The Studio", using his portable computer tablet. At this studio, he composed the requirements into something called "a persona", which elegantly captured exactly the specifics of what data the client wanted the app to collect, as well as indications for how the app should look and feel - including some branding. 

After this persona was created, the wise craftsman clicked a button, which then popped up a QRCODE - a sort of image that can be read and interpreted by machines, visible at the front of the studio he'd been working from. He then asked the client to lend him his phone just momentarily. Next, he scanned the QRCODE, which QRCODE contained a link for downloading what he described to the client as "the last data-collection app you will ever need to install for the rest of your life." When asked what this special "last app" was, and why it was so, the smith proudly declared, "she's a special polymath unlike any other. She's also an actor by essence, and so we call her a Histrion." The client didn't quite understand, but was clearly excited...

A minute later, the Histrion, whose version was code-named "Lawino", was up and running on the client's smart phone. "So, how does she solve all my problems then?", asked the skeptical client. The smith clicked another button, and another barcode was displayed at the studio, this one more compact than the one before - but also bigger. "See that sweet button Lawino has? Touch it, and scan this CODE." The client did as he'd been told, and near-instantly, Lawino's looks changed, so that she was transformed into the very app he'd asked the blacksmith to make for his data-collection needs! 

"Wow, how did that happen!", inquired the awed John. "You just imbued her with a new persona", replied the jubilant artisan. "She, like all histrions, can morph into any kind of app a person specifies." The client, still not believing his eyes, said, "You mean, you were defining this 'persona', on that tablet of yours? Where is the code? And where did the original 'Lawino' app go?" The smith was just smiling, seeing as the secrets of this great opera would never make sense to many of the uninitiated...

"Lawino has all the code she ever needs within her, just like any good actor already contains within them all the DNA required to enable them assume any persona. The actor can assume virtually any role, any persona, based on the function they must project while filling a particular character, in a particular play. And so, all we need to do, is tell them what role they will be acting as, and that's it!" 
"This app is like a human!"
"Not really", corrected the artisan, but he didn't explain why.
"So, shall we try to collect some data now?" asked the excited client.
"Sure."

So, the craftsman watched as his client entered some dummy data into the rendered input fields, and then clicked the shiny "Submit" button at the bottom. "So, that's it? Where does my data go then?" he asked, sure that something must have been overlooked by his clever magician. 

"Just a moment", the craftsman said, as he opened a new tab on his tablet's browser, and navigated to a server on the web, via the url "theatre.host.com". He tapped some credentials into a login form, and once inside what was being called "The Theatre", he navigated to a tabular view labelled "Lawino:John-Doe", and in it was a single row of data, containing the very dummy records that his client had submitted just a while before!

"Here", said the craftsman handing over the tablet to John, "there's all your submitted acts thus far."
"Wow! So all these Histrion-Lawino apps post data to this server?"
"Not really. Because I had to demonstrate to you how everything would work, I designed the persona in such a way, to tell the histrion to send all your acts to this particular theatre. We can definitely still use it for your actual data-collection work, or if you give me access to one of your servers, I can deploy a clean, new theatre for you as well. All your submissions will reside there, and you can filter, search and export them just as you can do right here."

The client was speechless. He played with the theatre dashboard for a while, and then immediately asked for a quote for the project. He pulled out his device, and wired 75% of the agreed implementation cost on-spot, also promising this happy craftsman, that if this worked as demonstrated, he would soon be bringing more data-collection work to him. "You are truly a magician!" he said, as they parted later on. 

"Magic is not just an art though, it's a science as well", said the engineer as he went on to enjoy the rest of his day. There was another client to meet later on, still, for yet-another data-collection app. But, that one already had bought into the idea of "the opera", and was already knowledgeable about how to design new personas for herself; all he needed to do was setup a new theatre for her, and write some custom data-analysis plugins for her unique business needs.

Like him, many other data engineers and clients were happily using Project Okot, and the world was to never see sadness, nor grinding of teeth, ever again.



Sounds like something from a bad book on theatrical arts? 

Well, with time, it will become more and more apparent, that software and the other automatons we are creating every now and then, are actually some sort of co-actors, in this immense, mysterious play we call life :-) The cosmic comedy has some new, increasingly intelligent actors sharing the same stage as us - and many of them are not humans (though many are increasingly looking and acting like us as well). The interesting or reassuring thing is, we are their creators - mostly. For lack of better metaphors, and a hatred for stale technical terms, this project introduces the world of performing arts into technology, in lieu of what we are accustomed to (and bored with). After all, this is meant to be a disruptive technology, right? So, disrupt both the tech, and how we think about it... sounds like fun huh? 

Let's go hack and play some more, while making the world a better stage...


 

PRESENTING THE PROOF-of-CONCEPT...

Ok. It's less than a month since we dived into this, and here's a walkthrough of what's been built so far. It works and looks very promising! We continue to hack and forge a better future. Enjoy...