A framework for efficient (local or distributed) event-driven programming, easing the creation of decoupled architectures while respecting real-time constraints.
Event-driven architectures aim to increase development productivity by allowing further decoupling of components. The core concepts of Event-Driven Programming are at the center of some serious and currently fashionable technologies:
This framework provides mechanisms for efficiently producing and consuming (and also notifying and observing) events. Events can be "consumed" or "observed". For instance, when a server receives a request it needs to provide a response -- this is an example of an event that was produced and consumed. On the other hand, when you plug a USB stick into your computer, there might be software to show the files: if there is, it will show them; if there is not, nothing will happen. This is an example of an event that got notified but for which there may or may not be observers listening -- the computer will keep working without flaws if no one is listening for such events.
Consumable Events may also be synchronous or asynchronous, and may require a functional or a procedural consumer. Asynchronous events are the ones for which you don't care when they are consumed; synchronous events are the opposite. Functional consumers will produce an answer; procedural consumers will not. The following table shows the possible combinations:
case #   synchronicity   consumer sub-routine   valid?
  1      sync            procedure              yes
  2      async           procedure              yes
  3      sync            function               yes
  4      async           function               no
These classifications are important because they have performance implications. Case 2 is the most performant. Cases 1 and 3 require mutex signaling to handle the answered value or the "finished" flag. Case 4 is invalid: it must be reassigned as case 2 or 3.
Now, back to Listenable Events: they have some similarities to Consumable Events, but:
Sometimes there are cases in which it is difficult, by the definitions given above, to determine if we must consider an event a consumable or a listenable event:
When one looks at the simplified execution flow of a Synchronous Consumable Event, one might see:
registerEventConsumer(MY_EVENT_TYPE, MY_METHOD) // consumer
(...)
myEvent = produceEvent(MY_EVENT_TYPE, params) // producer
reportConsumableEvent(myEvent)
(...)
answer = waitForAnswer(myEvent) // will wait for execution of MY_METHOD(params)
Which is remarkably similar to a standard function call:
answer = MY_METHOD(params)
So, before we dismiss Synchronous Consumable Events, let's look at some features of this framework:
We consider any of the reasons above to be enough to justify the use case.
Some examples further explaining the above cases:
3. Imagine a server processing requests that require some database queries. Obviously, it has an optimum number of threads to process such requests. Let's call this number "n". A solution is to configure your server to handle "n" sockets at a time and you'll achieve the maximum throughput -- and you don't need this framework for that, provided all your clients download the answers at the same speed. But what if you add some caching? Let's suppose your queries return the same results for the same service parameters. Now you will still have requests querying the database, but not all "n" simultaneous connections will be doing it. By using this framework you can fine-tune specific tasks like this and achieve maximum throughput even if some clients are slow downloading your reply.
4. Lower latency may be achieved when you are not at maximum throughput -- the machine must have some idleness available. It happens by parallelizing requests. Imagine your server needs to make 3 independent database queries, to different databases. The traditional way is to execute Query1, Query2 and Query3 sequentially -- taking t1 + t2 + t3 to execute. If made parallel, it will take only max(t1, t2, t3). The pseudocode below illustrates the logic:
q1Handle = reportConsumableEvent(QUERY1_EVENT, params1)
q2Handle = reportConsumableEvent(QUERY2_EVENT, params2)
q3Handle = reportConsumableEvent(QUERY3_EVENT, params3)
(...)
q1Answer1 = waitForAnswer(q1Handle)
q2Answer2 = waitForAnswer(q2Handle)
q3Answer3 = waitForAnswer(q3Handle)
(...)
reportAnswer(serviceAnswer)
(...)
// some other steps not needed by the client
It is a good moment to remember that the traditional blocking method call with parameters & return values is not good for low-latency solutions. Modern programs need to fulfill requirements not requested by their users (for instance, logging requests, computing & checking quotas, recording usage statistics, ...). Typically, when method calls that return values are used, you must wait for them to perform all their tasks before you have access to their meaningful answer. The line above, "reportAnswer(serviceAnswer)", immediately makes the response available to the "caller", even before the remaining steps run. The benefits scale when that approach cascades.
The following benchmark demonstrates the latency improvements: ...
registerEventConsumer(MY_EVENT_TYPE, MY_METHOD)
reportConsumableEvent(myEvent)
eventId = reportConsumableEvent(QUERY3_EVENT, params3)
eventId = produceEvent(MY_EVENT_TYPE, params)
answer = waitForAnswer(myEvent)
eventId = reportConsumableAnswerableEvent(MY_EVENT_TYPE, params)
registerSynchronizableFunctionConsumer()
registerSynchronizableProcedureConsumer()
registerNonSynchronizableProcedureConsumer() -- this way, every event must hold the information
'EventLink's are responsible for linking producers & consumers and notifiers & observers. The following properties are controlled:
struct SERVER_REQUEST_EVENTS {
    struct RequestStaticContentEvent {
        void (* eventHandler) (const socket& socket, const char* URI, const bool allowCache);
    } requestStaticContent;
    struct RequestDynamicContentEvent {
        void (* eventHandler) (const socket& socket, const char* URI, const bool allowCache, const char* headers);
    } requestDynamicContent;
};
struct STATIC_FILE_EVENTS {
    struct ReadStaticFileEvent {
        void (* eventHandler)  (const char* relativeFilePath);
        void (* answerHandler) (const char* staticFileContents);
    } readStaticFile;
};
struct CACHE_EVENTS {
    const char* (* getFromCache) (const char* URI);
    void        (* putIntoCache) (const char* URL, const char* contents);
};
struct DYNAMIC_EVENTS {
    const char* (* processRequest) (const socket& socket, const char* URI, const char* headers);
};
SERVER_REQUEST_EVENTS sre = { {&serveStaticContentFunc}, {&serveDynamicContentFunc} };
DirectEventLink<SERVER_REQUEST_EVENTS> serverDEL(sre);
serverDEL.setConsumer<SERVER_REQUEST_EVENTS::RequestStaticContentEvent>(&requestStaticContentHandler, false, false);
serverDEL.reportEvent<SERVER_REQUEST_EVENTS::RequestStaticContentEvent>();