aura 0.1

Functions
void aura_wait_status (struct aura_node *node, int status)
int aura_call_raw (struct aura_node *node, int id, struct aura_buffer **retbuf, ...)
int aura_call (struct aura_node *node, const char *name, struct aura_buffer **retbuf, ...)
void aura_enable_sync_events (struct aura_node *node, int count)
int aura_get_pending_events (struct aura_node *node)
int aura_get_next_event (struct aura_node *node, const struct aura_object **obj, struct aura_buffer **retbuf)
One of the core concepts in aura is the export table, or etable for short. A node 'exports' a table of 'events' and 'methods' that it provides. This is all handled by the transport in the main event loop. Once aura receives and compiles the table of available objects, the node changes its state to indicate that it is ready to accept calls and deliver incoming events. If the node goes offline for some reason, aura will read the export table again once it comes back online.
A 'method' represents a simple remote function. It accepts several arguments and returns several results, i.e. it looks just like a function call in Lua:
a, b, c = remote_method(arg1, arg2)
A method can be called either synchronously using aura_call() or aura_call_raw(), or asynchronously using aura_start_call() or aura_start_call_raw(). This section covers only the synchronous API.
Let's start with the simplest synchronous example for calling remote functions.
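Below is a minimal sketch of such a call. The "dummy" transport, the remote method name "echo_u32" and its single argument, as well as the aura_open()/aura_close() calls and the AURA_STATUS_ONLINE constant, are assumptions made up for this sketch; substitute whatever your node and transport actually provide.

    #include <aura/aura.h>   /* assumed header name */

    int main(void)
    {
            /* Opening a node is transport-specific; the signature is assumed here. */
            struct aura_node *node = aura_open("dummy", NULL);
            if (!node)
                    return 1;

            /* Wait until the export table has been read and the node is ready. */
            aura_wait_status(node, AURA_STATUS_ONLINE);

            struct aura_buffer *retbuf;
            int ret = aura_call(node, "echo_u32", &retbuf, 0xdeadbeef);
            if (ret == 0) {
                    /* ... unpack the returned values from retbuf here ... */
                    aura_buffer_release(retbuf);   /* release the result buffer */
            }

            aura_close(node);
            return 0;
    }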
That's it? Thought it would be harder?
Events, on the contrary, represent something that happened on the remote side: a timer expired, a user pressed a button, and so on. Events can deliver an arbitrary payload, just like the return values of a function. Normally events make the most sense with the asynchronous API.
However, aura also provides a way to handle events synchronously. By default the core will discard any incoming event unless it has an associated callback. If you want to process events synchronously, you first have to call aura_enable_sync_events() and specify the queue size. Up to count incoming events will be queued this way.
Once event queuing is enabled you can read the next event with aura_get_next_event() or aura_get_next_event_timeout(). These functions may block until the next event arrives.
You can also call aura_get_pending_events() to find out the number of queued events. If it is 0, aura_get_next_event() will block.
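Here is a short sketch of draining the synchronous event queue without blocking, assuming the node is already open and online; the queue depth of 8 and the obj->name field used for printing are assumptions of this sketch.

    #include <stdio.h>
    #include <aura/aura.h>   /* assumed header name */

    static void drain_events(struct aura_node *node)
    {
            const struct aura_object *obj;
            struct aura_buffer *retbuf;

            /* Queue up to 8 incoming events for synchronous readout. */
            aura_enable_sync_events(node, 8);

            /* aura_get_pending_events() reports how many events are queued;
             * if it is zero, aura_get_next_event() would block. */
            while (aura_get_pending_events(node) > 0) {
                    if (aura_get_next_event(node, &obj, &retbuf) != 0)
                            break;

                    printf("got event: %s\n", obj->name); /* do not modify or free obj */
                    /* ... unpack the event payload from retbuf here ... */
                    aura_buffer_release(retbuf);          /* release the payload buffer */
            }
    }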
Objects (methods and events alike) are stored internally in a struct aura_object, enumerated from 0 to n. You can do calls and set callbacks either using object names (they are looked up via a hash table, which is pretty fast) or via their ids. Beware, though: if the node represents a hardware device that goes offline for a firmware upgrade, it may come back to life with a different export table, and the id of the same method may have changed. Calling by name avoids this problem. You get the idea, so don't shoot yourself in the foot!
For more advanced usage, have a look at the async API, which is way more powerful.
int aura_call (struct aura_node *node, const char *name, struct aura_buffer **retbuf, ...)
Synchronously call a remote method of node identified by name. If the call succeeds, retbuf will point to an aura_buffer containing the returned values. It is your responsibility to call aura_buffer_release() on retbuf once you are done with the resulting values.
Parameters
    node    the node on which to perform the call
    name    the name of the remote method to call
    retbuf  pointer to the aura_buffer pointer that will receive the returned values
Definition at line 666 of file aura.c.
References aura_core_call().
int aura_call_raw (struct aura_node *node, int id, struct aura_buffer **retbuf, ...)
Synchronously call an object identified by id. If the call succeeds, retbuf will point to an aura_buffer containing the returned values. It is your responsibility to call aura_buffer_release() on retbuf once you are done with the resulting values.
Parameters
    node    the node on which to perform the call
    id      the id of the object to call
    retbuf  pointer to the aura_buffer pointer that will receive the returned values
Definition at line 626 of file aura.c.
References aura_core_call().
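A short sketch of a call by id, assuming node is already open and online. The id 5 and the two integer arguments are made up for illustration; remember that ids may change when the export table changes, so prefer calling by name where you can.

    struct aura_buffer *retbuf;
    int ret;

    /* Call object number 5 directly by id. */
    ret = aura_call_raw(node, 5, &retbuf, 100, 200);
    if (ret == 0) {
            /* ... use the returned values ... */
            aura_buffer_release(retbuf);
    }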
void aura_enable_sync_events (struct aura_node *node, int count)
Enable synchronous event processing.
Call this function to make the node queue up to count events in an internal buffer for synchronous readout. By default the node does not queue any events and drops them immediately if no callback is installed to catch them. If the number of events in this queue reaches count, the oldest events are dropped first.
To disable synchronous event processing completely, call this function with count=0.
Adding a callback for an event is possible but not recommended if you use this API: such events will no longer be queued here, and your callback will be fired instead.
If there are already more than count events queued, all extra events will be immediately discarded.
Parameters
    node    the node for which to enable synchronous event processing
    count   maximum number of events to store for synchronous readout
Definition at line 711 of file aura.c.
References aura_buffer_release(), and aura_get_next_event().
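A short usage sketch, assuming node is already open; the queue depth of 16 is arbitrary.

    /* Keep up to 16 events around for synchronous readout. */
    aura_enable_sync_events(node, 16);

    /* ... and later: disable synchronous event processing entirely.
     * Anything still queued is discarded. */
    aura_enable_sync_events(node, 0);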
int aura_get_next_event (struct aura_node *node, const struct aura_object **obj, struct aura_buffer **retbuf)
Retrieve the next event from the synchronous event queue. If there are no events in the queue, this function may block until the next event arrives.
If the node goes offline while waiting for an event, this function will return an error.
The caller must not modify or free the obj pointer. The returned obj pointer may not be valid after the next synchronous call (e.g. if the node went offline and back online), so do not rely on it in your application.
The caller should free the retbuf pointer with aura_buffer_release() when it is no longer needed.
Parameters
    node    the node to read the event from
    obj     receives a pointer to the aura_object describing the event
    retbuf  receives a pointer to the aura_buffer containing the event payload
Definition at line 753 of file aura.c.
References aura_dequeue_buffer(), and aura_handle_events().
Referenced by aura_enable_sync_events().
int aura_get_pending_events (struct aura_node *node)
Get the number of events currently queued for synchronous readout.
Parameters
    node    the node to query
void aura_wait_status (struct aura_node *node, int status)
Block until the node's status becomes one of the requested statuses.
Parameters
    node    the node whose status to wait on
    status  the status to wait for
Definition at line 607 of file aura.c.
References aura_handle_events().
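A typical use is to block until the node has read the export table before issuing any calls. This is a sketch only: AURA_STATUS_ONLINE is an assumed constant name, not documented on this page.

    /* Block until the node is online and its export table is available. */
    aura_wait_status(node, AURA_STATUS_ONLINE);

    /* From this point on aura_call() and friends can be used. */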