The Benefits of the GREIO "queue" Option

To understand what the -ogreio,queue option does, we first have to look at the default event-processing behavior.

Conceptually, there are two “queues”: one is the GREIO channel, and the other is an internal event queue inside the engine.

Incoming events from a backend are written to the GREIO channel; the GREIO plugin then reads these events into the internal queue.

By default, when an event is received, the GREIO thread blocks until the main thread has finished processing that event. Only then will the plugin read further events from the GREIO channel. This means there is never more than one GREIO event in the internal queue at a time.
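The default hand-off can be sketched as follows. This is a minimal Python simulation, not Storyboard code; the thread names, the list standing in for the GREIO channel, and the "redraw" bookkeeping are all illustrative. The key point is that the GREIO thread will not read the next event until the main thread signals that the previous one is fully processed.

```python
import queue
import threading

channel = ["event-%d" % i for i in range(5)]  # stands in for the GREIO channel
internal = queue.Queue(maxsize=1)             # at most 1 pending GREIO event
redraws = []                                  # one entry per "process + redraw"

def greio_thread():
    for ev in channel:
        internal.put(ev)   # hand the event to the main thread
        internal.join()    # BLOCK until the main thread calls task_done()

def main_thread():
    for _ in channel:
        ev = internal.get()
        redraws.append(ev)   # "process the event and redraw the screen"
        internal.task_done() # unblock the GREIO thread for the next event

t = threading.Thread(target=greio_thread)
t.start()
main_thread()
t.join()
print(len(redraws))  # 5: one redraw per event
```

Because of the `join()` after every `put()`, the producer and consumer run strictly in lockstep, which is the blocking behavior described above.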

This also means that the screen is redrawn once per GREIO event that changes data.

The -ogreio,queue option, however, does not block the GREIO thread. With this option the plugin inserts events into the internal queue without waiting for the previous event to finish being processed.

This means that with queue, multiple events can accumulate in the internal queue while an earlier event is still being processed, so the screen may be redrawn once for a batch of events rather than once per data-changing event.

Since multiple events can sit in the queue at once, this effectively enables a form of batch processing.
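The batching effect can be sketched by removing the blocking hand-off. Again, this is a hedged Python simulation (the channel list, `DONE` sentinel, and redraw counter are illustrative, not Storyboard APIs): the GREIO thread queues events without waiting, and the main loop drains everything that has accumulated before redrawing once.

```python
import queue
import threading

channel = ["event-%d" % i for i in range(5)]
internal = queue.Queue()   # unbounded: puts never wait on processing
DONE = object()            # sentinel marking the end of the simulation
redraws = 0
processed = []

def greio_thread():
    for ev in channel:
        internal.put(ev)   # no wait for the main thread
    internal.put(DONE)

def main_loop():
    global redraws
    while True:
        ev = internal.get()        # block only for the first event of a batch
        if ev is DONE:
            return
        batch = [ev]
        while True:                # drain whatever else has queued up
            try:
                nxt = internal.get_nowait()
            except queue.Empty:
                break
            if nxt is DONE:
                processed.extend(batch)
                redraws += 1
                return
            batch.append(nxt)
        processed.extend(batch)
        redraws += 1               # ONE redraw for the whole batch

t = threading.Thread(target=greio_thread)
t.start()
t.join()      # let the full burst of events arrive before draining
main_loop()
print(len(processed), redraws)  # 5 events processed, 1 redraw
```

Here all five events land in the queue before the main loop runs, so they are processed as a single batch with a single redraw, which is the "keeping up" behavior shown in the ClusterIO demonstration below.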

This can easily be demonstrated by taking the ClusterIO sample and using the IOConnector's sliders to simulate a large influx of events.

This is the default behavior, without queue:

You can see that there comes a point where events arrive faster than the app can keep up with, and the app is still processing events for some time after the slider has stopped moving.

With queue enabled:

You can clearly see that with the queue option enabled, because the GREIO thread is not blocked, the application is able to keep up.



