I have just released a new version of Aquila featuring a new Vision module that runs on parallel GPU devices. The Vision module was developed by integrating OpenGL, OpenCV and CUDA. YARP image streams from the iCub robot or its simulator, as well as video streams from up to two cameras connected to the host machine, are processed on either the CPU or the GPU and mapped into 2D textures natively in OpenGL. The native use of OpenGL, together with offloading image processing to the GPU, boosts performance significantly.
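To give a flavour of the kind of image processing filter the module applies to each incoming frame, here is a minimal CPU-side sketch of a Sobel edge filter in Python with NumPy. This is purely illustrative: the actual module implements its filters with OpenCV and CUDA kernels, and the function below is not part of Aquila's codebase.

```python
import numpy as np

def sobel_edges(img):
    """Apply a Sobel edge filter to a 2D grayscale image.

    Illustrative CPU version of the kind of per-frame filter the
    Vision module applies; Aquila itself uses OpenCV/CUDA for this.
    """
    kx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)  # horizontal gradient kernel
    ky = kx.T                                 # vertical gradient kernel
    h, w = img.shape
    out = np.zeros((h, w), dtype=float)
    # Convolve the interior; borders are left at zero for simplicity.
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = img[y - 1:y + 2, x - 1:x + 2]
            gx = np.sum(kx * patch)
            gy = np.sum(ky * patch)
            out[y, x] = np.hypot(gx, gy)      # gradient magnitude
    return out

# A vertical step edge: the filter responds along the boundary columns.
img = np.zeros((5, 5))
img[:, 3:] = 1.0
edges = sobel_edges(img)
```

On the GPU side, the same per-pixel computation maps naturally onto a CUDA kernel with one thread per output pixel, which is what makes offloading this class of filter so effective.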
The two images above show the two submodules of the Vision module. The top one shows the iCub vision interface connected to the iCub simulator and applying different image processing filters; this interface can connect to both the iCub robot and its simulator. The bottom image shows the additional Camera interface, which supports up to two cameras connected to the host machine. Both submodules provide similar functionality, such as saving individual images from the stream, recording videos, minimising and maximising individual viewports, applying different filters and changing their settings, and off-screen rendering of either the original or the modified stream.
Please see the latest Aquila manual for detailed information about every module as well as installation instructions. New developers who would like to participate in the Aquila project can join via our SourceForge page.