I am doing PhD research on the composition of actions and language for the iTalk project, which aims to develop artificial embodied agents able to acquire complex behavioural, cognitive, and linguistic skills through individual and social learning.
I have started developing Aquila, a software toolkit for cognitive robotics research accelerated by the supercomputing power of NVIDIA CUDA-capable cards. The project attracted the attention of the NVIDIA team, who asked me to write this article, which is featured on their website.
Aquila implements a special type of neural network based on Jun Tani's Multiple Time-scales Recurrent Neural Network (MTRNN) model. It uses different types of neurons, operating at different time scales, which allow a functional hierarchical structure to emerge in the network, one able to represent motor primitives and combine them into novel sequences of actions.
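The core idea behind the multiple time scales can be sketched in a few lines. The code below is a minimal, illustrative NumPy version of MTRNN-style leaky-integrator dynamics, not Aquila's actual implementation: the layer sizes, weights, and time constants are made up for demonstration. Each neuron updates its internal state at a rate set by its own time constant, so "fast" units track rapid input changes while "slow" units retain slowly varying context.

```python
import numpy as np

def mtrnn_step(u, x, W, tau):
    """One leaky-integrator update. Neurons with a large time
    constant tau change their internal state u more slowly than
    neurons with a small tau."""
    y = np.tanh(u)  # firing rates from internal states
    return (1.0 - 1.0 / tau) * u + (1.0 / tau) * (W @ y + x)

rng = np.random.default_rng(0)
n_fast, n_slow = 8, 4                 # illustrative layer sizes
n = n_fast + n_slow
tau = np.concatenate([np.full(n_fast, 2.0),    # fast context units
                      np.full(n_slow, 50.0)])  # slow context units
W = rng.normal(0.0, 0.5, (n, n))      # recurrent weights (untrained)
u = np.zeros(n)                       # internal states

x = np.zeros(n)
x[:n_fast] = rng.normal(0.0, 1.0, n_fast)  # drive only the fast units

for _ in range(10):
    u = mtrnn_step(u, x, W, tau)

# After a few steps the fast units have moved substantially,
# while the slow units have barely integrated any input.
```

In a trained network, this separation of time scales is what lets the slow layer encode which motor primitive is active while the fast layer generates its detailed trajectory.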
The image on the left shows the iCub humanoid robot controlled by Aquila's multiple time scales recurrent neural network system. The video below demonstrates how Aquila trains and runs this system to control the iCub in real time.
The results shown in this video are from initial tests only and are meant to demonstrate the potential of the neural system, as well as the massive speed-ups achieved by running the network on parallel CUDA devices.
I am currently working on extending the neural system with additional modalities covering vision, extra proprioception, and language, so keep an eye on my site for new results.