SenseMaker: A Multi-sensory, Task-specific, Adaptable perception System

Deliverables

Based on biological principles, Spiking Neural Network models were proposed to solve problems in artificial intelligence systems. A reliable learning algorithm was obtained that solves function approximation, classification and time-series problems. Based on Spike Time Dependent Plasticity (STDP) learning rules, a Spiking Neural Network model was proposed to learn arbitrary n-dimensional coordinate transformations from multi-sensory observation of environmental interactions, for example the 2D transformation from an angular representation of arm position to a Cartesian representation. The network is robust and provides noise immunity: even if some of the neurons fail, the network can still perform the transformation. The model provides a biologically plausible approach for designing artificial intelligence systems. Based on a spiking neuron model and different receptive field models, hierarchical networks of spiking neurons were proposed to process visual stimuli in which multiple objects are represented by groupings of elementary bar elements with different orientation distributions. Simulations show that, when implemented in biologically realistic networks of spiking neurons, these hierarchical networks are able to segment the objects and bind the pixels into object shapes through lateral neighbour connections and temporal correlation.
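As an illustration of the STDP learning rules mentioned above, the following Python sketch implements a standard pairwise STDP weight update. The amplitude and time-constant values are assumptions chosen for illustration and are not taken from the SenseMaker model.

```python
# Minimal pairwise STDP sketch (illustrative only; parameter values are
# assumptions, not taken from the SenseMaker deliverable).
import numpy as np

A_PLUS, A_MINUS = 0.01, 0.012     # assumed potentiation/depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # assumed time constants (ms)

def stdp_dw(t_pre, t_post):
    """Weight change for one pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:   # pre before post -> potentiation
        return A_PLUS * np.exp(-dt / TAU_PLUS)
    else:        # post before pre -> depression
        return -A_MINUS * np.exp(dt / TAU_MINUS)

# Example: apply the rule to every pre/post spike pair of one synapse.
pre_spikes = [10.0, 42.0, 77.0]
post_spikes = [12.0, 40.0, 80.0]
w = 0.5
for t_pre in pre_spikes:
    for t_post in post_spikes:
        w = float(np.clip(w + stdp_dw(t_pre, t_post), 0.0, 1.0))
print(f"updated weight: {w:.3f}")
```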
VLSI full-custom ASICs have been designed in BiCMOS technology to emulate conductance-based neuron models developed at UNIC and constrained by biological experiments. The devices compute neural and synaptic ionic currents in analog mode and in biological real time, using the Hodgkin-Huxley description. An integrated interface allows dynamic digital control of the synaptic weights. Two generations of ASICs have been fabricated, and a library of analog and mixed-signal functions has been developed to allow technological migration.
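The Hodgkin-Huxley description that the ASICs compute in analog hardware can be sketched in software as follows. This uses the classic textbook parameter set; the actual biologically constrained parameters developed at UNIC are not reproduced here.

```python
# Hodgkin-Huxley membrane update, forward-Euler sketch (standard textbook
# parameters; the UNIC-constrained model is not reproduced here).
import numpy as np

# Maximal conductances (mS/cm^2) and reversal potentials (mV), classic values
G_NA, G_K, G_L = 120.0, 36.0, 0.3
E_NA, E_K, E_L = 50.0, -77.0, -54.4
C_M = 1.0  # membrane capacitance (uF/cm^2)

def alpha_m(v): return 0.1 * (v + 40.0) / (1.0 - np.exp(-(v + 40.0) / 10.0))
def beta_m(v):  return 4.0 * np.exp(-(v + 65.0) / 18.0)
def alpha_h(v): return 0.07 * np.exp(-(v + 65.0) / 20.0)
def beta_h(v):  return 1.0 / (1.0 + np.exp(-(v + 35.0) / 10.0))
def alpha_n(v): return 0.01 * (v + 55.0) / (1.0 - np.exp(-(v + 55.0) / 10.0))
def beta_n(v):  return 0.125 * np.exp(-(v + 65.0) / 80.0)

def hh_step(v, m, h, n, i_ext, dt=0.01):
    """Advance membrane voltage and gating variables by dt (ms)."""
    i_na = G_NA * m**3 * h * (v - E_NA)   # sodium current
    i_k  = G_K * n**4 * (v - E_K)         # potassium current
    i_l  = G_L * (v - E_L)                # leak current
    v += dt * (i_ext - i_na - i_k - i_l) / C_M
    m += dt * (alpha_m(v) * (1 - m) - beta_m(v) * m)
    h += dt * (alpha_h(v) * (1 - h) - beta_h(v) * h)
    n += dt * (alpha_n(v) * (1 - n) - beta_n(v) * n)
    return v, m, h, n

# Example: one neuron driven by a constant current for 5 ms.
v, m, h, n = -65.0, 0.05, 0.6, 0.32
for _ in range(500):
    v, m, h, n = hh_step(v, m, h, n, i_ext=10.0)
print(f"membrane potential after 5 ms: {v:.1f} mV")
```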
Inside each neural network ASIC, four analog Perceptron-based networks process the information. Their communication with each other, as well as with other parts of the system, is entirely digital. Each network block can implement a fully connected recursive network with 128 input and 64 output neurons. The weight values are stored on capacitances inside the synapses; a specialized weight storage circuit can load up to 400 million weight values per second. The neuron operation is based on the summation of currents generated in the synapses: if the excitatory current exceeds the inhibitory current, the neuron fires. This differential neuron input leads to high noise immunity and fast operation. A single network block, containing more than 8000 synapses, uses only 1.5 mm² of silicon area. The high-speed interface is realized with bidirectional low-voltage differential signalling (LVDS), which allows high throughput without generating digital switching noise. Sixteen integrated digital-to-analog converters translate the numerical weight values into the corresponding strengths of the synaptic connections.
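A behavioural software sketch of one network block may help: numerical weights are quantised as a digital-to-analog converter would (the resolution is an assumption), synaptic currents are summed per output neuron, and each neuron fires when its excitatory current exceeds its inhibitory current. The block dimensions follow the text; everything else is illustrative.

```python
# Behavioural sketch of one Perceptron-based network block: weights are
# quantised as a DAC would (resolution is an assumption), synaptic currents
# are summed, and a neuron fires when excitation exceeds inhibition.
import numpy as np

N_IN, N_OUT = 128, 64          # block dimensions from the deliverable
DAC_BITS = 10                  # assumed weight resolution

def quantise(w):
    """Mimic the digital-to-analog conversion of numerical weights."""
    half_levels = 2 ** (DAC_BITS - 1)
    return np.round(np.clip(w, -1.0, 1.0) * half_levels) / half_levels

def block_step(x, w):
    """x: binary input vector (N_IN,), w: weight matrix (N_OUT, N_IN)."""
    w_q = quantise(w)
    i_exc = np.clip(w_q, 0, None) @ x      # excitatory current per neuron
    i_inh = -np.clip(w_q, None, 0) @ x     # inhibitory current per neuron
    return (i_exc > i_inh).astype(int)     # differential comparison -> fire

rng = np.random.default_rng(0)
x = rng.integers(0, 2, N_IN)
w = rng.normal(0, 0.1, (N_OUT, N_IN))
print(block_step(x, w)[:10])
```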
The project required the development of an apparatus capable of presenting tactile and visual stimuli interactively and at the same time. We adapted an apparatus (the Virtual Tactile Display) developed by the UHEI partners (Electronic Vision Group at the Kirchhoff-Institut für Physik, University of Heidelberg) and created the Virtual Haptic Device (VHD). The important characteristic of the VHD is that it requires active exploration, unlike previous visual-tactile devices, which use only passive touch. Moreover, visual stimuli can be displayed at the same time, either as a whole image or as a part-based image through an aperture of adjustable size.
Based on a reconfigurable hardware platform, a method for implementing large-scale Spiking Neural Networks was created. The approach is based on the integrate-and-fire (I&F) conductance model and includes a form of Spike Time Dependent Plasticity (STDP) for on-chip learning. Analysis of the logic requirements demonstrated that large-scale implementations are not viable with a fully parallel implementation strategy; an alternative approach was therefore adopted in which speed is traded for area and the neuron model implemented on the FPGA is time-multiplexed to generate large network topologies. To compensate for this loss in speed, optimised simulation strategies such as activity-based and event-based simulation were employed. The final system is capable of simulating networks with up to 53,892 neurons and 53,892,216 STDP synapses. The system has been verified using SNNs performing 1D and 2D coordinate transformations, and implementation results demonstrate a significant performance increase over a PC-based simulation. Because a reconfigurable hardware platform was used, the system is also flexible in terms of the neuron model, and various network topologies and connection strategies are relatively easy to implement.
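The time-multiplexing idea can be sketched in software: a single conductance-based integrate-and-fire update loop serves many virtual neurons by stepping through their state sequentially, much as the shared FPGA pipeline does. All parameter values below are assumptions chosen for illustration.

```python
# Software sketch of time-multiplexing: one conductance-based I&F update
# pipeline serves many virtual neurons by streaming their state through it.
# Parameter values are assumptions.
import numpy as np

N_VIRTUAL = 1024            # virtual neurons sharing one update pipeline
DT = 1.0                    # time step (ms)
V_REST, V_THRESH, V_RESET = -65.0, -50.0, -65.0
E_EXC, E_INH = 0.0, -80.0   # synaptic reversal potentials (mV)
TAU_M, TAU_SYN = 20.0, 5.0  # membrane and synaptic time constants (ms)

v = np.full(N_VIRTUAL, V_REST)
g_exc = np.zeros(N_VIRTUAL)
g_inh = np.zeros(N_VIRTUAL)

def timestep(input_exc, input_inh):
    """One global step: iterate (time-multiplex) over all virtual neurons."""
    spikes = []
    for i in range(N_VIRTUAL):   # sequential, like the shared FPGA pipeline
        g_exc[i] += input_exc[i]
        g_inh[i] += input_inh[i]
        i_syn = g_exc[i] * (E_EXC - v[i]) + g_inh[i] * (E_INH - v[i])
        v[i] += DT * ((V_REST - v[i]) / TAU_M + i_syn)
        g_exc[i] *= np.exp(-DT / TAU_SYN)   # exponential conductance decay
        g_inh[i] *= np.exp(-DT / TAU_SYN)
        if v[i] >= V_THRESH:
            v[i] = V_RESET
            spikes.append(i)
    return spikes
```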
A hardware-software simulation system has been developed to address the simulation of biomimetic neural networks of conductance-based neurons (see details in D21). The system combines analog and digital hardware and hosts the full-custom ASICs. The analog hardware is in charge of continuous, real-time computation of neural and synaptic conductances. The digital hardware controls the network synaptic weights and connectivity, and processes spike information from and to the ASICs. The system can also control the network connectivity in real time using software-programmed adaptation functions, such as STDP. One system, including the associated ASICs, has been used in collaboration with partner UNIC.
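A possible host-side control flow would alternate real-time analog runs on the ASIC with software weight adaptation. The actual SenseMaker software interface is not described here, so every class and method name in this sketch is invented purely for illustration.

```python
# Hypothetical host-side control loop for a mixed analog/digital simulation:
# the analog ASIC computes conductances in real time, while software reads
# spike events back and rewrites synaptic weights. All names are invented
# for illustration and do not reflect the actual SenseMaker software.
class AsicInterface:
    """Placeholder for the digital interface to the conductance-based ASIC."""
    def write_weights(self, weights): ...
    def run(self, duration_ms): ...
    def read_spike_events(self):
        return []   # list of (neuron_id, time_ms) pairs

def simulate(asic, weights, adapt, epochs=10, duration_ms=100.0):
    """Alternate real-time analog runs with software weight adaptation."""
    for _ in range(epochs):
        asic.write_weights(weights)          # digital weight control
        asic.run(duration_ms)                # analog, biological real time
        spikes = asic.read_spike_events()    # spike times back to software
        weights = adapt(weights, spikes)     # e.g. an STDP rule in software
    return weights
```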
The spiking neural network ASIC mimics neural behaviour to a large extent. First of all, its neurons are based on a membrane model: if the membrane potential reaches the threshold voltage, a spike generation process is triggered. Contrary to the simple integrate-and-fire model, this process depends not only on the membrane voltage but on its derivative as well. The synapses are conductance based, with realistic levels for their reversal potentials. The shortening of the membrane time constant when the total synaptic conductance reaches the high-conductance region can therefore be studied with the chip. The exponential decay of the synaptic conductance is also part of the design; this is important for the transformation of information from the spatial into the temporal domain. Another important aspect is the statistical distribution of neural parameters. No two neurons are equal in nature, and the same should hold in a VLSI model. A close look at an analog circuit shows that this is also true for microelectronics: fluctuations in the manufacturing process lead to parameter variations of each transistor, making it an individual as well. But we want to control these fluctuations in order to generate neural microcircuits with a known statistical distribution of their parameters; therefore, each electronic neuron contains several individually tunable parameters. Plasticity is the key to understanding how the brain can adapt to its environment. One important aspect of plasticity discovered in recent years is spike time dependent plasticity. In the spiking neural network chip, each synapse measures the correlation between pre- and postsynaptic signals, and these measurements are used to calculate changes in the synaptic weights.
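Two of the points above can be made concrete with a short sketch: the effective membrane time constant shrinks as the total synaptic conductance grows (the high-conductance state), and each electronic neuron can draw its parameters from a controlled statistical distribution. The numerical values below are assumptions, not chip parameters.

```python
# Sketch of the high-conductance effect and of controlled per-neuron
# parameter spread. Numerical values are assumptions.
import numpy as np

C_M = 200.0          # membrane capacitance (pF), assumed
G_LEAK = 10.0        # leak conductance (nS), assumed

def effective_tau(g_syn_total):
    """Membrane time constant with total synaptic conductance included (ms)."""
    return C_M / (G_LEAK + g_syn_total)

print(effective_tau(0.0))     # resting state: 20 ms
print(effective_tau(90.0))    # high-conductance state: 2 ms

# Per-neuron parameter spread, as tunable in the electronic neurons.
rng = np.random.default_rng(1)
thresholds = rng.normal(loc=-50.0, scale=2.0, size=192)   # mV, assumed spread
```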
Using behavioural measures and fMRI imaging, we investigated whether tactile information processing in the brain is based on dual pathways, as in the visual system, with one pathway dedicated to spatial information processing and the other to recognition. Using novel, unfamiliar stimuli, we found that tactile information processing involves a shared network of cortical areas in the brain but that, in general, spatial information is processed by the occipito-parietal pathway and information for recognition by the occipito-temporal pathway. Behavioural studies support these findings and show that performance in either of these tasks does not interfere with performance in the other, suggesting a task-dependent separation of resources. In further studies we found that vision can affect spatial and object-recognition processing in touch. When visual information is reduced, however, behavioural perceptual performance is enhanced by combining vision and touch.
The internal architecture of the neural network ASICs in SenseMaker provides connectivity between network blocks and thus enables the composition of recurrent, multi-layered neural networks. Due to the underlying network model, several network blocks may therefore function as one single neural network. Via the digital interface of the chips it is furthermore possible not only to do this cross-linking within one single chip but also to scale it across chip boundaries. This technique places high demands on the communication channels between the ASICs: besides the high bandwidth required to interface the network blocks, the Perception model also requires isochronous network communication. The distributed backplane system fulfils these requirements. The basic operating unit of the system is the evolution module NATHAN, which essentially consists of a Xilinx Virtex-II Pro FPGA directly connected to the neural network ASIC. Using cutting-edge FPGA technology, we can exhaust HAGEN's digital bandwidth, obtain multi-gigabit connectivity, and have local CPUs and memory to execute the training software HANNEE. The distributed resources of up to 16 NATHAN modules are hosted by a backplane providing the necessary support infrastructure.
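A toy allocation sketch illustrates how a network larger than one 128x64 block could be spread over several blocks, chips and NATHAN modules. The counts (four blocks per chip, up to 16 modules) follow the text above, but the mapping policy itself is an invented illustration, not the project's actual scheme.

```python
# Toy mapping sketch: distribute the blocks of a larger network over the
# network blocks available on the ASICs of several NATHAN modules.
from dataclasses import dataclass

BLOCK_IN, BLOCK_OUT = 128, 64   # block dimensions from the text
BLOCKS_PER_CHIP = 4
MAX_MODULES = 16

@dataclass
class BlockSlot:
    module: int      # NATHAN module on the backplane
    block: int       # network block inside that module's ASIC

def allocate_blocks(n_blocks_needed):
    """Assign logical network blocks to (module, block) slots in order."""
    if n_blocks_needed > MAX_MODULES * BLOCKS_PER_CHIP:
        raise ValueError("network does not fit on one backplane")
    return [BlockSlot(i // BLOCKS_PER_CHIP, i % BLOCKS_PER_CHIP)
            for i in range(n_blocks_needed)]

# Example: a layered network needing 10 blocks spans three chips/modules.
for slot in allocate_blocks(10):
    print(slot)
```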
