
Dataset Information


A Spiking Neural Network Model of Depth from Defocus for Event-based Neuromorphic Vision.


ABSTRACT: Depth from defocus is an important mechanism that enables vision systems to perceive depth. While machine vision has developed several algorithms to estimate depth from the amount of defocus present at the focal plane, existing techniques are slow, energy-demanding and rely mainly on numerous acquisitions and massive amounts of filtering operations on the pixels' absolute luminance values. Recent advances in neuromorphic engineering offer an alternative, with event-based silicon retinas and neural processing devices inspired by the organizing principles of the brain. In this paper, we present a low-power, compact and computationally inexpensive setup that estimates depth in a 3D scene in real time at high rates and can be implemented directly with massively parallel, compact, low-latency and low-power neuromorphic devices. Exploiting the high temporal resolution of the event-based silicon retina, we extract depth at 100 Hz for a power budget lower than 200 mW (10 mW for the camera, 90 mW for the liquid lens and ~100 mW for the computation). We validate the model with experimental results, highlighting features that are consistent with both computational neuroscience and recent findings in retinal physiology. We demonstrate its efficiency with a prototype neuromorphic hardware system and provide testable predictions on the role of spike-based representations and temporal dynamics in the biological depth-from-defocus experiments reported in the literature.
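As a rough illustration of the principle described in the abstract (not the authors' spiking implementation): during one sweep of the liquid lens, a pixel emits the most events around the instant its scene point passes through focus, and that instant maps to depth through a lens calibration. The Python sketch below makes this concrete; the 100 Hz sweep period follows the abstract, while the bin count, sensor resolution, function names and calibration table are illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch, not the paper's spiking network: estimate per-pixel
# depth from the phase of peak event activity within one focal sweep.

SWEEP_PERIOD_S = 0.01       # 100 Hz liquid-lens focal sweep (from the abstract)
N_BINS = 64                 # temporal bins per sweep (assumed value)
SENSOR_SHAPE = (180, 240)   # assumed event-camera resolution (rows, cols)

def peak_focus_time(events, period=SWEEP_PERIOD_S, n_bins=N_BINS,
                    shape=SENSOR_SHAPE):
    """Return, per pixel, the phase in [0, period) of maximal event activity.

    `events` is an iterable of (t, x, y) tuples with t in seconds; event
    polarity is ignored in this simplified sketch.
    """
    hist = np.zeros((n_bins,) + shape)
    for t, x, y in events:
        b = int((t % period) / period * n_bins) % n_bins
        hist[b, int(y), int(x)] += 1          # accumulate events per time bin
    peak_bin = hist.argmax(axis=0)            # bin of maximal activity per pixel
    return (peak_bin + 0.5) / n_bins * period # bin centre as in-focus phase

def depth_from_phase(phase, calib):
    """Map in-focus phase to metric depth via a calibration lookup.

    `calib` is a hypothetical monotonic table of (phase, depth) pairs obtained
    by sweeping the lens over targets at known distances.
    """
    return np.interp(phase, calib[:, 0], calib[:, 1])
```

In this reading, the event camera's microsecond timestamps are what make a 100 Hz depth readout possible: the in-focus instant is located directly in time rather than by filtering many full luminance frames.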

SUBMITTER: Haessig G 

PROVIDER: S-EPMC6403400 | biostudies-literature | 2019 Mar

REPOSITORIES: biostudies-literature


Publications

A Spiking Neural Network Model of Depth from Defocus for Event-based Neuromorphic Vision.

Haessig Germain, Berthelon Xavier, Ieng Sio-Hoi, Benosman Ryad

Scientific Reports, 2019-03-06



Similar Datasets

| S-EPMC5227683 | biostudies-literature
| S-EPMC8190312 | biostudies-literature
| S-EPMC8180888 | biostudies-literature
| S-EPMC6015816 | biostudies-literature
| S-EPMC9807619 | biostudies-literature
| S-EPMC5487436 | biostudies-literature
| S-EPMC4754434 | biostudies-literature
| S-EPMC10210135 | biostudies-literature
| S-EPMC7339957 | biostudies-literature
| S-EPMC9205405 | biostudies-literature