
DISTRIBUTED SPATIAL REASONING FOR MULTISENSORY IMAGE INTERPRETATION

FORESTI, Gian Luca
1993-01-01

Abstract

A hierarchical distributed system is presented which interprets 3D scenes through the fusion of multisensory images. The recognition problem is partitioned into a set of less complex subproblems by associating with each representation level expert processing units that filter out unreliable solutions and focus attention on promising ones. In this way, the search space of possible solutions is limited in a distributed manner, as a priori knowledge about observations and constraints is exploited at multiple levels. Different instances of the same inference mechanism are applied at each level; as a consequence, each processing unit is able to search autonomously for a local solution and thereby contribute to a globally consistent solution. An important characteristic of the system is that it is easy to maintain and extend. The results reported have been obtained using multisensory images of real scenes considered in the context of an autonomous-driving application. Two examples of interpretation of 3D road scenes are given, and the distribution of the computational load is discussed.
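The abstract describes each representation level as hosting expert processing units that apply the same inference mechanism to prune unreliable hypotheses and focus attention on promising ones. As a rough illustration only, the sketch below models that idea in Python; every name (Hypothesis, ExpertUnit, HierarchicalInterpreter, the constraint functions and threshold) is a hypothetical stand-in, not the actual system described in the paper.

```python
# A minimal, hypothetical sketch of the hierarchy described above; all class and
# function names are assumptions, not the paper's implementation.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Hypothesis:
    """A candidate local interpretation (e.g. "road edge", "vehicle")."""
    label: str
    score: float                 # plausibility given the fused observations
    evidence: Dict[str, float]   # features extracted from the multisensory images


class ExpertUnit:
    """One expert processing unit attached to a representation level."""

    def __init__(self, name: str,
                 constraint: Callable[[Hypothesis], float],
                 threshold: float = 0.5):
        self.name = name
        self.constraint = constraint   # level-specific a priori knowledge
        self.threshold = threshold

    def infer(self, hypotheses: List[Hypothesis]) -> List[Hypothesis]:
        """The shared inference step: rescore locally, then prune."""
        survivors = []
        for h in hypotheses:
            score = h.score * self.constraint(h)   # fuse the local constraint
            if score >= self.threshold:            # focus attention on promising ones
                survivors.append(Hypothesis(h.label, score, h.evidence))
        return survivors


class HierarchicalInterpreter:
    """Levels are traversed bottom-up; each level shrinks the search space."""

    def __init__(self, levels: List[List[ExpertUnit]]):
        self.levels = levels

    def interpret(self, hypotheses: List[Hypothesis]) -> List[Hypothesis]:
        for units in self.levels:
            for unit in units:                     # units filter autonomously
                hypotheses = unit.infer(hypotheses)
        return hypotheses


if __name__ == "__main__":
    # Toy usage: a low-level unit trusts strong edges, a higher-level unit
    # keeps only hypotheses consistent with a flat-road assumption.
    low = ExpertUnit("edges", lambda h: h.evidence.get("edge_strength", 0.0))
    high = ExpertUnit("road geometry", lambda h: h.evidence.get("ground_plane_fit", 0.0))
    system = HierarchicalInterpreter([[low], [high]])
    candidates = [
        Hypothesis("road edge", 1.0, {"edge_strength": 0.9, "ground_plane_fit": 0.8}),
        Hypothesis("shadow", 1.0, {"edge_strength": 0.4, "ground_plane_fit": 0.2}),
    ]
    print([h.label for h in system.interpret(candidates)])  # -> ['road edge']
```

The point being illustrated is the reuse of one generic inference step at every level, while the level-specific a priori knowledge lives entirely in the constraint each expert unit carries.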

Use this identifier to cite or link to this document: https://hdl.handle.net/11390/686782

Citations
  • Scopus: 10
  • Web of Science (ISI): 6