Multimodal data fusion based on mutual information

Multimodal visualization aims at fusing different data sets so that the resulting combination provides more information and understanding to the user. To achieve this aim, we propose a new information-theoretic approach that automatically selects the most informative voxels from two volume data sets. Our fusion criteria are based on the information channel created between the two input data sets, which permits us to quantify the information associated with each intensity value. This specific information is obtained from three different ways of decomposing the mutual information of the channel. In addition, an assessment criterion based on the information content of the fused data set can be used to analyze and modify the initial selection of the voxels by weighting the contribution of each data set to the final result. The proposed approach has been integrated into a general framework that allows for the exploration of volumetric data models and the interactive modification of some parameters of the fused data set. It has been evaluated on different medical data sets with very promising results.
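As a rough illustration of the channel-based idea the abstract describes, the sketch below builds an information channel between two co-registered volumes via a joint intensity histogram and computes two standard specific-information measures per intensity bin (surprise and predictability), whose weighted averages both recover the mutual information of the channel. The function name, binning choices, and NumPy workflow are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def specific_information(vol_a, vol_b, bins=32):
    """Per-intensity specific information for the channel X -> Y built
    from two co-registered volumes (generic sketch, not the paper's code)."""
    # Joint histogram of intensity pairs -> joint probability p(x, y)
    joint, _, _ = np.histogram2d(vol_a.ravel(), vol_b.ravel(), bins=bins)
    p_xy = joint / joint.sum()
    p_x = p_xy.sum(axis=1)  # marginal of X
    p_y = p_xy.sum(axis=0)  # marginal of Y

    # Conditional p(y|x), guarding empty bins (p(x) = 0)
    with np.errstate(divide="ignore", invalid="ignore"):
        p_y_given_x = np.where(p_x[:, None] > 0, p_xy / p_x[:, None], 0.0)

    # Surprise: I1(x; Y) = KL(p(Y|x) || p(Y))
    ratio = np.where((p_y_given_x > 0) & (p_y[None, :] > 0),
                     p_y_given_x / np.maximum(p_y[None, :], 1e-300), 1.0)
    i1 = np.sum(p_y_given_x * np.log2(ratio), axis=1)

    # Predictability: I2(x; Y) = H(Y) - H(Y|x)
    h_y = -np.sum(p_y[p_y > 0] * np.log2(p_y[p_y > 0]))
    h_y_given_x = -np.sum(np.where(p_y_given_x > 0,
                                   p_y_given_x * np.log2(p_y_given_x), 0.0),
                          axis=1)
    i2 = h_y - h_y_given_x

    return p_x, i1, i2
```

Both decompositions satisfy I(X; Y) = Σₓ p(x) I(x; Y), so either can rank intensity values by informativeness; they differ in which values they emphasize, which is why a fusion criterion may prefer one decomposition over another.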

The authors gratefully thank Joan San, Alberto Prats, and Gerard Blasco for their valuable assistance. The authors also wish to thank the anonymous reviewers for their helpful comments. This work has been funded in part by grants from the Spanish Government (Nr. TIN2010-21089-C03-01 and FIS PS09/00596 of I+D+I 2009-2012) and from the Catalan Government (Nr. 2009-SGR-643). This work has also been supported by SUR of DEC of the Generalitat de Catalunya (Catalan Government).

Manager: Ministerio de Ciencia e Innovación (Spain)
Generalitat de Catalunya. Agència de Gestió d’Ajuts Universitaris i de Recerca
Author: Bramon Feixas, Roger
Boada, Imma
Bardera i Reig, Antoni
Rodriguez, Joaquim
Feixas Feixas, Miquel
Puig Alcántara, Josep
Sbert, Mateu
Document access: http://hdl.handle.net/2072/296950
Language: eng
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Rights: All rights reserved
Subject: Informació, Teoria de la
Information theory
Imatges -- Processament -- Tècniques digitals
Image processing -- Digital techniques
Imatges mèdiques
Imaging systems in medicine
Medicina -- Informàtica
Medicine -- Data processing
Title: Multimodal data fusion based on mutual information
Type: info:eu-repo/semantics/article
Repository: Recercat