Publications
2008
1.
Alaiz-Rodríguez, Rocio; Japkowicz, Nathalie; Tischer, Peter
Visualizing classifier performance on different domains Proceedings Article
In: 2008 20th IEEE International Conference on Tools with Artificial Intelligence, vol. 2, pp. 3–10, 2008, (Publisher: IEEE).
@inproceedings{alaiz-rodriguez_visualizing_2008,
title = {Visualizing classifier performance on different domains},
author = {Rocio Alaiz-Rodríguez and Nathalie Japkowicz and Peter Tischer},
url = {https://ieeexplore.ieee.org/abstract/document/4669748},
year = {2008},
date = {2008-01-01},
booktitle = {2008 20th IEEE International Conference on Tools with Artificial Intelligence},
volume = {2},
pages = {3--10},
abstract = {Classifier performance evaluation typically gives rise to vast numbers of results that are difficult to interpret. On the one hand, a variety of different performance metrics can be applied; and on the other hand, evaluation must be conducted on multiple domains to get a clear view of the classifier's general behaviour. In this paper, we present a visualization technique that allows a user to study the results from a domain point of view and from a classifier point of view. We argue that classifier evaluation should be done on an exploratory basis. In particular, we suggest that, rather than pre-selecting a few metrics and domains to conduct our evaluation on, we should use as many metrics and domains as possible and mine the results of this study to draw valid and relevant knowledge about the behaviour of our algorithms. The technique presented in this paper will enable such a process.},
publisher = {IEEE},
keywords = {classifier evaluation, Visualization},
pubstate = {published},
tppubtype = {inproceedings}
}
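The abstract describes the workflow only at a high level. As a rough, self-contained sketch of that idea (not the visualization technique the paper itself develops), the Python snippet below scores several classifiers on several "domains" under several metrics, collects the results into classifier-by-domain matrices, and renders one heatmap per metric. All dataset, classifier, and metric choices here are illustrative placeholders.

import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_breast_cancer, load_wine, load_digits
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

# Illustrative stand-ins for "domains" and classifiers; the paper's own
# experimental setup is not reproduced here.
domains = {
    "breast_cancer": load_breast_cancer(return_X_y=True),
    "wine": load_wine(return_X_y=True),
    "digits": load_digits(return_X_y=True),
}
classifiers = {
    "tree": DecisionTreeClassifier(random_state=0),
    "naive_bayes": GaussianNB(),
    "knn": KNeighborsClassifier(),
}
metrics = ["accuracy", "f1_macro", "balanced_accuracy"]

# results[m] is a (classifier x domain) matrix of mean cross-validated scores.
results = {m: np.zeros((len(classifiers), len(domains))) for m in metrics}
for i, clf in enumerate(classifiers.values()):
    for j, (X, y) in enumerate(domains.values()):
        for m in metrics:
            results[m][i, j] = cross_val_score(clf, X, y, cv=5, scoring=m).mean()

# One heatmap per metric: rows are classifiers, columns are domains.
fig, axes = plt.subplots(1, len(metrics), figsize=(4 * len(metrics), 3))
for ax, m in zip(axes, metrics):
    im = ax.imshow(results[m], vmin=0, vmax=1, cmap="viridis")
    ax.set_title(m)
    ax.set_xticks(range(len(domains)))
    ax.set_xticklabels(list(domains.keys()), rotation=45)
    ax.set_yticks(range(len(classifiers)))
    ax.set_yticklabels(list(classifiers.keys()))
    fig.colorbar(im, ax=ax)
fig.tight_layout()
plt.show()

Reading a row compares one classifier across domains, while reading a column compares classifiers on one domain, which mirrors the two complementary views the abstract argues an exploratory evaluation should support.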