Publications
2008
1.
Alaiz-Rodríguez, Rocío; Japkowicz, Nathalie
Assessing the impact of changing environments on classifier performance Journal Article
In: Advances in Artificial Intelligence: 21st Conference of the Canadian Society for Computational Studies of Intelligence, Canadian AI 2008, Windsor, Canada, May 28–30, 2008, Proceedings 21, pp. 13–24, 2008, (Publisher: Springer Berlin Heidelberg).
Abstract | Links | BibTeX | Tags: changing environments, classifier robustness, machine learning, performance evaluation
@article{alaiz-rodriguez_assessing_2008,
title = {Assessing the impact of changing environments on classifier performance},
author = {Rocío Alaiz-Rodríguez and Nathalie Japkowicz},
url = {https://link.springer.com/chapter/10.1007/978-3-540-68825-9_2},
year = {2008},
date = {2008-01-01},
journal = {Advances in Artificial Intelligence: 21st Conference of the Canadian Society for Computational Studies of Intelligence, Canadian AI 2008 Windsor, Canada, May 28-30, 2008 Proceedings 21},
pages = {13–24},
abstract = {This paper tests the hypothesis that simple classifiers are more robust to environmental changes than complex ones. The authors develop a strategy to generate artificial but realistic domains, allowing controlled testing of various scenarios. Their results show that evaluating classifiers in changing environments is challenging, as shifts can make a domain either simpler or more complex. They introduce a metric to address this issue and use it to assess classifier performance. The findings indicate that simple classifiers degrade more in mild population drifts, while in severe cases, all classifiers suffer equally. Ultimately, complex classifiers remain more accurate, refuting the initial hypothesis.},
note = {Publisher: Springer Berlin Heidelberg},
keywords = {changing environments, classifier robustness, machine learning, performance evaluation},
pubstate = {published},
tppubtype = {article}
}
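The abstract describes the experimental setup only at a high level. The following is a minimal, hypothetical sketch (not the authors' code or metric) of the kind of comparison it outlines: a synthetic two-class domain whose class separation is varied to simulate population drift, with a "simple" and a "complex" classifier trained on the original environment and evaluated on drifted test sets. The function name sample_domain, the shift values, and the choice of scikit-learn models are illustrative assumptions.

```python
# Hypothetical sketch: simple vs. complex classifier under population drift.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def sample_domain(n, shift=0.0):
    """Two Gaussian classes; `shift` moves the class means closer together,
    mimicking a drift that changes how difficult the domain is."""
    mean0 = np.array([-1.0 + shift, 0.0])
    mean1 = np.array([1.0 - shift, 0.0])
    X = np.vstack([rng.normal(mean0, 1.0, size=(n // 2, 2)),
                   rng.normal(mean1, 1.0, size=(n // 2, 2))])
    y = np.array([0] * (n // 2) + [1] * (n // 2))
    return X, y

# Train both classifiers on the original (undrifted) environment.
X_train, y_train = sample_domain(2000, shift=0.0)
simple = LogisticRegression().fit(X_train, y_train)
complex_ = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Evaluate on increasingly drifted test environments.
for shift in [0.0, 0.3, 0.6]:
    X_test, y_test = sample_domain(2000, shift=shift)
    print(f"shift={shift:.1f}  "
          f"simple={accuracy_score(y_test, simple.predict(X_test)):.3f}  "
          f"complex={accuracy_score(y_test, complex_.predict(X_test)):.3f}")
```

In this toy setup the drift only changes class overlap; the paper's contribution is a controlled domain-generation strategy and a metric that also accounts for a shift making the domain itself simpler or harder, which this sketch does not attempt to reproduce.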