Publications
2002
1.
Alaíz-Rodríguez, Rocío; Cid-Sueiro, Jesús
Minimax strategies for training classifiers under unknown priors Proceedings Article
In: Proceedings of the 12th IEEE Workshop on Neural Networks for Signal Processing, pp. 249–258, 2002, (Publisher: IEEE).
Abstract | Links | BibTeX | Tags: class priors, Minimax Classifier, neural networks, robust learning
@inproceedings{alaiz-rodriguez_minimax_2002,
title = {Minimax strategies for training classifiers under unknown priors},
author = {Rocío Alaíz-Rodríguez and Jesús Cid-Sueiro},
url = {https://ieeexplore.ieee.org/abstract/document/1030036},
year = {2002},
date = {2002-01-01},
booktitle = {Proceedings of the 12th IEEE Workshop on Neural Networks for Signal Processing},
pages = {249–258},
abstract = {This paper addresses the challenge of training supervised learning algorithms when the stationarity assumption does not hold, particularly when class prior probabilities in the real data do not match those in the training data. The authors propose a two-step learning algorithm to train a neural network for estimating a minimax classifier that is robust to changes in class priors. In the first step, posterior probabilities based on training data priors are estimated. In the second step, the class priors are adjusted to minimize a cost function that is asymptotically equivalent to the worst-case error rate. The proposed method is applied to a softmax-based neural network, and experimental results demonstrate its advantages over traditional approaches.},
note = {Publisher: IEEE},
keywords = {class priors, Minimax Classifier, neural networks, robust learning},
pubstate = {published},
tppubtype = {inproceedings}
}
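The abstract describes a two-step procedure: estimate posteriors under the training priors, then adjust the priors toward the least favorable ones. As an illustration only, below is a minimal NumPy sketch of that minimax idea for the binary case, assuming posteriors are corrected post hoc via Bayes' rule and the minimax prior is found by bisection on the condition that the two class-conditional error rates be equal. The function names and the bisection search are assumptions for this sketch, not the authors' implementation, which folds the prior adjustment into training a softmax network.

import numpy as np

def reweight_posteriors(post, pi_train, pi_new):
    # Step 1 output correction: posteriors estimated under the training
    # priors are rescaled to reflect a different prior (Bayes' rule).
    w = post * (pi_new / pi_train)
    return w / w.sum(axis=1, keepdims=True)

def class_error_rates(post, y):
    # Per-class error rates of the MAP decision rule.
    pred = post.argmax(axis=1)
    return np.array([np.mean(pred[y == c] != c) for c in (0, 1)])

def minimax_prior(post, y, pi_train, tol=1e-4):
    # Stand-in for step 2: bisect on the class-1 prior until the two
    # class-conditional error rates balance, the classical condition
    # for the minimax operating point in the binary case.
    lo, hi = tol, 1.0 - tol
    while hi - lo > tol:
        p1 = 0.5 * (lo + hi)
        pi = np.array([1.0 - p1, p1])
        e0, e1 = class_error_rates(reweight_posteriors(post, pi_train, pi), y)
        if e1 > e0:
            lo = p1   # class 1 suffers more: raise its assumed prior
        else:
            hi = p1
    p1 = 0.5 * (lo + hi)
    return np.array([1.0 - p1, p1])

# Toy usage with synthetic Gaussian classes and training priors (0.9, 0.1);
# the exact posteriors stand in for the network of step 1.
rng = np.random.default_rng(0)
y = (rng.random(5000) < 0.1).astype(int)
x = rng.normal(loc=y, scale=1.0)
lik = np.stack([np.exp(-0.5 * x**2), np.exp(-0.5 * (x - 1)**2)], axis=1)
post = lik * np.array([0.9, 0.1])
post /= post.sum(axis=1, keepdims=True)
print(minimax_prior(post, y, np.array([0.9, 0.1])))

In the paper the adjustment happens during training of the softmax network, by minimizing a cost asymptotically equivalent to the worst-case error rate; the post-hoc correction above is simply the shortest way to exhibit the underlying idea.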