# Generalizations of the Bias/Variance Decomposition for Prediction Error

@inproceedings{James1997GeneralizationsOT, title={Generalizations of the Bias/Variance Decomposition for Prediction Error}, author={Gareth M. James and Trevor J. Hastie}, year={1997} }

The bias and variance of a real-valued random variable, using squared error loss, are well understood. However, because of recent developments in classification techniques, it has become desirable to extend these concepts to general random variables and loss functions. The 0-1 (misclassification) loss function with categorical random variables has been of particular interest. We explore the concepts of variance and bias and develop a decomposition of the prediction error into functions of the…
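For the squared-error case the abstract takes as its starting point, the decomposition can be verified numerically. The sketch below is illustrative only (the toy shrinkage estimator, constants, and simulation sizes are assumptions, not from the paper): it checks that irreducible noise + bias² + variance matches a direct Monte Carlo estimate of prediction error.

```python
import random

# Hedged sketch: for squared error loss, prediction error on a fresh
# observation Y decomposes as
#   E[(Y - f_hat)^2] = noise + bias^2 + variance.
# Toy setup (assumed for illustration): estimate a Gaussian mean with a
# deliberately shrunk sample mean, so the estimator has nonzero bias.

random.seed(0)
TRUE_MEAN, NOISE_SD, N_TRAIN, N_SIMS = 2.0, 1.0, 5, 200_000

def fit():
    # "Train" on a fresh small sample; shrink toward 0 to induce bias.
    sample = [random.gauss(TRUE_MEAN, NOISE_SD) for _ in range(N_TRAIN)]
    return 0.8 * (sum(sample) / N_TRAIN)

preds = [fit() for _ in range(N_SIMS)]
mean_pred = sum(preds) / N_SIMS
variance = sum((p - mean_pred) ** 2 for p in preds) / N_SIMS
bias_sq = (mean_pred - TRUE_MEAN) ** 2
noise = NOISE_SD ** 2  # irreducible error of a fresh observation Y

# Direct estimate: squared error of each fitted model on a fresh Y.
errors = [(random.gauss(TRUE_MEAN, NOISE_SD) - p) ** 2 for p in preds]
pred_error = sum(errors) / N_SIMS

print(f"noise + bias^2 + variance = {noise + bias_sq + variance:.3f}")
print(f"direct prediction error   = {pred_error:.3f}")
```

The two printed quantities agree up to Monte Carlo error, which is the additive structure that breaks down under 0-1 loss and motivates the generalizations this paper and its citations pursue.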


#### 28 Citations

General bias/variance decomposition with target independent variance of error functions derived from the exponential family of distributions

- Computer Science
- Proceedings 15th International Conference on Pattern Recognition. ICPR-2000
- 2000

It is proved that this family of error functions contains all error functions decomposable in that manner and presents a useful approximation of ambiguity that is quadratic in the ensemble coefficients.

A Unified Bias-Variance Decomposition and its Applications

- 2000

This paper presents a unified bias-variance decomposition that is applicable to squared loss, zero-one loss, variable misclassification costs, and other loss functions. The unified decomposition…

A Unified Bias-Variance Decomposition

The bias-variance decomposition is a very useful and widely-used tool for understanding machine-learning algorithms. It was originally developed for squared loss. In recent years, several authors…

Bias/Variance Decompositions for Likelihood-Based Estimators

- Computer Science, Medicine
- Neural Computation
- 1998

A similar simple decomposition is derived, valid for any kind of error measure that, when using the appropriate probability model, can be derived from a Kullback-Leibler divergence or log-likelihood.

Bias-Variance Decomposition for model selection

- 2009

Bias-variance decomposition is known to be a powerful tool when explaining the success of learning methods. By now it's common practice to analyze newly developed techniques in terms of their bias…

Pooling for Combination of Multilevel Forecasts

- Computer Science
- IEEE Transactions on Knowledge and Data Engineering
- 2009

It is shown that previously proposed pooling based only on error variances cannot fully exploit the complementary information present in a set of diverse forecasts to be combined, and that covariance values could be reliably calculated and taken into account during the pooling process.

Combination of Multi Level Forecasts

- Computer Science
- J. VLSI Signal Process.
- 2007

This paper provides a discussion of the effects of different multi-level learning approaches on the resulting out-of-sample forecast errors in the case of difficult real-world forecasting problems…

Regularization of Portfolio Allocation

- Economics
- 2013

The mean-variance optimization (MVO) theory of Markowitz (1952) for portfolio selection is one of the most important methods used in quantitative finance. This portfolio allocation needs two input…

Error coding and PaCT's

- Computer Science
- 1997

A new class of plug-in classification techniques has recently been developed in the statistics literature, and some motivation for their success is given.

Error coding and PaCT's

- 1997

A new class of plug-in classification techniques has recently been developed in the statistics literature. A plug-in classification technique (PaCT) is a method that takes a standard classifier (such…

#### References


Bias, Variance and Prediction Error for Classification Rules

- Computer Science
- 1996

A decomposition of prediction error into its natural components is developed, and a bootstrap estimate of the error of a "bagged" classifier is obtained.

On Bias, Variance, 0/1—Loss, and the Curse-of-Dimensionality

- Mathematics, Computer Science
- Data Mining and Knowledge Discovery
- 2004

This work can dramatically mitigate the effect of the bias associated with some simple estimators like "naive" Bayes, and the bias induced by the curse of dimensionality on nearest-neighbor procedures.

Error-Correcting Output Coding Corrects Bias and Variance

- Computer Science
- ICML
- 1995

An investigation of why the ECOC technique works, particularly when employed with decision-tree learning algorithms, shows that it can reduce the variance of the learning algorithm.

Solving Multiclass Learning Problems via Error-Correcting Output Codes

- Computer Science
- J. Artif. Intell. Res.
- 1995

It is demonstrated that error-correcting output codes provide a general-purpose method for improving the performance of inductive learning programs on multiclass problems.

Experiments with a New Boosting Algorithm

- Computer Science
- ICML
- 1996

This paper describes experiments carried out to assess how well AdaBoost, with and without pseudo-loss, performs on real learning problems, and compares boosting to Breiman's "bagging" method when used to aggregate various classifiers.