# Boosting and GBDT

Ensemble Learning

The reduction in variability can be understood as passing the outputs through an averaging filter that suppresses high-frequency noise (corresponding to high variance): each sample's signal is replaced by an average over the samples in its neighborhood. One question arises here: PCA treats high variance as useful signal, so is the variance referred to here the same notion as the variance in PCA's covariance matrix?
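The averaging effect described above can be illustrated with a small simulation (a sketch under assumptions of my own: the Gaussian noise model, the number of learners, and the trial counts below are illustrative, not from the text). Averaging T independent noisy estimates of the same signal shrinks the variance by roughly a factor of T:

```python
import random
import statistics

random.seed(0)

TRUE_VALUE = 1.0   # the underlying "signal" each learner tries to estimate
NOISE_STD = 1.0    # per-learner noise, assumed Gaussian for illustration
T = 25             # number of ensemble members being averaged
TRIALS = 2000      # repetitions used to estimate the variances empirically

def noisy_estimate():
    """One learner's output: the signal plus independent noise."""
    return TRUE_VALUE + random.gauss(0.0, NOISE_STD)

# Variance of a single learner vs. variance of the ensemble average.
single = [noisy_estimate() for _ in range(TRIALS)]
averaged = [statistics.fmean(noisy_estimate() for _ in range(T))
            for _ in range(TRIALS)]

var_single = statistics.pvariance(single)
var_avg = statistics.pvariance(averaged)
print(f"single-learner variance  ~ {var_single:.3f}")  # close to NOISE_STD**2
print(f"ensemble-average variance ~ {var_avg:.3f}")    # close to NOISE_STD**2 / T
```

The simulation only demonstrates the variance term; it says nothing about bias, which averaging identical learners does not reduce.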

• Data sampling/selection
• Training member classifiers
• Combining classifiers

1) Ideally, the outputs of the individual classifiers should be independent or negatively correlated. (Obviously, if every sub-classifier produced the same output, the combination would yield no additional information.) The most common approach, as shown in the figure above, is to train each classifier on a different subset of the training data so that the ensemble as a whole exhibits diversity.

2) Combining classifiers. Some classifiers, such as SVMs, provide only discrete class-label outputs. Others, such as multilayer perceptrons and naive Bayes classifiers, produce continuous output values that can be interpreted as the degree of support for each class.

2.1) Combining class labels. Suppose the classifiers output class labels, and let the decision of the $t$-th classifier be $d_{t,c}\in\{0,1\}$, for $t=1,\dots,T$ and $c=1,\dots,C$, where $T$ is the number of classifiers and $C$ is the number of classes. If the $t$-th classifier $h_{t}$ chooses class $\omega_{c}$, then $d_{t,c}=1$. Continuous output values can likewise be converted into label outputs this way.
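A minimal sketch of this label-combination step (the function name and the toy votes below are my own, for illustration): each classifier's hard label is encoded as the indicator $d_{t,c}$, and the ensemble picks the class with the largest column sum, i.e. a majority vote.

```python
def majority_vote(labels, num_classes):
    """Combine hard label outputs: d[t][c] = 1 iff classifier t picks class c."""
    T = len(labels)
    d = [[1 if labels[t] == c else 0 for c in range(num_classes)]
         for t in range(T)]
    # Support for each class is the column sum of the indicator matrix d.
    support = [sum(d[t][c] for t in range(T)) for c in range(num_classes)]
    return max(range(num_classes), key=lambda c: support[c])

# Three classifiers vote on a 3-class problem (hypothetical outputs).
print(majority_vote([0, 2, 2], num_classes=3))  # class 2 wins with 2 votes
```

Ties are broken here toward the lowest class index; a real system would need an explicit tie-breaking rule.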

The Boosting Idea

Kearns and Valiant introduced the notions of "strongly learnable" and "weakly learnable": in the probably approximately correct (PAC) learning framework, a concept (class) is strongly learnable if there exists a polynomial-time learning algorithm that learns it with high accuracy, and weakly learnable if there exists a polynomial-time learning algorithm that learns it only slightly better than random guessing. Schapire proved that, in the PAC framework, a concept is strongly learnable if and only if it is weakly learnable.

1) Let $X$ be the sample set and $Y$ the set of class labels, restricted here to $Y=\{-1,+1\}$. The training set is then $S=\{(x_{i},y_{i})\}_{i=1}^{m}$.

2) Initialize the weights of the $m$ samples with a uniform distribution: $D_{1}(i)=1/m$, where $D_{t}(i)$ denotes the weight assigned to sample $(x_{i},y_{i})$ in iteration $t$.

3) For $t=1$ to $T$: train a weak classifier $h_{t}$ on distribution $D_{t}$ and compute its weighted error

$\epsilon_{t}=\sum_{i=1}^{m} D_{t}(i)\,I\big(h_{t}(x_{i})\neq y_{i}\big)$

where:
• $x_{i}$ is the vector of predictor values for observation $i$;
• $y_{i}$ is the true class label;
• $h_{t}$ is the prediction of the learner (hypothesis) with index $t$;
• $I$ is the indicator function;
• $D_{t}(i)$ is the weight of observation $i$ at step $t$.

The AdaBoostM1 algorithm increases the weights of samples misclassified by $h_{t}$ and decreases the weights of samples correctly classified by $h_{t}$. The next classifier $h_{t+1}$ is then trained on the reweighted samples.

$D_{t+1}(i)=\frac{D_{t}(i)}{Z_{t}}\times\begin{cases} e^{-\alpha_{t}} & \text{if } h_{t}(x_{i})=y_{i} \\ e^{\alpha_{t}} & \text{if } h_{t}(x_{i})\neq y_{i} \end{cases}$

where $Z_{t}$ is a normalization factor chosen so that $D_{t+1}$ remains a probability distribution, and

$\alpha_{t}=\frac{1}{2}\ln\left(\frac{1-\epsilon_{t}}{\epsilon_{t}}\right)$
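Plugging a few error rates into the standard AdaBoost weight $\alpha_{t}=\frac{1}{2}\ln\frac{1-\epsilon_{t}}{\epsilon_{t}}$ shows the intended behavior: an accurate learner (small $\epsilon_{t}$) gets a large positive weight, while a random-guessing learner ($\epsilon_{t}=0.5$) gets weight zero. A quick check:

```python
import math

def alpha(eps):
    """Classifier weight for weighted error eps (standard AdaBoost formula)."""
    return 0.5 * math.log((1.0 - eps) / eps)

for eps in (0.1, 0.3, 0.5):
    print(f"eps={eps:.1f} -> alpha={alpha(eps):+.4f}")
# eps=0.1 -> alpha=+1.0986  (strong learner, large weight)
# eps=0.3 -> alpha=+0.4236
# eps=0.5 -> alpha=+0.0000  (random guessing, zero weight)
```

Note that $\epsilon_{t}>0.5$ would give a negative weight, which is why boosting implementations stop (or flip the learner) when the weighted error reaches 0.5.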

4) The final strong classifier is:
$H(x)=\operatorname{sign}\left(\sum_{t=1}^{T}\alpha_{t}h_{t}(x)\right)$
AdaBoost training can be viewed as stagewise minimization of the exponential loss $\sum_{i=1}^{m}w_{i}\exp(-y_{i}f(x_{i}))$, where $w_{i}$ is the sample weight normalized to $(0,1)$, set at initialization; $y_{i}$ is the true class label; and $f(x_{i})$ is the predicted classification score.
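Steps 1)–4) above can be assembled into a short, self-contained sketch (illustrative only: the decision-stump weak learner, the toy 1-D data, and all names are my own assumptions, not from the text):

```python
import math

def stump_train(xs, ys, w):
    """Weak learner: best threshold/polarity decision stump on 1-D data."""
    best = None
    for thr in xs:
        for pol in (1, -1):
            preds = [pol if x >= thr else -pol for x in xs]
            err = sum(wi for wi, p, y in zip(w, preds, ys) if p != y)
            if best is None or err < best[0]:
                best = (err, thr, pol)
    err, thr, pol = best
    return err, (lambda x, t=thr, p=pol: p if x >= t else -p)

def adaboost_m1(xs, ys, T=10):
    m = len(xs)
    D = [1.0 / m] * m                        # step 2: uniform initial weights
    ensemble = []                            # list of (alpha_t, h_t)
    for _ in range(T):                       # step 3
        eps, h = stump_train(xs, ys, D)      # weighted error eps_t
        if eps >= 0.5:
            break                            # no better than chance: stop
        if eps == 0:
            ensemble.append((1.0, h))        # perfect weak learner: just use it
            break
        alpha = 0.5 * math.log((1 - eps) / eps)
        ensemble.append((alpha, h))
        # Reweight: up-weight mistakes, down-weight correct samples,
        # then normalize by Z_t so D stays a distribution.
        D = [d * math.exp(-alpha if h(x) == y else alpha)
             for d, x, y in zip(D, xs, ys)]
        Z = sum(D)
        D = [d / Z for d in D]
    # Step 4: strong classifier H(x) = sign(sum_t alpha_t * h_t(x)).
    def H(x):
        return 1 if sum(a * hf(x) for a, hf in ensemble) >= 0 else -1
    return H

# Toy data: not separable by one stump, but separable by a few combined.
xs = [0, 1, 2, 3, 4, 5, 6, 7]
ys = [1, 1, -1, -1, -1, -1, 1, 1]
H = adaboost_m1(xs, ys, T=5)
print([H(x) for x in xs])  # should match ys
```

A single stump cannot represent this "two intervals" labeling, yet a weighted vote of a handful of stumps fits it exactly, which is the point of the boosting construction.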

The second output from the predict method of an AdaBoostM1 classification ensemble is an N-by-2 matrix of classification scores for the two classes and N observations. predict returns two scores to be consistent with multiclass models, though for binary problems this is redundant, because the second column is always the negative of the first.
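The redundancy in the two-column layout is easy to mimic (a sketch, not MATLAB's actual implementation; the raw scores below are made up): for a binary ensemble, the score for one class is simply the negation of the other.

```python
# Hypothetical raw ensemble scores f(x_i) for N = 4 observations.
f = [2.3, -0.7, 0.1, -1.5]

# N-by-2 score matrix: column 0 for class +1, column 1 for class -1.
scores = [[s, -s] for s in f]

for row in scores:
    assert row[1] == -row[0]  # second column is minus the first
print(scores)
```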

### Discussion

MATLAB Toolbox Overview

• X is the matrix of data. Each row contains one observation, and each column contains one predictor variable.
• Y is the vector of responses, with the same number of observations as the rows in X.
• model is a string naming the type of ensemble.
• numberens is the number of weak learners in ens from each element of learners. So the number of elements in ens is numberens times the number of elements in learners.
• learners is either a string naming a weak learner, a weak learner template, or a cell array of such templates.

#### Choose an Applicable Ensemble Method

fitensemble uses one of these algorithms to create an ensemble.

• For classification with two classes:
• 'AdaBoostM1'
• 'LogitBoost'
• 'GentleBoost'
• 'RobustBoost' (requires an Optimization Toolbox™ license)
• 'LPBoost' (requires an Optimization Toolbox license)
• 'TotalBoost' (requires an Optimization Toolbox license)
• 'RUSBoost'
• 'Subspace'
• 'Bag'
• For classification with three or more classes:
• 'AdaBoostM2'
• 'LPBoost' (requires an Optimization Toolbox license)
• 'TotalBoost' (requires an Optimization Toolbox license)
• 'RUSBoost'
• 'Subspace'
• 'Bag'
• For regression:
• 'LSBoost'
• 'Bag'

This table lists characteristics of the various algorithms. In the table titles:

• Regress. — Regression
• Classif. — Classification
• Preds. — Predictors
• Imbalance — Good for imbalanced data (one class has many more observations than the other)
• Stop — Algorithm self-terminates
• Sparse — Requires fewer weak learners than other ensemble algorithms

Fisher's Iris data set is a classic data set for multivariate analysis.


http://blog.csdn.net/sunmenggmail/article/details/8956986

http://www.zhizhihu.com/html/y2011/3327.html

http://www.cnblogs.com/LeftNotEasy/archive/2011/03/07/1976562.html