sksurv.ensemble.ComponentwiseGradientBoostingSurvivalAnalysis

class sksurv.ensemble.ComponentwiseGradientBoostingSurvivalAnalysis(loss='coxph', learning_rate=0.1, n_estimators=100, subsample=1.0, dropout_rate=0, random_state=None, verbose=0)

Gradient boosting with component-wise least squares as base learner.

Parameters:
loss : {'coxph', 'squared', 'ipcwls'}, optional, default: 'coxph'

loss function to be optimized. 'coxph' refers to the partial likelihood loss of Cox's proportional hazards model. The loss 'squared' minimizes a squared regression loss that ignores predictions beyond the time of censoring, and 'ipcwls' refers to inverse probability of censoring weighted least squares error.
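As a quick illustration (the variable names below are ours, not part of the documentation), the loss is chosen at construction time:

    from sksurv.ensemble import ComponentwiseGradientBoostingSurvivalAnalysis

    # One estimator per supported loss.
    cox_model = ComponentwiseGradientBoostingSurvivalAnalysis(loss="coxph")    # partial likelihood
    ls_model = ComponentwiseGradientBoostingSurvivalAnalysis(loss="squared")   # censoring-ignoring least squares
    ipcw_model = ComponentwiseGradientBoostingSurvivalAnalysis(loss="ipcwls")  # IPCW least squares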

learning_rate : float, optional, default: 0.1

learning rate shrinks the contribution of each base learner by learning_rate. There is a trade-off between learning_rate and n_estimators.
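A hedged sketch of the trade-off (the specific values are illustrative, not recommendations): shrinking each base learner's contribution typically calls for more boosting stages.

    from sksurv.ensemble import ComponentwiseGradientBoostingSurvivalAnalysis

    # Fewer, larger steps versus many small, shrunken steps.
    aggressive = ComponentwiseGradientBoostingSurvivalAnalysis(
        learning_rate=0.5, n_estimators=50)
    conservative = ComponentwiseGradientBoostingSurvivalAnalysis(
        learning_rate=0.05, n_estimators=500)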

n_estimators : int, default: 100

The number of boosting stages to perform. Gradient boosting is fairly robust to over-fitting so a large number usually results in better performance.

subsample : float, optional, default: 1.0

The fraction of samples to be used for fitting the individual base learners. If smaller than 1.0 this results in Stochastic Gradient Boosting. subsample interacts with the parameter n_estimators. Choosing subsample < 1.0 leads to a reduction of variance and an increase in bias.
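A minimal sketch of stochastic gradient boosting on synthetic data (the data-generating code is ours and purely illustrative):

    import numpy as np
    from sksurv.ensemble import ComponentwiseGradientBoostingSurvivalAnalysis
    from sksurv.util import Surv

    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 5))
    # Survival times loosely tied to the first feature; ~70% observed events.
    y = Surv.from_arrays(event=rng.random(200) < 0.7,
                         time=rng.exponential(scale=np.exp(-X[:, 0])))

    model = ComponentwiseGradientBoostingSurvivalAnalysis(
        subsample=0.5, n_estimators=100, random_state=0).fit(X, y)
    # With subsample < 1.0, per-stage improvement on the held-out samples
    # is tracked in oob_improvement_ (see Attributes below).
    print(model.oob_improvement_[:5])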

dropout_rate : float, optional, default: 0.0

If larger than zero, the residuals at each iteration are only computed from a random subset of base learners. The value corresponds to the fraction of base learners that are dropped; in each iteration, at least one base learner is dropped. This is an alternative regularization to shrinkage, i.e., setting learning_rate < 1.0. See the sketch below.
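A sketch contrasting the two regularization styles (illustrative settings only):

    from sksurv.ensemble import ComponentwiseGradientBoostingSurvivalAnalysis

    # Regularize via shrinkage ...
    shrunk = ComponentwiseGradientBoostingSurvivalAnalysis(learning_rate=0.1)
    # ... or via dropout: roughly 20% of the fitted base learners are ignored
    # when computing the residuals of each iteration.
    dropped = ComponentwiseGradientBoostingSurvivalAnalysis(
        learning_rate=1.0, dropout_rate=0.2)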

random_state : int seed, RandomState instance, or None, default: None

The seed of the pseudo-random number generator; it controls the random subsampling of training samples (subsample < 1.0) and the random dropout of base learners (dropout_rate > 0).

verbose : int, default: 0

Enable verbose output. If 1, progress and performance are printed once in a while (the more base learners, the lower the frequency). If greater than 1, progress and performance are printed for every base learner.

References

[1] Hothorn, T., Bühlmann, P., Dudoit, S., Molinaro, A., van der Laan, M. J., “Survival ensembles”, Biostatistics, 7(3), 355–373, 2006.

Attributes:
coef_ : ndarray, shape = (n_features + 1,)

The aggregated coefficients. The first element denotes the intercept.

loss_ : LossFunction

The concrete LossFunction object.

estimators_ : list of base learners

The collection of fitted sub-estimators.

train_score_ : array, shape = (n_estimators,)

The i-th score train_score_[i] is the deviance (= loss) of the model at iteration i on the in-bag sample. If subsample == 1 this is the deviance on the training data.

oob_improvement_ : array, shape = (n_estimators,)

The improvement in loss (= deviance) on the out-of-bag samples relative to the previous iteration. oob_improvement_[0] is the improvement in loss of the first stage over the init estimator.
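The attributes above become available after fitting. A short sketch on synthetic data (names and data are ours, purely illustrative):

    import numpy as np
    from sksurv.ensemble import ComponentwiseGradientBoostingSurvivalAnalysis
    from sksurv.util import Surv

    rng = np.random.default_rng(1)
    X = rng.standard_normal((100, 3))
    y = Surv.from_arrays(event=rng.random(100) < 0.8,
                         time=rng.exponential(size=100))

    model = ComponentwiseGradientBoostingSurvivalAnalysis(n_estimators=50).fit(X, y)
    print(len(model.estimators_))    # fitted base learners, one per stage
    print(model.train_score_.shape)  # (50,): in-bag loss per boosting stage
    print(model.coef_.shape)         # (4,): intercept plus one entry per feature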

__init__(loss='coxph', learning_rate=0.1, n_estimators=100, subsample=1.0, dropout_rate=0, random_state=None, verbose=0)

Methods

__init__([loss, learning_rate, …])    Initialize self.
fit(X, y[, sample_weight])            Fit estimator.
predict(X)                            Predict risk scores.
score(X, y)                           Returns the concordance index of the prediction.
coef_

Return the aggregated coefficients.

Returns:
coef_ : ndarray, shape = (n_features + 1,)

Coefficients of features. The first element denotes the intercept.
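A hedged sketch of reading the coefficients (synthetic data; variable names are ours):

    import numpy as np
    from sksurv.ensemble import ComponentwiseGradientBoostingSurvivalAnalysis
    from sksurv.util import Surv

    rng = np.random.default_rng(2)
    X = rng.standard_normal((100, 3))
    y = Surv.from_arrays(event=rng.random(100) < 0.8,
                         time=rng.exponential(size=100))

    model = ComponentwiseGradientBoostingSurvivalAnalysis().fit(X, y)
    intercept, feature_coefs = model.coef_[0], model.coef_[1:]
    print(feature_coefs.shape)  # (3,): one coefficient per input feature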

fit(X, y, sample_weight=None)

Fit estimator.

Parameters:
X : array-like, shape = (n_samples, n_features)

Data matrix.

y : structured array, shape = (n_samples,)

A structured array containing the binary event indicator as first field, and time of event or time of censoring as second field.

sample_weight : array-like, shape = (n_samples,), optional

Weights given to each sample. If omitted, all samples have weight 1.

Returns:
self
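A usage sketch for fit() (everything below, including the synthetic data, is illustrative): the structured array y can be built with sksurv.util.Surv.

    import numpy as np
    from sksurv.ensemble import ComponentwiseGradientBoostingSurvivalAnalysis
    from sksurv.util import Surv

    rng = np.random.default_rng(3)
    X = rng.standard_normal((150, 4))
    # Structured array: binary event indicator first, observed time second.
    y = Surv.from_arrays(event=rng.random(150) < 0.6,
                         time=rng.exponential(size=150))

    est = ComponentwiseGradientBoostingSurvivalAnalysis().fit(X, y)  # returns self
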
predict(X)

Predict risk scores.

Parameters:
X : array-like, shape = (n_samples, n_features)

Data matrix.

Returns:
risk_score : array, shape = (n_samples,)

Predicted risk scores.
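A closing sketch of predict() (synthetic data; the interpretation comment assumes the default 'coxph' loss):

    import numpy as np
    from sksurv.ensemble import ComponentwiseGradientBoostingSurvivalAnalysis
    from sksurv.util import Surv

    rng = np.random.default_rng(4)
    X = rng.standard_normal((120, 4))
    y = Surv.from_arrays(event=rng.random(120) < 0.7,
                         time=rng.exponential(size=120))

    est = ComponentwiseGradientBoostingSurvivalAnalysis(loss="coxph").fit(X, y)
    risk = est.predict(X)
    assert risk.shape == (120,)
    # Under loss="coxph", larger scores indicate higher predicted risk.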