sksurv.ensemble.ComponentwiseGradientBoostingSurvivalAnalysis

class sksurv.ensemble.ComponentwiseGradientBoostingSurvivalAnalysis(loss='coxph', learning_rate=0.1, n_estimators=100, subsample=1.0, dropout_rate=0, random_state=None, verbose=0)[source]

Gradient boosting with component-wise least squares as base learner.

See [1] for further description.

Parameters:
  • loss ({'coxph', 'squared', 'ipcwls'}, optional, default: 'coxph') – loss function to be optimized. 'coxph' refers to the partial likelihood loss of Cox's proportional hazards model. The loss 'squared' minimizes a squared regression loss that ignores predictions beyond the time of censoring, and 'ipcwls' refers to inverse probability of censoring weighted least squares error.
  • learning_rate (float, optional, default: 0.1) – learning rate shrinks the contribution of each base learner by learning_rate. There is a trade-off between learning_rate and n_estimators.
  • n_estimators (int, default: 100) – The number of boosting stages to perform. Gradient boosting is fairly robust to over-fitting so a large number usually results in better performance.
  • subsample (float, optional, default: 1.0) – The fraction of samples to be used for fitting the individual base learners. If smaller than 1.0 this results in Stochastic Gradient Boosting. subsample interacts with the parameter n_estimators. Choosing subsample < 1.0 leads to a reduction of variance and an increase in bias.
  • dropout_rate (float, optional, default: 0.0) – If larger than zero, the residuals at each iteration are only computed from a random subset of base learners. The value corresponds to the percentage of base learners that are dropped. In each iteration, at least one base learner is dropped. This is an alternative regularization to shrinkage, i.e., setting learning_rate < 1.0.
  • random_state (int seed, RandomState instance, or None, default: None) – The seed of the pseudo random number generator to use when shuffling the data.
  • verbose (int, default: 0) – Enable verbose output. If 1, progress and performance are printed once in a while (the more estimators, the lower the frequency). If greater than 1, progress and performance are printed for every iteration.
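As an illustrative sketch (the parameter values below are arbitrary and only meant to show how the options combine), the estimator can be configured for plain shrinkage-based boosting or for stochastic boosting with dropout:

from sksurv.ensemble import ComponentwiseGradientBoostingSurvivalAnalysis

# Cox partial likelihood loss with shrinkage via a small learning rate.
model = ComponentwiseGradientBoostingSurvivalAnalysis(
    loss="coxph", learning_rate=0.1, n_estimators=100, random_state=0
)

# Stochastic variant: each stage is fit on half of the samples, and dropout
# of base learners is used for regularization instead of a small learning rate.
stochastic_model = ComponentwiseGradientBoostingSurvivalAnalysis(
    loss="coxph", learning_rate=1.0, n_estimators=200,
    subsample=0.5, dropout_rate=0.1, random_state=0
)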
coef_

The aggregated coefficients. The first element coef_[0] corresponds to the intercept. If loss='coxph', the intercept is always zero.

Type: array, shape = (n_features + 1,)
loss_

The concrete LossFunction object.

Type: LossFunction
estimators_

The collection of fitted sub-estimators.

Type: list of base learners
train_score_

The i-th score train_score_[i] is the deviance (= loss) of the model at iteration i on the in-bag sample. If subsample == 1 this is the deviance on the training data.

Type: array, shape = (n_estimators,)
oob_improvement_

The improvement in loss (= deviance) on the out-of-bag samples relative to the previous iteration. oob_improvement_[0] is the improvement in loss of the first stage over the init estimator.

Type: array, shape = (n_estimators,)
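
The sketch below shows how these fitted attributes can be inspected. It uses synthetic data generated with NumPy and sksurv.util.Surv.from_arrays to build the structured outcome array; the data and field names are purely illustrative.

import numpy as np
from sksurv.ensemble import ComponentwiseGradientBoostingSurvivalAnalysis
from sksurv.util import Surv

rng = np.random.RandomState(0)
X = rng.standard_normal((200, 5))
# Synthetic survival times influenced by the first feature, with random censoring.
time = rng.exponential(scale=np.exp(-X[:, 0]))
event = rng.uniform(size=200) < 0.7
y = Surv.from_arrays(event=event, time=time)

est = ComponentwiseGradientBoostingSurvivalAnalysis(n_estimators=50, random_state=0)
est.fit(X, y)

print(est.coef_.shape)         # (n_features + 1,) -> (6,), intercept followed by feature coefficients
print(est.coef_[0])            # 0.0, because the intercept is always zero for loss='coxph'
print(est.train_score_.shape)  # (n_estimators,) -> (50,), in-bag loss per boosting stage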

References

[1] Hothorn, T., Bühlmann, P., Dudoit, S., Molinaro, A., van der Laan, M. J., "Survival ensembles", Biostatistics, 7(3), 355–373, 2006.
__init__(loss='coxph', learning_rate=0.1, n_estimators=100, subsample=1.0, dropout_rate=0, random_state=None, verbose=0)[source]

Initialize self. See help(type(self)) for accurate signature.

Methods

__init__([loss, learning_rate, …]) Initialize self.
fit(X, y[, sample_weight]) Fit estimator.
predict(X) Predict risk scores.
score(X, y) Returns the concordance index of the prediction.

Attributes

coef_ Return the aggregated coefficients.
feature_importances_
coef_

Return the aggregated coefficients.

Returns: coef_ – Coefficients of features. The first element denotes the intercept.
Return type: ndarray, shape = (n_features + 1,)
fit(X, y, sample_weight=None)[source]

Fit estimator.

Parameters:
  • X (array-like, shape = (n_samples, n_features)) – Data matrix.
  • y (structured array, shape = (n_samples,)) – A structured array containing the binary event indicator as first field, and time of event or time of censoring as second field.
  • sample_weight (array-like, shape = (n_samples,), optional) – Weights given to each sample. If omitted, all samples have weight 1.
Returns: self
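
A minimal sketch of calling fit, assuming a small hand-built data matrix and a structured outcome array whose first field is the boolean event indicator and whose second field is the observed time (the field names used here are arbitrary):

import numpy as np
from sksurv.ensemble import ComponentwiseGradientBoostingSurvivalAnalysis

X = np.array([
    [1.0, 0.5],
    [0.3, 1.2],
    [2.1, 0.1],
    [0.7, 0.9],
    [1.5, 1.1],
    [0.2, 0.4],
])

# Structured outcome array: event indicator first, time second.
y = np.array(
    [(True, 5.0), (False, 12.0), (True, 3.5), (False, 8.0), (True, 2.0), (False, 10.0)],
    dtype=[("event", bool), ("time", float)],
)

est = ComponentwiseGradientBoostingSurvivalAnalysis(n_estimators=25, random_state=0)
est = est.fit(X, y)  # fit returns the estimator itself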

predict(X)[source]

Predict risk scores.

If loss='coxph', predictions can be interpreted as the log hazard ratio corresponding to the linear predictor of a Cox proportional hazards model. If loss='squared' or loss='ipcwls', predictions are the time to event.

Parameters: X (array-like, shape = (n_samples, n_features)) – Data matrix.
Returns: risk_score – Predicted risk scores.
Return type: array, shape = (n_samples,)
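
The snippet below is an illustrative example on synthetic data; with the default loss='coxph', higher predicted scores correspond to higher risk:

import numpy as np
from sksurv.ensemble import ComponentwiseGradientBoostingSurvivalAnalysis
from sksurv.util import Surv

rng = np.random.RandomState(1)
X = rng.standard_normal((100, 3))
y = Surv.from_arrays(
    event=rng.uniform(size=100) < 0.6,
    time=rng.exponential(scale=np.exp(-X[:, 0])),
)

est = ComponentwiseGradientBoostingSurvivalAnalysis(loss="coxph", random_state=0)
est.fit(X, y)

# Scores are on the log hazard ratio scale for loss='coxph'.
risk_scores = est.predict(X[:5])
print(risk_scores.shape)  # (5,)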
score(X, y)[source]

Returns the concordance index of the prediction.

Parameters:
  • X (array-like, shape = (n_samples, n_features)) – Test samples.
  • y (structured array, shape = (n_samples,)) – A structured array containing the binary event indicator as first field, and time of event or time of censoring as second field.
Returns: cindex – Estimated concordance index.
Return type: float
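
As an illustrative sketch, the concordance index can be evaluated on a held-out split of synthetic data; scikit-learn's train_test_split is used here only for demonstration:

import numpy as np
from sklearn.model_selection import train_test_split
from sksurv.ensemble import ComponentwiseGradientBoostingSurvivalAnalysis
from sksurv.util import Surv

rng = np.random.RandomState(2)
X = rng.standard_normal((300, 4))
y = Surv.from_arrays(
    event=rng.uniform(size=300) < 0.7,
    time=rng.exponential(scale=np.exp(-X[:, 0])),
)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

est = ComponentwiseGradientBoostingSurvivalAnalysis(n_estimators=100, random_state=0)
est.fit(X_train, y_train)

# A concordance index of 0.5 corresponds to random ranking, 1.0 to a perfect ranking.
cindex = est.score(X_test, y_test)
print(cindex)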