sksurv.linear_model.IPCRidge
- class sksurv.linear_model.IPCRidge(alpha=1.0, *, fit_intercept=True, copy_X=True, max_iter=None, tol=0.001, solver='auto', positive=False, random_state=None)
Accelerated failure time model with inverse probability of censoring weights.
This model assumes a regression model of the form
\[\log y = \beta_0 + \mathbf{X} \beta + \epsilon\]
L2-shrinkage is applied to the coefficients \(\beta\), and each sample is weighted by the inverse probability of censoring to account for right censoring (under the assumption that censoring is independent of the features, i.e., random censoring).
See [1] for further description.
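A minimal usage sketch on synthetic right-censored data (the generating model and all values below are illustrative, not taken from [1]):

>>> import numpy as np
>>> from sksurv.linear_model import IPCRidge
>>> from sksurv.util import Surv
>>>
>>> rng = np.random.RandomState(0)
>>> X = rng.normal(size=(100, 3))
>>> # Log-linear event times with independent uniform censoring times.
>>> time = np.exp(1.0 + X @ np.array([0.5, -0.25, 0.1]) + 0.3 * rng.normal(size=100))
>>> censoring = rng.uniform(1.0, 15.0, size=100)
>>> y = Surv.from_arrays(event=time <= censoring, time=np.minimum(time, censoring))
>>>
>>> model = IPCRidge(alpha=1.0).fit(X, y)
>>> pred = model.predict(X)  # predicted event times on the original scale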
- Parameters
alpha (float, optional, default: 1.0) –
Small positive values of alpha improve the conditioning of the problem and reduce the variance of the estimates. alpha must be a non-negative float, i.e., in [0, inf).
For numerical reasons, using alpha = 0 is not advised.
fit_intercept (bool, default: True) – Whether to fit the intercept for this model. If set to False, no intercept will be used in calculations (i.e. X and y are expected to be centered).
copy_X (bool, default: True) – If True, X will be copied; else, it may be overwritten.
max_iter (int, default: None) – Maximum number of iterations for conjugate gradient solver. For ‘sparse_cg’ and ‘lsqr’ solvers, the default value is determined by scipy.sparse.linalg. For ‘sag’ solver, the default value is 1000. For ‘lbfgs’ solver, the default value is 15000.
tol (float, default: 1e-3) – Precision of the solution. Note that tol has no effect for solvers ‘svd’ and ‘cholesky’.
solver ({'auto', 'svd', 'cholesky', 'lsqr', 'sparse_cg', 'sag', 'saga', 'lbfgs'}, default: 'auto') –
Solver to use in the computational routines (a short sketch combining these options follows the parameter list):
‘auto’ chooses the solver automatically based on the type of data.
‘svd’ uses a Singular Value Decomposition of X to compute the Ridge coefficients. It is the most stable solver, in particular more stable for singular matrices than ‘cholesky’, at the cost of being slower.
‘cholesky’ uses the standard scipy.linalg.solve function to obtain a closed-form solution.
‘sparse_cg’ uses the conjugate gradient solver as found in scipy.sparse.linalg.cg. As an iterative algorithm, this solver is more appropriate than ‘cholesky’ for large-scale data (possibility to set tol and max_iter).
‘lsqr’ uses the dedicated regularized least-squares routine scipy.sparse.linalg.lsqr. It is the fastest and uses an iterative procedure.
‘sag’ uses Stochastic Average Gradient descent, and ‘saga’ uses its improved, unbiased version named SAGA. Both methods use an iterative procedure and are often faster than other solvers when both n_samples and n_features are large. Note that the fast convergence of ‘sag’ and ‘saga’ is only guaranteed on features with approximately the same scale; you can preprocess the data with a scaler from sklearn.preprocessing.
‘lbfgs’ uses the L-BFGS-B algorithm implemented in scipy.optimize.minimize. It can be used only when positive is True.
All solvers except ‘svd’ support both dense and sparse data. However, only ‘lsqr’, ‘sag’, ‘sparse_cg’, and ‘lbfgs’ support sparse input when fit_intercept is True.
positive (bool, default: False) – When set to True, forces the coefficients to be positive. Only the ‘lbfgs’ solver is supported in this case.
random_state (int, RandomState instance, default: None) – Used when solver == ‘sag’ or ‘saga’ to shuffle the data.
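As an illustration of how these options combine, the following sketch constructs estimators with different solver settings (the parameter values are arbitrary):

>>> from sksurv.linear_model import IPCRidge
>>>
>>> # Stable closed-form fit via SVD, at the cost of speed.
>>> est_svd = IPCRidge(alpha=10.0, solver="svd")
>>> # Iterative solver with an explicit tolerance and iteration cap.
>>> est_lsqr = IPCRidge(alpha=10.0, solver="lsqr", tol=1e-6, max_iter=500)
>>> # Non-negative coefficients require the 'lbfgs' solver.
>>> est_pos = IPCRidge(alpha=10.0, solver="lbfgs", positive=True)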
- coef_
Weight vector.
- Type
ndarray, shape = (n_features,)
- intercept_
Independent term in decision function. Set to 0.0 if fit_intercept = False.
- Type
float or ndarray of shape (n_targets,)
- n_iter_
Actual number of iterations for each target. Available only for the ‘sag’ and ‘lsqr’ solvers. Other solvers will return None.
- Type
None or ndarray of shape (n_targets,)
- n_features_in_
Number of features seen during fit.
- Type
int
- feature_names_in_
Names of features seen during fit. Defined only when X has feature names that are all strings.
- Type
ndarray of shape (n_features_in_,)
References
- [1] W. Stute, “Consistent estimation under random censorship when covariables are present”, Journal of Multivariate Analysis, vol. 45, no. 1, pp. 89-103, 1993. doi:10.1006/jmva.1993.1028.
- __init__(alpha=1.0, *, fit_intercept=True, copy_X=True, max_iter=None, tol=0.001, solver='auto', positive=False, random_state=None)
Methods

__init__([alpha, fit_intercept, copy_X, ...])
fit(X, y) – Build an accelerated failure time model.
get_params([deep]) – Get parameters for this estimator.
predict(X) – Predict using the linear accelerated failure time model.
score(X, y[, sample_weight]) – Return the coefficient of determination of the prediction.
set_params(**params) – Set the parameters of this estimator.
- fit(X, y)
Build an accelerated failure time model.
- Parameters
X (array-like, shape = (n_samples, n_features)) – Data matrix.
y (structured array, shape = (n_samples,)) – A structured array containing the binary event indicator as first field, and time of event or time of censoring as second field.
- Return type
self
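For illustration, a structured array with this layout can be built with sksurv.util.Surv (a small sketch; the values are made up):

>>> import numpy as np
>>> from sksurv.util import Surv
>>>
>>> # Event indicator first, observed time second.
>>> y = Surv.from_arrays(event=np.array([True, False, True]),
...                      time=np.array([5.2, 7.1, 2.3]))
>>> y.dtype.names
('event', 'time')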
- get_params(deep=True)
Get parameters for this estimator.
- Parameters
deep (bool, default=True) – If True, will return the parameters for this estimator and contained subobjects that are estimators.
- Returns
params – Parameter names mapped to their values.
- Return type
dict
- predict(X)
Predict using the linear accelerated failure time model.
- Parameters
X ({array-like, sparse matrix}, shape = (n_samples, n_features)) – Samples.
- Returns
C – Returns predicted values on original scale (NOT log scale).
- Return type
array, shape = (n_samples,)
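Because the model is linear on the log scale, the returned values correspond to the exponentiated linear predictor, which the note above (“original scale, NOT log scale”) together with the model form implies. A sketch of that relationship, reusing np, X, and model from the usage example above:

>>> log_pred = X @ model.coef_ + model.intercept_
>>> bool(np.allclose(model.predict(X), np.exp(log_pred)))
True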
- score(X, y, sample_weight=None)
Return the coefficient of determination of the prediction.
The coefficient of determination \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get a \(R^2\) score of 0.0.
- Parameters
X (array-like of shape (n_samples, n_features)) – Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead, with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in fitting the estimator.
y (array-like of shape (n_samples,) or (n_samples, n_outputs)) – True values for X.
sample_weight (array-like of shape (n_samples,), default=None) – Sample weights.
- Returns
score – \(R^2\) of self.predict(X) w.r.t. y.
- Return type
float
Notes
The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with the default value of r2_score(). This influences the score method of all the multioutput regressors (except for MultiOutputRegressor).
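A small worked example of the definition above (illustrative numbers only, not from a fitted model):

>>> import numpy as np
>>>
>>> y_true = np.array([3.0, 5.0, 7.0])
>>> y_pred = np.array([2.5, 5.0, 8.0])
>>> u = ((y_true - y_pred) ** 2).sum()         # residual sum of squares: 1.25
>>> v = ((y_true - y_true.mean()) ** 2).sum()  # total sum of squares: 8.0
>>> float(1 - u / v)
0.84375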
- set_params(**params)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.
- Parameters
**params (dict) – Estimator parameters.
- Returns
self – Estimator instance.
- Return type
estimator instance
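For instance (a minimal sketch using parameters from the signature above):

>>> from sksurv.linear_model import IPCRidge
>>>
>>> est = IPCRidge()
>>> est = est.set_params(alpha=0.5, solver="lsqr")
>>> est.get_params()["alpha"]
0.5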