skpro.survival.ensemble.SurvGradBoostSkSurv#
- class skpro.survival.ensemble.SurvGradBoostSkSurv(loss='coxph', learning_rate=0.1, n_estimators=100, subsample=1.0, criterion='friedman_mse', min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_depth=3, min_impurity_decrease=0.0, max_features=None, max_leaf_nodes=None, validation_fraction=0.1, n_iter_no_change=None, tol=0.0001, dropout_rate=0.0, ccp_alpha=0.0, verbose=0, random_state=None)[source]#
Gradient-boosted survival trees with proportional hazards loss, from sksurv.
Direct interface to sksurv.ensemble.boosting.GradientBoostingSurvivalAnalysis.
In each stage, a regression tree is fit on the negative gradient of the loss function. For more details on gradient boosting see [1] and [2]. If loss=’coxph’, the partial likelihood of the proportional hazards model is optimized as described in [3]. If loss=’ipcwls’, the accelerated failure time model with inverse-probability of censoring weighted least squares error is optimized as described in [4]. When using a non-zero dropout_rate, regularization is applied during training following [5].
- Parameters:
- loss{‘coxph’, ‘squared’, ‘ipcwls’}, optional, default: ‘coxph’
loss function to be optimized. ‘coxph’ refers to partial likelihood loss of Cox’s proportional hazards model. The loss ‘squared’ minimizes a squared regression loss that ignores predictions beyond the time of censoring, and ‘ipcwls’ refers to inverse-probability of censoring weighted least squares error.
- learning_ratefloat, optional, default: 0.1
Learning rate shrinks the contribution of each tree by learning_rate. There is a trade-off between learning_rate and n_estimators. Values must be in the range [0.0, inf).
- n_estimatorsint, default: 100
The number of regression trees to create. Gradient boosting is fairly robust to over-fitting so a large number usually results in better performance. Values must be in the range [1, inf).
- subsamplefloat, optional, default: 1.0
The fraction of samples to be used for fitting the individual base learners. If smaller than 1.0 this results in Stochastic Gradient Boosting. subsample interacts with the parameter n_estimators. Choosing subsample < 1.0 leads to a reduction of variance and an increase in bias. Values must be in the range (0.0, 1.0].
- criterionstring, optional, “squared_error” or “friedman_mse” (default)
The function to measure the quality of a split. Supported criteria are “friedman_mse” for the mean squared error with improvement score by Friedman, “squared_error” for mean squared error. The default value of “friedman_mse” is generally the best as it can provide a better approximation in some cases.
- min_samples_splitint or float, optional, default: 2
The minimum number of samples required to split an internal node:
If int, values must be in the range [2, inf).
If float, values must be in the range (0.0, 1.0] and min_samples_split will be ceil(min_samples_split * n_samples).
- min_samples_leafint or float, default: 1
The minimum number of samples required to be at a leaf node. A split point at any depth will only be considered if it leaves at least min_samples_leaf training samples in each of the left and right branches. This may have the effect of smoothing the model, especially in regression.
If int, values must be in the range [1, inf).
If float, values must be in the range (0.0, 1.0) and min_samples_leaf will be ceil(min_samples_leaf * n_samples).
- min_weight_fraction_leaffloat, optional, default: 0.0
The minimum weighted fraction of the sum total of weights (of all the input samples) required to be at a leaf node. Samples have equal weight when sample_weight is not provided. Values must be in the range [0.0, 0.5].
- max_depthint or None, optional, default: 3
Maximum depth of the individual regression estimators. The maximum depth limits the number of nodes in the tree. Tune this parameter for best performance; the best value depends on the interaction of the input variables. If None, then nodes are expanded until all leaves are pure or until all leaves contain less than min_samples_split samples. If int, values must be in the range [1, inf).
- min_impurity_decreasefloat, optional, default: 0.0
A node will be split if this split induces a decrease of the impurity greater than or equal to this value.
The weighted impurity decrease equation is the following:
N_t / N * (impurity - N_t_R / N_t * right_impurity - N_t_L / N_t * left_impurity)
where N is the total number of samples, N_t is the number of samples at the current node, N_t_L is the number of samples in the left child, and N_t_R is the number of samples in the right child. N, N_t, N_t_R and N_t_L all refer to the weighted sum, if sample_weight is passed.
- max_featuresint, float, string or None, optional, default: None
The number of features to consider when looking for the best split:
If int, values must be in the range [1, inf).
If float, values must be in the range (0.0, 1.0] and the features considered at each split will be max(1, int(max_features * n_features_in_)).
If ‘sqrt’, then max_features=sqrt(n_features).
If ‘log2’, then max_features=log2(n_features).
If None, then max_features=n_features.
Choosing max_features < n_features leads to a reduction of variance and an increase in bias.
Note: the search for a split does not stop until at least one valid partition of the node samples is found, even if it requires to effectively inspect more than max_features features.
- max_leaf_nodesint or None, optional, default: None
Grow trees with max_leaf_nodes in best-first fashion. Best nodes are defined as relative reduction in impurity. Values must be in the range [2, inf). If None, then unlimited number of leaf nodes.
- validation_fractionfloat, default: 0.1
The proportion of training data to set aside as validation set for early stopping. Values must be in the range (0.0, 1.0). Only used if n_iter_no_change is set to an integer.
- n_iter_no_changeint, default: None
n_iter_no_change is used to decide if early stopping will be used to terminate training when the validation score is not improving. By default it is set to None to disable early stopping. If set to a number, it will set aside a validation_fraction share of the training data as validation set and terminate training when the validation score has not improved in any of the previous n_iter_no_change iterations. The split is stratified. Values must be in the range [1, inf).
- tolfloat, default: 1e-4
Tolerance for the early stopping. When the loss is not improving by at least tol for n_iter_no_change iterations (if set to a number), the training stops. Values must be in the range [0.0, inf).
- dropout_ratefloat, optional, default: 0.0
If larger than zero, the residuals at each iteration are only computed from a random subset of base learners. The value corresponds to the percentage of base learners that are dropped. In each iteration, at least one base learner is dropped. This is an alternative regularization to shrinkage, i.e., setting learning_rate < 1.0. Values must be in the range [0.0, 1.0).
- ccp_alphanon-negative float, optional, default: 0.0
Complexity parameter used for Minimal Cost-Complexity Pruning. The subtree with the largest cost complexity that is smaller than ccp_alpha will be chosen. By default, no pruning is performed. Values must be in the range [0.0, inf).
- verboseint, default: 0
Enable verbose output. If 1, then it prints progress and performance once in a while (the more trees, the lower the frequency). If greater than 1, then it prints progress and performance for every tree. Values must be in the range [0, inf).
- random_stateint seed, RandomState instance, or None, default: None
Controls the random seed given to each Tree estimator at each boosting iteration. In addition, it controls the random permutation of the features at each split. It also controls the random splitting of the training data to obtain a validation set if n_iter_no_change is not None. Pass an int for reproducible output across multiple function calls.
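To make the fraction-valued parameters and the impurity formula above concrete, here is a small self-contained sketch (plain Python, no skpro needed) of the ceil(fraction * n_samples) rule and the weighted impurity decrease; all numbers are illustrative:

```python
from math import ceil

# float values of min_samples_split / min_samples_leaf are converted
# to sample counts via ceil(fraction * n_samples)
n_samples = 250
min_samples_split = 0.25   # float in (0.0, 1.0]
min_samples_leaf = 0.125   # float in (0.0, 1.0)

split_count = ceil(min_samples_split * n_samples)  # 63
leaf_count = ceil(min_samples_leaf * n_samples)    # 32

def impurity_decrease(N, N_t, N_t_L, N_t_R, impurity, left_impurity, right_impurity):
    # weighted impurity decrease, per the min_impurity_decrease formula above
    return N_t / N * (
        impurity - N_t_R / N_t * right_impurity - N_t_L / N_t * left_impurity
    )

dec = impurity_decrease(
    N=100, N_t=40, N_t_L=25, N_t_R=15,
    impurity=0.5, left_impurity=0.2, right_impurity=0.1,
)
# a node is split only if dec >= min_impurity_decrease
```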
References
[1]J. H. Friedman, “Greedy function approximation: A gradient boosting machine,” The Annals of Statistics, 29(5), 1189–1232, 2001.
[2]J. H. Friedman, “Stochastic gradient boosting,” Computational Statistics & Data Analysis, 38(4), 367–378, 2002.
[3]G. Ridgeway, “The state of boosting,” Computing Science and Statistics, 172–181, 1999.
[4]Hothorn, T., Bühlmann, P., Dudoit, S., Molinaro, A., van der Laan, M. J., “Survival ensembles”, Biostatistics, 7(3), 355-73, 2006.
[5]K. V. Rashmi and R. Gilad-Bachrach, “DART: Dropouts meet multiple additive regression trees,” in 18th International Conference on Artificial Intelligence and Statistics, 2015, 489–497.
- Attributes:
- n_estimators_int
The number of estimators as selected by early stopping (if n_iter_no_change is specified). Otherwise it is set to n_estimators.
- feature_importances_ndarray, shape = (n_features,)
The feature importances (the higher, the more important the feature).
- estimators_ndarray of DecisionTreeRegressor, shape = (n_estimators, 1)
The collection of fitted sub-estimators.
- train_score_ndarray, shape = (n_estimators,)
The i-th score train_score_[i] is the deviance (= loss) of the model at iteration i on the in-bag sample. If subsample == 1 this is the deviance on the training data.
- oob_improvement_ndarray, shape = (n_estimators,)
The improvement in loss (= deviance) on the out-of-bag samples relative to the previous iteration. oob_improvement_[0] is the improvement in loss of the first stage over the init estimator.
- unique_times_array of shape = (n_unique_times,)
Unique time points.
Methods
check_is_fitted()
Check if the estimator has been fitted.
clone()
Obtain a clone of the object with same hyper-parameters.
clone_tags(estimator[, tag_names])
Clone tags from another estimator as dynamic override.
create_test_instance([parameter_set])
Construct Estimator instance if possible.
create_test_instances_and_names([parameter_set])
Create list of all test instances and a list of names for them.
fit(X, y[, C])
Fit regressor to training data.
get_class_tag(tag_name[, tag_value_default])
Get a class tag's value.
get_class_tags()
Get class tags from the class and all its parent classes.
get_config()
Get config flags for self.
get_fitted_params([deep])
Get fitted parameters.
get_param_defaults()
Get object's parameter defaults.
get_param_names()
Get object's parameter names.
get_params([deep])
Get a dict of parameters values for this object.
get_tag(tag_name[, tag_value_default, ...])
Get tag value from estimator class and dynamic tag overrides.
get_tags()
Get tags from estimator class and dynamic tag overrides.
get_test_params([parameter_set])
Return testing parameter settings for the estimator.
is_composite()
Check if the object is composed of other BaseObjects.
predict(X)
Predict labels for data from features.
predict_interval([X, coverage])
Compute/return interval predictions.
predict_proba(X)
Predict distribution over labels for data from features.
predict_quantiles([X, alpha])
Compute/return quantile predictions.
predict_var([X])
Compute/return variance predictions.
reset()
Reset the object to a clean post-init state.
set_config(**config_dict)
Set config flags to given values.
set_params(**params)
Set the parameters of this object.
set_random_state([random_state, deep, ...])
Set random_state pseudo-random seed parameters for self.
set_tags(**tag_dict)
Set dynamic tags to given values.
- classmethod get_test_params(parameter_set='default')[source]#
Return testing parameter settings for the estimator.
- Parameters:
- parameter_setstr, default=”default”
Name of the set of test parameters to return, for use in tests. If no special parameters are defined for a value, will return “default” set.
- Returns:
- paramsdict or list of dict, default = {}
Parameters to create testing instances of the class. Each dict contains parameters to construct an “interesting” test instance, i.e., MyClass(**params) or MyClass(**params[i]) creates a valid test instance. create_test_instance uses the first (or only) dictionary in params.
- check_is_fitted()[source]#
Check if the estimator has been fitted.
Inspects object’s _is_fitted attribute that should initialize to False during object construction, and be set to True in calls to an object’s fit method.
- Raises:
- NotFittedError
If the estimator has not been fitted yet.
- clone()[source]#
Obtain a clone of the object with same hyper-parameters.
A clone is a different object without shared references, in post-init state. This function is equivalent to returning sklearn.clone of self.
- Raises:
- RuntimeError if the clone is non-conforming, due to faulty __init__.
Notes
If successful, equal in value to type(self)(**self.get_params(deep=False)).
- clone_tags(estimator, tag_names=None)[source]#
Clone tags from another estimator as dynamic override.
- Parameters:
- estimatorestimator inheriting from BaseEstimator
- tag_namesstr or list of str, default = None
Names of tags to clone. If None then all tags in estimator are used as tag_names.
- Returns:
- Self
Reference to self.
Notes
Changes object state by setting tag values in tag_set from estimator as dynamic tags in self.
- classmethod create_test_instance(parameter_set='default')[source]#
Construct Estimator instance if possible.
- Parameters:
- parameter_setstr, default=”default”
Name of the set of test parameters to return, for use in tests. If no special parameters are defined for a value, will return “default” set.
- Returns:
- instanceinstance of the class with default parameters
Notes
get_test_params can return dict or list of dict. This function takes first or single dict that get_test_params returns, and constructs the object with that.
- classmethod create_test_instances_and_names(parameter_set='default')[source]#
Create list of all test instances and a list of names for them.
- Parameters:
- parameter_setstr, default=”default”
Name of the set of test parameters to return, for use in tests. If no special parameters are defined for a value, will return “default” set.
- Returns:
- objslist of instances of cls
i-th instance is cls(**cls.get_test_params()[i])
- nameslist of str, same length as objs
i-th element is the name of the i-th instance of obj in tests. Convention is {cls.__name__}-{i} if more than one instance, otherwise {cls.__name__}.
- fit(X, y, C=None)[source]#
Fit regressor to training data.
- Writes to self:
Sets fitted model attributes ending in “_”.
Changes state to “fitted” = sets is_fitted flag to True
- Parameters:
- Xpandas DataFrame
feature instances to fit regressor to
- ypd.DataFrame, must be same length as X
labels to fit regressor to
- Cpd.DataFrame, optional (default=None)
censoring information for survival analysis; should have the same column name as y, and the same length as X and y. Entries should be 0 or 1 (float or int): 0 = uncensored, 1 = (right) censored. If None, all observations are assumed to be uncensored.
- Returns:
- selfreference to self
- classmethod get_class_tag(tag_name, tag_value_default=None)[source]#
Get a class tag’s value.
Does not return information from dynamic tags (set via set_tags or clone_tags) that are defined on instances.
- Parameters:
- tag_namestr
Name of tag value.
- tag_value_defaultany
Default/fallback value if tag is not found.
- Returns:
- tag_value
Value of the tag_name tag in self. If not found, returns tag_value_default.
- classmethod get_class_tags()[source]#
Get class tags from the class and all its parent classes.
Retrieves tag: value pairs from _tags class attribute. Does not return information from dynamic tags (set via set_tags or clone_tags) that are defined on instances.
- Returns:
- collected_tagsdict
Dictionary of class tag name: tag value pairs. Collected from _tags class attribute via nested inheritance.
- get_config()[source]#
Get config flags for self.
- Returns:
- config_dictdict
Dictionary of config name : config value pairs. Collected from _config class attribute via nested inheritance and then any overrides and new configs from _config_dynamic object attribute.
- get_fitted_params(deep=True)[source]#
Get fitted parameters.
- State required:
Requires state to be “fitted”.
- Parameters:
- deepbool, default=True
Whether to return fitted parameters of components.
If True, will return a dict of parameter name : value for this object, including fitted parameters of fittable components (= BaseEstimator-valued parameters).
If False, will return a dict of parameter name : value for this object, but not include fitted parameters of components.
- Returns:
- fitted_paramsdict with str-valued keys
Dictionary of fitted parameters, paramname : paramvalue. Key-value pairs include:
always: all fitted parameters of this object, as via get_param_names; values are the fitted parameter values for that key, of this object
if deep=True, also contains keys/value pairs of component parameters; parameters of components are indexed as [componentname]__[paramname]; all parameters of componentname appear as paramname with its value
if deep=True, also contains arbitrary levels of component recursion, e.g., [componentname]__[componentcomponentname]__[paramname], etc.
- classmethod get_param_defaults()[source]#
Get object’s parameter defaults.
- Returns:
- default_dict: dict[str, Any]
Keys are all parameters of cls that have a default defined in __init__; values are the defaults, as defined in __init__.
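A sketch of how such defaults can be collected with the standard library's inspect module; ExampleEstimator and param_defaults are hypothetical stand-ins, not skpro's actual implementation:

```python
import inspect

class ExampleEstimator:
    # hypothetical stand-in with an skpro-style __init__
    def __init__(self, loss="coxph", learning_rate=0.1, n_estimators=100, random_state=None):
        self.loss = loss
        self.learning_rate = learning_rate
        self.n_estimators = n_estimators
        self.random_state = random_state

def param_defaults(cls):
    """Collect __init__ parameters that define a default value."""
    sig = inspect.signature(cls.__init__)
    return {
        name: p.default
        for name, p in sig.parameters.items()
        if name != "self" and p.default is not inspect.Parameter.empty
    }

defaults = param_defaults(ExampleEstimator)
```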
- classmethod get_param_names()[source]#
Get object’s parameter names.
- Returns:
- param_names: list[str]
Alphabetically sorted list of parameter names of cls.
- get_params(deep=True)[source]#
Get a dict of parameters values for this object.
- Parameters:
- deepbool, default=True
Whether to return parameters of components.
If True, will return a dict of parameter name : value for this object, including parameters of components (= BaseObject-valued parameters).
If False, will return a dict of parameter name : value for this object, but not include parameters of components.
- Returns:
- paramsdict with str-valued keys
Dictionary of parameters, paramname : paramvalue. Key-value pairs include:
always: all parameters of this object, as via get_param_names; values are the parameter values for that key, of this object, always identical to values passed at construction
if deep=True, also contains keys/value pairs of component parameters; parameters of components are indexed as [componentname]__[paramname]; all parameters of componentname appear as paramname with its value
if deep=True, also contains arbitrary levels of component recursion, e.g., [componentname]__[componentcomponentname]__[paramname], etc.
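The [componentname]__[paramname] flattening can be sketched as follows; flatten_params and the component names are illustrative assumptions, not skpro's implementation:

```python
def flatten_params(own_params, components):
    """Flatten component params into componentname__paramname keys.

    components: dict of component name -> that component's param dict.
    """
    flat = dict(own_params)
    for comp_name, comp_params in components.items():
        for pname, pvalue in comp_params.items():
            flat[f"{comp_name}__{pname}"] = pvalue
    return flat

params = flatten_params(
    {"n_estimators": 100},
    {"base_tree": {"max_depth": 3, "criterion": "friedman_mse"}},
)
# keys: "n_estimators", "base_tree__max_depth", "base_tree__criterion"
```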
- get_tag(tag_name, tag_value_default=None, raise_error=True)[source]#
Get tag value from estimator class and dynamic tag overrides.
- Parameters:
- tag_namestr
Name of tag to be retrieved
- tag_value_defaultany type, optional; default=None
Default/fallback value if tag is not found
- raise_errorbool
whether a ValueError is raised when the tag is not found
- Returns:
- tag_valueAny
Value of the tag_name tag in self. If not found, returns an error if raise_error is True, otherwise it returns tag_value_default.
- Raises:
- ValueError if raise_error is True, i.e., if tag_name is not in self.get_tags().keys()
- get_tags()[source]#
Get tags from estimator class and dynamic tag overrides.
- Returns:
- collected_tagsdict
Dictionary of tag name : tag value pairs. Collected from _tags class attribute via nested inheritance and then any overrides and new tags from _tags_dynamic object attribute.
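The nested-inheritance collection of _tags can be sketched as a walk over the method resolution order; collect_tags and the example classes below are illustrative, not skpro's actual code:

```python
def collect_tags(cls):
    # walk parents first so that subclasses override inherited tag values
    collected = {}
    for klass in reversed(cls.__mro__):
        collected.update(getattr(klass, "_tags", {}))
    return collected

class Base:
    _tags = {"capability:survival": False, "object_type": "regressor_proba"}

class Survival(Base):
    _tags = {"capability:survival": True}

tags = collect_tags(Survival)
# subclass value overrides parent, inherited tags are kept
```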
- is_composite()[source]#
Check if the object is composed of other BaseObjects.
A composite object is an object which contains objects as parameters. Called on an instance, since this may differ by instance.
- Returns:
- composite: bool
Whether an object has any parameters whose values are BaseObjects.
- property is_fitted[source]#
Whether fit has been called.
Inspects object’s _is_fitted attribute that should initialize to False during object construction, and be set to True in calls to an object’s fit method.
- Returns:
- bool
Whether the estimator has been fit.
- predict(X)[source]#
Predict labels for data from features.
- State required:
Requires state to be “fitted”.
- Accesses in self:
Fitted model attributes ending in “_”
- Parameters:
- Xpandas DataFrame, must have same columns as X in fit
data to predict labels for
- Returns:
- ypandas DataFrame, same length as X
labels predicted for X
- predict_interval(X=None, coverage=0.9)[source]#
Compute/return interval predictions.
If coverage is iterable, multiple intervals will be calculated.
- State required:
Requires state to be “fitted”.
- Accesses in self:
Fitted model attributes ending in “_”.
- Parameters:
- Xpandas DataFrame, must have same columns as X in fit
data to predict labels for
- coveragefloat or list of float of unique values, optional (default=0.90)
nominal coverage(s) of predictive interval(s)
- Returns:
- pred_intpd.DataFrame
Column has multi-index: first level is variable name from y in fit, second level coverage fractions for which intervals were computed, in the same order as in input coverage. Third level is string “lower” or “upper”, for lower/upper interval end. Row index is equal to row index of X. Entries are lower/upper bounds of interval predictions, for var in col index, at nominal coverage in second col index, lower/upper depending on third col index, for the row index. Upper/lower interval ends are equivalent to quantile predictions at alpha = 0.5 - c/2, 0.5 + c/2 for c in coverage.
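The column layout and the interval-quantile correspondence can be illustrated with a hand-built frame; the variable name time and the numbers are hypothetical:

```python
import pandas as pd

coverage = [0.9, 0.5]
# interval ends correspond to quantile predictions at alpha = 0.5 -/+ c/2
alphas = [(0.5 - c / 2, 0.5 + c / 2) for c in coverage]

# three-level column index: (variable, coverage, lower/upper)
columns = pd.MultiIndex.from_tuples(
    [("time", c, end) for c in coverage for end in ("lower", "upper")]
)
pred_int = pd.DataFrame([[2.1, 15.3, 4.0, 9.8]], columns=columns)
lower_90 = pred_int[("time", 0.9, "lower")].iloc[0]
```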
- predict_proba(X)[source]#
Predict distribution over labels for data from features.
- State required:
Requires state to be “fitted”.
- Accesses in self:
Fitted model attributes ending in “_”
- Parameters:
- Xpandas DataFrame, must have same columns as X in fit
data to predict labels for
- Returns:
- yskpro BaseDistribution, same length as X
labels predicted for X
- predict_quantiles(X=None, alpha=None)[source]#
Compute/return quantile predictions.
If alpha is iterable, multiple quantiles will be calculated.
- State required:
Requires state to be “fitted”.
- Accesses in self:
Fitted model attributes ending in “_”.
- Parameters:
- Xpandas DataFrame, must have same columns as X in fit
data to predict labels for
- alphafloat or list of float of unique values, optional (default=[0.05, 0.95])
A probability or list of, at which quantile predictions are computed.
- Returns:
- quantilespd.DataFrame
Column has multi-index: first level is variable name from y in fit, second level being the values of alpha passed to the function. Row index is equal to row index of X. Entries are quantile predictions, for var in col index, at quantile probability in second col index, for the row index.
- predict_var(X=None)[source]#
Compute/return variance predictions.
- State required:
Requires state to be “fitted”.
- Accesses in self:
Fitted model attributes ending in “_”.
- Parameters:
- Xpandas DataFrame, must have same columns as X in fit
data to predict labels for
- Returns:
- pred_varpd.DataFrame
Column names are exactly those of y passed in fit. Row index is equal to row index of X. Entries are variance prediction, for var in col index. A variance prediction for given variable and fh index is a predicted variance for that variable and index, given observed data.
- reset()[source]#
Reset the object to a clean post-init state.
Using reset, runs __init__ with current values of hyper-parameters (result of get_params). This removes any object attributes, except:
hyper-parameters = arguments of __init__
object attributes containing double-underscores, i.e., the string “__”
Class and object methods, and class attributes are also unaffected.
- Returns:
- self
Instance of class reset to a clean post-init state but retaining the current hyper-parameter values.
Notes
Equivalent to sklearn.clone but overwrites self. After self.reset() call, self is equal in value to type(self)(**self.get_params(deep=False))
- set_config(**config_dict)[source]#
Set config flags to given values.
- Parameters:
- config_dictdict
Dictionary of config name : config value pairs.
- Returns:
- selfreference to self.
Notes
Changes object state, copies configs in config_dict to self._config_dynamic.
- set_params(**params)[source]#
Set the parameters of this object.
The method works on simple estimators as well as on composite objects. Parameter key strings <component>__<parameter> can be used for composites, i.e., objects that contain other objects, to access <parameter> in the component <component>. The string <parameter>, without <component>__, can also be used if this makes the reference unambiguous, e.g., there are no two parameters of components with the name <parameter>.
- Parameters:
- **paramsdict
BaseObject parameters, keys must be <component>__<parameter> strings. __ suffixes can alias full strings, if unique among get_params keys.
- Returns:
- selfreference to self (after parameters have been set)
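The <component>__<parameter> routing can be sketched as follows; route_params and the parameter names are illustrative assumptions, not skpro's implementation:

```python
def route_params(params):
    # keys containing "__" are forwarded to the named component,
    # the rest are kept as the object's own parameters
    own, per_component = {}, {}
    for key, value in params.items():
        if "__" in key:
            comp, _, pname = key.partition("__")
            per_component.setdefault(comp, {})[pname] = value
        else:
            own[key] = value
    return own, per_component

own, routed = route_params({"n_estimators": 200, "base_tree__max_depth": 5})
```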
- set_random_state(random_state=None, deep=True, self_policy='copy')[source]#
Set random_state pseudo-random seed parameters for self.
Finds random_state named parameters via estimator.get_params, and sets them to integers derived from random_state via set_params. These integers are sampled from chain hashing via sample_dependent_seed, and guarantee pseudo-random independence of seeded random generators.
Applies to random_state parameters in estimator depending on self_policy, and remaining component estimators if and only if deep=True.
Note: calls set_params even if self does not have a random_state, or none of the components have a random_state parameter. Therefore, set_random_state will reset any scikit-base estimator, even those without a random_state parameter.
- Parameters:
- random_stateint, RandomState instance or None, default=None
Pseudo-random number generator to control the generation of the random integers. Pass int for reproducible output across multiple function calls.
- deepbool, default=True
Whether to set the random state in sub-estimators. If False, will set only self’s random_state parameter, if exists. If True, will set random_state parameters in sub-estimators as well.
- self_policystr, one of {“copy”, “keep”, “new”}, default=”copy”
“copy” : estimator.random_state is set to input random_state
“keep” : estimator.random_state is kept as is
“new” : estimator.random_state is set to a new random state, derived from input random_state, and in general different from it
- Returns:
- selfreference to self
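The chain-hashing idea behind sample_dependent_seed can be illustrated as follows; this is a sketch of the concept, not skpro's actual algorithm:

```python
import hashlib

def dependent_seed(random_state, name):
    # derive a deterministic 32-bit seed from a parent seed and a component
    # name; same inputs always yield the same seed, different names yield
    # (almost surely) different, pseudo-independent seeds
    digest = hashlib.sha256(f"{random_state}-{name}".encode()).digest()
    return int.from_bytes(digest[:4], "big")

seed_a = dependent_seed(42, "estimator_a")
seed_b = dependent_seed(42, "estimator_b")
```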