feature_scoring

Feature scoring functionality

ema_workbench.analysis.feature_scoring.CHI2(X, y)

Compute chi-squared stats between each non-negative feature and class.

This score can be used to select the n_features features with the highest values for the test chi-squared statistic from X, which must contain only non-negative features such as booleans or frequencies (e.g., term counts in document classification), relative to the classes.

Recall that the chi-square test measures dependence between stochastic variables, so using this function “weeds out” the features that are the most likely to be independent of class and therefore irrelevant for classification.


Parameters
  • X ({array-like, sparse matrix} of shape (n_samples, n_features)) – Sample vectors.

  • y (array-like of shape (n_samples,)) – Target vector (class labels).

Returns

  • chi2 (ndarray of shape (n_features,)) – Chi2 statistics for each feature.

  • p_values (ndarray of shape (n_features,)) – P-values for each feature.

Notes

Complexity of this algorithm is O(n_classes * n_features).

See also

f_classif

ANOVA F-value between label/feature for classification tasks.

f_regression

F-value between label/feature for regression tasks.
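The docstring above matches scikit-learn's chi2, of which CHI2 appears to be a re-export. A minimal sketch calling scikit-learn directly; the toy term-count matrix and class labels are invented for illustration:

```python
import numpy as np
from sklearn.feature_selection import chi2

# Toy data: 6 samples, 3 non-negative features (e.g. term counts), 2 classes.
# Features 0 and 1 track the class; feature 2 is spread evenly across classes.
X = np.array([
    [4, 0, 1],
    [5, 1, 0],
    [6, 0, 2],
    [0, 5, 1],
    [1, 6, 0],
    [0, 4, 2],
])
y = np.array([0, 0, 0, 1, 1, 1])

# chi2 returns one statistic and one p-value per feature.
stats, p_values = chi2(X, y)
```

The class-dependent features receive large chi-squared statistics, while the class-independent feature scores near zero, which is exactly what makes the statistic usable for "weeding out" irrelevant features.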

ema_workbench.analysis.feature_scoring.F_CLASSIFICATION(X, y)

Compute the ANOVA F-value for the provided sample.


Parameters
  • X ({array-like, sparse matrix} of shape (n_samples, n_features)) – The set of regressors that will be tested sequentially.

  • y (ndarray of shape (n_samples,)) – The target vector.

Returns

  • f_statistic (ndarray of shape (n_features,)) – F-statistic for each feature.

  • p_values (ndarray of shape (n_features,)) – P-values associated with the F-statistic.

See also

chi2

Chi-squared stats of non-negative features for classification tasks.

f_regression

F-value between label/feature for regression tasks.
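F_CLASSIFICATION mirrors scikit-learn's f_classif. A sketch on synthetic data (the class shift of 2.0 and the variable names are arbitrary choices for the example):

```python
import numpy as np
from sklearn.feature_selection import f_classif

rng = np.random.default_rng(42)
y = np.repeat([0, 1], 50)

# Feature 0 shifts with the class label; feature 1 is pure noise.
informative = rng.normal(loc=y * 2.0, scale=1.0)
noise = rng.normal(size=100)
X = np.column_stack([informative, noise])

# One F-statistic and one p-value per feature.
f_statistic, p_values = f_classif(X, y)
```

The informative feature gets a much larger F-statistic (and smaller p-value) than the noise feature.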

ema_workbench.analysis.feature_scoring.F_REGRESSION(X, y, *, center=True)

Univariate linear regression tests returning F-statistic and p-values.

Quick linear model for testing the effect of a single regressor, sequentially for many regressors.

This is done in 2 steps:

  1. The cross correlation between each regressor and the target is computed, that is, E[(X[:, i] - mean(X[:, i])) * (y - mean(y))] / (std(X[:, i]) * std(y)), using the r_regression function.

  2. It is converted to an F score and then to a p-value.

f_regression() is derived from r_regression() and will rank features in the same order if all the features are positively correlated with the target.

Note however that, contrary to f_regression(), r_regression() values lie in [-1, 1] and can thus be negative. f_regression() is therefore recommended as a feature selection criterion to identify potentially predictive features for a downstream classifier, irrespective of the sign of the association with the target variable.

Furthermore f_regression() returns p-values while r_regression() does not.


Parameters
  • X ({array-like, sparse matrix} of shape (n_samples, n_features)) – The data matrix.

  • y (array-like of shape (n_samples,)) – The target vector.

  • center (bool, default=True) – Whether or not to center the data matrix X and the target vector y. By default, X and y will be centered.

Returns

  • f_statistic (ndarray of shape (n_features,)) – F-statistic for each feature.

  • p_values (ndarray of shape (n_features,)) – P-values associated with the F-statistic.

See also

r_regression

Pearson’s R between label/feature for regression tasks.

f_classif

ANOVA F-value between label/feature for classification tasks.

chi2

Chi-squared stats of non-negative features for classification tasks.

SelectKBest

Select features based on the k highest scores.

SelectFpr

Select features based on a false positive rate test.

SelectFdr

Select features based on an estimated false discovery rate.

SelectFwe

Select features based on family-wise error rate.

SelectPercentile

Select features based on percentile of the highest scores.
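F_REGRESSION mirrors scikit-learn's f_regression. The following sketch (toy data and coefficients invented) illustrates the point made above: a negatively correlated feature still receives a large F-statistic, because the F-test scores strength of linear association irrespective of sign:

```python
import numpy as np
from sklearn.feature_selection import f_regression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
# The target depends (negatively) on feature 0 only.
y = -3.0 * X[:, 0] + rng.normal(scale=0.5, size=200)

# Univariate F-test per regressor; X and y are centered by default.
f_statistic, p_values = f_regression(X, y, center=True)
```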

ema_workbench.analysis.feature_scoring.get_ex_feature_scores(x, y, mode=RuleInductionType.CLASSIFICATION, nr_trees=100, max_features=None, max_depth=None, min_samples_split=2, min_samples_leaf=None, min_weight_fraction_leaf=0, max_leaf_nodes=None, bootstrap=True, oob_score=True, random_state=None)

Get feature scores using extra trees

Parameters
  • x (DataFrame) –

  • y (1D ndarray) –

  • mode ({RuleInductionType.CLASSIFICATION, RuleInductionType.REGRESSION}, optional) –

  • nr_trees (int, optional) – number of trees in the forest (default=100)

  • max_features, max_depth, min_samples_split, min_samples_leaf, min_weight_fraction_leaf, max_leaf_nodes, bootstrap, oob_score, random_state – passed on to the underlying ExtraTreesClassifier or ExtraTreesRegressor; see the scikit-learn documentation for details
Returns

  • pandas DataFrame – feature scores indexed by uncertainty, sorted in descending order of score

  • object – either ExtraTreesClassifier or ExtraTreesRegressor
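A sketch of the mechanism as documented, using scikit-learn directly rather than the wrapper: fit an extra-trees ensemble and return its feature importances as a sorted DataFrame. The toy data, column names, and DataFrame layout are illustrative assumptions, not the exact output format of get_ex_feature_scores:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import ExtraTreesClassifier

rng = np.random.default_rng(1)
x = pd.DataFrame(rng.normal(size=(200, 3)), columns=["a", "b", "c"])
y = (x["a"] > 0).astype(int).values  # class depends on feature "a" only

# Mirrors the documented defaults: 100 trees, bootstrap with OOB scoring.
est = ExtraTreesClassifier(n_estimators=100, bootstrap=True,
                           oob_score=True, random_state=0)
est.fit(x.values, y)

# Importances sum to 1; sort so the most influential feature comes first.
scores = pd.DataFrame(est.feature_importances_, index=x.columns,
                      columns=["score"]).sort_values("score", ascending=False)
```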

ema_workbench.analysis.feature_scoring.get_feature_scores_all(x, y, alg='extra trees', mode=RuleInductionType.REGRESSION, **kwargs)

Perform feature scoring for all outcomes using the specified feature scoring algorithm.

Parameters
  • x (DataFrame) –

  • y (dict of 1d numpy arrays) – the outcomes, with a string as key, and a 1D array for each outcome

  • alg ({'extra trees', 'random forest', 'univariate'}, optional) –

  • mode ({RuleInductionType.REGRESSION, RuleInductionType.CLASSIFICATION}, optional) –

  • kwargs (dict, optional) – any remaining keyword arguments will be passed to the specific feature scoring algorithm

Return type

DataFrame instance
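The distinctive part of get_feature_scores_all is the loop over the outcomes dict: one set of scores per outcome, assembled into a single DataFrame. A sketch under that assumption, using scikit-learn directly; the outcome names and coefficients are invented:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import ExtraTreesRegressor

rng = np.random.default_rng(2)
x = pd.DataFrame(rng.normal(size=(150, 3)), columns=["a", "b", "c"])
# One 1D array per outcome name, matching the documented shape of y.
outcomes = {
    "speed": 2.0 * x["a"].values + rng.normal(scale=0.1, size=150),
    "cost": -1.5 * x["b"].values + rng.normal(scale=0.1, size=150),
}

# One column of feature scores per outcome, indexed by feature name.
columns = {}
for name, y in outcomes.items():
    est = ExtraTreesRegressor(n_estimators=100, random_state=0).fit(x.values, y)
    columns[name] = est.feature_importances_
all_scores = pd.DataFrame(columns, index=x.columns)
```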

ema_workbench.analysis.feature_scoring.get_rf_feature_scores(x, y, mode=RuleInductionType.CLASSIFICATION, nr_trees=250, max_features='sqrt', max_depth=None, min_samples_split=2, min_samples_leaf=1, bootstrap=True, oob_score=True, random_state=None)

Get feature scores using a random forest

Parameters
  • x (DataFrame) –

  • y (1D ndarray) –

  • mode ({RuleInductionType.CLASSIFICATION, RuleInductionType.REGRESSION}, optional) –

  • nr_trees (int, optional) – number of trees in the forest (default=250)

  • max_features, max_depth, min_samples_split, min_samples_leaf, bootstrap, oob_score, random_state – passed on to the underlying RandomForestClassifier or RandomForestRegressor; see the scikit-learn documentation for details
Returns

  • pandas DataFrame – feature scores indexed by uncertainty, sorted in descending order of score

  • object – either RandomForestClassifier or RandomForestRegressor
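The random-forest variant works the same way with different defaults (250 trees, max_features='sqrt'). A sketch using scikit-learn directly; the data and names are illustrative:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)
x = pd.DataFrame(rng.normal(size=(300, 4)), columns=list("abcd"))
y = ((x["a"] + x["b"]) > 0).astype(int).values  # "c" and "d" are irrelevant

# Documented defaults: 250 trees, max_features='sqrt', OOB scoring enabled.
forest = RandomForestClassifier(n_estimators=250, max_features="sqrt",
                                bootstrap=True, oob_score=True, random_state=0)
forest.fit(x.values, y)

scores = pd.DataFrame(forest.feature_importances_, index=x.columns,
                      columns=["score"]).sort_values("score", ascending=False)
# forest.oob_score_ gives an out-of-bag estimate of classification accuracy.
```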

ema_workbench.analysis.feature_scoring.get_univariate_feature_scores(x, y, score_func=<function f_classif>)

Calculate feature scores using univariate statistical tests. For categorical data, the chi-squared statistic or the ANOVA F-value is used; for continuous data, the ANOVA F-value is used.

Parameters
  • x (DataFrame) –

  • y (1D ndarray) –

  • score_func ({F_CLASSIFICATION, F_REGRESSION, CHI2}) – the score function to use: F_REGRESSION for regression, or F_CLASSIFICATION or CHI2 for classification.

Returns

feature scores (i.e. p-values in this case) indexed by uncertainty, sorted in descending order.

Return type

pandas DataFrame
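A sketch of what this wrapper computes, using scikit-learn's f_classif directly: one p-value per feature, collected into a DataFrame and sorted so the most significant feature comes first. The toy data, the class shift of 1.5, and the column label are assumptions for illustration:

```python
import numpy as np
import pandas as pd
from sklearn.feature_selection import f_classif

rng = np.random.default_rng(4)
x = pd.DataFrame(rng.normal(size=(100, 3)), columns=["a", "b", "c"])
y = np.repeat([0, 1], 50)
x["a"] += y * 1.5  # only feature "a" carries class signal

# Univariate ANOVA F-test; keep the p-values as the feature scores.
_, p_values = f_classif(x.values, y)
scores = pd.DataFrame(p_values, index=x.columns, columns=["p-value"])
scores = scores.sort_values("p-value")  # most significant feature first
```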