SHAP (SHapley Additive exPlanations) for variable importance in Python
SHAP is a unified approach to explaining the output of any machine learning model. Install it with `pip install shap` (run from the Anaconda prompt if you use Anaconda).

```python
import xgboost
import shap

# load the JS visualization code into the notebook
shap.initjs()

# train an XGBoost model on the Boston housing dataset
X, y = shap.datasets.boston()
model = xgboost.train({"learning_rate": 0.01}, xgboost.DMatrix(X, label=y), 100)

# explain the model's predictions using SHAP values
# (the same syntax works for LightGBM, CatBoost, and scikit-learn models)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# visualize the first prediction's explanation
# (use matplotlib=True to avoid JavaScript)
shap.force_plot(explainer.expected_value, shap_values[0, :], X.iloc[0, :])
```

The explanation above shows how each feature contributes to pushing the model output away from the base value (the average model output over the training dataset we passed) toward the final prediction. Features pushing the prediction higher are shown in red; those pushing it lower are shown in blue.
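Beyond explaining a single prediction, per-sample SHAP values can be aggregated into a global variable-importance ranking by averaging the absolute SHAP value of each feature across the dataset (this is the ordering SHAP's summary plot uses). A minimal, self-contained sketch with a synthetic SHAP matrix and illustrative feature names (in practice you would use the `shap_values` array and the columns of `X` from the code above):

```python
import numpy as np

# synthetic SHAP value matrix: 5 samples x 3 features
# (in practice this comes from explainer.shap_values(X))
shap_values = np.array([
    [ 0.5, -0.2,  0.1],
    [-0.3,  0.4,  0.0],
    [ 0.6, -0.1, -0.2],
    [-0.4,  0.3,  0.1],
    [ 0.2, -0.5,  0.0],
])
feature_names = ["RM", "LSTAT", "CRIM"]  # illustrative names only

# global importance = mean absolute SHAP value per feature
importance = np.abs(shap_values).mean(axis=0)

# rank features from most to least important
ranking = sorted(zip(feature_names, importance), key=lambda t: -t[1])
for name, score in ranking:
    print(f"{name}: {score:.2f}")
```

This prints the features ordered by mean |SHAP value|; here `RM` (0.40) ranks above `LSTAT` (0.30) and `CRIM` (0.08).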