

@DhmhtrhsPakakis

Reference Issue

Addresses issue #1128.

What does this implement/fix? Explain your changes.

I added a new example to the examples folder that uses plots and metrics to show the effect of resampling methods on the calibration of predicted probabilities.

Context

The example covers:

  1. A comparison of calibration curves between a model trained on undersampled data and a model trained on the original data.
  2. How to fix the resulting miscalibration using CalibratedClassifierCV.
  3. Verification that probability accuracy improves (Brier score) while ranking ability is preserved (ROC-AUC).
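The three steps above can be sketched in a few lines. This is a minimal, assumption-laden stand-in for the actual example: the dataset, model, and undersampling are illustrative (the example presumably uses a resampler such as imbalanced-learn's `RandomUnderSampler`), and where the PR uses `CalibratedClassifierCV`, this sketch hand-rolls the equivalent Platt-scaling step so it does not depend on prefit-estimator support in any particular scikit-learn version.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import brier_score_loss, roc_auc_score
from sklearn.model_selection import train_test_split

# Imbalanced toy data (illustrative stand-in for the example's dataset)
X, y = make_classification(n_samples=5000, weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0
)

# Step 1: random undersampling of the majority class, done by hand here
rng = np.random.default_rng(0)
maj = np.flatnonzero(y_train == 0)
mino = np.flatnonzero(y_train == 1)
keep = np.concatenate([rng.choice(maj, size=mino.size, replace=False), mino])
clf = LogisticRegression().fit(X_train[keep], y_train[keep])

# Probabilities from the undersampled model are inflated relative to the
# true ~10% prevalence, which hurts the Brier score
proba_raw = clf.predict_proba(X_test)[:, 1]

# Step 2: recalibrate on the original class distribution. A logistic fit
# on the biased model's decision scores (Platt scaling) shows the same
# idea that CalibratedClassifierCV(method="sigmoid") implements.
platt = LogisticRegression().fit(
    clf.decision_function(X_train).reshape(-1, 1), y_train
)
proba_cal = platt.predict_proba(
    clf.decision_function(X_test).reshape(-1, 1)
)[:, 1]

# Step 3: Brier score improves; ROC-AUC is unchanged because the sigmoid
# is a strictly monotone transform of the scores, so ranking is preserved
brier_raw = brier_score_loss(y_test, proba_raw)
brier_cal = brier_score_loss(y_test, proba_cal)
auc_raw = roc_auc_score(y_test, proba_raw)
auc_cal = roc_auc_score(y_test, proba_cal)
```

The key point the example's plots make is visible in the numbers alone: calibration lowers the Brier score without moving ROC-AUC, because calibration only rescales probabilities, it never reorders them.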

Any other comments?

This PR provides a worked, manual example so one can understand and see the issue. It does not implement an automated solution within the library's codebase.

A proper fix would likely require implementing a new meta-estimator, which is beyond the scope of this contribution.

I hope this example helps someone understand the issue and get an idea of how to solve it.
