Feature Request P2
Comments
jo...@google.com #2
I have forwarded this request to the engineering team. We will update this issue with any progress and the resolution.
Best Regards,
Josh Moyer
Google Cloud Platform Support
Description
Please describe your requested enhancement. Good feature requests will solve common problems or enable new use cases.
What you would like to accomplish: Calculation of ROC & AUC metrics in "Evaluate" to better assess classifier output quality.
How this might work: For a multi-class problem, define the methodology based on the True Positive Rate and False Positive Rate, and compute the metric as the area under the resulting curve. ROC curves are typically used in binary classification to study the output of a classifier. To extend ROC curves and ROC area to multi-label classification, it is necessary to binarize the output. One ROC curve can then be drawn per label, but one can also draw a single ROC curve by considering each element of the label indicator matrix as a binary prediction (micro-averaging).
Another evaluation measure for multi-label classification is macro-averaging, which gives equal weight to the classification of each label. (Ref:
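As a minimal sketch of the requested computation (not the "Evaluate" implementation), scikit-learn already exposes the building blocks described above; the dataset, model, and class count below are placeholders:

```python
# Sketch: binarize a multi-class problem, then compute per-label,
# micro-averaged, and macro-averaged ROC AUC.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import label_binarize

# Placeholder data and model standing in for real classifier output.
X, y = make_classification(n_samples=500, n_classes=3, n_informative=6,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
y_score = clf.predict_proba(X_test)          # shape: (n_samples, n_classes)

# Binarize the ground truth into a label indicator matrix, one column per class.
y_bin = label_binarize(y_test, classes=[0, 1, 2])

# One ROC curve / AUC per label.
for k in range(3):
    fpr, tpr, _ = roc_curve(y_bin[:, k], y_score[:, k])
    print(f"class {k}: AUC = {roc_auc_score(y_bin[:, k], y_score[:, k]):.3f}")

# Micro-averaging: every cell of the indicator matrix counts as one
# binary prediction.
print("micro AUC:", roc_auc_score(y_bin, y_score, average="micro"))

# Macro-averaging: each label's AUC gets equal weight, i.e. the plain
# mean of the per-class AUCs.
per_class = [roc_auc_score(y_bin[:, k], y_score[:, k]) for k in range(3)]
print("macro AUC:", np.mean(per_class))
```

For reference, recent scikit-learn releases also expose the macro one-vs-rest quantity directly as roc_auc_score(y_test, y_score, multi_class="ovr"); either form matches the methodology described above.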
If applicable, reasons why alternative solutions are not sufficient:
Other information (workarounds you have tried, documentation consulted, etc):