Request for new functionality
Description
Request: the ability to stop a training job early without losing the partially trained model.
Reasoning:
1. The initial results are already satisfactory, and the remainder of the training is spent chasing marginal gains (e.g., small impurity reductions), which can lead to overfitting.
2. The training heads in the wrong direction, and there is no point in wasting any more time on it.
3. Pricing: continued training keeps accruing cost without a corresponding benefit.
4. The training fails on an easy set of examples, and there is no point in continuing it on a more complex one (a rough sketch of the desired behavior follows below).
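For illustration only, here is a minimal sketch of the requested behavior in a user-controlled PyTorch loop (not the managed service's API; all names such as SimpleNet, train_early_stop, patience, and model.pt are made up): the model is checkpointed whenever it improves, so stopping early for any of the reasons above never loses it.

```python
# Illustrative sketch only: periodic checkpointing means an early stop keeps the model.
import torch
import torch.nn as nn


class SimpleNet(nn.Module):
    """Tiny stand-in model used only for this sketch."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(10, 1)

    def forward(self, x):
        return self.fc(x)


def train_early_stop(model, data, target, max_epochs=100, patience=5,
                     ckpt_path="model.pt"):
    """Train, checkpoint on every improvement, and stop once the loss plateaus."""
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.MSELoss()
    best_loss, stale_epochs = float("inf"), 0

    for epoch in range(max_epochs):
        optimizer.zero_grad()
        loss = loss_fn(model(data), target)
        loss.backward()
        optimizer.step()

        if loss.item() < best_loss:
            best_loss, stale_epochs = loss.item(), 0
            torch.save(model.state_dict(), ckpt_path)  # keep the best model so far
        else:
            stale_epochs += 1

        if stale_epochs >= patience:  # reasons 1/2 above: no progress, stop early
            print(f"Stopping at epoch {epoch}; best loss {best_loss:.4f}")
            break

    # Whatever triggered the stop, the saved checkpoint is still usable.
    model.load_state_dict(torch.load(ckpt_path))
    return model


if __name__ == "__main__":
    torch.manual_seed(0)
    x = torch.randn(64, 10)
    y = torch.randn(64, 1)
    train_early_stop(SimpleNet(), x, y)
```

The request is for an equivalent option in the managed training workflow: cancelling a job should surface the most recent checkpoint instead of discarding the run.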