Definition
An underconfident model or estimator is one that overestimates uncertainty.
- Predictions are too cautious compared to reality.
- Confidence intervals are too wide, or predicted probabilities are too close to 0.5 (for binary classification), even when the model could be more certain.
It is the opposite of an overconfident model, which underestimates uncertainty.
Typical Cases
- Underconfident Predictions
- True accuracy = 90%, but the model outputs probabilities around 0.6–0.7 instead of near 0.9.
- The model lacks sharpness: it rarely makes strong predictions, even when it could.
- Underconfident Confidence Intervals
- A 95% confidence interval is so wide that it actually covers the true parameter 99.9% of the time.
- The interval is technically safe but not informative.
- Bayesian Models
- If priors or posterior variance are too broad, the result is an overly uncertain distribution (posterior too “flat”).
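The confidence-interval case above can be checked with a small simulation (illustrative numbers, not from the original text): a calibrated 95% interval uses roughly ±1.96 standard errors, while an "underconfident" interval inflates the half-width and covers the truth far more often than its stated level.

```python
import numpy as np

# Simulate repeated sampling from a normal population with known sigma.
# A calibrated 95% CI (+/- 1.96 SE) should cover the true mean ~95% of
# the time; an inflated interval (here +/- 3.5 SE, an assumed width)
# covers it ~99.95% of the time -- "safe but not informative".
rng = np.random.default_rng(0)
true_mean, sigma, n, trials = 0.0, 1.0, 50, 20_000
se = sigma / np.sqrt(n)

samples = rng.normal(true_mean, sigma, size=(trials, n))
means = samples.mean(axis=1)

covers = lambda half_width: np.mean(np.abs(means - true_mean) <= half_width)
print(f"nominal 95% CI coverage:      {covers(1.96 * se):.3f}")
print(f"inflated 'underconfident' CI: {covers(3.5 * se):.4f}")
```

Both intervals are "valid" in the sense of covering the truth, but the inflated one wastes the data's precision.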
Why Underconfidence Happens
- Too much regularization: strong L2/L1 penalties shrink weights toward zero, pulling logits toward 0 and predicted probabilities toward 0.5.
- Poorly trained model with insufficient data.
- Calibration issue: Some model families are systematically conservative in assigning high probabilities (e.g., bagged ensembles average many votes, which pushes predictions away from 0 and 1).
- Deliberate design: In high-risk fields (medicine, finance), models may be tuned to avoid extreme predictions.
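The regularization point can be made concrete with a minimal sketch (assumed setup: 1-D logistic regression trained by gradient descent, once with a weak and once with a strong L2 penalty). The strong penalty shrinks the weight, so even on an easy, well-separated point the fitted probability is pulled toward 0.5.

```python
import numpy as np

# Two well-separated classes in one dimension.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2, 1, 200), rng.normal(2, 1, 200)])
y = np.concatenate([np.zeros(200), np.ones(200)])

def fit(lam, lr=0.1, steps=2000):
    """Gradient descent on logistic loss + (lam/2) * w^2."""
    w = 0.0
    for _ in range(steps):
        p = 1 / (1 + np.exp(-w * x))
        grad = np.mean((p - y) * x) + lam * w  # data gradient + L2 gradient
        w -= lr * grad
    return w

for lam in (0.0, 5.0):
    w = fit(lam)
    p_easy = 1 / (1 + np.exp(-w * 2.0))  # probability at a clearly positive point
    print(f"lambda={lam:>3}: w={w:.2f}, P(y=1 | x=2) = {p_easy:.2f}")
```

With `lam=0` the model commits to a confident prediction at x=2; with the heavy penalty the same point gets a hedged probability near 0.5 despite identical data.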
Why It’s a Problem
- Leads to missed opportunities because the model doesn’t take advantage of its true predictive power.
- Decision-makers may distrust the model if predictions always look uncertain.
- In A/B testing or clinical trials, overly wide confidence intervals reduce the chance of finding real effects.
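The A/B-testing point reduces to simple arithmetic (the lift and standard errors below are assumed toy numbers): with a correctly estimated standard error the 95% CI excludes zero and the real effect is detected, while an inflated, underconfident standard error makes the same effect look "not significant".

```python
# A real 2-point lift, evaluated with a correct vs. an inflated SE.
lift = 0.02          # observed lift (assumed)
se_correct = 0.008   # properly estimated standard error (assumed)
se_inflated = 0.015  # underconfident, overstated uncertainty (assumed)

for name, se in [("correct SE ", se_correct), ("inflated SE", se_inflated)]:
    lo, hi = lift - 1.96 * se, lift + 1.96 * se  # 95% CI
    verdict = "significant" if lo > 0 else "not significant"
    print(f"{name}: 95% CI = ({lo:+.4f}, {hi:+.4f}) -> {verdict}")
```

Same data, same true effect; only the overstated uncertainty changes the conclusion.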
How to Fix Underconfidence
- Calibration methods (same as for overconfidence): Platt scaling, isotonic regression, temperature scaling. For an underconfident model, the fitted temperature is typically below 1, which sharpens the probabilities.
- Model tuning: Reduce excessive regularization, improve training.
- Better features / more data: Increase signal-to-noise ratio so the model can make stronger predictions.
- Sharpness metrics: Evaluate with proper scoring rules (log-loss, Brier score), which reward calibration and sharpness jointly and so penalize needless hedging.
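Temperature scaling, the simplest of the fixes above, can be sketched as follows (assumed setup: a simulated binary classifier whose logits are shrunk by a factor of 3, making it underconfident; the temperature is fit by grid search on log-loss, where a full implementation would use a proper optimizer).

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate an underconfident classifier: labels follow the true logits,
# but the model reports logits shrunk toward 0.
true_logits = rng.normal(0, 4, 5000)
y = (rng.random(5000) < 1 / (1 + np.exp(-true_logits))).astype(float)
model_logits = true_logits / 3  # underconfident: probabilities too near 0.5

def nll(logits):
    """Negative log-likelihood (log-loss) of labels y under given logits."""
    p = np.clip(1 / (1 + np.exp(-logits)), 1e-12, 1 - 1e-12)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

# Temperature scaling divides logits by T; pick T minimizing held-out NLL.
temps = np.linspace(0.05, 2.0, 400)
best_T = temps[np.argmin([nll(model_logits / T) for T in temps])]
print(f"fitted temperature T = {best_T:.2f}  (expected roughly 1/3, i.e. < 1)")
print(f"log-loss before: {nll(model_logits):.3f}, after: {nll(model_logits / best_T):.3f}")
```

Because the simulated model's logits were shrunk by 3, the fitted temperature lands near 1/3, undoing the shrinkage; for an overconfident model the same procedure would instead find T > 1.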
Example
Suppose a binary classifier is tested on 1,000 samples:
- It’s correct 90% of the time.
- But when it predicts, its average probability for the chosen class is only 70%.
This means it’s underconfident: it’s right more often than its own confidence suggests.
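This penalty can be quantified with a proper scoring rule. For a classifier that is right 90% of the time and states confidence p for its chosen class, the expected log-loss is -[0.9·ln(p) + 0.1·ln(1-p)], which is minimized exactly at p = 0.9:

```python
import math

def expected_log_loss(p, accuracy=0.9):
    """Expected log-loss when stating confidence p but being right
    with probability `accuracy` (here the 90% from the example)."""
    return -(accuracy * math.log(p) + (1 - accuracy) * math.log(1 - p))

for p in (0.7, 0.9):
    print(f"stated confidence {p}: expected log-loss = {expected_log_loss(p):.3f}")
# -> 0.441 at p=0.7 versus 0.325 at p=0.9
```

So the underconfident report of 70% is strictly penalized relative to the honest 90%, which is why evaluating with log-loss or Brier score nudges models toward sharper, calibrated probabilities.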
In short:
- Underconfident = predictions are too cautious; confidence intervals too wide, probabilities too close to neutral.
- Fix: calibration, better training, more data.
- Opposite of overconfident (too certain).
