18-11-13 Powerful Concepts in Machine Learning
Category: Idea Lists (Upon Request)
Understand their implications for thought, for the nature of information itself, and for the nature and limits of knowledge.
- Bias-Variance Tradeoff
- Overfitting
- Controlling complexity
- Model simplicity (restriction methods)
- Selection methods (over features)
- Regularization
- Curse of dimensionality
- Ensemble Modeling
- Occam’s Razor (Formalized)
- Training vs. Generalization Error
- Interpolation vs. Extrapolation
- Smoothness
- VC Dimension
- Solomonoff Induction
- Variance Maximization
- Optimizing for Volatility vs Expected Value
- Bayes Rule
- Bayes Error
- Exploration-Exploitation
- Manifolds as Data Representation
I would like to make this stuff runnable.
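A first sketch toward that: training vs. generalization error (and overfitting) shown with a k-nearest-neighbor toy. The data generator, the noise rate, and the choice of k are all illustrative assumptions, not from the original list. With k=1 the model memorizes the training set, so training error is exactly zero while test error is not.

```python
import random

random.seed(0)

def make_data(n):
    # 1-D points with a noisy threshold label: y = 1 if x > 0.5,
    # flipped 20% of the time (irreducible label noise)
    data = []
    for _ in range(n):
        x = random.random()
        y = 1 if x > 0.5 else 0
        if random.random() < 0.2:
            y = 1 - y
        data.append((x, y))
    return data

def knn_predict(train, x, k):
    # classify x by majority vote among its k nearest training points
    neighbors = sorted(train, key=lambda p: abs(p[0] - x))[:k]
    votes = sum(y for _, y in neighbors)
    return 1 if votes * 2 > k else 0

def error(train, data, k):
    wrong = sum(1 for x, y in data if knn_predict(train, x, k) != y)
    return wrong / len(data)

train, test = make_data(200), make_data(200)
print("k=1  train error:", error(train, train, 1))   # memorizes -> 0.0
print("k=1  test error: ", error(train, test, 1))
print("k=25 train error:", error(train, train, 25))
print("k=25 test error: ", error(train, test, 25))
```

The gap between the k=1 rows is the overfitting gap; larger k trades a little training error for better generalization.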
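Regularization in its smallest runnable form: closed-form ridge regression for a single feature with no intercept, where the penalty term simply inflates the denominator. The data values and lambda grid are made up for illustration.

```python
def ridge_weight(xs, ys, lam):
    # closed-form ridge solution for one feature, no intercept:
    #   w = sum(x*y) / (sum(x^2) + lambda)
    # lambda = 0 recovers ordinary least squares
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    return sxy / (sxx + lam)

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]   # roughly y = 2x
for lam in [0.0, 1.0, 10.0, 100.0]:
    print(f"lambda={lam:6.1f}  w={ridge_weight(xs, ys, lam):.3f}")
```

As lambda grows the weight shrinks monotonically toward zero: this is complexity control as a knob rather than a hard restriction.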
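The curse of dimensionality, in one of its forms: pairwise distances concentrate as dimension grows, so "nearest" and "farthest" stop being meaningfully different. The point count and query choice below are arbitrary assumptions for the demo.

```python
import math
import random

random.seed(1)

def distance_spread(dim, n_points=100):
    # relative spread of distances from a central query to uniform
    # random points in the unit hypercube: (max - min) / min
    points = [[random.random() for _ in range(dim)] for _ in range(n_points)]
    query = [0.5] * dim
    dists = [math.dist(query, p) for p in points]
    return (max(dists) - min(dists)) / min(dists)

for dim in [1, 10, 100, 1000]:
    print(f"dim={dim:5d}  relative spread={distance_spread(dim):.3f}")
```

The shrinking spread is why nearest-neighbor methods (and intuition about "local" smoothness) degrade in high dimensions.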
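Bayes Rule as three lines of arithmetic, using the classic diagnostic-test example (the prevalence and test rates are invented for illustration):

```python
def posterior(prior, likelihood, false_positive_rate):
    # P(H | E) = P(E|H) P(H) / [ P(E|H) P(H) + P(E|~H) P(~H) ]
    evidence = likelihood * prior + false_positive_rate * (1 - prior)
    return likelihood * prior / evidence

# a 99%-sensitive test with a 5% false-positive rate, 1% prevalence
p = posterior(prior=0.01, likelihood=0.99, false_positive_rate=0.05)
print(f"P(condition | positive test) = {p:.3f}")  # -> 0.167
```

A positive result from a highly accurate test still leaves only a 1-in-6 posterior, because the prior dominates: the machine-learning lesson is that evidence updates beliefs, it does not replace them.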
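Exploration-exploitation as an epsilon-greedy Bernoulli bandit. The arm means, step count, and epsilon values are all assumptions for the sketch; pure exploitation (epsilon = 0) locks onto whichever arm it tries first.

```python
import random

random.seed(2)

def epsilon_greedy(true_means, epsilon, steps=5000):
    # with probability epsilon pull a random arm (explore),
    # otherwise pull the arm with the best estimated mean (exploit)
    counts = [0] * len(true_means)
    values = [0.0] * len(true_means)
    total = 0.0
    for _ in range(steps):
        if random.random() < epsilon:
            arm = random.randrange(len(true_means))
        else:
            arm = max(range(len(true_means)), key=lambda a: values[a])
        reward = 1.0 if random.random() < true_means[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # running mean
        total += reward
    return total / steps

means = [0.2, 0.5, 0.8]
for eps in [0.0, 0.1, 0.5]:
    print(f"epsilon={eps:.1f}  average reward={epsilon_greedy(means, eps):.3f}")
```

A little exploration beats none (it discovers the 0.8 arm), while too much wastes pulls on known-bad arms: the tradeoff the list item names.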
Source: Original Google Doc