17-08-10 Machine Intelligence Contrarian
Category: Idea Lists (Upon Request)
1. We don’t care about intelligence, we care about generalized problem solving. It just so happens that intelligence is the way.
- Intelligence is used to refer to abstract reasoning, pattern recognition, etc. — tasks that relate to and transfer between one another. It is defined, in the sense that most common-parlance words are defined. It only feels undefined because what we actually want — general problem solving — gets conflated with it.
- Unsupervised learning isn’t the future: there’s effective supervised learning baked into predicting the future state of the world and then finding out what happened. Somehow people think about supervision only in terms of a manually provided signal, and then claim that supervised learning isn’t happening.
- A lot of people have this weird belief that supervised learning requires human-given high-level labels. Neuroscientists claim on this basis that the brain doesn’t implement supervised learning. All you need to do is contrast prediction with reality over a time series. Apparently this is a contrarian position.
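The prediction-vs-reality signal can be sketched in a few lines (a toy illustration, not from the original notes; the data and the linear predictor are made up):

```python
import numpy as np

# Hypothetical time series: each row is the world state at one timestep.
rng = np.random.default_rng(0)
states = rng.normal(size=(100, 4))

# No human-given labels needed: the "label" for state t is simply state t+1.
X = states[:-1]  # inputs: states at time t
y = states[1:]   # targets: states at time t+1

# Fit any supervised predictor; here a linear least-squares model.
W, *_ = np.linalg.lstsq(X, y, rcond=None)
predictions = X @ W

# The supervised signal is the error between prediction and what actually happened.
errors = predictions - y
print(X.shape, y.shape, errors.shape)
```

The only "labeling" step is waiting one timestep and observing reality.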
- The time series data paradigm is broken
- Instead of a new feature for each lag variable, introduce an explicit time dimension, turning the data into a 3D tensor.
- Framing an inherently 3D structure as 2D makes no sense: it makes it harder to transfer across similar features and to reason about the structure of the data.
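A minimal sketch of that reshaping, on hypothetical data (it assumes the flat lag-variable layout stores each timestep's features contiguously; other layouts would need a transpose):

```python
import numpy as np

# Hypothetical data: 3 sensors observed over 10 timesteps for 50 series.
# The "lag variable" framing flattens this to 2D: one column per
# (timestep, sensor) pair, hiding the shared time structure.
n_series, n_steps, n_features = 50, 10, 3
flat = np.arange(n_series * n_steps * n_features, dtype=float)
flat = flat.reshape(n_series, n_steps * n_features)  # 2D lag-variable layout

# Reintroduce an explicit time axis: a (series, time, feature) 3D tensor.
tensor = flat.reshape(n_series, n_steps, n_features)

# Operations over time or over features become natural axis operations,
# and the same feature at different lags shares a single index.
per_feature_mean = tensor.mean(axis=1)  # average each feature over time
print(tensor.shape, per_feature_mean.shape)
```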
- Dataset representation
- The inability to capture feature-level features is damning: dataset-level features lose far too much information to allow efficient metalearning.
- Missing data shouldn’t be transformed into an interpolation problem. It’s a machine learning problem: it should be possible for a model to take an arbitrary subset of features as input.
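One hedged sketch of what "take an arbitrary subset of features" could look like: pass an observation mask alongside zero-filled values rather than interpolating. The masking scheme here is illustrative, not a claim about any particular model:

```python
import numpy as np

# Hypothetical data with missing entries.
rng = np.random.default_rng(1)
X = rng.normal(size=(8, 5))
mask = rng.random((8, 5)) > 0.3  # True where the feature is actually observed

# Instead of interpolating, zero-fill the gaps but keep the mask as input.
X_observed = np.where(mask, X, 0.0)
model_input = np.concatenate([X_observed, mask.astype(float)], axis=1)

# Any model consuming `model_input` can tell real observations apart from
# placeholders, rather than being fed a silently interpolated guess.
print(model_input.shape)  # (8, 10)
```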
- It’s suboptimal to try to go all the way with networks: compositional structure is only one (an important one!) of many categories of structure we can learn.
- All data is time series data
- Necessary for causal / anticausal learning
- Long-term dependency learning is much less valuable than representation learning + transfer.
- AlphaGo playing against itself is effectively causal counterfactual reasoning: it asks what outcome is likely under each choice, and answers by simulation.
- DeepRL can do reasoning insofar as it’s doing counterfactual modeling through simulation.
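The simulate-each-choice pattern can be sketched as follows (the dynamics function is a toy invented for illustration; real systems like AlphaGo use learned models and tree search):

```python
def simulate(state, action, steps=5):
    # Toy dynamics model: roll the world forward under a fixed action.
    for _ in range(steps):
        state = 0.9 * state + action
    return state

def best_action(state, actions):
    # Counterfactual question: "what outcome is likely under each choice?"
    # Answered by simulating each choice and comparing outcomes.
    return max(actions, key=lambda a: simulate(state, a))

print(best_action(1.0, [-0.5, 0.0, 0.5]))  # picks 0.5, the best simulated outcome
```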
- Autonomous weaponry will be cheap and decisive, to the point of putting the power to take over small countries into the hands of deca-millionaires.
- The fact that gradient boosting over decision trees, which don’t even capture continuous structure, is state of the art on ~all datasets that don’t have compositional structure is a strong indictment of the state of machine learning.
- The fact that evolutionary methods are close in performance to reinforcement learning is a strong indictment of our ability to learn in those environments.
Source: Original Google Doc