18-08-27 New Concrete Representation Learning Ideas

Category: Idea Lists (Upon Request)

Let them be shitty. But go super concrete. Super concrete.

I like this idea of ‘new metrics over representations’. I want to learn abstract representations.

  1. Sensitivity score (MAML-based).
  2. Do close analysis of how the representation represents a single image.
    1. Credit assignment over particular filters.
    2. Look at how the filters are recombined with one another; find simple examples of composition that model a particular part of the image.
  3. Some filters will be composed with more or fewer other filters, which is a different metric from their importance. Which filters activate over the most images? At each level of the network?
    1. Can we use this metric of generality as a heuristic for transfer? Say, only filters that are sufficiently general get used in transfer?
  4. Find ‘conflation’ in a representation. (this may be hard)
    1. Have a notion of which features should be recombined to create a higher level feature
      1. Look for overlapping activations where they should not exist (misclassified examples should be really good for this)
  5. Get to a ‘why’ for misclassified examples
    1. Look at the ways the representation couldn't distinguish between particular parts of an input; look at the mistakes made over 4-5 examples and diagnose them.
    2. Are you allowed to publish a paper titled 'Why Our Networks Fail'?
      1. This may be hard to get causal on, but could be extremely useful.
  6. Do VQ-VAE, but with a forward predictive model. A generative model of the future, rather than the present. An auto-regressive generative model.
    1. I guess this is what the forward predictive lstm over VAE state is, in a way.
  7. Take Ben Poole’s Gaussian Mixture Model VQ-VAE and apply it to something like world models (where you want this ability to go discrete or continuous)
    1. Is this idea for generative future prediction a thing in general? VAE + LSTM to do it is awesome, but is it the best in its class for that task?
  8. Check the manifold learning hypothesis
    1. Dog manifold on animal manifold, for example
  9. Causal representation learning - which filter maps have counterfactual impact on the output?
    1. Use filter-level dropout to estimate this
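The MAML-based sensitivity score in item 1 could be read as: score each parameter block by how far a single inner-loop adaptation step moves it, averaged over tasks. A minimal sketch of that reading — the `grad_fn` interface, the per-block norm, and the single-SGD-step displacement are all assumptions, not something the note pins down:

```python
import numpy as np

def maml_sensitivity(params, grad_fn, tasks, inner_lr=0.01):
    """Score each parameter block by the average displacement a single
    MAML-style inner-loop SGD step would cause, across tasks.

    params  : dict name -> np.ndarray (the meta-parameters)
    grad_fn : hypothetical callable (params, task) -> dict of gradients
              with the same keys/shapes as `params`
    tasks   : iterable of task objects understood by grad_fn
    """
    scores = {k: 0.0 for k in params}
    for task in tasks:
        grads = grad_fn(params, task)
        for k, g in grads.items():
            # one inner step moves the block by inner_lr * grad,
            # so its displacement norm is a cheap sensitivity proxy
            scores[k] += inner_lr * float(np.linalg.norm(g))
    return {k: v / len(tasks) for k, v in scores.items()}
```

Blocks that barely move under adaptation would be candidates for "abstract" (task-general) structure; highly sensitive blocks are doing task-specific work.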
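The generality metric in item 3 (which filters activate over the most images?) could be sketched as follows. The spatial mean pooling, the firing threshold, and the activation tensor layout are all assumptions; the transfer-heuristic cutoff from item 3.1 is likewise just one plausible choice:

```python
import numpy as np

def filter_generality(activations, threshold=0.0):
    """Fraction of images on which each filter fires.

    activations: post-ReLU feature maps from one layer, shaped
    (n_images, n_filters, H, W). A filter "activates" on an image
    if its spatially pooled response exceeds `threshold`.
    """
    pooled = activations.mean(axis=(2, 3))   # (n_images, n_filters)
    fired = pooled > threshold               # boolean per image/filter
    return fired.mean(axis=0)                # per-filter generality in [0, 1]

def general_filter_mask(activations, min_generality=0.5):
    """Item 3.1's transfer heuristic: keep only filters that activate
    on at least `min_generality` of the dataset."""
    return filter_generality(activations) >= min_generality
```

Running this per layer would give the "at each level of the network" view: early layers should skew general, later layers specific.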
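Item 9's filter-level dropout could be estimated by ablation: zero out one filter map at a time and measure the drop in the target class score. A minimal sketch, where `forward` is a hypothetical function standing in for the rest of the network above the layer of interest:

```python
import numpy as np

def filter_ablation_impact(forward, layer_acts, target_class):
    """Counterfactual impact of each filter, by single-filter ablation.

    forward    : hypothetical callable mapping one layer's feature maps
                 (n_filters, H, W) to a vector of class scores
    layer_acts : that layer's activations for a single input
    Returns one impact per filter: baseline score minus ablated score.
    """
    baseline = forward(layer_acts)[target_class]
    impacts = np.zeros(layer_acts.shape[0])
    for f in range(layer_acts.shape[0]):
        ablated = layer_acts.copy()
        ablated[f] = 0.0                       # knock out one filter map
        impacts[f] = baseline - forward(ablated)[target_class]
    return impacts                             # large value = causally important
```

This is the crudest possible intervention (one filter at a time, zeroing rather than resampling), but it gives a first-pass ranking of which filters are counterfactually load-bearing for a prediction.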

Source: Original Google Doc
