One-shot learning examples

As I’ve asked here, I often hear the buzzword-like term “One-Shot Learning” thrown around, and I’ve often heard people ask whether any models built using the NEF/SPA exhibit “One-Shot Learning”.

Is Dan’s model that completes Raven’s Progressive Matrices an example of “One-Shot Learning”? Are there other examples?

It depends somewhat on what you mean by “one-shot learning”. If you mean learning an association from a single presentation, then I have another example lying around, but it’s currently broken because the scaling of learning rates has changed in the meantime. What it basically does is project into a sparse neural representation and then use PES and BCM to learn the associations (actually, PES alone should be sufficient if you don’t need pattern completion, which also makes balancing the two learning rates a non-issue). In this way multiple associations can be learned, each from a single presentation, without destroying previously learned associations. Without the sparsification layer, you would need to use a lower learning rate and interleave the presentations of the different associations.
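To illustrate the idea, here is a minimal NumPy sketch of that scheme (not the actual Nengo model, and all names and parameter values are my own illustrative choices): keys are projected through random encoders into a high-dimensional layer, the activity is sparsified by keeping only the top-K units, and a PES-style delta rule updates the decoders in a single step. Because the sparse codes for different keys barely overlap, each new association leaves the earlier ones essentially intact.

```python
import numpy as np

rng = np.random.default_rng(0)

D = 16    # dimensionality of the key/value vectors
N = 2000  # number of units in the sparse layer
K = 20    # active units per pattern (degree of sparsity)

# Random unit-norm encoders project keys into the high-dimensional layer.
encoders = rng.standard_normal((N, D))
encoders /= np.linalg.norm(encoders, axis=1, keepdims=True)

def sparse_activity(x):
    """Rectified layer activity, thresholded to the top-K units."""
    a = encoders @ x
    thresh = np.partition(a, -K)[-K]
    a[a < thresh] = 0.0
    return np.maximum(a, 0.0)

# Decoders start at zero; a PES-style delta rule updates them.
decoders = np.zeros((N, D))

def learn_one_shot(key, value):
    """Store one key->value association from a single presentation."""
    global decoders
    a = sparse_activity(key)
    error = value - a @ decoders  # current recall error for this key
    # PES update (delta_d proportional to activity * error), with the
    # rate scaled so one presentation fully corrects the error.
    decoders += np.outer(a, error) / (a @ a)

def recall(key):
    return sparse_activity(key) @ decoders

# Learn several associations, each presented exactly once.
keys = [rng.standard_normal(D) for _ in range(5)]
keys = [k / np.linalg.norm(k) for k in keys]
values = [rng.standard_normal(D) for _ in range(5)]

for k, v in zip(keys, values):
    learn_one_shot(k, v)

# Earlier pairs survive because the sparse codes barely overlap.
for k, v in zip(keys, values):
    err = np.linalg.norm(recall(k) - v) / np.linalg.norm(v)
    print(f"relative recall error: {err:.3f}")
```

If you drop the top-K thresholding (set K = N), the activities of different keys overlap heavily and each new association overwrites the previous ones, which is exactly why the lower learning rate and interleaved presentations become necessary.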

If you mean by “one-shot learning” that the model generalizes from a single example, I don’t think we have done that (and I assume it’s still a mostly unsolved problem in general?). Though maybe the RPM model could manage to do that in some cases?

A link to that example (or at least the results) would be appreciated.

In terms of generalisation from a single example, would you consider this paper a good example of that?

Maybe; ask me again once I’ve reduced my backlog of papers to read.