Elements of the Model
The idea behind the model is that the idealized learner picks a hypothesis (a concept extension, i.e. a set of objects) based on a finite set of examples. In their experiments, the training sets always consist of either one or three examples, and the "world" the learner lives in contains 45 objects: some vegetables, some cars, and some dogs.

As far as I understand, the prior probabilities fed into the computational model were based on human similarity judgments. This is quite problematic, since similarity can reasonably be seen as a dual of categorization (being-similar corresponding to being-in-the-same-category). So if I've gotten this right, the answer is to some extent already built into the question.
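To make the setup concrete, here is a minimal sketch of the Bayesian machinery as I understand it (my own Python, not the paper's code; the names `Hypothesis`, `prior`, and `posterior` are mine). Each hypothesis is simply a set of objects, and its posterior weight is its prior times a likelihood that vanishes whenever an example falls outside the set:

```python
from typing import Dict, FrozenSet, Iterable

# A hypothesis is just a concept extension: a set of object labels.
Hypothesis = FrozenSet[str]

def posterior(prior: Dict[Hypothesis, float],
              examples: Iterable[str]) -> Dict[Hypothesis, float]:
    """P(h | examples), assuming a likelihood that only checks consistency."""
    xs = list(examples)
    # Any hypothesis that excludes an observed example gets zero weight.
    scores = {h: (p if all(x in h for x in xs) else 0.0)
              for h, p in prior.items()}
    total = sum(scores.values())  # assumes at least one consistent hypothesis
    return {h: s / total for h, s in scores.items()}
```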
Variations
A number of further tweaks are applied to the model:

- The priors of the "basic-level" concepts (dog, car, and vegetable) can be manually increased to introduce a bias towards this level. This improves the fit immensely.
- The priors of groups with high internal similarity (relative to the nearest neighbor) can be increased to introduce a bias towards coherent and separated categories. Tenenbaum and Xu call this the "size principle."
- When applying the learned posterior, the learner can either average the predictions of all hypotheses, weighted by their posterior probabilities, or simply pick the single most likely hypothesis and forget about the rest (see the sketch below). The latter corresponds to crisp rule learning, and it gives suboptimal results in the one-example cases.
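As a rough illustration of the last point (again my own Python with made-up names, not the paper's code), the two strategies differ only in whether the query object's membership probability is summed over all hypotheses or read off the single most probable one:

```python
from typing import Dict, FrozenSet

Hypothesis = FrozenSet[str]

def generalize_average(post: Dict[Hypothesis, float], query: str) -> float:
    """Hypothesis averaging: sum the posterior mass of every hypothesis
    that contains the query object."""
    return sum(p for h, p in post.items() if query in h)

def generalize_map(post: Dict[Hypothesis, float], query: str) -> float:
    """Crisp rule learning: commit to the single most probable hypothesis
    and ignore all the others."""
    best = max(post, key=post.get)
    return 1.0 if query in best else 0.0
```

Averaging yields graded generalization even from a single example, whereas the all-or-nothing rule cannot, which is presumably why it does worse in the one-example cases.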
Questions
I wonder how the model could be changed so as to

- not have concept learning rely on preexisting similarity judgments;
- take into account that similarity judgments vary with context.