Tuesday, March 3, 2009

[week 7] Muddiest Points

Like the last post, this one is about Relevance Feedback. First, I would like to know more details about the Wilcoxon signed-rank test. I know, I know... I can google it on my own.
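For my own later reference, here is a minimal sketch of how the test is typically applied in IR evaluation: pair up per-query scores from two systems and test whether the differences are centered at zero. The scores below are made up for illustration.

```python
from scipy.stats import wilcoxon

# Hypothetical per-query average-precision scores for two retrieval systems.
system_a = [0.41, 0.35, 0.62, 0.50, 0.28, 0.73, 0.44, 0.39, 0.55, 0.60]
system_b = [0.38, 0.30, 0.58, 0.52, 0.25, 0.70, 0.40, 0.35, 0.50, 0.57]

# Paired, non-parametric test on the per-query score differences:
# small p-value -> the systems likely differ in effectiveness.
stat, p_value = wilcoxon(system_a, system_b)
print(stat, p_value)
```

The appeal for IR is that it makes no normality assumption about the score differences, unlike a paired t-test.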

My other question is that relevance feedback seems too tied to one specific user's needs. If the algorithm updates weights based on one user's feedback, how can we be sure that another user who issues the same or a similar query has the same information need? Even if they do have the same information need, how can we be sure they will judge the same results relevant? More generally, how can a model built on two such weak assumptions (that users are willing to provide feedback, and that the feedback we obtain from them is reliable) be successful?
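To make the "updates weights" part concrete for myself: the classic version of this idea is Rocchio's algorithm, which moves the query vector toward judged-relevant documents and away from judged-nonrelevant ones. A minimal sketch (the alpha/beta/gamma values are common textbook defaults, not anything specific from lecture):

```python
import numpy as np

def rocchio_update(query, relevant, nonrelevant,
                   alpha=1.0, beta=0.75, gamma=0.15):
    """Rocchio relevance-feedback update of a term-weight query vector.

    query:       original query vector
    relevant:    list of vectors for documents the user judged relevant
    nonrelevant: list of vectors for documents judged nonrelevant
    """
    q_new = alpha * query
    if relevant:
        q_new = q_new + beta * np.mean(relevant, axis=0)
    if nonrelevant:
        q_new = q_new - gamma * np.mean(nonrelevant, axis=0)
    # Negative term weights are conventionally clipped to zero.
    return np.maximum(q_new, 0.0)

# Toy example over a 3-term vocabulary.
q = np.array([1.0, 0.0, 1.0])
rel = [np.array([0.0, 2.0, 0.0])]
nonrel = [np.array([2.0, 0.0, 0.0])]
q_updated = rocchio_update(q, rel, nonrel)
```

Seeing it written out makes my worry clearer: the updated vector is shaped entirely by one user's judgments, so a second user with a different need would inherit a query biased toward the first user's notion of relevance.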
