I’m in the middle of revising a paper, and a bit lost in the bibliography: I’m acutely suffering from the usual “Oh Lord! Why haven’t I seen this one before?”
The paper itself should fit the occasion, as I plan to argue about whether “social services” (phone, e-mail, social-graphing services) should encourage compatibility — e.g. is it in MySpace’s interest to allow its users to ‘friend’ Facebook accounts? After all, Hotmail and Yahoo! users have been able to send e-mails to one another for years, to both companies’ greater profit. A decision of that kind is what allowed Theodore Vail, of AT&T fame, to transform the ‘theatrophone’ into what became today’s three-billion-user network.
But that is not my point today: what I would love to tackle is my fear of missing key arguments in my area. Google Scholar has a comprehensive database of scientific papers, SSRN is more specialised, and CiteSeer and others have all proved amazingly useful. I can go up citation tracks, and down; but I can’t see a weighted list of what I’m missing. I can’t vote down approaches that I noticed but do not want to follow up on.
It is rather similar to Amazon or Netflix recommendations — but based on data held by private scholarly publishers or national science institutes — and neither type has proved very agile so far.
Recommendation systems are easy to set up; so far, most of them take a string as input, whereas I would like to hand over a BibTeX file. Is anyone able to help?
My idea would be to list the papers associated with each entry in my input list (sharing the same references, or referring to it); remove the inputs themselves from the candidate list; and order the rest by how often they come up. All of this could be weighted, too.
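The steps above can be sketched in a few lines. This is only an illustration under stated assumptions: real citation data would have to come from a service like Google Scholar or CiteSeer, so the `related_papers` mapping below (paper key → papers it cites or is cited by) is a hypothetical stand-in for that index, and the BibTeX parsing is deliberately minimal.

```python
import re
from collections import Counter

def parse_bibtex_keys(bibtex_text):
    """Extract entry keys from BibTeX text, e.g. 'smith2001' in '@article{smith2001, ...}'."""
    return re.findall(r'@\w+\s*\{\s*([^,\s]+)\s*,', bibtex_text)

def recommend(bibtex_text, related_papers, weight=lambda paper: 1.0):
    """Rank papers linked to the input bibliography, excluding the inputs themselves.

    `related_papers` is a hypothetical citation index: it maps a paper key to
    the papers associated with it (same references, or referring to it).
    `weight` lets each occurrence count for more or less than 1, as suggested above.
    """
    inputs = set(parse_bibtex_keys(bibtex_text))
    scores = Counter()
    for key in inputs:
        for neighbour in related_papers.get(key, []):
            if neighbour not in inputs:      # remove the inputs from the candidate list
                scores[neighbour] += weight(neighbour)
    # Most frequently co-occurring papers first
    return [paper for paper, _ in scores.most_common()]

# Toy example with a two-entry bibliography and an invented citation graph:
bib = "@article{a, title={A}}\n@book{b, title={B}}"
graph = {"a": ["c", "b"], "b": ["c", "d"]}
print(recommend(bib, graph))  # 'c' comes up twice, 'd' once; 'b' is an input
```

Swapping the default `weight` for, say, a function of citation counts or recency would give the weighted variant with no other changes.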
Last question: would this favour academic diversity?