Four months or so ago I launched this blog, experimenting with the academic paper recommendation tool Google Scholar Updates, which uses your own papers as a 'search term' to recommend recently archived papers. I've now read and written about 12 papers, from 14 recommendations (I broke the purity of the 'experiment' by skipping two papers that looked very unlikely to be of value to me); in this post I will review what I've learned from the experiment so far.
Relevance: The biggest question is whether Updates has been regularly recommending papers that are interesting and useful to me. To answer this we must first note one very obvious limitation of a tool like this: its only input is the research I have published in the past, not the research I am doing right now but have yet to publish. This is particularly noticeable to me at the moment because I have been partly transitioning away from my PhD line of research to take on some new interests. These new directions are not catered for by Updates, but neither would you expect them to be; it would be foolish to use Updates as your only source of reading. Nonetheless the point should be made that Updates can only ever be useful in proportion to how much your current research interests are a continuation of your past ones.
With this caveat set aside, I have been happy with the relevance of my recommendations - happy enough to keep using the tool and maintaining this blog. There has been only one conspicuous example of a paper with very low relevance to my research interests, although a number of other papers have fairly weak links; in my case these weakly linked papers usually contained only a brief treatment of names and binding in the context of some other investigation. In fact the weakly linked papers have in many ways been the most interesting to read about, as they help me situate my work in the broader fabric of theoretical computer science research. So while there is a concern (not one I totally buy into) that recommender systems in general can produce an 'echo chamber' effect, fitting people into narrow profiles and never challenging them, that has not been my experience with Updates.
Breadth of sources: One problem that might be underestimated is the sheer diversity of outlets that publish research relevant to me. A long time ago everything worth reading would appear in the Proceedings of the Royal Society, but nowadays research of interest to me might appear in all sorts of journals, conference proceedings, and online repositories. The only venue to 'publish' more than one of the 12 papers I have had recommended is arXiv, which is far too large and diverse to check regularly. In fact, 4 of the 12 papers were not formally 'published' anywhere more high profile than the personal websites of their authors (more on such 'grey literature' below). The converse of this diversity of outlets is that any one outlet publishes only a very small proportion of the material I am interested in, or can even understand. Regularly checking particular outlets for new research is therefore inefficient at best, bordering on useless. Some sort of automation, whether through explicit keyword or citation alerts or through more 'intelligent' but opaque recommender systems, seems virtually compulsory if you want to stay current with relevant research.
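To make 'explicit keyword alerts' concrete, here is a minimal sketch of the simplest sort of automation I mean: polling arXiv's public Atom API for recent submissions matching a fixed query. The query string, and the function wrapped around it, are illustrative inventions of mine, not anything Updates does internally.

```python
# Minimal sketch of a keyword alert against arXiv's public Atom API.
# The query ('nominal sets' in cs.LO) is an illustrative placeholder.
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"  # Atom XML namespace used by the feed

def recent_arxiv_matches(query, max_results=10):
    """Return (title, link) pairs for the newest arXiv entries matching query."""
    url = "http://export.arxiv.org/api/query?" + urllib.parse.urlencode({
        "search_query": query,
        "sortBy": "submittedDate",
        "sortOrder": "descending",
        "max_results": max_results,
    })
    with urllib.request.urlopen(url) as response:
        feed = ET.parse(response).getroot()
    return [(entry.findtext(ATOM + "title").strip(), entry.findtext(ATOM + "id"))
            for entry in feed.iter(ATOM + "entry")]

# Run daily (e.g. from cron) and diff against the previous run for a crude alert.
for title, link in recent_arxiv_matches('all:"nominal sets" AND cat:cs.LO'):
    print(title, "-", link)
```

Even something this crude covers the 'too large and diverse to check regularly' problem for one outlet; the appeal of a system like Updates is that it does the equivalent across all outlets at once, without you having to guess the keywords.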
False negatives: A trickier question to judge is whether any highly relevant papers are not being recommended to me. Following my earlier discussion, the first answer to this is "of course!" - at a minimum, Updates cannot bring to my attention work that relates only to my current unpublished research. On the fairer question of whether Updates is failing to recommend papers relevant to my existing body of work, one thing that helps in making the judgement is that Updates offers, alongside a selective 'top' list (currently numbering 32), a less selective 'all' list of recommendations (currently 99). Are there recommendations in the second list that I would have preferred to see in the first? I haven't studied this systematically, but on a brief look the answer is no - the algorithm has highlighted papers that, at least judging by their titles and authors, seem more interesting than those on its backup list. The more imponderable question is whether Updates is missing some highly relevant papers altogether; all I can say here is that I am not aware of any.
Quality: Of course a paper can be highly related to my research and still not be worth my time - here is one clear example, and as I said I entirely skipped two others (by the same authors). Updates does not filter by quality in any direct way, so this problem is probably unavoidable. It is, however, not chronic - perhaps I'm simply lucky to be in a field not too overburdened with cranks, because the vast majority of papers I have read are genuinely good science. I wouldn't necessarily say 'excellent' science, but the reality is that most science is not groundbreaking - and that's OK! For every paradigm shift that changes everything you need a volume of 'normal' science to fill in the gaps and push at the edges; it's fair to say that most of the papers I have read for this blog fit into the latter category. In particular, many have been 'sequel' papers extending earlier work by the same authors - a very popular genre in an era where quantity of publication gets (perhaps undue) emphasis.
One issue on the quality front that deserves particular emphasis is the large amount of unreviewed work I have had recommended: 7 of the 12 papers had not yet been accepted for publication at the time of my review (one of these has subsequently been accepted). Researchers are not all of one mind on whether such unreviewed work should be exposed to public view, but enough do put it out there that it constitutes a major proportion of Updates's recommendations. The advantage is that you see new science sooner; the disadvantage is that the work tends to be less polished in its exposition and less reliable in its results. Your mileage with this tool will almost certainly depend on your tolerance for this unpublished grey literature.
Sociology: One slightly unexpected aspect of this tool is that it has given a pretty interesting big picture not just of who is releasing research of interest to me, but of where - quite important as I look for post-postdoc work! Of the 12 papers an overwhelming 11 come from institutions in Europe, with 1 each from Asia and South America (the numbers add to more than 12 because some papers have authors at multiple institutions). Breaking that down, there were 5 from the United Kingdom, 2 each from Germany and Poland, and 1 each from Brazil, France, Japan, Romania, and Sweden. While these statistics are only compelling insofar as one trusts the quality of Updates's recommendations, the sheer weight of European institutions (not to mention the surprising absence of the USA) says a lot about where my sort of research is being done, and emphasises the role geography still plays despite all the connections modern communications make possible - something a researcher in relatively isolated Australia can hardly fail to notice.