The topic that I chose to pick on today is survivor bias in advice on how to be successful. Laura Haas, IBM, just finished giving us a plethora of advice (that she and Dave Patterson, UC Berkeley, compiled). It was good advice. It was logical advice. Having said that, I'm skeptical. I am skeptical not just for the two obvious reasons: (1) retrospective advice of successful people often is aimed at the perfect world and may be far more romanticized than would be useful to a young researcher, and (2) most such advice, in my opinion, is extrapolated from a single data point of the success of a particular advice giver (e.g., I wrote a shorter-than-usual research statement and I got 10 job offers, so you should write a short one!). I am skeptical because of the survivor bias, coupled with a special kind of selective memory, that I feel affects such advice.
Such advice typically includes a list of don'ts and a shorter list of dos. For example, don't follow the least publishable units model, don't work on 20-years-out problems, do follow the scientific method to the letter. To justify this advice, the advice givers then typically apply, retrospectively, this advice to their own experience. The interesting phenomenon is that, in all the examples I've encountered, this past experience contains as many dos as don'ts. The dos, especially the noble dos, such as "work with others" and "focus on quality, not quantity in publishing," are credited as the keys to the person's success. The don'ts, however, especially the "ignoble" don'ts, such as "generate lots of publications from a single idea," get classified as mistakes and dismissed as not having contributed to where he/she is today.
The real problem is that we rarely hear from unsuccessful academics, who may be able to discuss whether they actually executed more don'ts than dos, as the advice we get today would imply. In fact, it's quite possible that striving to do only the dos destroyed their academic careers. It may also be possible that having the proper ratio of dos to don'ts is what led to the success of the advice givers, and, without the don'ts, they would not be where they are today. There is no good way to evaluate the advice, except with some sense of morality. It seems noble to work with others, it seems noble to publish quality papers, it seems noble to spend your time doing deep, meaningful reviews. It seems less so to try to publish small improvements on your previous work. So our minds have little issue accepting the premise that doing these noble things is good and what will help us succeed in our careers. But that statement is far from "executing as many dos and as few don'ts as one can will lead to success in academia."
Even if the advice givers had done lots of dos and few don'ts, and we could at least argue correlation, it wouldn't mean causation. (On that note, Margaret Martonosi just said "I was at the original CRA Career Mentoring Workshop, and here I am, so clearly, it works!") But we cannot even argue correlation. There is no evidence that what they call mistakes aren't precisely the actions that brought them success.