I just finished reading Surely You’re Joking, Mr. Feynman!, a collection of anecdotes as told by the famous physicist Richard Feynman. It’s a wonderful book, which I would highly recommend; Feynman is such a mischievous character that most of these tales will have you roaring with laughter. Reading these stories also made me realise the importance of asking questions, even if you feel they are ‘stupid’. I remember how, at the conference, many people would start their questions along the lines of ‘I know this is a naïve question…’ or ‘this is probably a naïve question, but…’. In academia, there are so many really smart people that you feel the need to maintain a façade of “perfect” knowledge, and often feel the pressure of not wanting to look dumb. Here’s Feynman’s take on that:
Some people think in the beginning that I’m kind of slow and I don’t understand the problem, because I ask a lot of ‘dumb’ questions: ‘Is a cathode plus or minus? Is an anion this way, or that way?’ But later, when the guy’s in the middle of a bunch of equations, he’ll say something and I’ll say, ‘Wait a minute! There’s an error! That can’t be right!’
The guy looks at his equations, and sure enough, after a while, he finds the mistake and wonders, ‘How the hell did this guy, who hardly understood at the beginning, find that mistake in the mess of these equations?’
This story speaks to a wider point: the difference between knowing something and understanding it. There are all sorts of things we know, such as that gravity exists, or that we see objects because light bounces off them and is captured by our eyes, but actually understanding how these processes work is an entirely different story. Much of the time, obtaining a deep understanding of something requires asking seemingly ‘dumb’ questions.
But that’s not what I wanted to address in this post. In the last chapter of the book, Feynman addresses the question of scientific integrity, and how this is often the only thing separating true scientific inquiry from pseudoscientific baloney:
It’s a kind of scientific integrity, a principle of scientific thought that corresponds to a kind of leaning over backwards. For example, if you’re doing an experiment, you should report everything that you think might make it invalid – not only what you think is right about it…Details that could throw doubt on your interpretation must be given, if you know them…In summary, the idea is to try to give all the information to help others to judge the value of your contribution; not just the information that leads to judgement in one particular direction or another.
Often, it feels as if the academic system actively discourages such honesty. In papers, it is very common to sanitise any results which conflict with the authors’ arguments, or sometimes to remove them altogether. When I was doing my Master’s work, there was one technique which never worked particularly well for us: it gave lots of results, whereas others had only ever reported the single result it gave them. Now, it could be that we were particularly bad with this technique. But it could equally be that scientists using this technique only report the results they are expecting, without mentioning all of the other ‘hits’ they obtained. The thing is, you can’t tell what factors are influencing your results if others are not completely honest about how well a particular technique is supposed to work. Consider the well-known example of someone firing a machine gun into a wall and circling only the holes that happen to be close together: this gives the illusion of accuracy to a random event. Likewise, if scientists only report the results which agree with their idea, they can easily make that idea look very good simply by expunging anything which makes it look bad.
The current academic treadmill places so much emphasis on getting papers into good journals, which in turn brings in more grants, that sometimes one forgets why one is doing science in the first place. Science is about the thrill of discovery, of uncovering something about nature that no one else has found before. And that requires complete and utter honesty. I don’t know if there is an easy solution to this problem, not when the current incentives are such that people need to publish “good” results. Any effort at encouraging greater honesty will struggle in such a high-pressure system, where the incentive not to be completely honest is so great. But we owe it to ourselves and to future generations of scientists to find a way of encouraging greater scientific integrity, to demand that unpleasant facts be shared. Otherwise, why are we even doing science, when the gap between what we are doing and all the other pseudoscientific hocus-pocus is becoming uncomfortably thin?