Why are (neuro)scientists such terrible programmers? I just spent most of the morning trying to add some simple functionality to some code from a fairly prominent computational neuroscience lab, and the code was so bad that I began to entertain serious doubts about the method and the whole string of papers that have been published using it. I briefly debated rewriting the whole thing from scratch, but I have about twelve other projects on my plate now, and the benefits of challenging a fairly accepted method are pretty low, unless you can also upset a few cherished hypotheses at the same time. So I crammed my patches into the spaghetti, prayed they didn't break anything, and started the analysis, which only makes a rather minor contribution to my results anyway. No doubt better programmers than me have felt and done similar things, and in all likelihood the problem is much larger: science is so competitive now, with everyone looking for big, new, flashy results, that verification has become a very low priority. There's not a lot I can do about that, but at least in the realm of computational techniques it may be worthwhile to outline the problem.
What makes code good, and why is it important? Good code is, first and foremost, easy for someone who didn't write it to interpret. Anyone with a basic understanding of programming, and of the scientific question being addressed, should be able to tell what's going on in every line. This goes far beyond commenting; in fact, really good code doesn't need a lot of comments, because the expressions are largely self-explanatory. But regardless of how it's achieved, when code is readable the methodology is clear. It should be obvious that this is important to good science, because it allows your colleagues to evaluate what you did and extend your work. But let's be honest: not everyone shares those goals. A cynical explanation for why there's so much bad scientific code is that the people releasing it want to maintain control over the method, for commercial or competitive reasons. I don't think this motive is all that common in neuroscience, so I prefer to think that the problem stems from a lack of education, which I am willing to do my part to try to remedy. Toward that end, I'll be posting articles here detailing some common mistakes and mistaken attitudes.
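To make the point concrete, here is a toy contrast. Both functions below compute the same thing (the example task, a mean firing rate from spike times, and all the names in it are invented for illustration, not taken from any real lab's code), but only one of them lets a reviewer see the methodology at a glance:

```python
# Opaque version: a reviewer must reverse-engineer what d, t, and x mean.
def f(d, t):
    return [len([x for x in s if x < t]) / t for s in d]

# Readable version: the analysis is visible in the code itself,
# with almost no comments needed.
def mean_firing_rates(spike_trains, window_seconds):
    """Return each neuron's mean firing rate (spikes/sec) within the window."""
    rates = []
    for spike_times in spike_trains:
        spikes_in_window = [t for t in spike_times if t < window_seconds]
        rates.append(len(spikes_in_window) / window_seconds)
    return rates
```

The second version is a few lines longer, but anyone checking the Methods section can verify in seconds that spikes are counted within a fixed window and normalized by its duration; with the first, that fact is hidden behind single-letter names.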
In the long run, as computational techniques become more complex and more common, I would propose that code be treated as part of a paper's Methods section. If a program hasn't been used previously, it should be subject to review, and even if the method isn't new, authors should be required to deposit the code used in a given paper somewhere online. Many journals make similar requirements for gene sequences. I think this would do a lot to improve the quality of programming, and it would keep researchers from purposefully obfuscating code to keep one foot in the door for commercial exploitation. But of course there is a much larger discussion to be had about how well scientific and pecuniary motives can coexist in research.