2007-11-29 14:30 in /tech/haskell
This post is going to be a little closer to a flame than I’m usually comfortable with, but I’m finding myself really frustrated by a recent rash of bad language advocacy by people who should really know better. This is by no means a problem limited to the Haskell community, but I’ve been noticing an increase from the Haskellers lately.
Today, dons posted a pair of articles benchmarking naive fibonacci implementations. Now, someone else started it, but this is still pretty silly. In post #1, we learn the shocking fact that static, compiled, finite-precision Haskell is faster than dynamic, interpreted, arbitrary-precision Python and Ruby at a heavily numeric and recursive task.
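For context, the benchmark in question is presumably the textbook doubly recursive Fibonacci, something like this sketch (the exact code in those posts may differ slightly):

```haskell
-- Naive, exponential-time Fibonacci: lots of recursion, trivial
-- arithmetic -- exactly the profile that flatters a compiled language
-- with machine-width Ints over an interpreter with bignums.
fib :: Int -> Int
fib n
  | n < 2     = n
  | otherwise = fib (n - 1) + fib (n - 2)

main :: IO ()
main = print (fib 30)
```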
Post #2 is supposed to impress us with how easy it is to parallelize Haskell to use multiple cores. Unfortunately, the naive attempt to parallelize is actually slower than the original serial version. But if you:
- implement the algorithm twice,
- add a magic number apparently pulled out of someone’s nether regions, and
- add not just some compile flags, but some runtime flags too,
then you can get a whopping 5% speed improvement with 2 cores, and almost a factor of two speedup with 4 cores, meanwhile burning about twice as many total CPU cycles!
Color me not impressed.
Meanwhile, others have pointed out that using an intelligent algorithm yields a 1000-fold speed improvement. Better still, the smarter algorithm is actually shorter than the original naive implementation.
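One such algorithm is the standard Haskell one-liner over a lazily self-referential list (a sketch; the follow-up posts may have used a different formulation). It computes each Fibonacci number exactly once, so it runs in linear time instead of exponential, and it is genuinely shorter than the naive version:

```haskell
-- Every Fibonacci number, defined in terms of the list itself:
-- each element is the sum of the two elements before it.
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

-- Look up the nth one. Integer, not Int, so large indices don't overflow.
fib :: Int -> Integer
fib n = fibs !! n

main :: IO ()
main = print (fib 100)
```

No magic tuning numbers, no runtime flags, and no second copy of the algorithm required.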
What’s the point here, since I’m not writing this just to pick on people? It’s that we shouldn’t give in to the temptation to participate in dumb benchmarks like this. Doing dumb stuff makes you look dumb, or worse, dishonest. And Haskell is about making you look smart, right? So, let’s not play the game, okay?