In reply to Comment 31 (Johan Rönnblom):
>is the kind of thing people use their machines for
You would not be testing software, i.e. thousands of lines of code, but a highly specific filter of a few dozen lines. There is no statistical value in that. Those few lines might favour one system over the other by pure coincidence: because of register usage (many vs. few), because the code targets a special CPU (optimized for 060 scheduling etc.), because of the size of the routine, the amount of local data, the code/data cache behaviour, the number of memory accesses, the dependency on endianness, and so on. Just forget about it, the result doesn't mean a thing. The same objections apply to any number of short, one-sided benchmarks, like decoding JPEGs, prime number searching etc.

If you want to compare the JIT of MorphOS vs. UAE, compilation is a good test because it runs for a long time and a lot of different code gets executed. That is why one often finds kernel compilation times in reviews. Running games is a good test too, for the same reason: lots of code. If you want to stick with image processing, the only way to get a meaningful result is to use dozens of different functions and calculate an average.
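To illustrate that last point, here is a minimal sketch of averaging per-function results across many tests. The function names and timings are invented for illustration, not measurements from either system:

```python
import math

# Hypothetical wall-clock times (seconds) for the same image filters
# on two systems; the names and numbers are made up.
times_a = {"blur": 1.20, "sharpen": 0.80, "rotate": 2.10, "jpeg_decode": 0.95}
times_b = {"blur": 1.00, "sharpen": 0.90, "rotate": 1.70, "jpeg_decode": 1.10}

# Per-function speedup of system B over system A.
ratios = [times_a[f] / times_b[f] for f in times_a]

# The geometric mean is the usual way to average ratios: one lopsided
# test cannot dominate the result the way it would in an arithmetic mean.
geomean = math.exp(sum(math.log(r) for r in ratios) / len(ratios))
print(f"geometric mean speedup: {geomean:.3f}")
```

With only four entries this is still too few; the point is that a single coincidental outlier barely moves the combined figure once dozens of functions go into it.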