Points are determined by the performance of a given machine relative to a benchmark machine. Before putting out any new work unit, we benchmark it on a dedicated 2.8 GHz Pentium 4 machine
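(For what it's worth, one plausible reading of that scheme is credit scaled by how fast your box is relative to the benchmark machine. This is just a guess at the mechanics; the function name, numbers, and formula below are all illustrative, not the project's actual code.)

```python
# Hypothetical sketch of benchmark-relative credit.
# Assumption: a unit's base points are scaled by how quickly your
# machine finishes it compared to the reference 2.8 GHz P4.

def points_for_unit(base_points: float,
                    benchmark_seconds: float,
                    your_seconds: float) -> float:
    """Scale a unit's base points by speed relative to the benchmark box."""
    return base_points * (benchmark_seconds / your_seconds)

# A machine twice as fast as the benchmark earns double the points.
print(points_for_unit(100.0, 3600.0, 1800.0))
```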
does that strike anyone else as a little backwards?
the whole point of distributed computing is to tackle tasks so large that no single computer could process them all, yet every single work unit has already been processed in-house at base?
leaves me wondering... what's the point of re-crunching units that have already been worked?