This is a reprint of a comment on this Vox Popoli thread.
Aha! Figures I’d have to be drunk to come up with a better analogy. Fun fact, I score higher on IQ tests when I’m drunk. Well, I suppose there’s a local maximum of drunkenness vs. IQ, at that.
Anyway, when we say a computer is “fast”, it is like saying a person is “smart”. When we say “this computer is faster than that computer”, it is because we have run a common benchmark test on both computers, and one performed better. If we call the benchmark “reliable”, we mean it is robust, like Spearman’s g. Maybe one of the computers is designed to be a home theater PC and the other is supposed to be a server- if the tester does not account for this, then the benchmark was misapplied. But common sense tells us the performance difference between an Apple II and a MacBook Pro is unmistakable.
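To make the analogy concrete, here is a minimal sketch of what a benchmark comparison amounts to: run the same workload under the same conditions and compare the timings. The workloads and names here are invented for illustration- two sort routines stand in for the two computers.

```python
import random
import time

def benchmark(fn, data, runs=5):
    """Return the best-of-N wall-clock time for fn(data)."""
    best = float("inf")
    for _ in range(runs):
        copy = list(data)           # same input every run, fair test
        start = time.perf_counter()
        fn(copy)
        best = min(best, time.perf_counter() - start)
    return best

def bubble_sort(items):
    """A deliberately slow sort- our stand-in for the Apple II."""
    for i in range(len(items)):
        for j in range(len(items) - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]

data = [random.random() for _ in range(2000)]
slow = benchmark(bubble_sort, data)   # the "Apple II"
fast = benchmark(sorted, data)        # the "MacBook Pro"
print(f"bubble: {slow:.4f}s  builtin: {fast:.4f}s")
```

Note that the benchmark never looks inside either implementation- it measures high-level behavior only, which is exactly the sense in which an IQ test measures a brain without enumerating its neurons.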
To claim that IQ tests are not science is like claiming that benchmark tests are not science, because the “units” are not explicitly defined. This doesn’t mean they aren’t there- the transistors and their voltages are still humming along happily without our conscious awareness of them.
To claim that benchmarks are not scientific is silly, because computer science is a real thing. Benchmarks are “high-level” computer science (in the same way that Python is a higher-level language than C or assembly) whereas chip design is “low-level” computer science. By analogy, psychometrics is “high-level” neurology, whereas brain surgery is “low-level” neurology.
Emphasis here: The tendency of benchmark users to fail to understand the extraordinary, underlying complexity of their benchmarks does not prove that high-level computer science cannot be practiced by a sufficiently intelligent computer scientist.
In philosophical terms, the interface between the soul and the meatbrain is similar to the interface between the user and a desktop computer. This may account for all mind-body problems- as Markku has previously explained in the technobabble of his profession (WTF is a thin client, anyway?)- but I can’t say it has necessarily done so, because I have no idea what he’s talking about most of the time.
The problem, therefore, is that we are not in the historical position of computer scientists, who have built their field from the ground level (transistors) up to a very high level of abstraction (e.g. TurboTax, or Blogger’s tendency to drop long-winded comments for its own sadistic purposes). The situation is more like a dystopian future, where we have rediscovered a supercomputer and are attempting to piece together an understanding of its functioning. We have discovered the low-level, underlying units (neurons, axons, action potentials; analogous to transistors, wires, voltages, etc.) and we’ve learned a great deal about the high-level operation of the thing (having observed that one is generally faster than another), but we’ve as yet failed to bridge the gap with a middle level. It’s as if the computer scientists of old had failed to leave instructions for how a network of transistors could give rise to a logic circuit, and then a breadboard, etc. We’re stumbling through the rediscovery of these middle parts.
Okay, having properly illustrated the problem, I’ll attempt to provide an ad-hoc bridge before my depressive vice puts me to sleep. No promises.