After I wrote my last blog post reacting to Alex Knapp's critique of Ray Kurzweil's predictive accuracy, Ray Kurzweil wrote his own rebuttal of Alex's argument.
Ray then emailed me, thanking me for my defense of his predictions, but questioning my criticism of his penchant for focusing on precise predictions about future technology. I'm copying my reply to Ray here, as it may be of general interest...
Hi Ray,
I wrote that blog post in a hurry and in hindsight wish I had framed things more carefully there.... But of course, it was just a personal blog post, not a journalistic article, and in that context a bit of sloppiness is OK I guess...
Whether YOU should emphasize precise predictions less is a complex question, and I don't have a clear idea about that. As a maverick myself, I don't like telling others what to do! You're passionate about predictions and pretty good at making them, so maybe making predictions is what you should do ;-) .... And you've been wonderfully successful at publicizing the Singularity idea, so obviously there's something major that's right about your approach, in terms of appealing to the mass human psyche.
I do have a clear feeling that the making of temporally precise predictions should play a smaller role in discussion of the Singularity than it now does. But this outcome might be better achieved via the emergence of additional, vocal Singularity pundits alongside you, with approaches complementing your prediction-based approach -- rather than via you toning down your emphasis on precise prediction, which after all is what comes naturally to you...
One thing that worries me about your precise predictions is that in some cases they may serve to slow progress down. For example, you predict human-level AGI around 2029 -- and to the extent that your views are influential, this may dissuade investors from funding AGI projects now ... because it seems too far away! Whereas if potential AGI investors more fully embraced the uncertainty in the timeline to human-level AGI, they might be more eager to invest now.
Thinking more about the nature of your predictions ... one thing these discussions of your predictive accuracy highlight is that the assessment of partial fulfillment of a prediction is extremely qualitative. For instance, consider a prediction like “The majority of text is created using continuous speech recognition.” You rate this as partially correct, because of voice recognition on smartphones. Alex Knapp rates this as "not even close." But really -- what percentage of text do you think is created using continuous speech recognition, right now? If we count on a per-character basis, I'm sure it's well below 1%. So on a mathematical basis, it's hard to justify "1%" as a partially correct match to a prediction of ">50%". Yet in some sense, your prediction *is* qualitatively partially correct. If the prediction had been "Significant subsets of text production will be conducted using continuous speech recognition," then the prediction would have to be judged valid or almost valid.
One problem with counting partial fulfillment of predictions without specifying the criteria for partial fulfillment is that assessment of predictive accuracy then becomes very theory-dependent. Your assessment of your accuracy is driven by your theoretical view, and Alex Knapp's is driven by his own theoretical view.
Another problem with partial fulfillment is that the criteria for it are usually determined *after the fact*. To the extent that one is attempting scientific prediction rather than qualitative, evocative prediction, it would be better to rigorously specify the criteria for partial fulfillment, at least to some degree, in advance, along with the predictions.
So all in all, if one allows partial fulfillment, then precise predictions become not much different from highly imprecise, explicitly hand-wavy predictions. Once one allows partial matching via criteria defined subjectively on the fly, “The majority of text will be created using continuous speech recognition in 2009” becomes not that different from just saying something qualitative like "In the next decade or so, continuous speech recognition will become a lot more prevalent." So precise predictions with undefined partial matching are basically just a precise-looking way of making rough qualitative predictions ;)
If one wishes to avoid this problem, my suggestion is to explicitly supply more precise criteria for partial fulfillment along with each prediction. Of course this shouldn't be done in the body of a book, because it would make the book boring. But it could be offered in endnotes or online supplementary material. Obviously this would not eliminate the theory-dependence of partial fulfillment assessment -- but it might diminish it considerably.
For example, the prediction “The majority of text is created using continuous speech recognition” could have been accompanied by information such as: "I will consider this prediction strongly partially validated if, for example, more than 25% of the text produced by some population comprising more than 25% of people is produced by continuous speech recognition; or if more than 25% of text in some socially important text-production domain is produced by continuous speech recognition." This would make assessing the prediction's partial match to current reality a lot easier.
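To make this concrete, here's a toy sketch in Python of what a prediction published alongside pre-registered partial-fulfillment criteria might look like -- all the variable names and numbers below are made up purely for illustration, not actual measurements or anyone's actual methodology:

```python
# A toy sketch (illustration only) of a prediction published together with
# explicit partial-fulfillment criteria. All names and numbers are hypothetical.

from dataclasses import dataclass
from typing import Callable, Dict

Criterion = Callable[[Dict[str, float]], bool]

@dataclass
class Prediction:
    statement: str
    full: Criterion     # criterion for full fulfillment
    partial: Criterion  # pre-registered criterion for "strong partial" fulfillment

csr = Prediction(
    statement="The majority of text is created using continuous speech recognition.",
    # Full fulfillment: CSR accounts for >50% of all text produced.
    full=lambda obs: obs["csr_share_all_text"] > 0.50,
    # Strong partial fulfillment, as specified above: >25% of text within a
    # population covering >25% of people, OR >25% of text in some socially
    # important text-production domain.
    partial=lambda obs: (
        (obs["csr_share_in_subpop"] > 0.25 and obs["subpop_fraction"] > 0.25)
        or obs["csr_share_in_key_domain"] > 0.25
    ),
)

# Hypothetical 2009 observations, invented purely for illustration.
observed = {
    "csr_share_all_text": 0.01,
    "csr_share_in_subpop": 0.05,
    "subpop_fraction": 0.30,
    "csr_share_in_key_domain": 0.10,
}

if csr.full(observed):
    print("fully fulfilled")
elif csr.partial(observed):
    print("strongly partially fulfilled")
else:
    print("not fulfilled")  # this branch fires for the made-up numbers above
```

Of course, the real work would be in agreeing on how to measure those quantities -- but at least the thresholds themselves would be fixed in advance, rather than negotiated after the fact.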
I'm very clear on the value of qualitative predictions like "In the next decade or so, continuous speech recognition will become a lot more prevalent." I'm much less clear on the value of trying to make predictions more precisely than this. But maybe most of your readers implicitly interpret your precise predictions as qualitative predictions... in which case the precise/qualitative distinction is largely stylistic rather than substantive.
Hmmm...
Interesting stuff to think about ;)
ben