I might be.
The Wolfram|Alpha “computational knowledge engine” has been generating buzz for some time, especially since Stephen Wolfram, its eccentric progenitor, announced that it would be going live in mid-May. Expect the twittering to reach a crescendo.
Since Wolfram|Alpha (WA, let’s say) promises to answer questions typed into a simple text box, it’s being described in the press as a Google-killer. The idea, in an alpha nutshell, is that WA interprets a natural language query and then combs through a gigantic pile of databases, both public and licensed, in order to respond with an answer — rather than Google’s list of web pages that may or may not contain an answer.
Wolfram recently gave a demonstration of WA at Harvard’s Berkman Center. The whole presentation is posted, but you can get a quicker sense of what WA aims to do in this surprisingly murky collection of screenshots:
From this demo and other the-Wolfram-is-coming reviews blooming like tremulous flowers in the rain, WA looks to be a fancy calculator, an atlas on steroids, a deft collator of visualized data.
But is it more than that? Beyond looking up and presenting information, will it give us genuine and new answers? Will it represent a significant push beyond Google’s suddenly modest ambition to “organize the world’s information and make it universally accessible and useful”?
…what about all the actual knowledge that we as humans have accumulated?
A lot of it is now on the web—in billions of pages of text. And with search engines, we can very efficiently search for specific terms and phrases in that text.
But we can’t compute from that. And in effect, we can only answer questions that have been literally asked before. We can look things up, but we can’t figure anything new out.
So how can we deal with that? Well, some people have thought the way forward must be to somehow automatically understand the natural language that exists on the web. Perhaps getting the web semantically tagged to make that easier.
… I realized there’s another way: explicitly implement methods and models, as algorithms, and explicitly curate all data so that it is immediately computable.
Wolfram is known for making audacious claims about the power of computation; his massive boiling down of all complexity into relatively simple mathematical rules, A New Kind of Science, was a ‘surprise best seller’ on Amazon even though Wolfram posts all of it for free. The promise of a simple handle on an immensely complex world — frothing up into a good dose of post-religious hype — is irresistible. It’s quite congruent, when you think about it, to Google’s keyword-search doorway to the infinite.
But Google is best used to locate information, not to solve problems. Sure, if you type “square root of 81” into its search field, it will offer you a quick answer atop the usual PageRank results. Google has, in fact, dabbled in calculator functions. This slippage between search and calculation, though, is what alarms me.
A pernicious information illiteracy takes root — the world of clear ascription of responsibility suffers another blow — anytime someone starts assigning oracular power to the Google search algorithm. “It says [fill in information claim here].” I’ve seen college students actually cite a Google search in research — not research on Google search, mind you, but research on a subject informed by something that the search dug up one night. Who wrote and published the data is unimportant: in the middle of that dreary night, “It says….”
At an extreme point, we reach the absurdity of Carol Beer in Little Britain, overriding every thought and instinct as she taps at the keyboard and announces, after desultory searches, “Computer says no…”
Of course any decent web calculator will draw on good data, and won’t be nearly as mechanistic or useless or funny as Carol. But even an amazing one — and WA promises to be amazing — shouldn’t be confused with actual intelligence; assembling and synthesizing only gets you so far. One of WA’s biggest cheerleaders, Twine founder Nova Spivack, makes a similar point:
Wolfram Alpha, at its heart is quite different from a brute force statistical search engine like Google. And it is not going to replace Google — it is not a general search engine: You would probably not use Wolfram Alpha to shop for a new car, find blog posts about a topic, or to choose a resort for your honeymoon. It is not a system that will understand the nuances of what you consider to be the perfect romantic getaway, for example — there is still no substitute for manual human-guided search for that. Where it appears to excel is when you want facts about something, or when you need to compute a factual answer to some set of questions about factual data.
Spivack’s distinction between (WA’s) computation and (Google’s) look-up is helpful, as is his concession that WA, as elegantly structured as it may be, will only be useful in presenting and recombining known facts. Wolfram himself, no stranger to hyperbole, may wish to characterize WA as generating new knowledge. But until it develops algorithms for context, nuance, interpretation, influence, critique, seriousness, incoherence — until it embraces all of human expression, in all of its messiness — it will never offer sufficient answers to questions more debatable than “What was the average rainfall in Boston last year?” — just as Wikipedia cannot extend beyond professed neutrality.
So my fear of WA, knowing little about how it actually will work and feel, is that it will offer a fancy dashboard of pseudo-expertise, subtly diverting human inquiry into what’s pre-known. This seems an old fear, a fear of robots, and maybe, like many old human fears, it will melt away in the light of new threats.
In any case, WA seems poised to offer a counterpoint to the semantic web, a different model of bringing structure to information to make search more responsive to the questions we ask. The road is strewn with various ‘natural language’ search disappointments — Ask Jeeves was deaf, Powerset seems blind to all but Wikipedia — but there’s reason to hope that Wolfram’s interpretation of natural language will be smarter, that it will process our questions and deliver them to large and various datasets. If it then answers authoritatively, though — caveat emptor.