|Marshall Lager, contributor, CRM magazine|
My fear is that I’m about to write a rambling, semi-coherent post full of paranoid fantasies about privacy violations and evil corporations. But isn’t that what blogs are for?
Today’s bit is inspired by a New York Times article about the Deep Web, a term referring to the wealth of information stored in databases that are hidden from regular search engines. New technologies are being developed that can reach into these dark corners and mine useful and relevant data including “financial information, shopping catalogs, flight schedules, medical research and all kinds of other material stored in databases that remain largely invisible to search engines.”
I’m all for better searching, greater relevance, and more accurate predictions of what I am really looking for when I type in a search. But aren’t some of these things hidden from public view for a reason?
I’m aware that the intent of these efforts is not to expose private or classified information to unauthorized scrutiny. Still, one of my first thoughts on the topic was “keep your nose out of my business.” I’ve gotten past that, for the most part; these new engines aren’t built to snoop passwords and steal secrets, and data owners still have to take reasonable steps to protect what they own.
No, by this point I’m more concerned with practicality, relevance, and value. It’s already hard enough to find information, despite Web tech making information freely available to anybody who can use a browser—there’s a lot of clutter and misdirection that can get in the way. Opening up our options to include what the Times article calls an infinitely large haystack has the potential to create an infinite mess.
I have the feeling that digging through hidden databases is the easy part of the problem. The real trick is going to be making sense of what’s out there. Searches will trigger searches of their own, adding another layer of complexity to what goes on in the background. In addition to finding what is requested, machines will have to guess (through statistical analysis) what an inquiry is really after, get that information, and prepare to present it. Refining a search might mean throwing out all that second-level data and digging for another set, and so on until the user is done. And it all has to look simple and seamless.
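To make that multi-level process concrete, here’s a minimal sketch of the idea in Python. Everything in it is hypothetical: the function names (`infer_intent`, `deep_search`), the toy corpus, and the crude term-overlap scoring standing in for real statistical analysis are all my own invention for illustration, not how any actual Deep Web engine works.

```python
def infer_intent(query, corpus):
    """Guess what a query is 'really after' by scoring term overlap.
    (A crude stand-in for the statistical analysis described above.)"""
    terms = set(query.lower().split())
    return {doc_id: len(terms & set(text.lower().split()))
            for doc_id, text in corpus.items()}

def deep_search(query, corpus, threshold=1):
    """First-level search, then a second-level pass: each hit triggers
    a follow-up search of its own, pulling in related documents the
    user never asked for directly."""
    scores = infer_intent(query, corpus)
    first = {d for d, s in scores.items() if s >= threshold}
    second = set()
    for doc_id in first:
        # Use the matched document's own text as a follow-up query.
        follow = infer_intent(corpus[doc_id], corpus)
        second |= {d for d, s in follow.items()
                   if s >= threshold and d not in first}
    return sorted(first), sorted(second)

# Hypothetical mini-corpus of 'hidden database' records.
corpus = {
    "flights": "flight schedules from boston to denver",
    "fares":   "denver flight fares and schedules",
    "hotels":  "hotels in denver near the airport",
    "recipes": "chocolate cake recipes",
}

direct, related = deep_search("boston flight schedules", corpus)
# 'hotels' never matched the original query; it surfaced only through
# the second-level search triggered by the flight results.
```

Refining the search, as described above, would amount to throwing out `related` and re-running the second-level pass with a new threshold or query, over and over, while the user sees only a simple results page.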
Guessing right feels great, and it’s a magical experience when a person (or a machine) seems to intuit our inner thoughts based on minimal information. I still get an “oh wow” moment when I look something up on the Web and find exactly what I’m looking for despite seemingly inadequate search terms. If I could send a question into the ether and get a direct answer, along with some truly relevant next steps, every time, it would be a whole new Web for me.
The possibilities are tremendous. The challenges are as well. The company that gets it right and makes it truly useful first will have a license to print money. My ramble is over. Let the discussions begin.