I do believe there might be a place for AI in search, but I don’t think it’s as part of a black box. If I’m going to use AI as part of my search experience, I want to see what it’s doing right in front of me.
I’ve been messing around a bit making an AI-assisted search tool with the ChatGPT API, but instead of applying the AI to the search directly I’m using it to extract concepts from a block of text and apply them contextually to a base Google search. For example, I might have an interest in German banking when I come across an article about Jan Marsalek. I feed the article to my program, which uses a ChatGPT API call to extract the top names and topics and then generates a series of Google search URLs featuring my base topic and the concepts extracted by ChatGPT.
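To make the shape of this concrete, here’s a rough sketch of that workflow in Python. The function names and the prompt wording are my own illustration, not the actual tool; the only real APIs assumed are the OpenAI chat-completions client and Google’s `search?q=` URL pattern.

```python
from urllib.parse import quote_plus

def extract_concepts(article_text: str, n: int = 5) -> list[str]:
    """Ask the model for the top names/topics in the article, one per line."""
    # Imported here so the URL-building half below works even without the
    # openai package installed. Reads OPENAI_API_KEY from the environment.
    from openai import OpenAI
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": f"List the {n} most important names and topics "
                        "in the user's text, one per line, no numbering."},
            {"role": "user", "content": article_text},
        ],
    )
    lines = resp.choices[0].message.content.splitlines()
    return [line.strip() for line in lines if line.strip()]

def build_search_urls(base_topic: str, concepts: list[str]) -> list[str]:
    """Pair the base topic with each extracted concept in a Google query."""
    return [
        "https://www.google.com/search?q=" + quote_plus(f'"{base_topic}" {c}')
        for c in concepts
    ]
```

So for a base topic of “German banking” and an article mentioning Jan Marsalek, you’d get one search URL per extracted concept, each anchored to the base topic.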
I did it this way first because I knew if I built searches for Google they would find SOMETHING, generally related/contextual, and it does work somewhat. But ideally I want to make Stract API calls instead of building Google URLs, extract all the results into one object and filter them, and then do further text/concept analysis.
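The merge-and-filter step might look something like this sketch. The result fields here are assumptions about what a search API would return (I’m not reproducing Stract’s actual response schema), and the filter is a deliberately simple one: dedupe by URL, keep anything mentioning a required term.

```python
from dataclasses import dataclass

@dataclass
class Result:
    # Assumed fields; a real search API response would need mapping onto this.
    title: str
    url: str
    snippet: str

def merge_and_filter(result_sets: list[list[Result]],
                     required_terms: list[str]) -> list[Result]:
    """Flatten per-query result sets into one list, drop duplicate URLs,
    and keep only results whose text mentions at least one required term."""
    seen: set[str] = set()
    merged: list[Result] = []
    for results in result_sets:
        for r in results:
            if r.url in seen:
                continue
            seen.add(r.url)
            text = f"{r.title} {r.snippet}".lower()
            if any(term.lower() in text for term in required_terms):
                merged.append(r)
    return merged
```

The point of pooling everything into one object first is that the later text/concept analysis gets a single deduplicated corpus to work over, rather than N separate Google result pages.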
AI applied this way has a very specific task — it’s extracting concepts from a block of text. It’s not trying to independently understand them for a search, it’s not trying to correlate them with some kind of external (possibly hallucinated) knowledge — the boundaries are well set. Unfortunately ChatGPT 3.5 isn’t nearly as good as 4 when it comes to extracting concepts — I’m going to do some experimenting with my prompts.
This looks like a promising step on my eternal quest to answer the question, “How do we ask for what we don’t know?”