One of the reasons I’m not a big fan of AI search is that it doesn’t seem granular enough to me. That is to say, there’s not a lot of back-and-forth, patron-interview-type stuff, so the AI is left to do a substantial amount of heavy lifting in the form of inferring all the context that could be gleaned with just one or two follow-up questions.
But that’s not to say that there’s no place for AI in search. AI as a knowledge bank can supply information users don’t have, helping them build more advanced and useful searches that are then performed the more traditional (algorithmic) way.
I’m playing with the idea of using AI to create overlapping searches for two Wikipedia pages. A pair of API calls extracts a set of relevant topics for each page. A third API call analyzes each page *in conjunction with its set of extracted topics* (this seems to be necessary to keep the comparison from going sideways) and generates a JSON list of overlapping topics.
The next step is generating a set of Mojeek API queries based on the overlapping topics and the original pages and seeing what I get.
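A minimal version of that step might just anchor each shared topic to the two page titles. The endpoint and parameter names below are assumptions (check Mojeek's Search API docs before relying on them):

```python
from urllib.parse import urlencode

def build_queries(title_a: str, title_b: str, overlap: list[str]) -> list[str]:
    # One query per shared topic, anchored by both original page titles.
    return [f'"{title_a}" "{title_b}" {topic}' for topic in overlap]

def mojeek_url(query: str, api_key: str) -> str:
    # Endpoint and parameter names are assumptions, not confirmed API details.
    params = {"q": query, "api_key": api_key, "fmt": "json"}
    return "https://api.mojeek.com/search?" + urlencode(params)
```

For example, `build_queries("Ada Lovelace", "Charles Babbage", ["analytical engine"])` would yield the single query `"Ada Lovelace" "Charles Babbage" analytical engine`.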
I’m increasingly thinking about atomizing search engine queries — reducing them to a set of component concepts which can be manipulated by the user. The flow of data will only increase, and so will the ways we can fracture it to our benefit. Using AI to break a query down so the user can rebuild it in a more contextually useful way gives the user the benefit of a supporting knowledge structure while also making the search more transparent.
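Once a query has been atomized into concepts (by an LLM or otherwise), the rebuilding side is plain combinatorics. A toy sketch of handing the user every k-concept sub-query to pick from:

```python
from itertools import combinations

def recombine(atoms: set[str], k: int) -> list[str]:
    # Every k-concept sub-query a user could assemble from the atoms,
    # in a stable alphabetical order for display.
    return [" ".join(combo) for combo in combinations(sorted(atoms), k)]
```

The interesting part isn't the combinatorics, of course — it's the AI supplying atoms the user didn't know to include in the first place.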