Cake day: May 29th, 2024


  • Being able to summarize and answer questions about a specific corpus of text was a use case I was excited about, even knowing that LLMs can’t really answer general questions or reason logically.

    But if Google’s search summaries are any indication, they can’t even do that. And I’m not just talking about the screenshots people post; this is my own experience with it.

    Maybe you could run the LLM in an entirely different way: you enter a question and it tells you which part of the source text correlates most strongly with the words you typed, instead of generating new text. That way, in the worst case it just points you to an irrelevant part of the source text rather than giving you answers that are subtly wrong or misleading.

    Even then, I’m not sure the huge computational requirements make it worth it over Ctrl-F or a slightly more sophisticated search algorithm.
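
    The "point me at the best-matching passage" idea above can be sketched without any LLM at all — a minimal, illustrative example using bag-of-words cosine similarity over stdlib Python. The corpus, tokenizer, and scoring here are all assumptions for demonstration, not any real product’s implementation:

    ```python
    # Hedged sketch: rank passages by lexical overlap with the query and
    # return the best match, instead of generating new text.
    # All passages and names below are made up for illustration.
    import math
    import re
    from collections import Counter

    def tokenize(text: str) -> list[str]:
        # Crude word tokenizer: lowercase alphabetic runs.
        return re.findall(r"[a-z']+", text.lower())

    def cosine(a: Counter, b: Counter) -> float:
        # Cosine similarity between two bag-of-words vectors.
        dot = sum(a[t] * b[t] for t in set(a) & set(b))
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    def best_passage(query: str, passages: list[str]) -> str:
        # Return the passage whose word distribution best matches the query.
        q = Counter(tokenize(query))
        scored = [(cosine(q, Counter(tokenize(p))), p) for p in passages]
        return max(scored)[1]

    passages = [
        "The invoice must be paid within thirty days of receipt.",
        "Returns are accepted within fourteen days with a receipt.",
        "Shipping is free on orders over fifty dollars.",
    ]
    print(best_passage("how long do I have to pay the invoice", passages))
    ```

    Even a wrong answer here just surfaces a verbatim passage you can read and judge yourself — it can’t fabricate anything. Real retrieval systems use TF-IDF weighting or embeddings instead of raw counts, but the failure mode stays the same: irrelevant, not invented.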