AI Winter -- When will the next one arrive?
Each AI advance leapfrogs our understanding to a new plateau, and then we stagnate for a while, or make only minimal progress. During the 1970s and the 1990s there were sharp reductions in funding and researcher interest in AI. LLMs have produced another jump in capabilities, but once this methodology runs out of steam, will we enter a new low period?
AI usually refers to "the new hot thing" in intelligent machines, whether that is playing strategy games, generating math proofs, recognizing patterns, or manipulating language. With LLMs, OpenAI, Meta, and Google have shown that we can build conversational chatbots with pretty good recall, albeit prone to hallucinations. I suggest that novel thoughts are significantly harder to generate than a "best of" compilation of past human knowledge.
We've used LLMs primarily for information retrieval; generating complex ideas and opening new markets has not been their strong suit so far. This might be why Sam Altman is still cagey about how OpenAI will make money. He's said he would ask the model how to make money "when we're done", but I suspect that sort of question will be significantly more difficult to answer, and that this kind of reasoning or generation may require a new type of model. In the same way that game-tree pruning in chess (a game of perfect information) gave way to "regret minimization" in poker (a game of incomplete, hidden information), I suspect we're going to need something similarly new for these harder (and clearly more lucrative) questions.
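To make the chess-versus-poker contrast concrete, here is a minimal, illustrative sketch of regret matching, the building block of counterfactual regret minimization, the family of algorithms behind modern poker bots, applied to rock-paper-scissors. The opponent mix, iteration count, and function names are my own choices for illustration, not taken from any particular poker engine.

```python
import random

ACTIONS = ["rock", "paper", "scissors"]

def payoff(a, b):
    """Payoff for playing action a against action b (+1 win, 0 tie, -1 loss)."""
    if a == b:
        return 0
    wins = {("rock", "scissors"), ("paper", "rock"), ("scissors", "paper")}
    return 1 if (a, b) in wins else -1

def strategy_from_regrets(regrets):
    """Mix actions in proportion to positive accumulated regret."""
    positive = [max(r, 0.0) for r in regrets]
    total = sum(positive)
    if total == 0:
        return [1.0 / len(ACTIONS)] * len(ACTIONS)  # no regrets yet: play uniformly
    return [p / total for p in positive]

def train(iterations=10000, opponent=(0.4, 0.3, 0.3)):
    """Regret matching against a fixed (hypothetical) opponent mix."""
    regrets = [0.0] * len(ACTIONS)
    strategy_sum = [0.0] * len(ACTIONS)
    for _ in range(iterations):
        strategy = strategy_from_regrets(regrets)
        strategy_sum = [s + p for s, p in zip(strategy_sum, strategy)]
        my_action = random.choices(ACTIONS, weights=strategy)[0]
        opp_action = random.choices(ACTIONS, weights=opponent)[0]
        actual = payoff(my_action, opp_action)
        # Regret: how much better each alternative action would have done.
        for i, alt in enumerate(ACTIONS):
            regrets[i] += payoff(alt, opp_action) - actual
    total = sum(strategy_sum)
    return [s / total for s in strategy_sum]  # the average strategy is what converges

if __name__ == "__main__":
    print(dict(zip(ACTIONS, train())))
```

Run long enough against that fixed, rock-heavy mix, the average strategy drifts toward paper, which is the point of the analogy: the algorithm learns by weighing counterfactual "what I should have played" regrets from hidden information, rather than by pruning a fully visible game tree the way a chess engine does.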