I built a toy BS with very dumb agents in school around 2012. Multi-agent systems were kind of fringe and retro then. It's nuts that modern agents look about the same (basically an OODA loop), but replacing the hand-coded "orient" and "decide" steps with an LLM is so much easier (for me, a frontier-model user) and 100x more capable.
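To make the comparison concrete, here's a minimal sketch of that loop: the classic observe/orient/decide/act cycle, with a single model call collapsing the hand-coded "orient" and "decide" steps. `call_llm` is a hypothetical stand-in for any frontier-model API, and the toy environment is invented for illustration.

```python
def call_llm(prompt: str) -> str:
    # Stub standing in for a real model API call. Here it just picks an
    # action from the prompt; a real agent would send the prompt to a model.
    return "search" if "unknown" in prompt else "answer"

def observe(env: dict) -> str:
    # Observe: read the current state of the (toy) environment.
    return env["observation"]

def act(action: str, env: dict) -> None:
    # Act: apply the chosen action back to the environment.
    env["history"].append(action)
    if action == "search":
        env["observation"] = "known fact"

def run_agent(env: dict, max_steps: int = 5) -> list:
    for _ in range(max_steps):
        obs = observe(env)                                        # Observe
        prompt = f"Observation: {obs}. Options: search, answer."  # Orient +
        action = call_llm(prompt)                                 # Decide, one call
        act(action, env)                                          # Act
        if action == "answer":
            break
    return env["history"]
```

Run with `run_agent({"observation": "unknown query", "history": []})` and the loop searches once, then answers. In the 2012-era version, "orient" and "decide" would have been the bulk of the hand-written code; here they're a prompt.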
The use case for multi-agent systems is intuitive from a human perspective (one person can't be an expert in everything), but a little less so with LLMs since, so far, it seems like we use a single frontier model that's pretty good at everything. That said, even a single person has little breakthroughs given different context / sleep / "mind space", so I'd guess there's some way to scale that idea usefully with multi-agent systems. I think a lot of these old AI ideas are ripe for exploring with LLMs.