@epicdylan
I think the eventual trend won't be "Claude 7 sucks compared to Claude 5," but open models catching up, with frontier gains expressed in terms of efficiency rather than raw training performance.
I guess one read of OpenClaw is that it represents a feedback loop: new features emerge horizontally in the open-source space, then get vertically reintegrated and reimagined in the closed frontier models.
Compressing the models needs to be a priority, and I think it could be accomplished by running relevant parts of the process in different places, but that's probably the hardest thing to predict.
https://apple.news/AccIkgSwLQOG4LgRJHNrVbw