@achyut
GPT-4's cost and latency are staggering as you scale, and it's often not worth it. You don't need a sledgehammer to crack a nut.
Most use cases are served just as well by a smaller model that has access to the right context. We're making that access seamless.