Adam

@adam-

When OpenAI inevitably flips the ad switch, you'll still be able to choose an option that doesn't go that route. My personal belief is that within the next 14 months there will be a breakthrough in compression that lets you run large models at a fraction of the current RAM and GPU usage. Two factors make this likely: 1) RAM shortages mean devs are forced to find ways to make 8 GB of RAM work more efficiently, and 2) there's still a drive to get LLMs running locally on mobile, and I wouldn't be surprised if the Apple x Gemini partnership has this as one of its top-level goals. While the most likely outcome of that partnership is an exclusive model built for the iPhone, I wouldn't rule out European regulation forcing Apple to let end users bring their own model within its OS.

All this to say: you'll have options if you don't agree with the direction one of the big players is taking.
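For a rough sense of why compression is the lever here, a back-of-envelope sketch (the 8B-parameter figure and bit widths below are my own illustrative assumptions, not anything announced):

```python
# Back-of-envelope: approximate memory needed for an LLM's weights at
# different quantization levels. Illustrative numbers only; a real
# runtime also needs memory for activations and the KV cache.

def weight_memory_gb(num_params: float, bits_per_weight: int) -> float:
    """Approximate memory for the model weights alone, in gigabytes."""
    return num_params * bits_per_weight / 8 / 1e9

params = 8e9  # hypothetical 8B-parameter model
for bits in (16, 8, 4):
    print(f"{bits:>2}-bit weights: ~{weight_memory_gb(params, bits):.0f} GB")

# Output:
# 16-bit weights: ~16 GB   (won't fit on an 8 GB device)
#  8-bit weights: ~8 GB    (fills all of RAM, leaving nothing for the OS)
#  4-bit weights: ~4 GB    (plausibly fits alongside everything else)
```

The point of the arithmetic: going from 16-bit to 4-bit weights is the difference between a model that can't load at all on an 8 GB phone and one that plausibly can.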