The #1 AI Engineering podcast & newsletter. Breaking news today you will use at work tomorrow! Hosted by @swyx and @fanahova
Introducing LongLLaMA 🦙, an unlimited-context version of OpenLLaMA fine-tuned at 8k & capable of extrapolating to 256k tokens! We train it using our new Focused Transformer 🎯 technique (FoT). No degradation on short context, drop-in compatibility & Apache 2.0 license 🔥🔥 🧵 https://t.co/QiNl5xNYvl
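A minimal sketch of the "drop-in compatibility" claim above: loading LongLLaMA through the stock Hugging Face transformers API, exactly as one would any other causal LM. The checkpoint id syzymon/long_llama_3b matches the authors' published release, but treat the exact id and generation settings here as assumptions, not the canonical usage.

```python
# Minimal sketch: LongLLaMA as a drop-in Hugging Face causal LM.
# Assumes the published checkpoint id "syzymon/long_llama_3b";
# FoT attention ships as custom modeling code, hence trust_remote_code.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "syzymon/long_llama_3b"  # assumed checkpoint id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float32,
    trust_remote_code=True,
)

prompt = "The Focused Transformer (FoT) extends effective context by"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the long-context machinery lives in the model's own attention layers, existing generation pipelines need no changes; that is what makes it "drop-in".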
Hell yes my dudeeeee, enjoy
It's lonely at the top bro nn333 $farther
1/ EIP-4844 & new DA solutions are live, busting the bottleneck of DA cost. But the quest for more scalability and solving new bottlenecks never ends. What will be the next step towards enhanced scalability while maintaining EVM compatibility? *Research co-written with @LukeWasm. 🧵 https://t.co/3tcaHybnOD