
tech bro type
open source is a hell of a drug. i dropped an ios port of gemma 3n yesterday, just a few days after google released it (with android-only support), and already someone else jumped in, added image understanding, polished the UX, and is prepping a PR. insane. this is what it’s all about — no NDAs, no permission slips, just vibes and shipping. gemma 3n is a frontier edge model, and rn we’re watching a legit community ecosystem form around it in real time. that’s magic. huge thanks to @TheMagicIsInTheHole on reddit — here’s the screenshot they shared 👇
🚀 just dropped the first-ever gemma 3n model running fully on ios — yes, on-device, no cloud.
📱 it’s slow (for now), but it benchmarks nearly on par with claude 3.7 sonnet for non-coding tasks.
🧠 gemma 3n is google’s new mobile-first frontier model, designed for edge devices.
🔧 i built a simple ios app to run it locally:
💬 features: on-device inference, real-time chat ui, streaming responses.
⚠️ it’s a bit slow, but it’s a start.
👀 try it out and see the future of on-device ai.
https://github.com/sid9102/gemma3n-ios
I'm so obsessed with the idea that generative AI is the future of video games. the only problem is unit economics: gamers expect to pay $20-$60 once for hours and hours of entertainment, and nobody wants another monthly sub. so: edge llms. run the game logic on device, no api fees. I've been playing around with this idea. my current experiment uses gemma3n running on my iphone to generate svg graphics; here it is drawing a “puppy.” sadly that attempt turned out to be a dead end. does anyone have better ideas for representing visual output from an edge llm? maybe some kind of json shape primitive -> sprite renderer thing?
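the json-primitive idea could look something like this. a minimal sketch in python rather than swift, just to show the shape of the thing: the model emits a flat list of shape objects instead of raw svg markup, and a tiny renderer rasterizes them (here into an ascii grid as a stand-in for a real sprite renderer). the schema and field names are made up, not from any real project.

```python
import json

# hypothetical model output: a flat JSON list of shape primitives.
# far easier to validate, clamp, and repair than freeform SVG text.
SHAPES = """
[
  {"type": "rect",   "x": 2, "y": 6, "w": 12, "h": 5, "ch": "#"},
  {"type": "circle", "cx": 8, "cy": 3, "r": 2, "ch": "o"}
]
"""

def rasterize(shapes, width=16, height=12):
    """Render shape primitives into an ASCII grid, clipping to the canvas."""
    grid = [[" "] * width for _ in range(height)]
    for s in shapes:
        if s["type"] == "rect":
            # fill the rectangle, clipped to the grid bounds
            for y in range(max(s["y"], 0), min(s["y"] + s["h"], height)):
                for x in range(max(s["x"], 0), min(s["x"] + s["w"], width)):
                    grid[y][x] = s["ch"]
        elif s["type"] == "circle":
            # fill every cell whose center lies inside the circle
            for y in range(height):
                for x in range(width):
                    if (x - s["cx"]) ** 2 + (y - s["cy"]) ** 2 <= s["r"] ** 2:
                        grid[y][x] = s["ch"]
        # unknown types are silently skipped -- malformed model output
        # degrades gracefully instead of crashing the renderer
    return "\n".join("".join(row) for row in grid)

print(rasterize(json.loads(SHAPES)))
```

the nice part of this design is that the llm only has to produce a constrained json schema (which small edge models are decent at, especially with grammar-constrained decoding), while the renderer owns all the pixel-level work; swapping the ascii grid for SpriteKit nodes on ios would keep the same contract.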
check out my reddit post for more discussion: https://reddit.com/r/LocalLLaMA/comments/1kvjwiz/implemented_a_quick_and_dirty_ios_app_for_the_new/ don't sleep on this subreddit, massive alpha on here!