kazani.base.eth 🦂
@kazani.eth
I run an offline LLM locally on my phone to answer all my questions, and I use RSS feeds to keep up with all my news sources and blogs. Doing both has cut my browser use down by 95%. You should try it.

➡️ I use this FOSS app for RSS: Feeder: https://github.com/spacecowboy/Feeder

➡️ As for the LLM: personally I use ChatterUI (FOSS app) with Gemma 3 1B/4B models. (I use this link to download models in GGUF format: https://huggingface.co/models?library=gguf)

Noteworthy alternatives:
1. PocketPal: https://github.com/a-ghorbani/pocketpal-ai
Launch the app, open the menu, and navigate to Models. Download one or more models (e.g. Phi, Llama, Qwen). Once downloaded, tap Load to start chatting.
ℹ️ Experiment with different models and their quantizations (Q4, Q6, Q8, etc.) to find the most suitable one.
2. Maid: https://github.com/Mobile-Artificial-Intelligence/maid
3. MLC: https://github.com/mlc-ai/mlc-llm
2 replies
1 recast
6 reactions
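For the model-download step kazani.eth links above, a small script can fetch a GGUF quant instead of browsing by hand. A minimal sketch, assuming `pip install huggingface_hub`; the repo id and filename are hypothetical placeholders, so pick real ones from https://huggingface.co/models?library=gguf:

```python
# Fetch a GGUF quant from Hugging Face for a local app like
# ChatterUI or PocketPal. Requires: pip install huggingface_hub
from huggingface_hub import hf_hub_download

# Hypothetical repo/filename -- substitute a real GGUF repo and the
# quantization you want (Q4/Q6/Q8 trade file size against quality).
path = hf_hub_download(
    repo_id="some-user/some-model-GGUF",
    filename="some-model-Q4_K_M.gguf",
)
print("saved to:", path)
```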

chris 🤘🏻
@ckurdziel.eth
have you found a good LLM solution to parse your RSS feeds and give you daily digests?
2 replies
0 recast
1 reaction

ɃΞrn
@b7
yes the combination of rss with (local) llm would be nice
0 reply
0 recast
2 reactions

kazani.base.eth 🦂
@kazani.eth
No, I enjoy reading the whole thing. @ckurdziel.eth You can try this: feedparser in a Python script running in Termux, or Feeds.fun (self-hosted).
0 reply
0 recast
0 reaction
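A minimal sketch of the feedparser-in-Termux route kazani.eth suggests, wired to a local model for the daily digest ckurdziel.eth asked about. Assumptions: `pip install feedparser llama-cpp-python` (llama-cpp-python is a stand-in for whatever local runtime you prefer; the thread only names feedparser and Termux), a GGUF model file on disk, and placeholder feed URLs:

```python
# RSS -> local-LLM daily digest, runnable in Termux.
# Requires: pip install feedparser llama-cpp-python
import feedparser
from llama_cpp import Llama

FEEDS = [
    "https://example.com/feed.xml",  # replace with your own feeds
]

# Gather recent entries from each feed, capped to keep the prompt small.
items = []
for url in FEEDS:
    for entry in feedparser.parse(url).entries[:10]:
        title = entry.get("title", "")
        summary = entry.get("summary", "")[:300]
        items.append(f"- {title}: {summary}")

prompt = (
    "Summarize these feed items into a short daily digest:\n"
    + "\n".join(items)
    + "\n\nDigest:"
)

# Any small GGUF model works here (e.g. a Gemma/Phi/Qwen quant).
llm = Llama(model_path="model.gguf", n_ctx=4096, verbose=False)
out = llm(prompt, max_tokens=512)
print(out["choices"][0]["text"])
```

Run once a day (Termux can install cron) and you get the digest without a browser.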