
https://michaelhly.com
Just shipped a tool that lets hub runners generate a @farcaster training corpus for LLM tuning — zero network requests. If your hub is synced, it should be 100x+ faster at pulling data out of your hub than RPC-based methods. Try it out with: `pip install "farglot[cli]"`
There is also a corresponding analyzer library for running your tuned models on cast classification, which can help with reputation ranking, spam detection, or auto-moderation: https://warpcast.com/michaelhly/0xb47dc6
What about using an LLM backend to classify spam?
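One way that could look, as a minimal sketch: wrap each cast in a prompt asking the LLM for a single spam/not-spam label, then parse the completion defensively. The prompt format, label set, and the `llm.complete` client in the usage comment are assumptions for illustration — none of this is part of farglot or a specific LLM API.

```python
import json

# Hypothetical label set for LLM-backed cast spam classification.
LABELS = ("spam", "not_spam")


def build_prompt(cast_text: str) -> str:
    """Wrap a cast in an instruction asking the LLM for a one-word label."""
    return (
        "Classify the following cast as exactly one of: spam, not_spam.\n"
        f"Cast: {json.dumps(cast_text)}\n"
        "Label:"
    )


def parse_label(completion: str) -> str:
    """Take the first token of the completion; fall back to not_spam
    so a malformed LLM reply never flags a legitimate cast."""
    stripped = completion.strip().lower()
    token = stripped.split()[0] if stripped else ""
    return token if token in LABELS else "not_spam"


# Usage with any completion client (`llm` is a stand-in, not a real API):
# label = parse_label(llm.complete(build_prompt("Free mint!! click my link")))
```

Keeping prompt construction and reply parsing as pure functions means the classification plumbing can be unit-tested without any network calls, mirroring the zero-network spirit of the corpus tool.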