@qendresa
Out of curiosity, I wanted to compare LLMs on speed, efficiency, and verifiable proof of execution (hash-chained receipts) under the same prompts and settings, so I built an open tool that works with OpenAI-style APIs or local models. Read: https://open.substack.com/pub/qendresahoti/p/unmasking-the-llm-what-really-happens
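For anyone unfamiliar with the term, "hash-chained receipts" generally means each logged record includes the hash of the previous one, so any tampering or reordering breaks the chain. A minimal sketch of the idea (this is my own illustration, not the tool's actual implementation; field names like `prev`, `record`, and `hash` are made up):

```python
import hashlib
import json

GENESIS = "0" * 64  # sentinel "previous hash" for the first receipt


def make_receipt(prev_hash: str, record: dict) -> dict:
    """Bind a record to the previous receipt by hashing both together."""
    body = {"prev": prev_hash, "record": record}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}


def verify_chain(receipts: list[dict]) -> bool:
    """Valid iff every hash matches its body and links to the prior receipt."""
    prev = GENESIS
    for r in receipts:
        body = {"prev": r["prev"], "record": r["record"]}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if r["prev"] != prev or r["hash"] != expected:
            return False
        prev = r["hash"]
    return True


# Two receipts chained together; reordering them invalidates the chain.
r1 = make_receipt(GENESIS, {"prompt": "hello", "latency_ms": 120})
r2 = make_receipt(r1["hash"], {"prompt": "world", "latency_ms": 95})
print(verify_chain([r1, r2]))  # True
print(verify_chain([r2, r1]))  # False
```

The point is that a verifier only needs the receipt list itself, not trust in the producer, to detect edits after the fact.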