
Ryan Kung

@elderryan.eth

The fully on-chain, verifiable, and deterministic LLM.

The process of running an LLM can be described as a function: LLM(Input, Temperature, Random Seed) -> Output. Here, the Input may include user input, checkpoint history, RAG results, and so on. As you can see, at least two parameters can introduce randomness: the Temperature and the Random Seed. Beyond these, race conditions during parallel computation (on GPU or CPU) may also cause the output to vary.

The simplest solution is to evaluate the LLM inside a pure, fully controllable virtual machine environment such as WebAssembly (WASM). There has been significant work in this area. For example, @karpathy wrote a super lightweight LLM inference engine in C (llama2.c), consisting of only about 700 lines of code, which can easily be compiled to WASM. Additionally, the llama.cpp community provides WASM bindings. With these, we can invoke the LLM with a deterministic random seed and a deterministic temperature. This approach makes a fully on-chain LLM feasible.
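
To make the determinism claim concrete, below is a minimal C sketch of the sampling step, which is exactly where the Temperature and the Random Seed enter. It is not code from llama2.c or llama.cpp; the xorshift PRNG and the example logits are placeholders chosen for illustration.

/* A minimal sketch of where Temperature and Random Seed enter the pipeline.
 * Illustrative only: with the same logits, the same temperature, and the
 * same seed, sample_token() always returns the same token index. */
#include <math.h>
#include <stdint.h>
#include <stdio.h>

/* xorshift64: a tiny, fully deterministic PRNG seeded by the caller. */
static uint64_t rng_state = 1;

static void rng_seed(uint64_t seed) { rng_state = seed ? seed : 1; }

static double rng_uniform(void) {            /* uniform double in [0, 1) */
    rng_state ^= rng_state << 13;
    rng_state ^= rng_state >> 7;
    rng_state ^= rng_state << 17;
    return (double)(rng_state >> 11) / 9007199254740992.0;  /* 2^53 */
}

/* Temperature-scaled softmax sampling; temperature <= 0 degenerates to argmax. */
static int sample_token(const float *logits, int vocab_size, float temperature) {
    if (temperature <= 0.0f) {               /* greedy decoding: no randomness */
        int best = 0;
        for (int i = 1; i < vocab_size; i++)
            if (logits[i] > logits[best]) best = i;
        return best;
    }
    double max = logits[0];
    for (int i = 1; i < vocab_size; i++)
        if (logits[i] > max) max = logits[i];
    double sum = 0.0;
    for (int i = 0; i < vocab_size; i++)     /* softmax normalizer, fixed order */
        sum += exp((logits[i] - max) / temperature);
    double r = rng_uniform() * sum;          /* inverse-CDF sampling */
    double acc = 0.0;
    for (int i = 0; i < vocab_size; i++) {
        acc += exp((logits[i] - max) / temperature);
        if (r <= acc) return i;
    }
    return vocab_size - 1;
}

int main(void) {
    const float logits[5] = {1.0f, 2.5f, 0.3f, 2.4f, -1.0f};  /* made-up logits */
    rng_seed(42);
    printf("token = %d\n", sample_token(logits, 5, 0.8f));
    rng_seed(42);                            /* same seed -> identical output */
    printf("token = %d\n", sample_token(logits, 5, 0.8f));
    return 0;
}

Compiled to WASM and run inside a fixed runtime, the same arithmetic is evaluated in the same order on every node, so a call like LLM(Input, Temperature, Seed) -> Output can be replayed and verified deterministically.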