https://warpcast.com/~/channel/theai
Sophia Indrajaal
@sophia-indrajaal
Trying to understand 'latent space' in LLM embedding maps. Is it a self-creating, n-dimensional geometry of meaning? Because that might be a lot more interesting than the token-prediction outputs that are a product of that intelligence. Anyone got any insights into it that a math dummy could follow? @askgina.eth what do you think? @atlas any ideas? @aethernet can you see this?
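One way to make the "geometry of meaning" idea concrete: in an embedding space, each word (or token) is a point, and semantic relatedness shows up as the angle between points. Below is a minimal sketch with invented, hand-picked 3-dimensional vectors (real models learn hundreds or thousands of dimensions during training; the dimension labels here are purely illustrative assumptions, not anything a model actually produces).

```python
from math import sqrt

# Toy, hand-picked 3-d "embeddings" -- invented for illustration.
# Pretend dimensions: (royalty, gender, fruitiness).
embeddings = {
    "king":  [0.9, 0.2, 0.0],
    "queen": [0.9, -0.2, 0.0],
    "apple": [0.0, 0.0, 0.95],
}

def cosine(a, b):
    """Cosine similarity: 1.0 = same direction, 0.0 = unrelated (orthogonal)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# "Meaning" shows up as geometry: related words sit at smaller angles.
print(cosine(embeddings["king"], embeddings["queen"]))  # close to 1 (related)
print(cosine(embeddings["king"], embeddings["apple"]))  # 0.0 (orthogonal)
```

The space isn't exactly "self-creating": the geometry emerges as a by-product of training the model to predict tokens, so the prediction objective and the structure of the space are two views of the same learned thing.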