eggman 🔵
@eggman.eth
pls @horsefacts.eth. Don't make me go all @yes2crypto.eth on dis. If $ETH goes to $6k I'll be able to afford paying him $3 to do it daily, I swear
7 replies
2 recasts
27 reactions

eatzebugs🎩
@eatzebugs
This is a random request, but it's something I think would make me feel better informed about what genning does energy/cost-wise. Is there a way imgn could show the true cost of using AI? Maybe it would help assuage guilt and clear up false narratives? Or maybe those narratives are true and AI does use a lot.
1 reply
0 recast
0 reaction

eggman 🔵
@eggman.eth
I'm not sure where those ideas came from - training models is pretty energy-intensive, but inference is cheaper than a Google search on most models. Stuff like o3-pro would likely be above the single-search cost, but not by a terrific amount.

On training, it depends on the model complexity (and the experience of the person overseeing it). You could have a base model that trains for a month on 10k H100s and then collapses, requiring the entire process to start again. Or you could fine-tune an existing base model on an RTX 3090 for a few days and get stellar results.

Some people consider good model merges to be equivalent to "a newly trained model" in terms of output, but I'm not in that camp. If you are, though, that means "training" that model costs a couple of watt-hours at best.

There's no one cost that can really be given unless we threw out training costs and just focused on inference; even then, it'll vary depending on what hardware it's running on. But in general - it's less intensive than a Google search.
1 reply
0 recast
2 reactions
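The per-query comparison in the reply above is just arithmetic, and can be sketched in a few lines. All figures below are illustrative assumptions, not measurements: the oft-cited ~0.3 Wh per Google search, and a rough 0.05-3 Wh range for LLM inference depending on model size and hardware.

```python
# Back-of-envelope energy comparison: LLM inference vs. a Google search.
# Every number here is an assumed, publicly floated ballpark, not a measurement.

GOOGLE_SEARCH_WH = 0.3  # assumed energy per Google search, in watt-hours

# Hypothetical per-query inference costs (Wh) for models of different sizes.
llm_query_wh = {
    "small model (assumed)": 0.05,
    "mid-size model (assumed)": 0.3,
    "large reasoning model (assumed)": 3.0,
}

for name, wh in llm_query_wh.items():
    ratio = wh / GOOGLE_SEARCH_WH
    print(f"{name}: {wh} Wh per query, {ratio:.1f}x a Google search")
```

Under these assumptions the small and mid-size cases land at or below a single search, and only the large reasoning model is clearly above it - which matches the claim in the cast, with the caveat that the inputs are rough.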