
bruno

@brunostefoni

83 Following
10 Followers


bruno
@brunostefoni
That was a lot of fun; we built a lot of cool stuff 😄
0 replies
0 recasts
0 reactions

bruno
@brunostefoni
New link in case you want to try it out: https://3c529f5658b6700d3e.gradio.live. You can also come try it in person; we're right here at the 2nd St location, in the back, playing with this awesome computer
0 replies
0 recasts
0 reactions

bruno
@brunostefoni
It's more of an all-around super powerful computer (IMO the most capacity you can get at the consumer level) with an insane amount of VRAM for running fast inference with big LLMs (>70 billion parameters); rough VRAM math below
0 replies
0 recasts
0 reactions
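A rough sense of why that much VRAM matters: just holding the weights of a 70B-parameter model takes on the order of 130 GB at fp16, so fast local inference needs either a lot of memory or aggressive quantization. The back-of-envelope sketch below is illustrative only, not a measurement of this machine.

```python
# Approximate VRAM needed just to hold the weights of a 70B-parameter model
# at different precisions (ignores KV cache and activation memory).
def weight_vram_gb(params_billion: float, bytes_per_param: float) -> float:
    return params_billion * 1e9 * bytes_per_param / 1024**3

for label, bpp in [("fp16", 2.0), ("int8", 1.0), ("4-bit", 0.5)]:
    print(f"70B @ {label:>5}: ~{weight_vram_gb(70, bpp):.0f} GB")
# 70B @  fp16: ~130 GB
# 70B @  int8: ~65 GB
# 70B @ 4-bit: ~33 GB
```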

bruno
@brunostefoni
Here is the link to our live LLM demo: 70ee76bd73cc936db8.gradio.live (example programmatic call below)
0 replies
0 recasts
0 reactions
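For anyone who wanted to hit the demo programmatically rather than through the web UI, a Gradio share link can usually be called with gradio_client. This is a hypothetical sketch: the api_name and argument shape depend on how the app was actually wired up, and share links like this expire after the session ends.

```python
# Hypothetical client call against the public demo link from the cast above.
# The "/chat" endpoint name and single-message argument are assumptions
# (ChatInterface-style apps commonly expose an endpoint like this).
from gradio_client import Client

client = Client("https://70ee76bd73cc936db8.gradio.live")
reply = client.predict("Write a haiku about FarHack.", api_name="/chat")
print(reply)
```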

bruno
@brunostefoni
I'm at FarHack and just finished setting up a Gradio server emulating the OpenAI API, backed by a locally hosted custom Llama 3 running on vLLM (sketch of the setup below). Awesome times
1 reply
0 recasts
0 reactions
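A minimal sketch of that kind of setup, assuming vLLM's OpenAI-compatible server running locally and a small Gradio chat UI in front of it. The checkpoint name, port, and sampling settings are placeholders, not the exact hackathon configuration.

```python
# Sketch: vLLM exposes an OpenAI-compatible endpoint; a small Gradio chat UI
# forwards each message to it. Model name, port, and settings are assumed.
#
# Start the vLLM server in a separate process first, e.g.:
#   python -m vllm.entrypoints.openai.api_server \
#       --model meta-llama/Meta-Llama-3-70B-Instruct --port 8000

import gradio as gr
from openai import OpenAI

MODEL = "meta-llama/Meta-Llama-3-70B-Instruct"  # placeholder checkpoint

# Point the OpenAI client at the local vLLM server instead of api.openai.com.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

def chat(message, history):
    # Single-turn for brevity; a real chat UI would replay `history` as well.
    response = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": message}],
        max_tokens=512,
    )
    return response.choices[0].message.content

# share=True is what generates a temporary public *.gradio.live URL,
# like the demo links posted in the other casts.
gr.ChatInterface(chat).launch(share=True)
```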

COGNITION
@cognition
Need some high-performance compute for your FarHack project? We will be bringing a COGNITION PRO with 48GB of GPU VRAM to Chapter One today. We will also host a local, blazingly fast, custom-trained Llama 3 model accessible via a public URL. (We will post the link here before 1pm)
1 reply
2 recasts
2 reactions

bruno
@brunostefoni
Just applied, thank you!
0 replies
0 recasts
0 reactions