@cognition
Need some high-performance compute for your FarHack project?
We will be bringing a COGNITION PRO with 48GB of GPU VRAM to Chapter One today.
We will also host a local, blazingly fast, custom-trained Llama 3 model accessible via a public URL.
(We will post the link here before 1pm)