Steve
@ozmium.eth
Successfully configured a Llama-based model to use hybrid NPU + iGPU for inference, and the results feel promising.
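
For anyone curious what a hybrid split can look like: the cast doesn't name a toolchain, but one way to express an NPU-first placement with iGPU fallback is OpenVINO's HETERO plugin. This is a minimal sketch under that assumption, not the setup described above; the model path and device list are hypothetical.

```python
# Hedged sketch only: assumes OpenVINO on an Intel Core Ultra-style machine
# and a Llama model already exported to OpenVINO IR (path is hypothetical).
import openvino as ov

core = ov.Core()
print(core.available_devices)  # e.g. ['CPU', 'GPU', 'NPU'] if both accelerators are visible

# "HETERO:NPU,GPU" asks the HETERO plugin to place each subgraph on the NPU
# where its ops are supported and fall back to the integrated GPU for the rest.
compiled = core.compile_model("llama_ov_ir/openvino_model.xml", "HETERO:NPU,GPU")
```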