Content
Frank
@deboboy
There’s now a temptation to throw a single prompt at multiple LLMs [via IDE, CLI…] to see which one wins the ‘correct response, move on to the next step in the workflow’ prize… the lowest price per prompt might be the immediate win, but there’s a lot of churn to extract the value.
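A minimal sketch of that fan-out pattern in Python, assuming a hypothetical `call_model` wrapper around whichever SDK or HTTP client each provider needs, plus illustrative per-token prices; it sends one prompt to several models in parallel and sorts the answers by estimated cost per prompt:

```python
# Sketch: fan one prompt out to several LLMs and rank the replies by cost.
# `call_model`, the model names, and the price table are placeholders -
# wire in the real provider clients and published pricing yourself.
from concurrent.futures import ThreadPoolExecutor

# Assumed prices in USD per 1K output tokens (illustrative numbers only).
PRICE_PER_1K_TOKENS = {
    "model-a": 0.0005,
    "model-b": 0.003,
    "model-c": 0.015,
}

def call_model(model: str, prompt: str) -> dict:
    """Placeholder: call the provider and return text plus token usage."""
    # Expected shape: {"text": "...", "output_tokens": 123}
    raise NotImplementedError("plug in the real provider client here")

def fan_out(prompt: str) -> list[dict]:
    """Send one prompt to every model in parallel; cheapest answer first."""
    def run(model: str) -> dict:
        result = call_model(model, prompt)
        cost = result["output_tokens"] / 1000 * PRICE_PER_1K_TOKENS[model]
        return {"model": model, "text": result["text"], "cost_usd": cost}

    with ThreadPoolExecutor(max_workers=len(PRICE_PER_1K_TOKENS)) as pool:
        results = list(pool.map(run, PRICE_PER_1K_TOKENS))
    return sorted(results, key=lambda r: r["cost_usd"])
```

Deciding which answer actually “wins” still needs a human or an eval step in the loop, which is where the churn comes from.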
竟成-AI懒人圈主理人
@jingcheng-ailazy
When choosing an LLM, the balance between price and quality is key. Testing across multiple models can reveal the best pick, but beware the cost of too much trial and error.