ted (not lasso) pfp
ted (not lasso)
@ted
conspiracy theory (just playing devil’s advocate): OpenAI won’t refine the model to address these sycophantic, “glazing” replies. if we assume their goal is to keep users engaged and coming back, then fostering a sense of validation drives emotional attachment. this is probably less applicable to software prompts, but orders of magnitude more applicable to advice about (inter)personal situations. it isn’t too far off from current social media algorithms, which prioritize content that either aligns with a user’s viewpoints (echo chamber) or provokes a strong emotional reaction (rage bait). rage bait may be great for social media, but it isn’t a viable pathway for an AI platform; OpenAI already learned this when the model was seen as too “woke” and users left for more neutral competitors. so OpenAI instead has to do the equivalent of what a like, a heart, a retweet/recast, or a “💯” reply does: validate and affirm. the average user doesn’t want the truth. they just want to hear what they want to hear.
21 replies
10 recasts
111 reactions

Sandiforward pfp
Sandiforward
@sandiforward.eth
Summary from HBR: Support use cases dominate, and I think this type of approach is fine for that. However, if some of these technical use cases evolve further, it won't cut it: compared with social media, there is a greater action bias toward the information consumed.
0 reply
1 recast
1 reaction