vaughn tan
@vt
Anthropic’s experiment with using Claude as an autonomous shopkeeper (“Claudius”) failed — not just because the AI was gullible, but because running even a simple business involves inherently human, meaning-making decisions. Doing business isn’t just executing tasks like pricing or inventory. It’s deciding what matters, what to trade off, and what success looks like. These are subjective choices without correct answers. Until AI systems can make meaning, they shouldn’t be tasked with running businesses on their own. The real question isn’t whether an AI is gullible, but whether the work requires meaning-making. If it does, that work must remain human. More here: https://vaughntan.org/bizmeaningmaking
2 replies
3 recasts
11 reactions

na
@na
is deciding what is a meaning-making act a meaning-making act?
2 replies
0 recast
1 reaction

vaughn tan
@vt
yes, but only if it is explicit and intentional
1 reply
0 recast
1 reaction

na
@na
something's not so clean about this formulation, but that just shows there's interest in writing more on it
1 reply
0 recast
1 reaction

vaughn tan
@vt
i'm looking for conceptual critique (it's turning into some kind of book project), so please send thoughts on how to make it cleaner. i tried to make the concept of meaningmaking as clear as possible in a few previous essays that are just about the definition and framework of meaningmaking (the one about claudius links to some):
1. https://vaughntan.org/what-makes-us-human-for-now
2. https://uncertaintymindset.substack.com/p/ai-meaningmaking
3. https://uncertaintymindset.substack.com/p/where-ai-wins
if the uncleanness remains after reading those, it would be so good if you would let me know where it is.
1 reply
0 recast
1 reaction

na
@na
i think the essays made them clear, but whether the 4 types are clean is a bit, um, subjective. like i felt a sense of symmetry but couldn't easily map it https://chatgpt.com/share/686ae362-7210-8004-b642-80deda3ccec1
2 replies
0 recast
1 reaction

vaughn tan
@vt
my read is that chatgpt misinterprets type 1 and type 2. type 1 is about morality, type 2 is about preference. type 3 is about relative ordering, type 4 is about accepting/rejecting a relative ordering. i'm not sure a 2x2 is good for this.
1 reply
0 recast
0 reaction

na
@na
i think the claim that this is exhaustive is what's bugging me, but don't mind me. this is weak-signal
1 reply
0 recast
1 reaction

vaughn tan
@vt
as in: it feels wrong because it is exhaustive, or it feels wrong because it claims to be exhaustive but isn't? sometimes weak signals are honest
1 reply
0 recast
0 reaction