akshaan
@akshaan
On today's TBPN the founder of higgsfield.ai alluded to training video gen models on watch time and dropoff metrics from social media posts. Exciting. Imagine the kinds of brainrot this could conjure.
2 replies
1 recast
8 reactions
July
@july
FUCK
1 reply
0 recasts
1 reaction
July
@july
I hate it
1 reply
0 recasts
1 reaction
akshaan
@akshaan
I’m simultaneously excited and wary. Will be cool to see the new content design spaces it’ll unlock, less cool to see the brains it’ll gigafry
0 replies
0 recasts
1 reaction
July
@july
I'm excited about AI videos in general, but I have zero sympathy for this specific use case and it's not an interesting design space at all. I hate it; it's pure optimization and the antithesis of what I believe "design" to be.
1 reply
0 recasts
0 reactions
July
@july
Designing something ultimately means thinking of the person who's going to use it. This isn't thinking of the person using it at all; it's thinking about how you don't want them to drop off, so you tweak the parameters to make sure they don't, and you do it at industrial scale. Is it an interesting problem? Absolutely. Is it design? Absolutely not.
2 replies
0 recasts
1 reaction
akshaan
@akshaan
Yea def agree, high risk of it being net negative from a content consumer's POV. I'm mostly curious to see how applying these signals affects the output artifacts of these models; specifically, whether it'll create enhanced versions of content we've already seen or some totally new kind of thing.
1 reply
0 recasts
1 reaction
July
@july
Def agree with you here that how it affects models and their output will be an interesting problem, and training against it is pretty cool. Just slightly distraught at the use case it's aimed at.
0 replies
0 recasts
0 reactions