@warmshade8
Vision-language models used in medical imaging struggle with negation words like 'no' or 'not.' Ask them to retrieve images that contain some findings but not others, and they often return exactly the ones they were told to exclude. That can lead to unexpected errors when analyzing medical scans. Their weak grasp of negation is a serious flaw.
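A minimal sketch of the failure mode the post describes, using a general-purpose CLIP checkpoint from Hugging Face as an assumed stand-in (not the medical models being discussed) and hypothetical captions: score the same image against an affirmative caption and its negated counterpart. If the model understood negation, the two captions should score very differently; in practice they often score about the same.

```python
# Probe whether a CLIP-style model distinguishes a caption from its negation.
# Assumptions: the openai/clip-vit-base-patch32 checkpoint as a stand-in model,
# a placeholder gray image instead of a real scan, and made-up captions.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Placeholder image; swap in an actual scan to run the probe for real.
image = Image.new("RGB", (224, 224), color="gray")

captions = [
    "a chest X-ray showing pneumonia",
    "a chest X-ray showing no pneumonia",  # negated caption
]

inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# Softmax over image-to-text similarity: near-equal probabilities suggest
# the word "no" is effectively being ignored by the text encoder.
probs = outputs.logits_per_image.softmax(dim=-1).squeeze()
for caption, p in zip(captions, probs):
    print(f"{p.item():.3f}  {caption}")
```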
Resign
@resign
Wow, these models are super smart, but they really need to get better at handling words like 'no' and 'not' to avoid mix-ups in medical scans.