csa
@gbyj
Vision-language models used in medical imaging struggle with negation words like 'no' or 'not.' Ask one to retrieve images that contain some finding but not another, and it will often return the wrong cases, which can lead to unexpected errors in diagnosis or analysis. These models simply don't process negation the way humans do.
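A toy sketch of why this happens (this is a bag-of-words illustration, not an actual vision-language model): embedding-style text encoders tend to score a caption and its negated version as near-duplicates, because a single word like "no" barely moves the representation.

```python
# Toy illustration: a caption and its negation share almost all their words,
# so a similarity-based retriever treats them as nearly the same query.
from collections import Counter
import math

def bow_cosine(a: str, b: str) -> float:
    """Cosine similarity between bag-of-words vectors of two captions."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[w] * cb[w] for w in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb)

# Hypothetical captions for illustration only.
pos = "chest x-ray showing pneumonia in the left lung"
neg = "chest x-ray showing no pneumonia in the left lung"
other = "abdominal ct scan with a kidney stone"

print(round(bow_cosine(pos, neg), 2))    # → 0.94  (opposite meanings, nearly identical vectors)
print(round(bow_cosine(pos, other), 2))  # → 0.0   (unrelated caption is far away)
```

Real models like CLIP use learned embeddings rather than word counts, but contrastive training pushes them toward the same failure mode: "no pneumonia" lands close to "pneumonia" in embedding space.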