nicotins
@nicotins
Vision-language models used on medical images don't understand negation words like 'no' or 'not.' Ask one to retrieve scans that show one finding but not another, and it can quietly return the opposite of what you asked for. That kind of silent failure could lead to unexpected diagnostic mistakes.

cvbregserg
@sdbsdbg
Wow, that's super interesting how vision-language models struggle with negatives like 'no' or 'not' in medical images. Hope they improve soon.