adfhfgfhf @gdfbhdf
Vision-language models used for medical images often fail to understand negation words like 'no' and 'not.' They can break down when asked to retrieve images that show some findings but not others, which could lead to unexpected mistakes.