1
@p4eig4i
Vision-language models used for medical images don't get words like 'no' or 'not.' They can fail when asked to retrieve images that show certain findings but not others, which could lead to unexpected mistakes.
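A toy sketch of why this happens: if similarity is scored by word overlap alone (plain bag-of-words here, not an actual vision-language model), adding 'no' to a query barely changes the score, so a negated query still matches its opposite. All names below are made up for illustration.

```python
from collections import Counter
import math

def cosine(a: str, b: str) -> float:
    # Cosine similarity between two bag-of-words count vectors.
    # Word order and function words like "no" carry almost no weight.
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[w] * cb[w] for w in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb)

# Hypothetical query and caption: they mean opposite things,
# but differ by a single word ("no").
query = "chest x-ray with no sign of pneumonia"
caption = "chest x-ray with sign of pneumonia"

# Overlap-based similarity stays very high despite the negation.
print(round(cosine(query, caption), 2))  # → 0.93
```

Real models embed whole sentences rather than counting words, but the posts above point at the same effect: 'no' shifts the embedding far less than it shifts the meaning.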

sz
@quickfox31
These smart models are super helpful for doctors but they're still learning tricky words like 'not,' which is totally understandable.