https://warpcast.com/~/channel/ai-art
eggman 🔵
@eggman.eth
Contextual base image models are a game-changer. We’re actively working to deploy these on /imgn - and as you can see, have already got ‘em live on our testing environments. Basic enough sample below - going to post some of our complex samples once we sign off on showcasing!
10 replies
2 recasts
28 reactions
MM
@listen2mm.eth
Nice! so the “Reference Image” advanced option that’s been present on the site hasn’t really been doing anything? That’s basically how it seemed… 😅
1 reply
0 recast
1 reaction
eggman 🔵
@eggman.eth
It does - it pre-seeds the noise! But it often needs manual strength-ratio adjustment to work well. It doesn't "edit" an image the way a contextual model does - you can gen an image of an airstrip in the shape of a cat, for example, but you can't directly say "hey, put this cat in a cute jumper" when it's just pre-seeding the noise. Try Discord for img2img reference images if the webapp isn't giving you good results - I've done a lot of manual tuning there to make the automated results a bit better. Try the ani or nai models in particular! Flux can be very finicky with it.
1 reply
0 recast
1 reaction
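For the curious, "pre-seeding the noise" in img2img roughly means blending the reference image's latent with fresh noise before denoising starts, with a strength ratio controlling the mix. A toy sketch of that idea (numpy only; the function name and linear blend are illustrative, not the actual /imgn or any pipeline's implementation):

```python
import numpy as np

def preseed_latent(ref_latent, strength, rng=None):
    """Blend a reference latent with fresh Gaussian noise.

    strength=0.0 -> start exactly from the reference;
    strength=1.0 -> pure noise (reference ignored).
    Real diffusion pipelines schedule this per step rather
    than doing a single linear blend.
    """
    rng = rng or np.random.default_rng(0)
    noise = rng.standard_normal(ref_latent.shape)
    return (1.0 - strength) * ref_latent + strength * noise
```

This is why a low strength keeps the reference's composition (the "airstrip shaped like a cat" case) but can't follow an instruction like "put this cat in a jumper" - the reference only biases the starting point, it isn't conditioning the model on what the image *means*.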