https://warpcast.com/~/channel/ai-art
eggman
@eggman.eth
Contextual base image models are a game-changer. We're actively working to deploy these on /imgn - and as you can see, have already got 'em live on our testing environments. Basic enough sample below - going to post some of our complex samples once we sign off on showcasing!
10 replies
2 recasts
28 reactions
MM
@listen2mm.eth
Nice! So the "Reference Image" advanced option that's been present on the site hasn't really been doing anything? That's basically how it seemed…
1 reply
0 recast
1 reaction
eggman
@eggman.eth
It does, it pre-seeds the noise! But it often needs manual adjustment of strength ratios to work well. It doesn't 'edit' an image like a contextual model, though - you can gen an image of an airstrip in the shape of a cat, for example, but you can't directly say 'hey, put this cat in a cute jumper' when it's just pre-seeding the noise. Try Discord for img2img reference images if the webapp isn't giving you good results - I've done a lot of manual tuning there to make the automated results a bit better. Try the ani or nai models in particular! Flux can be very finicky with it.
1 reply
0 recast
1 reaction
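To illustrate the distinction eggman is drawing above, here is a minimal sketch using the Hugging Face diffusers library: plain img2img pre-seeds the denoising process with a noised copy of the reference image (the strength parameter controls how much of it survives), while an instruction-tuned editing pipeline such as InstructPix2Pix is conditioned on the image itself and can take the edit request directly. This is an illustrative assumption, not /imgn's actual backend; the checkpoints, filenames, and parameter values below are placeholders.

```python
# Illustrative only - not /imgn's pipeline. Contrasts "pre-seed the noise"
# img2img with an instruction-based contextual edit, using diffusers.
import torch
from diffusers import AutoPipelineForImage2Image, StableDiffusionInstructPix2PixPipeline
from diffusers.utils import load_image

device = "cuda"
cat = load_image("cat.png").resize((512, 512))  # placeholder reference image

# 1) Reference-image img2img: the photo is noised and used as the starting
#    latent. `strength` sets how much noise is added (1.0 = ignore the image),
#    which is the manual tuning mentioned above.
img2img = AutoPipelineForImage2Image.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to(device)
seeded = img2img(
    prompt="a cat wearing a cute jumper",
    image=cat,
    strength=0.55,       # lower = closer to the reference, higher = more reinterpretation
    guidance_scale=7.5,
).images[0]

# 2) Contextual / instruction-based edit: the model sees the input image as
#    context, so you can ask for the change in plain language.
editor = StableDiffusionInstructPix2PixPipeline.from_pretrained(
    "timbrooks/instruct-pix2pix", torch_dtype=torch.float16
).to(device)
edited = editor(
    "put this cat in a cute jumper",
    image=cat,
    image_guidance_scale=1.5,   # how strongly to stay faithful to the input image
    guidance_scale=7.5,
    num_inference_steps=20,
).images[0]

seeded.save("img2img.png")
edited.save("edited.png")
```

The point of the contrast: with noise pre-seeding you can only bias generation toward the reference, whereas the instruction-tuned model treats "this cat" as something to keep and modify.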
MM
@listen2mm.eth
Ahh ok. I figured it was most likely me not understanding how to use it. I somehow didn't know about the Discord. Will be joining that today. Thanks for the quality info, legend! 228
1 reply
0 recast
1 reaction
eggman
@eggman.eth
Just a wee bit of alfa here: if you like the video gen previews, make sure you're on Discord and have linked your webapp account.
1 reply
0 recast
1 reaction
MM
@listen2mm.eth
ššš
0 reply
0 recast
1 reaction