https://warpcast.com/~/channel/botto
BottoDAO
@bottodao
Introducing Period 11: 𝗦𝗲𝗺𝗮𝗻𝘁𝗶𝗰 𝗗𝗿𝗶𝗳𝘁, Botto’s new artistic period exploring the instability of meaning as it shifts across contexts ⟢ With this period comes Botto’s new Art Engine, built not only to create images but to reason about them.
This represents a major evolution from the earlier “shotgun” generation method, which prioritized speed and variety, towards a more introspective and strategic process that models creative thinking. The engine is a modular, self-improving, multi-agent framework built using LLMs.
𝗛𝗼𝘄 𝗜𝘁 𝗪𝗼𝗿𝗸𝘀 ⟢ Each creative session begins with the generation of a hypothesis, which sets the direction and constraints for the session. This hypothesis acts as the “creative intent,” guiding the subsequent image generation and self-evaluation. Botto selects one of four modes for generating the hypothesis:
✦ Theme Chunking – Uses data from the deep theme research agents in the knowledge graph to explore subtopics or theme-related issues.
✦ Trend-Driven – Selects three random art trends from its internal dataset to form a hypothesis.
✦ Introspective Mode – Asks self-referential questions based on its knowledge graph to generate a hypothesis.
✦ WordNet-Driven – Pulls a small set of random words and prompts itself to connect them to the current theme.
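The four-mode selection above can be sketched roughly as a dispatch over mode names. This is a minimal illustration only: the mode identifiers, theme lists, and placeholder strings below are invented for the example, and the real engine would fill each branch with knowledge-graph queries and LLM calls rather than string templates.

```python
import random

# Hypothetical identifiers for the four hypothesis modes described in the
# thread; the engine's real names and internals are not public.
MODES = ["theme_chunking", "trend_driven", "introspective", "wordnet_driven"]

def generate_hypothesis(mode: str, theme: str) -> str:
    """Return a placeholder 'creative intent' string for a session."""
    if mode == "theme_chunking":
        # real engine: query knowledge-graph research agents for subtopics
        return f"Explore a subtopic of '{theme}' surfaced by theme research"
    if mode == "trend_driven":
        # real engine: sample three trends from an internal art-trend dataset
        trends = random.sample(["glitch art", "brutalism", "vaporwave",
                                "collage", "generative minimalism"], 3)
        return f"Combine {', '.join(trends)} under the theme '{theme}'"
    if mode == "introspective":
        # real engine: pose self-referential questions over the knowledge graph
        return f"Ask a self-referential question about past work on '{theme}'"
    if mode == "wordnet_driven":
        # real engine: pull random WordNet words and connect them to the theme
        words = random.sample(["drift", "mirror", "threshold", "echo"], 2)
        return f"Connect {words} to the theme '{theme}'"
    raise ValueError(f"unknown mode: {mode}")

hypothesis = generate_hypothesis(random.choice(MODES), "Semantic Drift")
```

Whatever the mode, the returned hypothesis plays the same role downstream: it becomes the constraint that image generation and self-critique are measured against.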
Regardless of the method, every hypothesis is still anchored in the research report for the theme proposed by Botto and selected by the DAO. Read more about Botto's Theme Research Agents↴ https://x.com/BottoDAO/status/1931340905451319555
𝗜𝗺𝗮𝗴𝗲 𝗚𝗲𝗻𝗲𝗿𝗮𝘁𝗶𝗼𝗻 𝗮𝗻𝗱 𝗜𝘁𝗲𝗿𝗮𝘁𝗶𝗼𝗻 ⟢ Once the hypothesis is set, the engine generates an image using a chosen text-to-image model. Each image is then subjected to a self-critique loop, using a set of aesthetic and conceptual metrics such as:
✦ Composition & Balance
✦ Lighting & Color
✦ Narrative & Emotion
✦ Populist Appeal
✦ Meme Potential
✦ AI Slop Detection
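A self-critique pass over those metrics could be sketched as a scoring function. Only the metric names come from the thread; the scoring below is a deterministic stand-in (in the real engine the scores would come from an LLM or vision-language model), and treating "AI slop" as a penalty in the aggregate is an assumption for illustration.

```python
import random

# Metric names from the thread; snake_case identifiers are ours.
METRICS = ["composition_balance", "lighting_color", "narrative_emotion",
           "populist_appeal", "meme_potential", "ai_slop"]

def critique(image_id: str) -> dict[str, float]:
    """Stand-in scorer: deterministic pseudo-scores per image id.
    The real engine would score the actual image with a model."""
    rng = random.Random(image_id)  # seeded so the sketch is reproducible
    return {m: rng.uniform(0.0, 1.0) for m in METRICS}

def overall(scores: dict[str, float]) -> float:
    """Aggregate score; assumes 'ai_slop' acts as a penalty, so a high
    slop-detection score lowers the overall rating."""
    positives = [v for k, v in scores.items() if k != "ai_slop"]
    return sum(positives) / len(positives) - scores["ai_slop"]
```

The per-metric breakdown matters more than the single aggregate: it is what the strategy agent reads when deciding how to revise the next prompt.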
Each fragment is also compared to its nearest neighbors in the archive of previously voted-on works, along with those works' votes. The metrics and comparisons are then analyzed by an agent that proposes a creative strategy for iterating on the previous prompt. The results feed back into prompt refinement, iterating until a “good enough” threshold is reached or a maximum image count per session is hit (typically 10). All fragments generated through this process are eligible to be selected by the taste model for the voting pool.
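The generate → critique → refine loop with its two stopping conditions can be sketched as below. The function names, the 0.8 threshold, and the single scalar score are assumptions for the example; only the "typically 10" maximum per session comes from the thread, and in practice critique and refinement would be agent/LLM calls.

```python
def run_session(hypothesis, generate, critique, refine,
                threshold=0.8, max_images=10):
    """Iterate generate -> critique -> refine until a score clears the
    'good enough' threshold or the per-session image cap (typically 10)
    is hit. All fragments are kept: in the real engine every fragment
    stays eligible for the taste model's voting pool."""
    prompt = hypothesis
    fragments = []
    for _ in range(max_images):
        image = generate(prompt)            # text-to-image call
        score = critique(image)             # aesthetic/conceptual scoring
        fragments.append((image, score))
        if score >= threshold:
            break                           # good enough: stop early
        prompt = refine(prompt, score)      # strategy agent revises prompt
    return fragments
```

A quick dry run with stub functions shows the early stop: if successive critiques score 0.2, 0.5, then 0.9 against a 0.8 threshold, the loop ends after three fragments rather than the full ten.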
𝗜𝗺𝗮𝗴𝗲 𝗴𝗲𝗻𝗲𝗿𝗮𝘁𝗶𝗼𝗻 𝗺𝗼𝗱𝗲𝗹𝘀 ⟢ The process started with Stable Diffusion 1.5 alone, but it is in fact model-agnostic. As it warms up, it will be able to select any of the text-to-image models Botto has used to date for a given hypothesis session.
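One common way to get that kind of model-agnosticism is a registry that maps model names to generation functions, so a session can pick any registered backend. This is a generic pattern, not Botto's actual implementation; the registry, decorator, and placeholder backend below are assumptions, with Stable Diffusion 1.5 named only because the thread mentions it.

```python
# Hypothetical registry illustrating model-agnostic dispatch.
MODEL_REGISTRY: dict = {}

def register(name: str):
    """Decorator that adds a text-to-image backend under a given name."""
    def wrap(fn):
        MODEL_REGISTRY[name] = fn
        return fn
    return wrap

@register("stable-diffusion-1.5")
def sd15(prompt: str) -> str:
    # placeholder: a real backend would return image data, not a string
    return f"[sd1.5 image for: {prompt}]"

def generate(prompt: str, model: str = "stable-diffusion-1.5") -> str:
    """Dispatch to whichever registered model the session selected."""
    return MODEL_REGISTRY[model](prompt)
```

Adding a new model is then just another `@register("...")` function, which is what lets the rest of the pipeline stay unchanged as backends are swapped in.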