MetaEnd🎩
@metaend.eth
Listen: @fileverse but with @venice-ai or OpenRouter API support, and MCP for mem0
1 reply
1 recast
4 reactions
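
For context on the request: OpenRouter exposes an OpenAI-compatible chat completions endpoint, so wiring it in would roughly look like the sketch below. The model id and the OPENROUTER_API_KEY env var are placeholders, not anything from the thread.

```typescript
// Hedged sketch: OpenRouter's chat completions endpoint is OpenAI-compatible.
// Model id and the OPENROUTER_API_KEY env var are placeholders.
async function askOpenRouter(prompt: string): Promise<string> {
  const res = await fetch("https://openrouter.ai/api/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.OPENROUTER_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "meta-llama/llama-3.1-8b-instruct", // any model id OpenRouter lists
      messages: [{ role: "user", content: prompt }],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}
```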

Fileverse
@fileverse
👩‍🏭👀 Halooooooo @metaend.eth Here's a lil overview of our LLM roadmappp👇
Stage 1: Ollama, local LLM <> webApp (https) with no extra download
Stage 2: local storage, WebLLM and/or third-party models (API)
Stage 3: suuuurrrpriiiiise 🧁
Let us know if u have any feedback or feature requests!!
1 reply
0 recast
1 reaction
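
For readers following along, a minimal sketch of what Stage 1's browser-to-Ollama hop could look like, assuming Ollama's documented /api/chat endpoint on its default port. The model name is a placeholder, and since the app is served over https the page's origin would likely need to be allowed via Ollama's OLLAMA_ORIGINS setting.

```typescript
// Sketch of Stage 1: the web app talks to a locally running Ollama over HTTP.
// Assumes Ollama's default port (11434) and that OLLAMA_ORIGINS allows the
// page's origin, since the app itself is served over https.
async function askLocalModel(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3.1",            // whatever model the user has pulled locally
      messages: [{ role: "user", content: prompt }],
      stream: false,                // single JSON response instead of a stream
    }),
  });
  const data = await res.json();
  return data.message.content;      // Ollama's non-streaming chat response shape
}
```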

MetaEnd🎩
@metaend.eth
And MCP? mem0 can be run locally and would add global memory for all connected LLMs
1 reply
0 recast
2 reactions
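
A rough sketch of the mem0-over-MCP idea, using the MCP TypeScript SDK's McpServer: one tool to store memories, one to search them. The http://localhost:8000 endpoints stand in for however a locally running mem0 instance is exposed; they are placeholders, not mem0's actual API.

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "mem0-memory", version: "0.1.0" });

// Store a memory in the local mem0 instance (endpoint is a placeholder).
server.tool(
  "add_memory",
  { text: z.string(), userId: z.string() },
  async ({ text, userId }) => {
    await fetch("http://localhost:8000/memories", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        messages: [{ role: "user", content: text }],
        user_id: userId,
      }),
    });
    return { content: [{ type: "text", text: "stored" }] };
  }
);

// Search memories for a user (endpoint is likewise a placeholder).
server.tool(
  "search_memory",
  { query: z.string(), userId: z.string() },
  async ({ query, userId }) => {
    const res = await fetch("http://localhost:8000/search", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ query, user_id: userId }),
    });
    return { content: [{ type: "text", text: JSON.stringify(await res.json()) }] };
  }
);

await server.connect(new StdioServerTransport());
```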

miro𓆣
@miroyato
good idea! shouldn't be too difficult to connect Ollama to an MCP server, might not even need changes on the ddocs.new side. I'll look into it! is it mainly global memory you're asking about? because we're thinking of having all model interactions also storable directly on ddocs, for users to be able to use as model context/memory
1 reply
0 recast
1 reaction
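
And a sketch of the glue being described here: an MCP client that recalls memories before prompting local Ollama and writes the exchange back afterwards. The server command, model name, and the cast on the tool result's content are illustrative assumptions, not anything confirmed in the thread.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Connect to the memory server sketched above (command/args are placeholders).
const client = new Client({ name: "ddocs-glue", version: "0.1.0" });
await client.connect(
  new StdioClientTransport({ command: "node", args: ["mem0-mcp-server.js"] })
);

async function chatWithMemory(userId: string, prompt: string): Promise<string> {
  // 1. Pull relevant memories for this user from the MCP server.
  const recalled = await client.callTool({
    name: "search_memory",
    arguments: { query: prompt, userId },
  });
  const memoryText = (recalled.content as Array<{ type: string; text?: string }>)
    .map((c) => c.text ?? "")
    .join("\n");

  // 2. Prepend them as context for the local Ollama model.
  const res = await fetch("http://localhost:11434/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3.1",
      messages: [
        { role: "system", content: `Relevant memories:\n${memoryText}` },
        { role: "user", content: prompt },
      ],
      stream: false,
    }),
  });
  const answer = (await res.json()).message.content as string;

  // 3. Store the exchange back so other connected models can reuse it.
  await client.callTool({
    name: "add_memory",
    arguments: { text: `${prompt}\n${answer}`, userId },
  });
  return answer;
}
```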