Varun Srinivasan pfp
Varun Srinivasan
@v
Scaling Gossip with Bundles: Hubs are running into scaling issues with libp2p. We're proposing a change to "bundle" messages to fix some of these issues. This may add a ~1s delay to casts moving between clients. https://warpcast.notion.site/Scaling-Gossip-in-Hubble-e66c766fa6b04afcb407f4800134cd72?pvs=25
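A rough TypeScript sketch of the idea as described above: buffer messages for a window and publish one gossip payload instead of many. All names here (MessageBundle, BUNDLE_INTERVAL_MS, publishToGossip) are hypothetical and not taken from the Hubble proposal or codebase; the ~1s window is the delay mentioned in the post.

```typescript
interface HubMessage {
  hash: Uint8Array;    // message hash (cast, like, etc.)
  payload: Uint8Array; // serialized message bytes
}

interface MessageBundle {
  messages: HubMessage[]; // everything collected during the window
}

// Stand-in for the hub's actual gossip layer.
function publishToGossip(bundle: MessageBundle): void {
  console.log(`publishing bundle with ${bundle.messages.length} messages`);
}

const BUNDLE_INTERVAL_MS = 1_000; // source of the ~1s client-to-client delay
const pending: HubMessage[] = [];

// Instead of gossiping each cast/like immediately, buffer it.
function enqueueForGossip(msg: HubMessage): void {
  pending.push(msg);
}

// Once per window, flush everything as a single gossip publish.
setInterval(() => {
  if (pending.length === 0) return;
  const bundle: MessageBundle = { messages: pending.splice(0) };
  publishToGossip(bundle); // one publish instead of one per message
}, BUNDLE_INTERVAL_MS);
```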
11 replies
12 recasts
123 reactions

Varun Srinivasan pfp
Varun Srinivasan
@v
Blockchains use libp2p, but they generate only tens or hundreds of items per second today. Hubs generate 10,000 items per second at peak traffic, which is 100x-1000x more than your average blockchain. Importantly, hubs have no notion of a "block", and each cast or like is treated as a separate gossip message.
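A back-of-envelope illustration of why per-message gossip hurts at that rate and what bundling buys back. Everything except the 10,000/sec peak figure is an assumption, not Hubble's actual gossipsub configuration.

```typescript
const peakMessagesPerSec = 10_000; // from the post above
const meshDegree = 8;              // hypothetical gossipsub mesh size per hub

// Every cast/like gossiped on its own: one forward per mesh peer per message.
const unbundledSendsPerSec = peakMessagesPerSec * meshDegree; // 80,000 sends/sec

// Bundled once per second: one (much larger) publish per window.
const bundledSendsPerSec = 1 * meshDegree; // 8 sends/sec

console.log({ unbundledSendsPerSec, bundledSendsPerSec });
```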
2 replies
0 recast
1 reaction

britt pfp
britt
@brittkim.eth
“A bundle is valid if (1) At least one message merges successfully” Wouldn’t there now be a risk of a bad actor submitting various permutations of valid messages, each time producing unique bundles for the network to propagate? For a set of n messages, aren’t there 2^n possible bundles?
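For reference, the count implied here: treating a bundle as any non-empty subset of the n messages (orderings and repeats would only inflate this further),

\[
\#\{\text{possible bundles}\} \;=\; \sum_{k=1}^{n} \binom{n}{k} \;=\; 2^{n} - 1
\]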
1 reply
0 recast
0 reaction

makemake  pfp
makemake
@makemake
Thought about doing Solana Turbine-style p2p?
1 reply
0 recast
2 reactions

jj 🛟 pfp
jj 🛟
@jj
https://github.com/farcasterxyz/hub-monorepo/blob/main/apps/hubble/src/network/p2p/gossipNode.ts I believe the code is here, if anyone else is following along.
0 reply
0 recast
1 reaction

jj 🛟 pfp
jj 🛟
@jj
How did you determine the 1s delay? Are these dynamic batches or fixed in size?
2 replies
0 recast
1 reaction

jj 🛟 pfp
jj 🛟
@jj
Something else I just thought of: something like a read-only mode. A bunch of hubs are probably glorified read replicas, so for hubs that are just reading, you could hyper-optimize that path.
0 reply
0 recast
0 reaction

vrypan |--o--| pfp
vrypan |--o--|
@vrypan.eth
I think there's room to improve syncing (for example bundles) if you take into account that Farcaster has special patterns. For example, most users use a single hub (their client's hub) 99% of the time. Can we optimize assuming that the hub will bundle messages in a specific way, and treat bundles as probably unique?
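One possible reading of "treat bundles as probably unique", sketched in TypeScript. This is an interpretation, not something from the proposal: derive the bundle ID deterministically from the sorted member hashes, so the same bundle re-gossiped by its origin hub dedupes in peers' seen caches. sha256 is used only for illustration and is not Hubble's actual message hash.

```typescript
import { createHash } from "node:crypto";

// Sort the member hashes and hash them together, so a byte-identical bundle
// re-published by the origin hub maps to the same ID and gets dropped by
// peers that have already seen it.
function bundleId(messageHashes: Uint8Array[]): Buffer {
  const sorted = [...messageHashes].sort(Buffer.compare);
  const digest = createHash("sha256");
  for (const h of sorted) digest.update(h);
  return digest.digest();
}

// Example: insertion order no longer matters.
const a = new Uint8Array([1, 2, 3]);
const b = new Uint8Array([4, 5, 6]);
console.log(bundleId([a, b]).equals(bundleId([b, a]))); // true
```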
1 reply
0 recast
0 reaction

Brock pfp
Brock
@runninyeti.eth
Curious if there were any out-of-the-box ideas on the table that got thrown out but are worth exploring longer term? For instance, reading this, my mind immediately goes towards federation, i.e. solve scaling longer term by clustering hubs (by channel?) and letting clusters communicate.
1 reply
0 recast
0 reaction

jj 🛟 pfp
jj 🛟
@jj
Have you guys thought of just queueing at each hub instead of bundling, packing, and unpacking? You could maintain low latencies and use the queues to dedup.
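A minimal sketch of the queue-plus-dedup idea, with hypothetical names throughout (this is not how Hubble works today): drop messages already seen, queue the rest, and drain continuously so latency stays per-message rather than per-window. A real version would bound the seen set with a TTL.

```typescript
const seen = new Set<string>();      // recently seen message hashes (hex)
const mergeQueue: Uint8Array[] = []; // messages waiting to merge into the hub

// Called for every message arriving over gossip.
function onGossipMessage(hashHex: string, payload: Uint8Array): void {
  if (seen.has(hashHex)) return; // dedup: repeats from other peers are dropped here
  seen.add(hashHex);
  mergeQueue.push(payload);      // queued immediately, no bundling window
}

// Drain continuously so merge latency stays per-message rather than per-window.
async function drain(merge: (m: Uint8Array) => Promise<void>): Promise<void> {
  while (mergeQueue.length > 0) {
    await merge(mergeQueue.shift()!);
  }
}
```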
0 reply
0 recast
0 reaction

Nishant pfp
Nishant
@nishant
If the goal is to reduce the gossip emission rate, have you tried reducing the gossip history window and the gossip factor?
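For reference, these are the knobs being suggested, shown as an options object using names as I recall them from @chainsafe/libp2p-gossipsub; treat the exact option names and values as assumptions and verify them against the version Hubble pins.

```typescript
const gossipsubOptions = {
  mcacheLength: 3,    // shrink the message-cache history window used for IHAVE/IWANT
  mcacheGossip: 2,    // advertise fewer history windows in IHAVE messages
  gossipFactor: 0.15, // send IHAVE gossip to a smaller fraction of non-mesh peers
};
```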
1 reply
0 recast
1 reaction

Stevo Believo pfp
Stevo Believo
@believo
Ain’t nobody got time for that!
0 reply
0 recast
1 reaction

RealLizard pfp
RealLizard
@reallizard
I'm learning a bit about crypto things here. As a layman, it just sounds like I need to buy more NVIDIA stock. GPU go brrrrr
0 reply
0 recast
0 reaction

Brendan pfp
Brendan
@brendanjryan
Why 1 second blobs? Does this create a new scaling constraint on packet size? Can't this be abused by creating a huge malicious blob? I imagine you are using system clocks too? Why not have a logical clock and have gossip blocks update monotonically, so you can guard against retried payloads?
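A minimal Lamport-style logical clock along the lines suggested here, in TypeScript. This is a hypothetical sketch, not part of the bundling proposal (which keys bundles off wall-clock windows): each hub stamps its outgoing bundles, and peers reject anything not strictly newer than the last stamp seen from that origin.

```typescript
class LamportClock {
  private counter = 0n;

  // Stamp an outgoing bundle.
  tick(): bigint {
    return ++this.counter;
  }

  // Fold in a stamp observed from a peer so our own stamps stay ahead.
  observe(remote: bigint): void {
    if (remote > this.counter) this.counter = remote;
  }
}

const lastSeenByOrigin = new Map<string, bigint>();

// Reject replays: a bundle is only accepted if its stamp is strictly newer
// than the last stamp seen from that origin peer.
function acceptBundle(originPeerId: string, stamp: bigint): boolean {
  const last = lastSeenByOrigin.get(originPeerId) ?? -1n;
  if (stamp <= last) return false;
  lastSeenByOrigin.set(originPeerId, stamp);
  return true;
}
```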
0 reply
0 recast
0 reaction

d_ttang pfp
d_ttang
@ttang.eth
Looks good
0 reply
0 recast
0 reaction