I don’t think the way things are framed here is particularly helpful or illuminating. For XMPP the argument is basically “we provide some archiving capabilities visible to the C2S layer, and then we have some high availability stuff.”
Fully half of the notes about XMPP were about this HA stuff, but that has absolutely nothing to do with XMPP? Who here or at Process One thinks that matrix.org is a single-node cluster that misses messages because there’s no hot code reloading in Synapse?
(I’m not saying that XMPP or Matrix are bad, just that this article was not clear and felt almost like an ad for ejabberd’s HA capabilities.)
The perspective is useful since Process One is adding Matrix support to ejabberd. What really stands out to me is how much more storage and processing is required to uphold the eventual consistency model mentioned in the Resource Penalty section, where the author suggests comparing the resources of running Matrix to those of running a blockchain application. XMPP’s MAM can give you enough conversation context without the assumption that every node needs an exact copy of all of the data. These are the sort of notes communities need to take when choosing their own technologies, especially if the goal is decentralization and you want that decentralization to be sustainable and self-hostable.
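For context, the MAM mechanism in question is XEP-0313 (Message Archive Management): a client pages through the server-side archive on demand instead of every server replicating full room state. A sketch of what such a query looks like on the wire (the JID and `queryid` values here are illustrative, not from the article):

```xml
<!-- Client asks its own server's archive for messages exchanged with one
     contact; the RSM <set> bounds the page size so only a window of
     history is fetched, not the whole archive. -->
<iq type='set' id='query1'>
  <query xmlns='urn:xmpp:mam:2' queryid='f27'>
    <x xmlns='jabber:x:data' type='submit'>
      <field var='FORM_TYPE' type='hidden'>
        <value>urn:xmpp:mam:2</value>
      </field>
      <field var='with'>
        <value>juliet@capulet.lit</value>
      </field>
    </x>
    <set xmlns='http://jabber.org/protocol/rsm'>
      <max>20</max>
    </set>
  </query>
</iq>
```

The point being: history lives on the archiving server and is pulled incrementally, rather than being synchronized to every participating node up front.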
I think Synapse (even on matrix.org) doesn’t really do HA. There might be a warm or cold standby. Synapse workers are used to make use of more cores with Python, but that’s not HA.
That’s… bonkers to me. Although now that you mention it, it’s not really surprising that Synapse isn’t architected to handle >1 node.
I would imagine, though, and hope, that that’s just an implementation detail, and not due to the spec causing problems? Synapse does not exactly have a reputation for performance. (Reading the Dendrite README, it looks like it’s designed for multi-machine deployment, but this just isn’t implemented yet?)
I believe the spec doesn’t limit this. It’s mostly just Synapse having grown from a prototype. But Synapse has been rearchitected quite a lot, so e.g. the performance is not as bad as old information claims. The team has done good work with profiling and fixing things that have been limiting performance.
Yep, I was thinking about that a lot while writing these comments. I recall them recently speeding up huge room joins by some ridiculous factor because the relevant function just hadn’t been optimized, ever.