Only sync with channel peers #1587
Conversation
Force-pushed from 2aae03f to d1261e9
eclair-core/src/main/scala/fr/acinq/eclair/remote/EclairInternalsSerializer.scala
Codecov Report
@@ Coverage Diff @@
## master #1587 +/- ##
==========================================
- Coverage 85.93% 85.91% -0.02%
==========================================
Files 151 151
Lines 11430 11452 +22
Branches 497 488 -9
==========================================
+ Hits 9822 9839 +17
- Misses 1608 1613 +5
Just a first pass.
eclair-core/src/main/scala/fr/acinq/eclair/io/Switchboard.scala
eclair-core/src/main/scala/fr/acinq/eclair/io/PeerConnection.scala
Force-pushed from daf7bbf to cceb084
This reduces the bandwidth used: it doesn't make sense to sync with every node that connects to us. We also track sync requests more carefully, so that unsolicited sync responses are rejected. To ensure that nodes don't need to explicitly reconnect after creating their first channel in order to get the routing table, we add a mechanism to trigger a sync when the first channel is created.
If our peer doesn't support gossip queries, we'll already have synced with them via initial_routing_sync right after init.
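The gating logic described above could be sketched as follows. This is an illustrative model only: PeerInfo and shouldSync are hypothetical names, not eclair's actual API.

```scala
// Hypothetical sketch of the sync-gating decision (not eclair's actual code).
case class PeerInfo(nodeId: String, channelCount: Int, supportsGossipQueries: Boolean)

def shouldSync(peer: PeerInfo, alreadySyncing: Boolean): Boolean =
  peer.supportsGossipQueries && // otherwise initial_routing_sync already covered it
  peer.channelCount > 0 &&      // only sync with peers we share a channel with
  !alreadySyncing               // don't start a duplicate concurrent sync
```

A peer that opens its first channel with us flips channelCount from 0 to 1, which is what makes the "sync on first channel" trigger work without an explicit reconnect.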
Force-pushed from 2b5c079 to 3b6819f
eclair-core/src/main/scala/fr/acinq/eclair/io/PeerConnection.scala
eclair-core/src/main/scala/fr/acinq/eclair/remote/EclairInternalsSerializer.scala
case Some(currentSync) if currentSync.remainingQueries.isEmpty && r.shortChannelIds.array.isEmpty =>
  log.info("received empty reply_channel_range, sync is complete")
  d.copy(sync = d.sync - origin.nodeId)
This isn't how the spec indicates that we should detect the last reply_channel_range message. The spec says the final reply_channel_range message:
MUST have first_blocknum plus number_of_blocks equal or greater than the query_channel_range first_blocknum plus number_of_blocks.
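The spec's termination condition quoted above could be sketched like this. Field names follow the BOLT 7 wire format, but the types themselves are simplified stand-ins, not eclair's actual message classes.

```scala
// Sketch of the BOLT 7 termination check (illustrative, not eclair code).
case class QueryChannelRange(firstBlocknum: Long, numberOfBlocks: Long)
case class ReplyChannelRange(firstBlocknum: Long, numberOfBlocks: Long)

// The final reply must cover at least the end of the queried block range.
def isFinalReply(query: QueryChannelRange, reply: ReplyChannelRange): Boolean =
  reply.firstBlocknum + reply.numberOfBlocks >= query.firstBlocknum + query.numberOfBlocks
```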
Right, this is probably a leftover made to handle nodes that didn't correctly signal termination? I'll look into it.
Note that we'll be bringing back the complete field, which will greatly simplify our lives.
This case doesn't handle the normal termination of reply_channel_range: it deals with a special case where the first reply_channel_range is empty (meaning the remote node basically doesn't want to send us anything). Otherwise we'll have something in remainingQueries and we'll wait for those to complete before we consider the sync done. I had to add this because we have a test case for it.
An interesting note is that we don't detect the end of a reply_channel_range sequence at all: we rely on reply_short_channel_ids_end and the fact that our remainingQueries is empty. That's different from the spec and should be fixed. In practice it will very likely work (we've always had this behavior and we've been fine) and it's orthogonal to this PR since it was our previous behavior. I think that can be fixed in a later PR (and we can wait to rely on the complete field for a cleaner signal that the sync is complete once we correctly set it again).
Maybe you should include your reply as a TODO note here?
Done in 8f292b9 and added to my todo list.
eclair-core/src/main/scala/fr/acinq/eclair/io/PeerConnection.scala
Our previous behaviour was to do a full routing table sync whenever we connected to a node (or a node connected to us). This was wasting a lot of bandwidth for no reason.
Instead, we only sync with nodes that are either:
On top of that, we trigger a sync whenever our first channel is opened with a new peer: this ensures the following flow works:
The syncing state is a bit hard to read: we've started discussions in the spec to make changes to channel queries (see lightning/bolts#804 and lightning/bolts#811), which will be a good opportunity to refactor the code later.