ReplayMerge IPC support #409

Open
chrisejones opened this issue Feb 7, 2020 · 8 comments

@chrisejones

chrisejones commented Feb 7, 2020

Hi @mjpt777, we would like to use the Aeron C++ client's ReplayMerge functionality with the IPC transport.

We found out in real-logic#839 that ReplayMerge doesn't currently support IPC. Could we please talk about getting IPC support added?
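
For reference, this is roughly how ReplayMerge is wired up today (shown with the Java API, which the C++ client mirrors; the channels, IDs and surrounding variables are illustrative), and every destination involved is UDP:

```java
// Illustrative sketch only: aeron, aeronArchive, sessionId, streamId, recordingId,
// startPosition, fragmentHandler and idleStrategy are assumed to be set up elsewhere.
final Subscription subscription = aeron.addSubscription(
    "aeron:udp?control-mode=manual|session-id=" + sessionId, streamId);

final ReplayMerge replayMerge = new ReplayMerge(
    subscription,
    aeronArchive,
    "aeron:udp?session-id=" + sessionId,                           // replay channel
    "aeron:udp?endpoint=localhost:0",                              // replay destination
    "aeron:udp?endpoint=localhost:23267|control=localhost:23268",  // live destination
    recordingId,
    startPosition);

while (!replayMerge.isMerged())
{
    if (0 == replayMerge.poll(fragmentHandler, 10))
    {
        idleStrategy.idle();
    }
}
// Once merged, keep polling the subscription for the live stream.
```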

Regarding the other issue we raised in that ticket relating to session-ids: does that mean that every client needs to be configured with its own stream for replaying? Is there a reason for this limitation? In this example the session-id is used to avoid needing a separate stream per client. Could we please discuss making ReplayMerge do the same?
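
What we mean by using the session-id is roughly the following (a sketch with made-up endpoint and ID values): each client subscribes to a shared stream but filters on its own session, rather than needing a dedicated stream ID per client.

```java
// Session-specific subscription: only fragments from mySessionId are delivered,
// so many clients can share one replay stream ID. Values are illustrative.
final String channel = new ChannelUriStringBuilder()
    .media("udp")
    .endpoint("localhost:20121")
    .sessionId(mySessionId)
    .build();

final Subscription subscription = aeron.addSubscription(channel, sharedStreamId);
```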

Also, are you aware of any other limitations of the current implementation of ReplayMerge?

Thanks, Chris

@mjpt777
Collaborator

mjpt777 commented Feb 10, 2020

ReplayMerge uses multi-destination subscriptions (MDS), which merge two streams into the same image. The live stream is typically UDP, and IPC and UDP do not mix, as the endpoints for MDS are UDP only in the current implementation.

We have discussed with other clients how we could extend the Receiver to poll an IPC publication, like a spy, and merge it into a UDP image. We would do this by adding support for IPC destinations on MDS subscriptions. We think this would be a useful feature if someone wanted to sponsor it.
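
For anyone not familiar with MDS, the current shape is along these lines (Java, with illustrative channels); the commented-out IPC destination is the hypothetical extension described above, not something the driver supports today:

```java
// A manual-control MDS subscription merges whatever destinations are added
// into a single image. Channels below are illustrative.
final Subscription subscription = aeron.addSubscription(
    "aeron:udp?control-mode=manual", streamId);

// Replay arrives on one UDP destination, the live stream on another.
subscription.addDestination("aeron:udp?endpoint=localhost:0");
subscription.addDestination("aeron:udp?endpoint=localhost:23267|control=localhost:23268");

// Hypothetical, not currently supported: an IPC destination polled like a spy.
// subscription.addDestination("aeron:ipc");
```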

@mjpt777
Collaborator

mjpt777 commented Feb 10, 2020

Using separate streams is necessary because each MDS subscription creates sockets for the transports that it is merging into the same image. These transports are not shared with other subscriptions, so this can cause issues with unicast socket clashes. We have considered pooling transports across all subscription types, but this changes the relationships between channel endpoints and subscriptions that we currently have in the driver. Longer term we would like to do this, as it provides efficiencies by reducing socket reads and removes some of the limitations we currently have for ReplayMerge. The client that sponsored the original work only had one live stream.

We would be happy to discuss new features to better match your requirements.

@chrisejones
Author

OK, thanks, that's very interesting. I wasn't aware of the MDS functionality.

In the case where we are only using IPC, MDS (if extended to support IPC) would add an overhead, right?

Currently we're doing the merge by polling the live and replay subscriptions separately and merging using application-level sequence numbers. Is there a better approach?
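
Concretely, it looks something like this (a sketch only; it assumes each message carries a monotonically increasing sequence number at a known offset, and the names are ours):

```java
import io.aeron.Subscription;
import io.aeron.logbuffer.FragmentHandler;

// Sketch of the application-level merge: anything that does not advance the
// sequence number is dropped, which deduplicates the overlap between replay and live.
final class AppLevelMerge
{
    private long lastSeq = -1;

    private final FragmentHandler handler = (buffer, offset, length, header) ->
    {
        final long seq = buffer.getLong(offset); // app-level sequence number at offset 0
        if (seq > lastSeq)
        {
            lastSeq = seq;
            // ... process the message ...
        }
    };

    int poll(final Subscription replay, final Subscription live, final int limit)
    {
        // Drain the replay first; once it has caught up, the live stream takes over.
        return replay.poll(handler, limit) + live.poll(handler, limit);
    }
}
```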

@mjpt777
Collaborator

mjpt777 commented Feb 12, 2020

Adding IPC as the replay destination in MDS could be done quite efficiently and would fit well for a local archive being used to catch up to a live UDP multicast stream. This would be a lot more efficient than the replay happening over UDP, as the trip out and back in via the network could be avoided. The cost would be a copy from the IPC log buffer to the MDS image.

Is your live channel IPC or UDP?

@chrisejones
Author

IPC

@mjpt777
Collaborator

mjpt777 commented Feb 12, 2020

OK, that is interesting. So an IPC live stream with a local archive, all IPC then?

@chrisejones
Author

Yes, that's right. Actually in some cases we spy on a live UDP publication, but only the local clients use the archive. The remote clients do not need to replay missed messages.

@mjpt777
Collaborator

mjpt777 commented Feb 12, 2020

For IPC the best way is to merge in the application, as you are doing. In future we may offer a feature to make this more efficient.
