As currently implemented, `appsignal-wrap` sends data to AppSignal in the background while concurrently passing through the wrapped process's standard output and error. However, when the process it wraps exits, it must keep running for a while in order to send the last log lines and/or the last cron check-ins to AppSignal.
Ideally, `appsignal-wrap` would exit as soon as the process it wraps exits (so that, for example, using it in shell scripts does not slow down the overall script run), while the data related to its execution is sent to AppSignal by a separate daemonized process.
In the integrations, we implement this sort of behaviour through the extension, the agent and a UNIX socket. But in this case, we might be able to have the `appsignal-wrap` process spawn a fork of itself that is daemonized, and communicate with that fork by writing to its standard input.
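The hand-off described above can be sketched at the shell level (a minimal illustration of the idea, not `appsignal-wrap`'s actual implementation: `cat` stands in for the sender process, and a subshell-orphaned background job stands in for the daemonized fork):

```shell
#!/bin/sh
# Minimal sketch of the proposed hand-off. Here `cat >"$out"` stands in
# for the process that sends the last data to AppSignal; appsignal-wrap
# itself would instead fork a daemonized copy of itself.
out=$(mktemp)

# Backgrounding inside a subshell orphans the sender: once the subshell
# exits, the sender is re-parented to init and outlives the wrapper,
# which is the shell-level equivalent of a daemonized fork.
printf 'last log line\nlast cron check-in\n' | ( cat >"$out" & )

# The wrapper can now exit immediately; the detached sender finishes on
# its own, reading the remaining data from its standard input.
```

The pipe doubles as the communication channel mentioned above: the exiting wrapper writes whatever is left to send into the detached copy's standard input and then closes it, signalling end-of-data.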
This approach will also introduce the same issues we have in our current integrations, such as the agent in containers not being able to send its data before the container itself exits.
Is the overhead or delay so much we should consider implementing this already?
> This approach will also introduce the same issues we have in our current integrations, like the agent on containers not being able to send the data before exiting the container itself.

True! Though, unlike in the integrations, here we could have a `--wait` flag to ask the process to wait for the data to be sent.
> Is the overhead or delay so much we should consider implementing this already?

Probably not, at least not in my local tests -- though, at the end of the day, it depends on the public API's response time and the amount of data to send.
Making it non-blocking would improve certain use cases. You might want to call the wrapper in a for loop, for example:
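The original example did not survive in this thread; a stand-in sketch (the `wrap` function is hypothetical and plays the role of `appsignal-wrap`, whose exact invocation syntax isn't shown here):

```shell
#!/bin/sh
# Stand-in sketch: `wrap` plays the role of appsignal-wrap -- run the
# wrapped command and pass its output through. If the real wrapper
# lingered after each command to flush its data, every iteration of the
# loop would pay that delay; if it exits as soon as the command does,
# the loop runs at the commands' own speed.
wrap() { "$@"; }

for item in alpha beta gamma; do
  wrap echo "processed $item"
done
```

With, say, a hundred iterations and a few seconds of flush time per invocation, the blocking behaviour would add minutes to an otherwise fast script.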
I'd be okay with having a bit of delay; that's the cost of instrumentation. I wouldn't expect it to take up to 30 seconds like it does in our current agent setup. We can start testing without it, keeping the project a bit simpler, and see if it becomes an issue in the future.