Operating System
Custom Alpine Linux v3.14 (kernel 6.1.50)
Browser Version
Node.js v18.18.2
Firebase SDK Version
10.5.2
Firebase SDK Product
Firestore
Describe your project's tooling
Custom Webpack-bundled TypeScript targeting ES2020.
Node v18.18.2
@firebase/[email protected]
@grpc/[email protected]
[email protected]
[email protected]
[email protected]
Describe the problem
gRPC connections appear to be left open after the Firestore connection is closed, whether via explicit termination or through general network disruption. This results in a very slow memory leak on production systems with an unstable internet connection.
We used a script in our application that establishes a connection to Firestore, waits a couple of seconds, tears it down, and repeats, then captured before/after heap snapshots.
The snapshots show a large number of ClientHttp2Session instances for "dns:firestore.googleapis.com",
and the subchannelObjArray in gRPC's getOrCreateSubchannel shows an ever-growing collection of subchannels for the same IP and port. (These subchannels aren't being reused because their SSL contexts differ.)
I believe the underlying gRPC stream needs to be explicitly closed; a quick hack would be to close it from the stream's close callback, along the lines of the sketch below.
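Since the original snippet is not reproduced here, the following is only a rough illustration of the idea using @grpc/grpc-js directly rather than the SDK's internal code; the endpoint, method path, and pass-through serializers are placeholders. The point is that end() only half-closes the write side of a bidirectional call, while cancelling the call and closing the client is what lets grpc-js tear down the channel and its ClientHttp2Session.

```ts
// Hedged sketch, not the actual patch to grpc_connection.ts: illustrates why
// ending the duplex stream alone is not enough to release the connection.
import { Client, credentials } from '@grpc/grpc-js';

// Placeholder client pointed at the Firestore endpoint; real usage would go
// through the SDK's generated stubs and attach auth metadata.
const client = new Client(
  'firestore.googleapis.com:443',
  credentials.createSsl()
);

// Open a bidirectional call; the pass-through (de)serializers are stand-ins.
const call = client.makeBidiStreamRequest<Buffer, Buffer>(
  '/google.firestore.v1.Firestore/Listen',
  (msg: Buffer) => msg,
  (buf: Buffer) => buf
);
call.on('error', () => {
  // Ignore errors (e.g. CANCELLED, missing auth) for the purposes of the sketch.
});

call.end();     // roughly what the wrapper's close callback does today
call.cancel();  // "quick hack": explicitly cancel the underlying gRPC call
client.close(); // close the client so the channel/subchannel can be released
```

The real fix presumably belongs inside the SDK (e.g. the close callback in openStream, or GrpcConnection's terminate path), but the sketch captures the behaviour being suggested.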
Steps and code to reproduce issue
Below is a sample program that gets a Firestore instance, queries a collection, terminates Firestore, and repeats that three times, printing out the gRPC subchannel pool once termination completes.
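The original script is not included above, so this is a hedged reconstruction of what such a program might look like with the v10 modular API, saved as test.ts to match the invocation below. The project ID, app name, and collection name ('demo-project', 'things') are placeholders, and inspecting grpc-js's internal subchannel pool is only indicated as a comment because it relies on non-public internals.

```ts
// Hedged reconstruction of the reproduction script described above.
import { initializeApp, deleteApp } from 'firebase/app';
import { collection, getDocs, getFirestore, terminate } from 'firebase/firestore';

const delay = (ms: number) => new Promise(resolve => setTimeout(resolve, ms));

async function connectQueryAndTearDown(iteration: number): Promise<void> {
  // Placeholder config; a real run needs a project the caller can query.
  const app = initializeApp({ projectId: 'demo-project' }, `repro-${iteration}`);
  const db = getFirestore(app);

  // Query a collection so the SDK opens its gRPC streams.
  await getDocs(collection(db, 'things'));

  // Tear the Firestore connection down again.
  await terminate(db);
  await deleteApp(app);
}

async function main(): Promise<void> {
  for (let i = 0; i < 3; i++) {
    await connectQueryAndTearDown(i);
    await delay(2000); // wait a couple of seconds between iterations
  }
  // The original report prints gRPC's subchannel pool at this point; that
  // relies on @grpc/grpc-js internals, so it is omitted from this sketch.
}

main().catch(err => {
  console.error(err);
  process.exit(1);
});
```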
npx ts-node -- test.ts
After the last iteration the subchannel pool contains three entries.
After patching the close callback in openStream in grpc_connection.ts, the pool is empty (although gRPC purposefully doesn't remove empty keys; there's a TODO for that in gRPC).
Our application does offer end users an option to disable cloud connectivity, which explicitly terminates the Firestore connection, but we believe this can also happen with general network disruptions or any other case where the gRPC wrapper's close is invoked (which leaves the underlying gRPC connection lingering).