How Mixmax Uses Node and Go to Process 250M Events a day - Mixmax Tech Stack

Originally, we had a single MongoDB replica set that we stored everything on. As we scaled, we realized two things:

* A single Mongo replica set wasn't going to cut it for our many quickly growing collections
* Analytics and rich searching don't scale well in Mongo

To solve for the first item, we now run multiple large-scale Mongo deployments with a mix of replica sets and sharded replica sets (depending on the application activity for the given database). In solving for the second item, we now run multiple large Elasticsearch deployments to provide the majority of our rich searching functionality.

We also heavily use Redis across the entire platform for things like distributed locking, caching, and backing part of our job queuing layer. This has led to our most recent (and ongoing!) scaling challenge.
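The distributed-locking pattern mentioned above is commonly built on Redis's `SET key token NX PX ttl` plus a token-checked release. As a minimal sketch, the same semantics can be modeled with an in-memory map; this is an illustration of the pattern, not Mixmax's actual implementation, and in real Redis the release check-and-delete must be made atomic (typically with a Lua script). The function names are hypothetical.

```javascript
// In-memory stand-in for the Redis lock pattern:
//   acquire  ~  SET key token NX PX ttl   (only if absent, with auto-expiry)
//   release  ~  delete only if the caller still holds the token
const store = new Map(); // key -> { token, expiresAt }

function acquireLock(key, token, ttlMs, now = Date.now()) {
  const entry = store.get(key);
  if (entry && entry.expiresAt > now) return false; // NX: lock is held and unexpired
  store.set(key, { token, expiresAt: now + ttlMs }); // PX: expires after ttlMs
  return true;
}

function releaseLock(key, token) {
  const entry = store.get(key);
  if (!entry || entry.token !== token) return false; // only the holder may release
  store.delete(key);
  return true;
}
```

The per-holder token matters: without it, a client whose lock expired mid-job could delete a lock now held by someone else.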
Mixmax was originally built using Meteor as a single monolithic app. As more users began to onboard, we started noticing scaling issues, and so we broke out our first microservice: our Compose service, for writing emails and Sequences, was born as a Node.js service. Soon after that, we moved all recipient searching and storage functionality into another Node.js microservice, our Contacts service. This practice of breaking out microservices to help our system scale more appropriately, by being explicit about each microservice's responsibilities, continued as we extracted numerous more services.
A huge part of our continuous deployment practices is to have granular alerting and monitoring across the platform. To do this, we run Sentry on-premise, inside our VPCs, for our event alerting, and we run an awesome observability and monitoring system consisting of StatsD, Graphite, and Grafana. We have dashboards using this system to monitor our core subsystems so that we can know the health of any given subsystem at any moment. This system ties into our PagerDuty rotation, as well as alerts from some of our Amazon CloudWatch alarms (we're looking to migrate all of these to our internal monitoring system soon).
As Mixmax began to scale super quickly, with more and more customers joining the platform, we started to see that the Meteor app was still having a lot of trouble scaling due to how it tried to provide its reactivity layer. To be honest, this led to a brutal summer of playing Galaxy container whack-a-mole as containers would saturate their CPU and become unresponsive. I'll never forget hacking away at building a new microservice to relieve the load on the system so that we'd stop getting paged every 30-40 minutes. Luckily, we've never had to do that again! After stabilizing the system, we had to build out two more microservices to provide the necessary reactivity and authentication layers as we rebuilt our Meteor app from the ground up in Node.js. This also had the added benefit of being able to deploy the entire application in the same AWS VPCs. Thankfully, AWS had also released their ALB product, so we didn't have to build and maintain our own websocket layer on Amazon EC2. All of our microservices, except for one special Go one, are now in Node with an nginx frontend on each instance, all behind AWS Elastic Load Balancing (ELB) or ALBs running in AWS Elastic Beanstalk.

Comment from Wojciech Bator: You can consider using Go for more mission-critical services when extending the product. Recently, I moved from Node to Go when building a tool that processes a large number of various files concurrently, and the goroutines-plus-channels combo is pretty powerful. For a web application server I'd stick with Node.js for its easy prototyping, wider ecosystem, and good frameworks, but any CPU-heavy processing I'd pass to Go. Node just chokes when there's longer synchronous processing to be done. Go also fits quite well in a distributed ecosystem with its Circuit.
Building a communication platform means processing a TON of data. Our backend, built primarily in Node.js and Go, processes up to 250M events a day, with 200k/minute at peak load. As the glue for an organization's communication, not only are we processing a huge number of internal events, but we're also processing data from external sources like CRMs and ATSs, totaling 3.2 million events and a data volume exceeding 14 GB each hour. We've already scaled our platform up 2x in the past 3 months and plan to grow another 10x this year, all while maintaining the strict "three 9s" uptime that our customers expect, as they rely on Mixmax all day to get their work done.
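As a back-of-the-envelope check on the figures above (assuming decimal units), 250M events a day implies an average rate well below the stated 200k/minute peak, and "three 9s" bounds how much downtime a month can absorb:

```javascript
// Sanity-check the stated throughput and uptime numbers.
const eventsPerDay = 250e6;

const avgPerMinute = eventsPerDay / (24 * 60); // ~173.6k/min average
const avgPerSecond = eventsPerDay / 86400;     // ~2.9k/s average
const peakRatio = 200000 / avgPerMinute;       // stated peak is ~1.15x the average

// 99.9% uptime allows ~43.2 minutes of downtime in a 30-day month.
const downtimeMinPerMonth = 30 * 24 * 60 * (1 - 0.999);
```

A peak only ~1.15x the daily average suggests fairly flat load; the harder constraint is that three 9s leaves under three quarters of an hour per month for every incident combined.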