Yeah, performance engineering is half the time reading flame charts and trying to decipher what the fuck is going on in your code! At Convoy, Pyroscope has been our go-to tool for debugging these nasty issues, from spotting O(n^2) algorithms to memory leaks. Memory leaks are even more annoying to debug because, guess what? The AWS console doesn't provide memory graphs. Just brilliant. We use OTel with SigNoz to spot redundant database connect calls. For example, we found that our database driver wasn't using the connection pool even though the documentation claimed otherwise.
Subomi Oluwalana's Post
-
When managing Kubernetes resources, graceful resource deletion is crucial to avoid stale data. This article discusses using predicates in controller runtime to filter events and perform cleanup before deletion. More: https://lnkd.in/gTFir6Bd
-
Did you know that some processes in Spark, such as file system operations like moving a file from one location to another, are single-threaded? Regardless of the size of your cluster, these operations will process one file at a time and will only utilize the driver node. Therefore, it's important to know that we can improve this process by introducing multithreading. A simple for loop can be easily implemented as a multithreaded function using the concurrent.futures Threading library. For more details, follow the link below: https://lnkd.in/gWZ9KJmU #dataengineer
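As a hedged sketch of that pattern (paths, batch size, and the worker count are illustrative, and `shutil.move` on local temp files stands in for a cloud filesystem move such as `dbutils.fs.mv`), a thread pool can drive the per-file moves concurrently instead of one at a time:

```python
import shutil
import tempfile
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def move_file(pair):
    src, dst = pair
    shutil.move(src, dst)  # stand-in for a cloud/driver-side file move
    return dst

# Demo with local temp files; in Spark these would be storage paths.
src_dir = Path(tempfile.mkdtemp())
dst_dir = Path(tempfile.mkdtemp())
pairs = []
for i in range(20):
    p = src_dir / f"part-{i}.parquet"
    p.write_text("data")
    pairs.append((str(p), str(dst_dir / p.name)))

# The thread pool runs up to 8 moves at once instead of a serial for loop.
with ThreadPoolExecutor(max_workers=8) as pool:
    moved = list(pool.map(move_file, pairs))
```

Because these operations are I/O-bound, threads help despite the GIL; tune `max_workers` to what the underlying filesystem tolerates.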
-
Vitess 21 is here, and comes with a number of new features including:
- Enhanced query compatibility
- Improved cluster management
- Expanded VReplication capabilities
- Experimental support for atomic distributed transactions and recursive CTEs
https://lnkd.in/gYSX3njA
Announcing Vitess 21 - PlanetScale
planetscale.com
-
Big learning for me as CTO: if you run a serious API, you need a dependency graph. Clients want to send their API calls all at once, not waiting for things to finish before submitting the next call. A dependency graph lets them do that - tell you about all the config at once - and then implies how to resolve what they've requested. After evaluating all the options, we picked the simplest: a few tables and careful row-level locking in Postgres. We also looked at: * Prefect but followed the Burlington Coat Factory Rule (more on that later) * Pachyderm Inc. (Acquired by HPE) but it was overkill * Temporal Technologies didn't exist yet
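A minimal sketch of the resolution side, under assumed names (the actual system stores tasks and edges in Postgres tables and claims runnable rows under row-level locking, e.g. `SELECT ... FOR UPDATE SKIP LOCKED`; this shows only the graph logic in memory):

```python
from collections import defaultdict, deque

def resolve_order(tasks, deps):
    """Kahn's algorithm: given tasks and (task, depends_on) edges,
    return an execution order that respects every dependency."""
    indeg = {t: 0 for t in tasks}
    children = defaultdict(list)
    for task, dep in deps:
        indeg[task] += 1
        children[dep].append(task)

    ready = deque(t for t in tasks if indeg[t] == 0)
    order = []
    while ready:
        t = ready.popleft()
        order.append(t)
        for child in children[t]:
            indeg[child] -= 1
            if indeg[child] == 0:
                ready.append(child)

    if len(order) != len(tasks):
        raise ValueError("cycle in dependency graph")
    return order

# A worker backed by Postgres might instead claim one runnable row at a time:
#   SELECT id FROM tasks WHERE state = 'ready'
#   FOR UPDATE SKIP LOCKED LIMIT 1;
```

The appeal of the few-tables approach is exactly this: the graph logic is small, and Postgres's locking gives you safe concurrent workers for free.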
-
https://lnkd.in/dPaGiNEz [ related post: https://lnkd.in/d7rn7UyR ] << ...Bento is a high performance and resilient stream processor, able to connect various sources and sinks in a range of brokering patterns and perform hydration, enrichments, transformations and filters on payloads... >>
GitHub - warpstreamlabs/bento: Fancy stream processing made operationally mundane. This repository is a fork of the original project before the license was changed.
github.com
-
Step Function concurrent executions are a thing, and they are especially important to deal with when running tasks that touch databases, where concurrent jobs could cause system degradation. I played around with implementing semaphore locks with DynamoDB entirely with step function native state language. This solution lets me run Iceberg VACUUM jobs on a schedule without worrying about multiple jobs conflicting. Gist of a CloudFormation template that deploys this: https://lnkd.in/gvhXb7B4 #iceberg #stepfunctions #awscommunity
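The core trick, sketched here in plain Python with an in-memory stand-in for the DynamoDB table (the post implements it entirely in Step Functions' native state language; all names here are illustrative), is an atomic conditional increment of a lock count: the update only succeeds while the count is below the limit.

```python
class ConditionalCheckFailed(Exception):
    """Stand-in for DynamoDB's ConditionalCheckFailedException."""

class FakeSemaphoreTable:
    """In-memory stand-in for a DynamoDB item {lock_name, currentlockcount}."""
    def __init__(self):
        self.items = {}

    def update(self, key, limit):
        # Mirrors a conditional update along the lines of:
        #   UpdateExpression:    ADD currentlockcount :one
        #   ConditionExpression: currentlockcount < :limit
        # DynamoDB evaluates this atomically, which is what makes it a semaphore.
        count = self.items.get(key, 0)
        if count >= limit:
            raise ConditionalCheckFailed(key)
        self.items[key] = count + 1

def acquire(table, lock_name, limit):
    try:
        table.update(lock_name, limit)
        return True
    except ConditionalCheckFailed:
        return False  # the state machine would Retry / Wait and try again

def release(table, lock_name):
    if table.items.get(lock_name, 0) > 0:
        table.items[lock_name] -= 1
```

With a limit of 1 this serializes the VACUUM jobs; a higher limit caps concurrency instead.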
-
Finished up the basic memory management and evictor for zygotes in sockd! Sockd is nearing v1 release. Lots of testing still to do and maybe some API changes before the official release. This will be a lot of work and I'm a little burnt out on it at the moment, so I'm going to transition to another project for a bit before I return to it. Check it out at https://lnkd.in/e8bT3kww My new project is quackML, a full AI/ML engine for DuckDB inspired by PostgresML. My intention is for this to be a full-service in-process AI/ML data stack. I've long believed separating your data engine from your models was at best wasteful and at worst actively harmful. DuckDB seems like a perfect place to combine data with models. Imagine training your models in-process with plain SQL, writing the db file to S3, then pulling it down to serve it even at the edge with DuckDB's WASM capabilities. A well-designed serverless system (maybe using sockd containers) could make a viable, cheap, serverless inference system possible. What do you all think about this? Do you think this is a good idea, or have I spent too much time thinking about a solution in search of a problem?
GitHub - parkerdgabel/sockd: Sock-runtime is a container runtime optimized for serverless workloads based on SOCK containers.
github.com
-
🔹 Taints are applied to nodes, indicating that no new pods should be scheduled on the tainted nodes unless they tolerate the taint. They are commonly used to repel certain types of workloads from specific nodes, ensuring that nodes are reserved for specific purposes or workloads.
🔹 Tolerations are applied to pods, allowing them to tolerate (or ignore) the specified taints on nodes.
🔹 Tolerations are used to specify that a pod can be scheduled on nodes with specific taints, even if the taint would normally repel such pods.
✔ labels: key-value pairs used to identify and categorize Kubernetes objects.
✔ selectors: select a specific subset of objects based on their labels.
✔ annotations: key-value pairs used to provide additional information about an object.
✔ matchLabels: used to select and group Kubernetes objects, ensuring pods are scheduled on nodes with specific labels.
✔ nodeSelector: specifies a map of key/value pairs that are defined using custom labels on nodes and selectors specified in pods.
✔ Node affinity: a set of rules used by the scheduler to determine where a pod can be placed.
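As a minimal, hypothetical illustration of a taint with a matching toleration and a nodeSelector (the node name, label, and taint key/value here are made up):

```yaml
# Taint the node first, e.g.:
#   kubectl taint nodes gpu-node-1 dedicated=gpu:NoSchedule
# Then a pod that both tolerates the taint and selects the node by label:
apiVersion: v1
kind: Pod
metadata:
  name: gpu-job
spec:
  nodeSelector:
    hardware: gpu        # only nodes labeled hardware=gpu are candidates
  tolerations:
    - key: dedicated     # matches the taint key above
      operator: Equal
      value: gpu
      effect: NoSchedule
  containers:
    - name: main
      image: busybox
      command: ["sleep", "3600"]
```

Note that the toleration only permits scheduling onto the tainted node; it is the nodeSelector (or node affinity) that actually steers the pod there.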
-
Do You Know? What is cAdvisor + Grafana?

Imagine cAdvisor and Grafana like Gabbar and Samba from Sholay! cAdvisor acts like Samba, constantly keeping track of your containers' performance - monitoring CPU usage, memory, network, and disk consumption. It helps you understand how your containers are behaving in real time. But that's not enough! Grafana is like Gabbar's binoculars, taking all that data from cAdvisor and turning it into detailed, visually appealing dashboards. With Grafana, you can easily see trends, spot issues, and monitor the health of your containers over time. It gives you the big picture and the small details in one glance, helping you make smart decisions fast. So, the next time you're wondering about your container performance, just think: "Arre O Samba, what is cAdvisor saying?" and let Grafana show you the whole story!

Full Documentation: https://lnkd.in/dxMQcaSh

You can run a single cAdvisor to monitor the whole machine. Simply run:

VERSION=v0.49.1 # Use the latest release version
sudo docker run \
  --volume=/:/rootfs:ro \
  --volume=/var/run:/var/run:ro \
  --volume=/sys:/sys:ro \
  --volume=/var/lib/docker/:/var/lib/docker:ro \
  --volume=/dev/disk/:/dev/disk:ro \
  --publish=<YourPort>:<YourPort> \
  --detach=true \
  --name=cadvisor \
  --privileged \
  --device=/dev/kmsg \
  gcr.io/cadvisor/cadvisor:$VERSION

#DevOps #Containers #Monitoring #cAdvisor #Grafana #Gabbar #Samba #SholayVibes
-
Yes, I'm Back in Batch!!! Blast from the past: I've had to rebuild my stupid Neo4j database a few times, and it's tedious and takes forever. The solution? Batching, of course! I used to batch a lot of things in the "olden days" as a survival skill to cope with compute/storage/IO bottlenecks... This weekend I learned how to do it with Neo4j to either: 1) shovel a ton of data into it, or 2) work with large queries inside the DB itself... It was a survival strategy then, and I guess it still is. Good to know I'm still in vogue with batches.
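A hedged sketch of the load-it-in-batches side in Python (the Cypher query, labels, and batch size are illustrative, and the Neo4j driver calls are commented out so the batching logic itself stands alone):

```python
def chunks(rows, size):
    """Yield successive batches of at most `size` rows."""
    for i in range(0, len(rows), size):
        yield rows[i:i + size]

# UNWIND turns one parameterized statement into one write per row,
# so each session.run() commits a whole batch instead of one node.
BATCH_QUERY = """
UNWIND $rows AS row
MERGE (n:Person {id: row.id})
SET n.name = row.name
"""

rows = [{"id": i, "name": f"node-{i}"} for i in range(2500)]
batches = list(chunks(rows, 1000))

# with driver.session() as session:   # driver = neo4j.GraphDatabase.driver(...)
#     for batch in batches:
#         session.run(BATCH_QUERY, rows=batch)
```

For large in-database rewrites (case 2), Cypher's `CALL { ... } IN TRANSACTIONS` serves a similar purpose, committing in chunks instead of one giant transaction.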