Planet PostgreSQLhttps://planet.postgresql.orgPlanet PostgreSQLen-usTue, 17 Mar 2026 06:53:41 +0000Hamza Sajawal: pgNow Instant PostgreSQL Performance Diagnostics in Minuteshttps://postgr.es/p/7u-<div class="elementor elementor-30035">
<div class="elementor-element elementor-element-6af0e46d e-flex e-con-boxed e-con e-parent e-con-inner elementor-element elementor-element-5003493e elementor-widget elementor-widget-text-editor">
<p>
<b>pgNow</b> <span class="c1">is a lightweight PostgreSQL diagnostic tool developed by Redgate that provides quick visibility into database performance without requiring agents or complex setup. It connects directly to a PostgreSQL instance and delivers real-time insights into query workloads, active sessions, index usage, configuration health, and vacuum activity, helping DBAs quickly identify performance bottlenecks. Because it runs as a simple desktop application, pgNow is particularly useful for quick troubleshooting and point-in-time diagnostics when a full monitoring platform is not available. </span>
</p>
<p>
<span class="c1">The tool is currently</span> <b>free to use</b><span class="c1">, and its development is actively maintained by Redgate, with potential future enhancements expected as the project evolves. It analyzes workload behavior using PostgreSQL system views and extensions such as pg_stat_activity and pg_stat_statements.</span>
</p>
</div>
<div class="elementor-element elementor-element-90e11aa e-flex e-con-boxed e-con e-parent e-con-inner elementor-element elementor-element-1c07acb elementor-widget elementor-widget-image">
<img alt="" class="attachment-large size-large wp-image-30037" height="960" src="https://stormatics.tech/wp-content/uploads/2026/03/pgnow-architecture-683x1024.png" width="640" />
</div>
<div class="elementor-element elementor-element-3b9c8f7 e-flex e-con-boxed e-con e-parent e-con-inner elementor-element elementor-element-4bc6d07 elementor-widget elementor-widget-text-editor">
<h1>
<span class="c1">Prerequisites</span>
</h1>
<h3 class="c2">
Enable pg_stat_statements in PostgreSQL
</h3>
<p>
<span class="c1">Most PostgreSQL distributions already include the pg_stat_statements extension. You only need to enable it in shared_preload_libraries and create the extension in the database.</span>
</p>
<pre>-- Enable the library in shared_preload_libraries (requires a server restart;
-- preserve any entries already in the list)
ALTER SYSTEM SET shared_preload_libraries = 'pg_stat_statements';

-- Create the extension
CREATE EXTENSION pg_stat_statements;

-- Verify the extension
SELECT extname FROM pg_extension WHERE extname = 'pg_stat_statements';</pre>
</div>
<div class="elementor-element elementor-element-4065074 e-flex e-con-boxed e-con e-parent e-con-inner elementor-element elementor-element-a4264b0 elementor-widget elementor-widget-image">
<a href="https://resources.stormatics.tech/webinar-advanced-indexes-in-postgresql-smart-indexing-for-faster-queries"><img alt="" class="attachment-large size-large wp-image-30038" height="75" src="https://stormatics.tech/wp-content/uploads/2026/03/Verify-extension.webp" width="640" /></a>
</div>
<div class="elementor-element elementor-element-1c2c018 e-flex e-con-boxed e-con e-parent e-con-inner elementor-element elementor-element-2fa054d elementor-widget elementor-widget-text-editor">
<h3 class="c2">
Configure Statement Tracking:
</h3>
<p>
<span class="c1">Set the tracking level to capture all statements:</span>
</p>
<pre>-- 'all' also tracks statements executed inside functions and procedures
ALTER SYSTEM SET pg_stat_statements.track = 'all';

-- Reload the configuration (no restart needed for this parameter)
SELECT pg_reload_conf();</pre>
<p>
<span class="c1">Verify the setting:</span>
</p>
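<p>
<span class="c1">One way to check this from psql (either form works):</span>
</p>
<pre>SHOW pg_stat_statements.track;

SELECT setting FROM pg_settings WHERE name = 'pg_stat_statements.track';</pre>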
</div>
<div class="elementor-element elementor-element-b323211 e-flex e-con-boxed e-con e-parent e-con-inner elementor-element elementor-element-8fcf1fb elementor-widget elementor-widget-image">
<img alt="" class="attachment-large size-large wp-image-30058" height="76" src="https://stormatics.tech/wp-content/uploads/2026/03/image-3.webp" width="274" />
</div>
<div class="elementor-element elementor-element-9286916 e-flex e-con-boxed e-con e-parent e-con-inner elementor-element elementor-element-65d9d22 elementor-widget elementor-widget-text-editor">
<pre>CREATE USER monitor_user WITH PASSWORD '123#abc';

-- Grant connection permission
GRANT CONNECT ON DATABASE myoddodb TO monitor_user;

-- Grant usage on schema (adjust schema name if needed)
GRANT USAGE ON SCHEMA public TO monitor_user;

-- Grant specific view permissions for monitoring
GRANT SELECT ON pg_stat_activity TO monitor_user;
GRANT SELECT ON pg_stat_database TO monitor_user;
GRANT SELECT ON pg_stat_all_tables TO monitor_user;
GRANT SELECT ON pg_stat_user_tables TO monitor_user;</pre></div></div>[...]Tue, 17 Mar 2026 06:53:41 +0000https://postgr.es/p/7u-Bruce Momjian: COMMENT to the MCP Rescuehttps://postgr.es/p/7uZ<p>
The <a class="txt2html c1" href="https://www.postgresql.org/docs/current/sql-comment.html">COMMENT</a> command has been in Postgres for decades. It allows text descriptions to be attached to almost any database object. During its long history, it was mostly seen as a nice-to-have addition to database schemas, allowing administrators and developers to more easily understand the schema. Tools like <a class="txt2html c1" href="https://www.pgadmin.org/docs/pgadmin4/9.11/table_dialog.html">pgAdmin</a> allow you to assign and view comments on database objects.
</p>
<p>
Now, in the AI era, there is something else that needs to understand database schemas — <a class="txt2html c1" href="https://thenewstack.io/why-the-model-context-protocol-won/">MCP</a> clients. Without database object comments, MCP clients can only use the database schemas, object names, and constraints. With database comments, database users can supply valuable information to allow MCP clients to more effectively match schema objects to user requests and potentially generate better SQL queries. If database users don't want to add such comments, it might be possible for generative AI to create appropriate comments, perhaps by analyzing data in the tables.
</p>
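<p>
For example, a comment can tell an MCP client what a cryptically named column actually means (the table and column names below are hypothetical):
</p>
<pre>COMMENT ON COLUMN orders.stat_cd IS
  'Order lifecycle state: P = pending, S = shipped, D = delivered, X = cancelled';

-- Comments are stored in the system catalogs and can be read back:
SELECT col_description('orders'::regclass, attnum)
  FROM pg_attribute
 WHERE attrelid = 'orders'::regclass AND attname = 'stat_cd';</pre>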
Mon, 16 Mar 2026 22:15:01 +0000https://postgr.es/p/7uZJobin Augustine: What Is in pg_gather Version 33 ?https://postgr.es/p/7uYIt started as a humble personal project a few years back. The objective was to convert all my PostgreSQL notes and learnings into an automatic diagnostic tool, so that even a new DBA could easily spot problems. The idea was simple: a tool that doesn’t need any installation but does all possible analysis and […]
Mon, 16 Mar 2026 17:03:18 +0000https://postgr.es/p/7uYCornelia Biacsics: Contributions for week 10, 2026https://postgr.es/p/7uV<p>
On Tuesday March 10, 2026 <a href="https://www.meetup.com/postgresbe/events/313227373/">PUG Belgium met for the March edition</a>, organized by Boriss Mejias and Stefan Fercot.
</p>
<p>
Speakers:
</p>
<ul>
<li>Esteban Zimanyi
</li>
<li>Thijs Lemmens
</li>
<li>Yoann La Cancellera
</li>
</ul>
<p>
Robert Haas organized a Hacking Workshop on Tuesday March 10, 2026. Tomas Vondra discussed questions about one of his talks.
</p>
<p>
<a href="https://luma.com/5pglgx8h">PostgreSQL Edinburgh meetup Mar 2026</a> met on Thursday March 12, 2026
</p>
<p>
Speakers:
</p>
<ul>
<li>Radim Marek
</li>
<li>Jimmy Angelakos
</li>
</ul>
<p>
<a href="https://eventyay.com/e/88882f3e">FOSSASIA Summit 2026</a> took place from Sunday March 8 - Tuesday March 10, 2026 in Bangkok.
</p>
<p>
PostgreSQL speakers:
</p>
<ul>
<li>Koji Annoura
</li>
<li>Charly Batista
</li>
<li>Gary Evans
</li>
<li>Joe Conway
</li>
<li>Suraj Kharage
</li>
<li>Robert Treat
</li>
<li>Sameer Kumar
</li>
<li>Roneel Kumar
</li>
<li>Sivaprasad Murali
</li>
<li>Yugo Nagata
</li>
<li>Denis Smirnov
</li>
<li>Vaibhav Dalvi
</li>
<li>Gyeongseon Park
</li>
<li>Bo Peng
</li>
<li>Brian McKerr
</li>
<li>Chris Travers
</li>
<li>Jirayut Nimsaeng
</li>
<li>Gilles Darold
</li>
<li>Rajni Baliyan
</li>
</ul>
<p>
<a href="https://live.pgconf.in/">PostgreSQL Conference India</a> took place in Bengaluru (India) from March 11 - March 13, 2026.
</p>
<p>
Organizers:
</p>
<ul>
<li>Pavan Deolasee
</li>
<li>Ashish Kumar Mehra
</li>
<li>Nikhil Sontakke
</li>
<li>Hari Kiran
</li>
<li>Rushabh Lathia
</li>
</ul>
<p>
Talk Selection Committee:
</p>
<ul>
<li>Amul Sul
</li>
<li>Dilip Kumar
</li>
<li>Marc Linster
</li>
<li>Thomas Munro
</li>
<li>Vigneshwaran c
</li>
</ul>
<p>
Speakers:
</p>
<ul>
<li>Abhijeet Rajurkar
</li>
<li>Aditya Duvuri
</li>
<li>Ajit Awekar
</li>
<li>Amit Kumar Singh
</li>
<li>Amogh Bharadwaj
</li>
<li>Amul Sul
</li>
<li>Andreas Scherbaum
</li>
<li>Ashutosh Bapat
</li>
<li>Avinash Vallarapu
</li>
<li>Boopathi Parameswaran
</li>
<li>Claire Giordano
</li>
<li>Danish Khan
</li>
<li>Deepak R Mahto
</li>
<li>Dilip Kumar
</li>
<li>Divya Bhargov
</li>
<li>Dr. M. J. Shankar Raman
</li>
<li>Franck Pachot
</li>
<li>Hari Kiran
</li>
<li>Hari Prasad
</li>
<li>Harish Perumal
</li>
<li>Jayant Haritsa
</li>
<li>Jim Mlodgenski
</li>
<li>Jobin Augustine
</li>
<li>Joe Conway
</li>
<li>Kanthanathan S
</li>
<li>Kevin Biju
</li>
<li>Koji Annoura
</li>
<li>Kranthi Kiran Burada
</li>
<li>Lalit Choudhary
</li>
<li>Michael Zhilin
</li>
<li>Mithun Chicklore Yogendra
</li>
<li>Mohit Agarwal
</li>
<li>NarendraSingh Tawar
</li>
<li>Neel Patel
</li>
<li>Neeta Goel
</li>
<li>Nikhil Chawla
</li>
<li>Nikhil Sontakke
</li>
<li>Nishad Mankar
</li>
<li>Palak </li></ul>[...]Mon, 16 Mar 2026 08:20:47 +0000https://postgr.es/p/7uVRichard Yen: Learning AI Fast with pgEdge's RAGhttps://postgr.es/p/7uW<h1 id="introduction">
Introduction
</h1>
<p>
If you’ve been paying attention to the technology landscape recently, you’ve probably noticed that AI is <strong>everywhere</strong>. New frameworks, new terminology, and a dizzying array of acronyms and jargon: <strong>LLM</strong>, <strong>RAG</strong>, <strong>embeddings</strong>, <strong>vector databases</strong>, <strong>MCP</strong>, and more.
</p>
<p>
Honestly, it’s been difficult to figure out where to start. Many tutorials either dive deep into machine learning theory (Bayesian transforms?) or hide everything behind a single API call to a hosted model. Neither approach really explains how these systems actually work.
</p>
<p>
Recently I spent some time experimenting with the <a href="https://www.pgedge.com">pgEdge</a> AI tooling after hearing Shaun Thomas’ talk at a <a href="https://prairiepostgres.org/">PrairiePostgres</a> meetup. He talked about how to set up the various components of an AI chatbot system, starting from ingesting documents into a Postgres database, vectorizing the text, setting up a RAG and then an MCP server.
</p>
<p>
When I got home I wanted to try it out for myself – props to the pgEdge team for making it all free and open-source! What surprised me most was not just that everything worked, but how easy it was to get a complete AI retrieval pipeline running locally. More importantly, it turned out to be one of the clearest ways I’ve found to understand how modern AI systems are constructed behind the scenes. Thanks so much, Shaun!
</p>
<hr />
<h1 id="the-pgedge-ai-components">
The pgEdge AI Components
</h1>
<p>
The pgEdge AI ecosystem provides several small tools that fit together naturally. I’ll go through them quickly here:
</p>
<ul>
<li>
<a href="https://github.com/pgEdge/doc-converter">Doc Converter</a> – The doc-converter normalizes documents into a format that is easy to process downstream. Whether the input is PDF, HTML, Markdown, or plain text, the converter produces clean text output suitable for ingestion.
</li>
<li>
<a href="https://github.com/pgEdge/pgedge-vectorizer">Vectorizer</a> – The vectorizer handles the process of converting text chunks into embeddings. These embeddings are numeric representations of text that capture semantic meaning. Once generated, they can be stored inside PostgreSQL using <a href="https://github.com/pgvector/pgvector">pgvector</a> and queried with similarity search.
</li>
<li>
<a href="https://github.com/pgEdge/pgedge-rag-server">Retrieval-Augmented Generation (RAG) Server</a> – The R</li></ul>[...]Mon, 16 Mar 2026 08:00:00 +0000https://postgr.es/p/7uWDave Page: AI Features in pgAdmin: AI Insights for EXPLAIN Planshttps://postgr.es/p/7uX<p>
This is the third and final post in a series covering the new AI functionality in <a href="https://www.pgadmin.org/">pgAdmin 4</a>. In the <a href="https://www.pgedge.com/blog/ai-features-in-pgadmin-configuration-and-reports">first post</a>, I covered LLM configuration and the AI-powered analysis reports, and in the <a href="https://www.pgedge.com/blog/ai-features-in-pgadmin-the-ai-chat-agent">second</a>, I introduced the AI Chat agent for natural language SQL generation. In this post, I'll walk through the AI Insights feature, which brings LLM-powered analysis to PostgreSQL EXPLAIN plans.
</p>
<p>
Anyone who has spent time optimising PostgreSQL queries knows that reading EXPLAIN output is something of an acquired skill. pgAdmin has long provided a graphical EXPLAIN viewer that makes the plan tree easier to navigate, along with analysis and statistics tabs that surface key metrics, but interpreting what you're seeing and deciding what to do about it still requires a solid understanding of the query planner's behaviour. The AI Insights feature aims to bridge that gap by providing an expert-level analysis of your query plans, complete with actionable recommendations.
</p>
<h2>
Where to Find It
</h2>AI Insights appears as a fourth tab in the EXPLAIN results panel, alongside the existing Graphical, Analysis, and Statistics tabs. It's only visible when an LLM provider has been configured, so if you don't see it, check that you've set up a provider in Preferences (as described in the first post). The tab header simply reads 'AI Insights'. To use it, run a query with EXPLAIN (or EXPLAIN ANALYZE for the most useful results, since actual execution timings give the AI much more to work with), and then click on the AI Insights tab. The analysis starts automatically when you switch to the tab, or you can trigger it manually with the Analyze button.<img src="https://a.storyblok.com/f/187930/950x887/76d389b7e7/picture1.png" />
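<p>
For instance, the kind of statement you might analyze this way (the table and columns here are hypothetical):
</p>
<pre>EXPLAIN (ANALYZE, BUFFERS)
SELECT customer_id, count(*)
FROM orders
WHERE created_at &gt;= now() - interval '7 days'
GROUP BY customer_id;</pre>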
<h2>
What the Analysis Provides
</h2>The AI Insights analysis produces three sections:
<h3>
Summary
</h3>A concise paragraph providing an overall assessment of the query plan's performance characteristics. This gives you a quick sense of whether the plan is generally healthy or has significant issues worth investigating. For well-optimised queries, the summary will confirm that the plan looks reasonable; for problematic o[...]Mon, 16 Mar 2026 06:31:22 +0000https://postgr.es/p/7uXPavel Luzanov: PostgreSQL 19: part 4 or CommitFest 2026-01https://postgr.es/p/7uU<p>
Continuing our series of PostgreSQL 19 CommitFest reviews, today we’re covering the January 2026 CommitFest.
</p>
<p>
The highlights from previous CommitFests are available here: <a href="https://postgrespro.com/blog/pgsql/5972724">2025-07</a>, <a href="https://postgrespro.com/blog/pgsql/5972743">2025-09</a>, <a href="https://postgrespro.com/blog/===FIXME===">2025-11</a>.
</p>
<ul>
<li>Partitioning: merging and splitting partitions
</li>
<li>pg_dump[all]/pg_restore: dumping and restoring extended statistics
</li>
<li>file_fdw: skipping initial rows
</li>
<li>Logical replication: enabling and disabling WAL logical decoding without server restart
</li>
<li>Monitoring logical replication slot synchronization delays
</li>
<li>pg_available_extensions shows extension installation directories
</li>
<li>New function pg_get_multixact_stats: multixact usage statistics
</li>
<li>Improvements to vacuum and analyze progress monitoring
</li>
<li>Vacuum: memory usage information
</li>
<li>vacuumdb --dry-run
</li>
<li>jsonb_agg optimization
</li>
<li>LISTEN/NOTIFY optimization
</li>
<li>ICU: character conversion function optimization
</li>
<li>The parameter standard_conforming_strings can no longer be disabled
</li>
</ul>
<p>
...
</p>
Mon, 16 Mar 2026 00:00:00 +0000https://postgr.es/p/7uUAshutosh Bapat: Professional karmahttps://postgr.es/p/7uT<p>
In the very early days of my career, an incident made me realise that performing my job irresponsibly would affect me adversely, not because it would hurt my position, but because it could affect my life in other ways as well. I was part of a team that produced software used by a financial institution where I held my account. A bug in the software caused a failure which made several accounts, including my bank account, inaccessible! Fortunately, I wasn't the one who introduced that bug, and neither was any other software engineer working on the product. It had simply crept in through the cracks that the age-old software had developed as it went through many improvements, something that happens to all architectures, software or otherwise. That was an enlightening and eye-opening experience. But professional karma is not always bad; many times it's good. When the humble work I do to earn my living also improves my living, it gives me immense satisfaction. It means it's also improving billions of lives that way across the globe.
</p>
<p>
When I was a post-graduate student at IIT Bombay, I often travelled by train, both local and intercity. The online ticketing system for long-distance trains was still in its early stages. Local train tickets were still issued at stations, and getting one required standing in a long queue. Fast forward to today: you can buy a local train ticket in a mobile app, or at a kiosk at the station, paying online through UPI. On a recent trip to IIT Bombay I bought such a ticket using GPay in a few seconds. And you know what, UPI uses PostgreSQL as an OLTP database in its system. I didn't have to go through the same painful experience, thanks to the same education and the work I am doing. Students at my alma mater no longer have to go through it either, thanks to the many PostgreSQL contributors who were once students themselves and may have had similar painful experiences in their own lives.
</p>
<div class="separator c2">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi3gfVoagA29_N1QXwEGbKfULOQo25mDpr0ek4npRV6FoxqmcHHtwqjWkgDiY2Fmk_5HO6bsokp4ULHI3Udgdo_lSYBDsByXr_Y-W6qxuv8y_mY_e9FW-4PlYY27q4hZheC7T0Ft7MoeAKqmmLVi5NzQfmWcA7G4Me1gJTMaeVtR8Zno8qUu2Ef4sqxB3U/s1280/1773395145724.jpeg" class="c1"><img border="0" height="300" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi3gfVoagA29_N1QXwEGbKfULOQo25mDpr0ek4npRV6FoxqmcHHtwqjWkgDiY2Fmk_5HO6bsokp4ULHI3Udgdo_lSYBDsByXr_Y-W6qxuv8y_mY_e9FW-4PlYY27q4hZheC7T0Ft7MoeAKqmmLVi5NzQfmWcA7G4Me1gJTMaeVtR8Zno8qUu2Ef4sqxB3U/w400-h300/1773395145724.jpeg" width="400" /></a>
</div><br />
<p>
In PGConf.India, <a href="https://www.linkedin.com/in/kojiannoura/">Koji Annoura</a>, who is a Graph database expert talked about o</p>[...]Sat, 14 Mar 2026 05:48:00 +0000https://postgr.es/p/7uTShane Borden: More Obscure Things That Make It Go “Vacuum” in PostgreSQLhttps://postgr.es/p/7uS<p class="wp-block-paragraph">
I previously blogged about ensuring that the “ON CONFLICT” directive is used in order to keep vacuum from having to do additional work. I later demonstrated how the use of the MERGE statement accomplishes the same thing.<br />
<br />
You can read the original blogs here <a href="https://stborden.wordpress.com/2024/06/18/reduce-vacuum-by-using-on-conflict-directive/" rel="noreferrer noopener" target="_blank">Reduce Vacuum by Using “ON CONFLICT” Directive</a> and here <a href="https://shaneborden.com/2024/09/04/follow-up-reduce-vacuum-by-using-on-conflict-directive/">Follow-Up: Reduce Vacuum by Using “ON CONFLICT” Directive</a>
</p>
<p class="wp-block-paragraph">
Now, in another recent customer case, I was chasing down why an application was generating tens of thousands of foreign key and constraint violations per day, and I began to wonder whether these kinds of errors also cause additional vacuum work as described in those previous blogs. Sure enough, it <strong>DEPENDS</strong>.
</p>
<p class="wp-block-paragraph">
Let’s set up a quick test to demonstrate:
</p>
<div class="wp-block-syntaxhighlighter-code">
<pre class="brush: sql; title: ; notranslate">
/* Create related tables: */
CREATE TABLE public.uuid_product_value (
id int PRIMARY KEY,
pkid text,
value numeric,
product_id int,
effective_date timestamp(3)
);
CREATE TABLE public.uuid_product (
product_id int PRIMARY KEY
);
ALTER TABLE uuid_product_value
ADD CONSTRAINT uuid_product_value_product_id_fk
FOREIGN KEY (product_id)
REFERENCES uuid_product (product_id) ON DELETE CASCADE;
/* Insert some mocked up data */
INSERT INTO public.uuid_product VALUES (
generate_series(0,200));
INSERT INTO public.uuid_product_value VALUES (
generate_series(0,10000),
gen_random_uuid()::text,
random()*1000,
ROUND(random()*100),
current_timestamp(3));
/* Vacuum Analyze Both tables */
VACUUM (VERBOSE, ANALYZE) uuid_product;
VACUUM (VERBOSE, ANALYZE) uuid_product_value;
/* Verify that there are no dead tuples: */
SELECT
schemaname,
relname,
n_live_tup,
n_dead_tup
FROM
pg_stat_all_tables
WHERE
relname in ('uuid_product_value', 'uuid_product');
schemaname | relname | n_live_tup | n_dead_tup
------------+--------------------+------------+------------
public | uuid_product_value | 10001 | 0
public</pre></div>[...]Fri, 13 Mar 2026 15:51:40 +0000https://postgr.es/p/7uSShaun Thomas: Using Patroni to Build a Highly Available Postgres Cluster—Part 2: Postgres and Patronihttps://postgr.es/p/7uR<p>
Welcome to Part two of our series about building a High Availability Postgres cluster using <a href="https://www.pgedge.com/blog/using-patroni-to-build-a-highly-available-postgres-clusterpart-1-etcd">Patroni! Part one</a> focused entirely on establishing the DCS using etcd, providing the critical layer that Patroni uses to store metadata and guarantee its leadership token uniqueness across the cluster. With this solid foundation, it's now time to build the next layer in our stack: Patroni itself. Patroni does the job of managing the Postgres service and provides a command interface for node administration and monitoring. Technically the Patroni cluster is complete at the end of this article, but stick around for part three where we add the routing layer that brings everything together. Hopefully you still have the three VMs where you installed etcd. Those will be the same place where everything else happens, so if you haven’t already gone through the steps in part one, come back when you’re ready. Otherwise, let’s get started!
</p>
<h2>
Installing Postgres
</h2>The Postgres community site has an incredibly thorough page dedicated to <a class="c1" href="https://www.postgresql.org/download/">installation on various platforms</a>. For the sake of convenience, this guide includes a simplified version of the Debian instructions. Perform these steps on all three servers. Start by setting up the PGDG repository. Then install your favorite version of Postgres. For the purposes of this guide, we’re also going to stop Postgres and drop the initial cluster the Postgres package creates. Patroni will recreate all of this anyway, and it should be in control. It’s also important to completely disable the default Postgres service since Patroni will be in charge. Finally, install the version of Patroni included in the PGDG repositories. This should be available on supported platforms like Debian and RedHat variants, but if it isn’t, you may have to resort to the <a class="c1" href="https://patroni.readthedocs.io/en/master/installation.html">official installation instructions</a>. Once those commands complete, we should have three fresh VMs ready for configuration.
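<p>
The steps described above correspond roughly to the standard Debian instructions. As a sketch (the major version 17 chosen here is an assumption; check the PGDG download page for your platform):
</p>
<pre># Add the PGDG apt repository
sudo apt install -y postgresql-common
sudo /usr/share/postgresql-common/pgdg/apt.postgresql.org.sh -y

# Install Postgres, then stop and drop the default cluster
sudo apt install -y postgresql-17
sudo systemctl stop postgresql
sudo pg_dropcluster 17 main

# Disable the stock service so it never competes with Patroni
sudo systemctl disable --now postgresql

# Install Patroni from the PGDG repository
sudo apt install -y patroni</pre>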
<h2>
Configuring Patroni the easy way
</h2>The Debian Patroni package provides a tool called that transforms a Patroni template into a configur[...]Fri, 13 Mar 2026 06:12:14 +0000https://postgr.es/p/7uRDeepak Mahto: PGConf India 2026: PostgreSQL Query Tuning: A Foundation Every Database Developer Should Buildhttps://postgr.es/p/7uP<p class="wp-block-paragraph">
Most PostgreSQL tuning advice that folks chase offers quick fixes, not an understanding of why the planner chose one access path or join method over another, more optimal one.
</p>
<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p class="wp-block-paragraph">
<strong>Tuning should not start with running ANALYZE on the tables involved in the query, but with understanding what is causing the issue and why the planner is not able to choose the optimal path on its own.</strong>
</p>
</blockquote>
<p class="wp-block-paragraph">
Most fixes we reach for when tuning SQL are along the lines of:
</p>
<pre class="wp-block-preformatted">Add an index. <br />Rewrite the query. <br />Bump work_mem. <br />Done.</pre>
<p class="wp-block-paragraph">
Except it’s not done. The same problem comes back, different query, different table, same confusion.
</p>
<h2 class="wp-block-heading">
The Real Problem
</h2>
<p class="wp-block-paragraph">
A slow query is a symptom. Statistics, DDL, query style, and the PostgreSQL version are the actual culprits.
</p>
<p class="wp-block-paragraph">
Before you touch anything, you need to answer five questions — in order:
</p>
<ul class="wp-block-list">
<li>Find it — which query actually hurts the most right now?
</li>
<li>Read the plan — what is the planner doing and where is it wrong?
</li>
<li>Check statistics — is the planner even working with accurate data?
</li>
<li>Check the DDL — is your schema helping or hiding the answer?
</li>
<li>Check GUCs & version — are the defaults silently working against you?
</li>
</ul>
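<p class="wp-block-paragraph">
The first question, for instance, is usually answered from pg_stat_statements rather than guesswork. A sketch (column names as of PostgreSQL 13 and later):
</p>
<pre class="wp-block-preformatted">-- Which statements consume the most total execution time?
SELECT calls,
       round(total_exec_time::numeric, 1) AS total_ms,
       round(mean_exec_time::numeric, 2)  AS mean_ms,
       left(query, 60)                    AS query_start
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;</pre>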
<figure class="wp-block-image size-large">
<a href="https://databaserookies.wordpress.com/wp-content/uploads/2026/03/image-2.png"><img alt="" class="wp-image-3600" height="515" src="https://databaserookies.wordpress.com/wp-content/uploads/2026/03/image-2.png?w=1024" width="1024" /></a>
<figcaption class="wp-element-caption">
5-Dimension SQL Tuning Framework
</figcaption>
</figure>
<p class="wp-block-paragraph">
Most developers skip straight to question two. Many skip to indexes without asking any question at all.
</p>
<h2 class="wp-block-heading">
What I Covered at PGConf India 2026
</h2>
<p class="wp-block-paragraph">
I presented this framework at PGConf India yesterday: a room full of developers and DBAs, sharp questions, and a lot of “I’ve hit exactly this” moments.
</p>
<p class="wp-block-paragraph">
The slides cover the core foundations for approaching query tuning, along with production gotchas including partition pruning, SARGability, CTE fences, and correlated column statistics.
</p>
<p class="wp-block-paragraph">
<a href="https://docs.google.com/presentation/d/1B9aZiZYscOaha37NWSVjeZyF06KNpkaXru5JvSaDctE/edit?usp=sharing" rel="noreferrer noopener" target="_blank">Slide – PostgreSQL Query Tuning: A Foundation Every Database Developer Should Build</a>
</p>
<figure class="wp-block-image size-large">
<a href="https://databaserookies.wordpress.com/wp-content/uploads/2026/03/image-1.png"><img alt="" class="wp-image-3596" height="588" src="https://databaserookies.wordpress.com/wp-content/uploads/2026/03/image-1.png?w=1024" width="1024" /></a>
</figure>
Fri, 13 Mar 2026 01:12:23 +0000https://postgr.es/p/7uPPavel Luzanov: PostgreSQL 19: part 3 or CommitFest 2025-11https://postgr.es/p/7uQ<p>
This article reviews the November 2025 CommitFest.
</p>
<p>
For the highlights of the previous two CommitFests, check out our last posts: <a href="https://postgrespro.com/blog/pgsql/5972724">2025-07</a>, <a href="https://postgrespro.com/blog/pgsql/5972743">2025-09</a>.
</p>
<ul>
<li>Planner: eager aggregation
</li>
<li>Converting COUNT(1) and COUNT(not_null_col) to COUNT(*)
</li>
<li>Parallel TID Range Scan
</li>
<li>COPY … TO with partitioned tables
</li>
<li>New function error_on_null
</li>
<li>Planner support functions for optimizing set-returning functions (SRF)
</li>
<li>SQL-standard style functions with temporary objects
</li>
<li>BRIN indexes: using the read stream interface for vacuuming
</li>
<li>WAIT FOR: waiting for synchronization between replica and primary
</li>
<li>Logical replication of sequences
</li>
<li>pg_stat_replication_slots: a counter for memory limit exceeds during logical decoding
</li>
<li>pg_buffercache: buffer distribution across OS pages
</li>
<li>pg_buffercache: marking buffers as dirty
</li>
<li>Statistics reset time for individual relations and functions
</li>
<li>Monitoring the volume of full page images written to WAL
</li>
<li>New parameter log_autoanalyze_min_duration
</li>
<li>psql: search path in the prompt
</li>
<li>psql: displaying boolean values
</li>
<li>pg_rewind: skip copying WAL segments already present on the target server
</li>
<li>pgbench: continue running after SQL command errors
</li>
</ul>
<p>
...
</p>
Fri, 13 Mar 2026 00:00:00 +0000https://postgr.es/p/7uQVibhor Kumar: Transparent Column Encryption in PostgreSQL: Security Without Changing Your SQLhttps://postgr.es/p/7uO<figure class="wp-block-image size-large">
<a href="https://vibhorkumar.wordpress.com/wp-content/uploads/2026/03/gemini_generated_image_f03d4tf03d4tf03d.png"><img alt="" class="wp-image-1360" height="558" src="https://vibhorkumar.wordpress.com/wp-content/uploads/2026/03/gemini_generated_image_f03d4tf03d4tf03d.png?w=1024" width="1024" /></a>
</figure>
<p class="wp-block-paragraph">
There is a moment in many database reviews when the room becomes a little too quiet.
</p>
<p class="wp-block-paragraph">
Someone asks:
</p>
<p class="wp-block-paragraph">
<strong>“Which columns in this database are encrypted?”</strong>
</p>
<p class="wp-block-paragraph">
At first, the answers sound reassuring.
</p>
<p class="wp-block-paragraph">
“We use TLS.”
</p>
<p class="wp-block-paragraph">
“The disks are encrypted.”
</p>
<p class="wp-block-paragraph">
“The application handles sensitive fields.”
</p>
<p class="wp-block-paragraph">
And then the real picture starts to emerge.
</p>
<p class="wp-block-paragraph">
Some values are encrypted in one service but not another.
</p>
<p class="wp-block-paragraph">
Some migrations remembered to apply encryption.
</p>
<p class="wp-block-paragraph">
Some scripts did not.
</p>
<p class="wp-block-paragraph">
Some backups are safe in theory, but no one wants to test that theory the hard way.
</p>
<p class="wp-block-paragraph">
That is the uncomfortable truth of database security:
</p>
<p class="wp-block-paragraph">
<strong>encryption is often present, but not always enforced where the data actually lives.</strong>
</p>
<p class="wp-block-paragraph">
That is exactly the problem I wanted to explore with the PostgreSQL extension:
</p>
<p class="wp-block-paragraph">
<strong>column_encrypt</strong>: <a href="https://github.com/vibhorkum/column_encrypt">https://github.com/vibhorkum/column_encrypt</a>
</p>
<p class="wp-block-paragraph">
This extension provides <strong>transparent column-level encryption</strong> using custom PostgreSQL datatypes so developers can read and write encrypted columns <strong>without changing their SQL queries</strong>.
</p>
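<p class="wp-block-paragraph">
The transparency comes from the datatype boundary: the type's input function encrypts on write and its output function decrypts on read, so the SQL itself never mentions crypto. As a rough mental model only (this is not the extension's code, and it deliberately uses a toy XOR "cipher" instead of real cryptography), the round trip looks like this:
</p>

```python
import base64

KEY = b"demo-key"  # stand-in for a real key fetched from a KMS or a GUC

def _xor(data: bytes) -> bytes:
    # Toy "cipher" for illustration only -- NOT real encryption.
    return bytes(b ^ KEY[i % len(KEY)] for i, b in enumerate(data))

def type_input(plaintext: str) -> str:
    # What a custom datatype's input function does on INSERT/UPDATE:
    # the value is encrypted before it ever reaches storage.
    return base64.b64encode(_xor(plaintext.encode())).decode()

def type_output(stored: str) -> str:
    # What the output function does on SELECT: decrypt transparently,
    # so queries need no explicit decrypt() calls.
    return _xor(base64.b64decode(stored)).decode()

stored = type_input("4111-1111-1111-1111")
print("on disk:", stored)               # ciphertext at rest
print("on read:", type_output(stored))  # plaintext for the client
```

<p class="wp-block-paragraph">
A real implementation has to solve the hard parts this sketch skips entirely: key management, strong ciphers, and what indexing and comparison mean for encrypted values.
</p>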
<p class="wp-block-paragraph">
And perhaps the most human part of this project is this:
</p>
<p class="wp-block-paragraph">
<strong>the idea for this project started back in 2016.</strong>
</p>
<p class="wp-block-paragraph">
It stayed with me for years as one of those engineering ideas that never quite leaves your mind — the thought that PostgreSQL itself could enforce encryption at the column level.
</p>
<p class="wp-block-paragraph">
Now I’ve finally decided to release it.
</p>
<p class="wp-block-paragraph">
This is the <strong>first public version</strong>. It’s a starting point — useful, practical, and hopefully something the PostgreSQL community can explore and build upon.
</p>
<h2 class="wp-block-heading">
<strong>Why This Matters</strong>
</h2>
<p class="wp-block-paragraph">
Encryption conversations often focus first on infrastructure.
</p>
<ul class="wp-block-list">
<li>We encrypt disks.
</li>
<li>We use TLS connections.
</li>
<li>We protect credentials.
</li>
</ul>
<p class="wp-block-paragraph">
All of these are important.
</p>
<p class="wp-block-paragraph">
But once data is inside the database, a different question matters:
</p>
<p class="wp-block-paragraph">
<strong>What happens if someone gains access to the database itself?</strong>
</p>
<p class="wp-block-paragraph">
That access might come from:
</p>
<ul class="wp-block-list">
<li>a leaked backup
</li>
<li>an overprivileged account
</li>
<li>a dump file
</li>
<li>a compromised service
</li>
<li>an operational mista</li></ul>[...]Thu, 12 Mar 2026 15:19:49 +0000https://postgr.es/p/7uORichard Yen: Debugging RDS Proxy Pinning: How a Hidden JIT Toggle Created Thousands of Pinned Connectionshttps://postgr.es/p/7uN<h1 id="introduction">
Introduction
</h1>
<p>
When using AWS RDS Proxy, the goal is to achieve connection multiplexing – many client connections share a much smaller pool of backend PostgreSQL connections, giving more resources per connection and keeping query execution running smoothly.
</p>
<p>
However, if the proxy detects that a session has changed internal state in a way it cannot safely track, it <strong>pins</strong> the client connection to a specific backend connection. Once pinned, that connection can never be multiplexed again. This was the case with a recent database I worked on.
</p>
<p>
In this case, we observed the following:
</p>
<ul>
<li>extremely high CPU usage
</li>
<li>relatively high LWLock wait times
</li>
<li>OOM killer activity on the database, maybe once every day or two
</li>
<li>thousands of active connections
</li>
</ul>
<p>
What was strange about it all was that the queries involved were relatively simple, with at most one join.
</p>
<hr />
<h1 id="finding-the-pinning-source">
Finding the Pinning Source
</h1>
<p>
To get to the root cause, one option was to look in <code class="language-plaintext highlighter-rouge">pg_stat_statements</code>. However, that approach had two problems:
</p>
<ol>
<li>Getting a clean snapshot of the statistics while thousands of queries were being actively processed would be tricky.
</li>
<li>
<code class="language-plaintext highlighter-rouge">pg_stat_statements</code> normalizes queries and does not expose the values passed to parameter placeholders.
</li>
</ol>
<p>
Instead, to see the actual parameters, we briefly enabled <code class="language-plaintext highlighter-rouge">log_statement = 'all'</code>. This immediately surfaced something interesting in the logs, which I could download and review at my own pace.
</p>
<p>
What we saw were statements like <code class="language-plaintext highlighter-rouge">SELECT set_config($2,$1,$3)</code> with parameters related to JIT configuration – that was the first real clue.
</p>
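<p>
With full statement logging in place, even a quick script over the downloaded log files makes the pattern jump out. A sketch of that kind of triage (the log lines below are fabricated, shaped like PostgreSQL's stderr log format):
</p>

```python
import re
from collections import Counter

# Fabricated sample, shaped like PostgreSQL's log output.
LOG = """\
2026-03-10 12:00:01 UTC LOG:  execute __asyncpg_stmt_1__: SELECT set_config($2,$1,$3)
2026-03-10 12:00:01 UTC DETAIL:  parameters: $1 = 'off', $2 = 'jit', $3 = 'false'
2026-03-10 12:00:02 UTC LOG:  execute __asyncpg_stmt_2__: SELECT a.id FROM accounts a WHERE a.id = $1
2026-03-10 12:00:03 UTC LOG:  execute __asyncpg_stmt_1__: SELECT set_config($2,$1,$3)
"""

# Count statement shapes; session-state changers such as set_config()
# are exactly the kind of statement that makes RDS Proxy pin a connection.
counts = Counter()
for line in LOG.splitlines():
    m = re.search(r"execute \S+: (.+)$", line)
    if m:
        counts[m.group(1)] += 1

for stmt, n in counts.most_common():
    print(n, stmt)
```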
<hr />
<h1 id="getting-to-the-bottom">
Getting to the Bottom
</h1>
<p>
After tracing the behavior through the stack, the root cause turned out to be surprisingly indirect. The application created new connections through SQLAlchemy’s asyncpg dialect, and we needed to drill down into that driver’s behavior.
</p>
<hr />
<h3 id="step-1--reviewing-how-sqlalchemy-registers-json-codecs">
Step 1 – Reviewing how SQLAlchemy registers JSON codecs
</h3>
<p>
During connection initialization, SQLAlchemy runs an <code class="language-plaintext highlighter-rouge">on_connect</code> hook:
</p>
<div class="language-python highlighter-rouge highlight">
<pre class="highlight"><code><span class="k">def</span> <span class="nf">connect</span><span class="p">(</span><span class="n">conn</span><span class="p">):</span>
</code></pre></div>[...]Thu, 12 Mar 2026 08:00:00 +0000https://postgr.es/p/7uNgabrielle roth: SCaLE23xhttps://postgr.es/p/7uLI’m back from Pasadena after SCaLE23x and another installment of PostgreSQL@SCaLE! It was really just wonderful this year, seeing old friends and making new ones, talking to people and soaking up knowledge. I’m looking forward to implementing what I learned. Expo Hall:We had a lot of booth volunteers this year. Thank you all so much; […]
Thu, 12 Mar 2026 00:38:49 +0000https://postgr.es/p/7uLBruce Momjian: The MySQL Shadowhttps://postgr.es/p/7uJ<p>
For much of Postgres's <a class="txt2html c1" href="https://www.postgresql.org/docs/current/history.html">history</a>, it has lived in the shadow of other relational systems, and for a time even in <a class="txt2html c1" href="https://momjian.us/main/blogs/pgblog/2013.html#March_5_2013">the shadow</a> of <a class="txt2html c1" href="https://momjian.us/main/writings/pgsql/central.pdf#page=21">NoSQL</a> systems. Those shadows have faded, but it is helpful to reflect on this outcome.
</p>
<p>
On the proprietary side, <a class="txt2html c1" href="https://momjian.us/main/writings/pgsql/forever.pdf#page=14">most database products</a> are now in maintenance mode. The only database to be consistently compared to Postgres was Oracle. Long-term, Oracle was never going to be able to compete against an open source development team, just like Sun's Solaris wasn't able to <a class="txt2html c1" href="https://arstechnica.com/information-technology/2009/04/oracle-acquires-sun-ars-explores-the-impact-on-open-source/">compete</a> against open source Linux. Few people would choose Oracle's database today, so it is effectively in legacy mode. The Oracle shadow is clearly fading. In fact, almost all enterprise infrastructure software is open source today.
</p>
<p>
The MySQL shadow is more complex. MySQL is not proprietary, since it is distributed as open source, so it had the potential to ride the open source wave into the enterprise, and it clearly did from the mid-1990s to the mid-2000s. However, something changed, and MySQL has been in steady <a class="txt2html c1" href="https://analyticsindiamag.com/ai-trends/the-end-of-mysql-as-we-knew-it">decline</a> for decades. Looking back, people want to ascribe a reason for the decline:
</p>
<ul>
<li>Sun <a class="txt2html c1" href="https://www.techpowerup.com/49858/sun-acquires-mysql-developer-of-the-worlds-most-popular-open-source-database">buying</a> MySQL AB
</li>
<li>Oracle <a class="txt2html c1" href="https://en.wikipedia.org/wiki/Acquisition_of_Sun_Microsystems_by_Oracle_Corporation">buying</a> Sun
</li>
<li>Poor <a class="txt2html c1" href="https://www.theregister.com/2025/09/11/oracle_slammed_for_mysql_job/">stewardship</a> of MySQL by Oracle, including recent layoffs
</li>
</ul>
<p>
<a href="https://momjian.us/main/blogs/pgblog/2026.html#March_11_2026">Continue Reading »</a>
</p>
Wed, 11 Mar 2026 14:15:02 +0000https://postgr.es/p/7uJVibhor Kumar: Beyond Features: What a PostgreSQL Strategy Discussion Taught Me About Calm, Modern Platformshttps://postgr.es/p/7uI<figure class="wp-block-image size-large">
<a href="https://vibhorkumar.wordpress.com/wp-content/uploads/2026/03/gemini_generated_image_bu5x4pbu5x4pbu5x.png"><img alt="" class="wp-image-1348" height="571" src="https://vibhorkumar.wordpress.com/wp-content/uploads/2026/03/gemini_generated_image_bu5x4pbu5x4pbu5x.png?w=1024" width="1024" /></a>
</figure>
<p class="wp-block-paragraph">
Last December, I was part of a long enterprise discussion centered on PostgreSQL.
</p>
<p class="wp-block-paragraph">
On paper, it looked familiar: a new major release, high availability and scale, Aurora migration, monitoring, operational tooling, and the growing conversation around AI-assisted operations.
</p>
<p class="wp-block-paragraph">
The usual ingredients were all there.
</p>
<p class="wp-block-paragraph">
But somewhere in the middle of that day, the tone of the room changed.
</p>
<p class="wp-block-paragraph">
It did not change when we talked about new PostgreSQL capabilities. It changed when the conversation moved to upgrades, patching, monitoring quality, and operational control.
</p>
<p class="wp-block-paragraph">
That was the moment I realized this was not really a feature discussion.
</p>
<p class="wp-block-paragraph">
It was a trust discussion.
</p>
<p class="wp-block-paragraph">
Not trust in PostgreSQL as a database. That question is mostly behind us.
</p>
<p class="wp-block-paragraph">
It was trust in something more practical: can this platform evolve without exhausting the team responsible for it? Can it scale without becoming harder to reason about? Can it be upgraded without becoming a quarterly trauma ritual? Can it be monitored without operators drowning in false signals? Can it support modernization without making every change feel dangerous?
</p>
<p class="wp-block-paragraph">
That, to me, is where the PostgreSQL conversation has matured.
</p>
<p class="wp-block-paragraph">
A modern PostgreSQL platform is not defined only by what it can do. It is defined by how calmly it can change.
</p>
<h2 class="wp-block-heading">
<strong>Why this matters now</strong>
</h2>
<p class="wp-block-paragraph">
This matters because PostgreSQL is no longer entering the enterprise through side doors. In many organizations, it is already trusted with serious workloads and is increasingly central to modernization plans.
</p>
<p class="wp-block-paragraph">
That changes the questions.
</p>
<p class="wp-block-paragraph">
A few years ago, teams often asked whether PostgreSQL was ready for enterprise use. Today, the better question is whether the <strong>operating model around PostgreSQL</strong> is ready for enterprise reality.
</p>
<p class="wp-block-paragraph">
Because the database can be strong while the surrounding practice is weak.
</p>
<p class="wp-block-paragraph">
That is where many teams struggle. They like PostgreSQL, but lag on upgrades. They have HA designs, but unclear failure playbooks. They have monitoring, but poor signal qualit</p>[...]Wed, 11 Mar 2026 13:36:44 +0000https://postgr.es/p/7uIFloor Drees: The Future of Postgres on the agenda: EDB’s PGConf.dev Previewhttps://postgr.es/p/7uMPGConf.dev is heading to Vancouver, Canada, from May 19–22, bringing together the users, developers, and community organizers driving the future of PostgreSQL. EDB is proud to be a Gold-level sponsor this year, with our own Robert Haas serving as an organizer and Jacob Champion contributing to the Program Committee. Following a highly successful Call for Papers, we’ve put together this preview of the EDB-led sessions you won't want to miss.
Wed, 11 Mar 2026 12:29:11 +0000https://postgr.es/p/7uMLukas Fittl: The Dilemma of the ‘AI DBA’https://postgr.es/p/7uGLike many in the industry, my perspective on AI tools has shifted considerably over the past year, specifically when it comes to software engineering tasks. Going from “this is nice, but doesn’t really solve complex tasks for me” to “this actually works pretty well for certain use cases.” But the more capable these tools become, the sharper one dilemma gets: you can hand off the work, but an AI agent won’t ultimately be responsible when the database goes down and your app stops working. For…
Wed, 11 Mar 2026 00:00:00 +0000https://postgr.es/p/7uGLætitia AVROT: work_mem: it's a trap!https://postgr.es/p/7uHMy friend Henrietta Dombrovskaya pinged me on Telegram. Her production cluster had just been killed by the OOM killer after eating 2 TB of RAM. work_mem was set to 2 MB. Something didn’t add up. Hetty, like me, likes playing with monster hardware. 2 TB of RAM is not unusual in her world. But losing the whole cluster to a single query during peak operations is a very different kind of problem from a 3am outage.
Wed, 11 Mar 2026 00:00:00 +0000https://postgr.es/p/7uHVirender Singla: The Part of PostgreSQL We Discuss the Most — 2https://postgr.es/p/7uC<h4>
<strong>PostgreSQL and Oracle Implementation</strong>
</h4>
<p>
In <a href="https://medium.com/%40virender-cse/the-part-of-postgresql-we-discuss-the-most-1-6c69c9d15f16">Part 1</a>, we explored the general concepts of MVCC and the implications of storing data snapshots either out-of-place or within heap storage. We can now map these methodologies to specific database engines.
</p>
<p>
The PostgreSQL MVCC implementation aligns with the DatabaseI model, whereas Oracle and MySQL are closely related to the DatabaseO model. Specifically, Oracle utilizes block versioning and stores older versions in a separate storage area known as UNDO, while PostgreSQL employs row versioning.
</p>
<p>
These engines further optimize their respective in-place or out-of-place MVCC strategies:
</p>
<ul>
<li>
<strong>Oracle (DatabaseO) Delta Storage:</strong> To improve efficiency, Oracle avoids copying an entire block to UNDO. Instead, it only stores the modified columns as a “delta.” Consequently, when a query requires an older image, the engine applies this delta to the current heap block to reconstruct the previous state.
</li>
<li>
<strong>PostgreSQL (DatabaseI) Visibility Map (VM):</strong> To mitigate the overhead of scanning the entire heap for garbage collection, PostgreSQL uses a <a href="https://www.postgresql.org/docs/current/storage-vm.html">Visibility Map</a>. This data structure maintains per-block visibility information for the heap, allowing the garbage collector to target specific blocks containing garbage instead of performing a full table scan.
</li>
<li>
<strong>Heap Only Tuple (HOT) Optimization:</strong> PostgreSQL addresses continuous index churn caused by each new version's physical address (ctid) through <a href="https://www.postgresql.org/docs/current/storage-hot.html">HOT</a> optimization. If a new row version fits within the same block as the previous version, the indexes are not updated. Instead, index access lands on the heap block at the old version, which then chains directly to the new version within the same block. Note that this is still a single block fetch.
</li>
<li>
<strong>Row Locking Mechanism:</strong> PostgreSQL utilizes the visibility counters to manage row locking as well, whereas Oracle employs a distinct data structure located in the block header for this purpose.
</li>
<li>
<strong>Handling Multiple Data Versions:</strong> When a row undergoes multiple updates, Oracle maintai</li></ul>[...]Tue, 10 Mar 2026 17:27:35 +0000https://postgr.es/p/7uCVirender Singla: The Part of PostgreSQL We Discuss the Most — 1https://postgr.es/p/7uD<p>
Early in my PostgreSQL journey, I often sensed that a conversation between two Postgres professionals inevitably revolves around vacuuming. That <strong>lighthearted</strong> observation remains relevant, as my LinkedIn feed is often filled with discussions around vacuuming and comparisons of PostgreSQL's Multi-Version Concurrency Control (MVCC) implementation to other engines like Oracle or MySQL. Given that people are naturally drawn to the most complex components of a system, I will continue this journey by exploring a detailed comparison of these database architectures, focused on their MVCC implementations.
</p>
<h3>
<strong>What is MVCC?</strong>
</h3>
<p>
Stone age databases relied on strict locking mechanisms to handle concurrency, which proved inefficient under heavy load. In these traditional models, a read operation required a shared lock that prevented other transactions from updating the record. Conversely, write operations required exclusive locks that blocked incoming reads. This resulted in significant lock contention, where <strong>readers blocked writers and writers blocked readers</strong>.
</p>
<p>
To solve this, RDBMS implemented MVCC. The idea was very simple. Rather than overwriting data immediately, maintain multiple versions of data simultaneously. This allows transactions to view a consistent snapshot of the database as it existed at a specific point in time. <strong>For instance,</strong> if User 1 starts reading a table just before User 2 starts modifying a record, User 1 sees the original version of the data without hindering User 2’s progress. Without MVCC, the system would be forced to either serialize all access — making User 2 wait — or risk data consistency anomalies like dirty or non-repeatable reads where User 1 sees uncommitted changes that might eventually be rolled back.
</p>
<p>
Database engines utilize various architectures to manage this data versioning. A particularly notable point of discussion is the comparison between “in-place” and “out-of-place” data versioning techniques. Let’s examine these approaches more closely.
</p>
<h3>
<strong>Explaining In-Place and Out-of</strong></h3>[...]Tue, 10 Mar 2026 17:26:58 +0000https://postgr.es/p/7uDFloor Drees: Shaping SQL in São Paulohttps://postgr.es/p/7uELast week, EDB engineers Matheus Alcantara and Euler Taveira attended the ISO/IEC SQL Standards Committee meeting in São Paulo as invited guests, supported remotely by veteran member Peter Eisentraut. The duo compared the collaborative environment to a PostgreSQL "Commitfest," where technical papers are proposed, debated, and refined much like code patches.
Tue, 10 Mar 2026 13:37:56 +0000https://postgr.es/p/7uEAndrew Dunstan: Validating the shape of your JSON datahttps://postgr.es/p/7uB<p>
One of the great things about PostgreSQL's jsonb type is the flexibility it gives you — you can store whatever structure you need without defining columns up front. But that flexibility comes with a trade-off: there's nothing stopping bad data from getting in. You can slap a CHECK constraint on a jsonb column, but writing validation logic in SQL or PL/pgSQL for anything beyond the trivial gets ugly fast.
</p>
<p>
I've been working on a PostgreSQL extension called <code>json_schema_validate</code> that solves this problem by letting you validate JSON and JSONB data against <a href="https://json-schema.org/" target="_blank">JSON Schema</a> specifications directly in the
</p>
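<p>
To make the "shape of your JSON data" idea concrete, here is a hand-rolled check for a tiny subset of JSON Schema (type, required, properties). This is only a sketch of the validation semantics, not the extension's implementation, which targets the real JSON Schema specification:
</p>

```python
import json

def validate(instance, schema):
    """Validate a tiny subset of JSON Schema: type, required, properties."""
    py = {"object": dict, "array": list, "string": str,
          "number": (int, float), "boolean": bool}
    t = schema.get("type")
    if t and not isinstance(instance, py[t]):
        return False
    if isinstance(instance, dict):
        # Every required key must be present...
        for key in schema.get("required", []):
            if key not in instance:
                return False
        # ...and each declared property must match its sub-schema.
        for key, sub in schema.get("properties", {}).items():
            if key in instance and not validate(instance[key], sub):
                return False
    return True

schema = {
    "type": "object",
    "required": ["name", "price"],
    "properties": {"name": {"type": "string"},
                   "price": {"type": "number"}},
}

good = json.loads('{"name": "widget", "price": 9.99}')
bad = json.loads('{"name": "widget", "price": "cheap"}')
print(validate(good, schema))  # True
print(validate(bad, schema))   # False: price is a string, not a number
```

<p>
Doing this in SQL or PL/pgSQL via a CHECK constraint is exactly the "gets ugly fast" problem the extension is built to avoid.
</p>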
Tue, 10 Mar 2026 10:13:17 +0000https://postgr.es/p/7uBDave Page: AI Features in pgAdmin: The AI Chat Agenthttps://postgr.es/p/7uK<p>
This is the second in a series of three blog posts covering the new AI functionality in <a class="c1" href="https://www.pgadmin.org/">pgAdmin 4</a>. In the <a href="https://www.pgedge.com/blog/ai-features-in-pgadmin-configuration-and-reports">first post</a>, I covered LLM configuration and the AI-powered analysis reports. In this post, I'll introduce the AI Chat agent in the query tool, and in the third, I'll explore the AI Insights feature for EXPLAIN plan analysis.
</p>
<p>
If you've ever found yourself staring at a database schema you didn't design, trying to work out the right joins to answer a seemingly simple question, you'll appreciate what the AI Chat agent brings to pgAdmin's query tool. Rather than having to alt-tab to an external AI service, paste in your schema, describe what you need, and then copy the resulting SQL back into your editor, the entire conversation now happens within the query tool itself, with full awareness of your actual database structure.
</p>
<h2>
Finding the AI Assistant
</h2>The AI Chat agent appears as a new tab alongside the Query and Query History tabs in the left panel of the query tool. It's labelled 'AI Assistant' and is only visible when an LLM provider has been configured (as described in the first post in this series). The panel header shows which LLM provider and model are currently active, so you always know what's generating your responses.<img src="https://a.storyblok.com/f/187930/950x713/021a274608/picture1.png" />
<h2>
Natural Language to SQL
</h2>The core capability of the AI Chat agent is translating natural language questions into SQL queries. You type what you want to know in plain English (or whatever language you're comfortable with), and the assistant generates the corresponding SQL, complete with an explanation of what it does and why it was written that way. For example, you might type something like: The assistant will first inspect your database schema to understand the available tables and relationships, then generate an appropriate query. The response includes both the SQL and a brief explanation, so you can understand what the query is doing before you run it. What makes this particularly useful is that the assistant doesn't just guess at your schema; it actively inspects the database using a[...]Tue, 10 Mar 2026 05:44:17 +0000https://postgr.es/p/7uKYuwei Xiao: Introducing pg_duckpipe: Real-Time CDC for Your Lakehousehttps://postgr.es/p/7uAAutomatically keep a fast, analytical copy of your PostgreSQL tables, updated in real time with no external tools needed.
Tue, 10 Mar 2026 00:00:00 +0000https://postgr.es/p/7uAUmair Shahid: Thinking of PostgreSQL High Availability as Layershttps://postgr.es/p/7uy<div class="elementor elementor-29874">
<div class="elementor-element elementor-element-6af0e46d e-flex e-con-boxed e-con e-parent e-con-inner elementor-element elementor-element-5003493e elementor-widget elementor-widget-text-editor">
<p>
<span class="c1">High availability for PostgreSQL is often treated as a single, big, dramatic decision: “Are we doing HA or not?”</span>
</p>
<p>
<span class="c1">That framing pushes teams into two extremes:</span>
</p>
<ul>
<li class="c2">
<span class="c1">a “hero architecture” that costs a lot and still feels tense to operate, or</span>
</li>
<li class="c2">
<span class="c1">a minimalistic architecture that everyone hopes will just keep running.</span>
</li>
</ul>
<p>
<span class="c1">A calmer way to design this is to treat HA and DR as layers. You start with a baseline, then add specific capabilities only when your RPO/RTO and budget justify them.</span>
</p>
<p>
<span class="c1">Let us walk through the layers from “single primary” to “multi-site DR posture”.</span>
</p>
<h2>
<span class="c1">Start with outcomes</span>
</h2>
<p>
<span class="c1">Before topology, align on three things:</span>
</p>
<p>
<span class="c1">1. Failure scope</span>
</p>
<ul>
<li>
<span class="c1">A database host fails</span>
</li>
<li>
<span class="c1">A zone or data center goes away</span>
</li>
<li>
<span class="c1">A full region outage happens</span>
</li>
<li>
<span class="c1">Human error</span>
</li>
</ul>
<p>
<span class="c1">2. RPO (Recovery Point Objective)</span>
</p>
<ul>
<li class="c2">
<span class="c1">We can tolerate up to 15 minutes of data loss</span>
</li>
<li class="c2">
<span class="c1">We want close to zero</span>
</li>
</ul>
<p>
<span class="c1">3. RTO (Recovery Time Objective)</span>
</p>
<ul>
<li class="c2">
<span class="c1">We can be back in 30 minutes</span>
</li>
<li class="c2">
<span class="c1">We want service back in under 2 minutes</span>
</li>
</ul>
<p>
<span class="c1">Here is my stance (and it saves money!): You get strong availability outcomes by layering in the right order.</span>
</p>
<h2>
<span class="c1">Layer 0 – Single primary (baseline, no backups)</span>
</h2>
<p>
<span class="c1">This is the baseline: one PostgreSQL primary in one site. All reads and writes go to it.</span>
</p>
<p>
<span class="c1">That is it. No replicas. No archiving. No backup flow in this model.</span>
</p>
</div>
<div class="elementor-element elementor-element-90e11aa e-flex e-con-boxed e-con e-parent e-con-inner elementor-element elementor-element-1c07acb elementor-widget elementor-widget-image">
<a href="https://resources.stormatics.tech/improving-postgres-performance-with-partitioning"><img alt="" class="attachment-large size-large wp-image-29876" height="360" src="https://stormatics.tech/wp-content/uploads/2026/03/1-1024x576.webp" width="640" /></a>
</div>
<div class="elementor-element elementor-element-3b9c8f7 e-flex e-con-boxed e-con e-parent e-con-inner elementor-element elementor-element-4bc6d07 elementor-widget elementor-widget-text-editor">
<p>
<span class="c1">What you get:</span>
</p>
<ul>
<li class="c2">
<span class="c1">simplicity</span>
</li>
<li class="c2">
<span class="c1">low cost</span>
</li>
<li class="c2">
<span class="c1">low operational overhead</span>
</li>
</ul>
<p>
<span class="c1">What it means operationally:</span>
</p>
<ul>
<li class="c2">
<span class="c1">Your “recovery plan” is effectively “rebuild and rehydrate from wherever you can” (which might be</span></li></ul></div></div>[...]Mon, 09 Mar 2026 14:03:16 +0000https://postgr.es/p/7uyCornelia Biacsics: Contributions for week 9, 2026https://postgr.es/p/7ux<p>
The community met on Wednesday, March 4, 2026 for the <a href="https://www.meetup.com/postgresql-user-group-nrw/events/313229102/">7th PostgreSQL User Group NRW MeetUp (Cologne, ORDIX AG)</a>. It was organised by Dirk Krautschick and Andreas Baier.
</p>
<p>
Speakers:
</p>
<ul>
<li>Robin Riel
</li>
<li>Jan Karremans
</li>
</ul>
<p>
<a href="https://www.meetup.com/postgresql-meetup-berlin/events/313412510/">PostgreSQL Berlin March 2026 Meetup</a> took place on March 5, 2026 organized by Andreas Scherbaum and Sergey Dudoladov.
</p>
<p>
Speakers:
</p>
<ul>
<li>Andreas Scherbaum
</li>
<li>Tudor Golubenco
</li>
<li>Narendra Tawar
</li>
<li>Kai Wagner
</li>
</ul>
<p>
Kai Wagner wrote about his experience at the meetup <a href="https://www.linkedin.com/pulse/postgresql-berlin-meetup-march-2026-kai-wagner-dvwqf/">PostgreSQL Berlin Meetup - March 2026</a>
</p>
<p>
Andreas Scherbaum <a href="https://andreas.scherbaum.la/post/2026-03-06_postgresql-berlin-march-2026-meetup/">wrote a blog posting about the Meetup</a>.
</p>
<p>
<a href="https://www.socallinuxexpo.org/scale/23x">SCALE 23x</a> (March 5-8, 2026) had a dedicated PostgreSQL track, filled with the following contributions:
</p>
<p>
Trainings:
</p>
<ul>
<li>Elizabeth Christensen
</li>
<li>Devrim Gunduz
</li>
<li>Ryan Booz
</li>
</ul>
<p>
Talks:
</p>
<ul>
<li>Nick Meyer
</li>
<li>Tristan Ahmadi
</li>
<li>Alexandra Wang
</li>
<li>Christophe Pettus
</li>
<li>Max Englander
</li>
<li>Magnus Hagander
</li>
<li>Bruce Momjian
</li>
<li>Robert Treat
</li>
<li>Payal Singh
</li>
<li>German Eichberger
</li>
<li>Jimmy Angelakos
</li>
<li>Justin Frye
</li>
</ul>
<p>
SCALE 23x PostgreSQL Booth volunteers:
</p>
<ul>
<li>Bruce Momjian
</li>
<li>Christine Momjian
</li>
<li>Gabrielle Roth
</li>
<li>Jennifer Scheuerell
</li>
<li>Magnus Hagander
</li>
<li>Devrim Gunduz
</li>
<li>Elizabeth Garret Christensen
</li>
<li>Robert Treat
</li>
<li>Pavlo Golub
</li>
<li>Phill Vacca
</li>
<li>Jimmy Angelakos
</li>
<li>Erika Miller
</li>
<li>Aya Griswold
</li>
<li>Alex Wood
</li>
<li>Donald Wong
</li>
<li>Derya Gumustel
</li>
</ul>
Mon, 09 Mar 2026 10:31:43 +0000https://postgr.es/p/7uxDave Page: AI Features in pgAdmin: Configuration and Reportshttps://postgr.es/p/7uz<p>
This is the first in a series of three blog posts covering the new AI functionality coming in <a class="c1" href="https://www.pgadmin.org/">pgAdmin 4</a>. In this post, I'll walk through how to configure the LLM integration and introduce the AI-powered analysis reports; in the second, I'll cover the AI Chat agent in the query tool; and in the third, I'll explore the AI Insights feature for EXPLAIN plan analysis.
</p>
<p>
Anyone who manages PostgreSQL databases in a professional capacity knows that keeping on top of security, performance, and schema design is an ongoing endeavour. You might have a checklist of things to review, or perhaps you rely on experience and intuition to spot potential issues, but it is all too easy for something to slip through the cracks, especially as databases grow in complexity. We've been thinking about how AI could help with this, and I'm pleased to introduce a suite of AI-powered features in pgAdmin 4 that bring large language model analysis directly into the tool you already use every day.
</p>
<h2>
Configuring the LLM Integration
</h2>Before any of the AI features can be used, you'll need to configure an LLM provider. pgAdmin supports four providers out of the box, giving you flexibility to choose between cloud-hosted models and locally-running alternatives:
<ul>
<li>Anthropic (Claude models)
</li>
<li>OpenAI (GPT models)
</li>
<li>Ollama (locally-hosted open-source models)
</li>
<li>Docker Model Runner (built into Docker Desktop 4.40 and later)
</li>
</ul>
<h3>
Server Configuration
</h3>At the server level, there is a master switch in (or, more typically, ) that controls whether AI features are available at all:When is set to , all AI functionality is hidden from users and cannot be enabled through preferences. This gives administrators full control over whether AI features are permitted in their environment, which is particularly important in organisations with strict data governance policies.Below the master switch, you'll find default configuration for each provider:For the cloud providers (Anthropic and OpenAI), API keys are read from files on di[...]Mon, 09 Mar 2026 05:31:29 +0000https://postgr.es/p/7uzRadim Marek: Production Query Plans Without Production Datahttps://postgr.es/p/7uw<p>
In the <a href="https://boringsql.com/posts/postgresql-statistics/">previous article</a> we covered how the PostgreSQL planner reads <code>pg_class</code> and <code>pg_statistic</code> to estimate row counts, choose join strategies, and decide whether an index scan is worth it. The message was clear: when statistics are wrong, everything else goes with it.
</p>
<div class="sidenote">
Streaming replication provides bit-for-bit replication, so all replicas share the same statistics as the primary server.
</div>But there was one thing we didn't talk about. Statistics are specific to the database cluster that generated them. The primary way to populate them is <code>ANALYZE</code>, which requires the actual data.
<p>
PostgreSQL 18 changed that. Two new functions: <code>pg_restore_relation_stats</code> and <code>pg_restore_attribute_stats</code> write numbers directly into the catalog tables. Combined with <code>pg_dump --statistics-only</code>, you can treat optimizer statistics as a deployable artifact. Compact, portable, plain SQL.
</p>
<p>
The feature was <a href="https://www.cybertec-postgresql.com/en/preserve-optimizer-statistics-during-major-upgrades-with-postgresql-v18/" rel="external">driven by the upgrade use case</a>. In the past, major version upgrades left <code>pg_statistic</code> empty, forcing you to run <code>ANALYZE</code>, which might take hours on large clusters. With PostgreSQL 18, upgrades now transfer statistics automatically. But that's just the beginning. The same logic lets you export statistics from production and inject them anywhere - test database, local debugging, or as part of CI pipelines.
</p>
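<p>
Put together, the round trip is short. The sketch below is a dry run that only prints the commands (the connection strings are placeholders); drop the <code>echo</code>s to run it for real against a PostgreSQL 18 pair:
</p>

```shell
# Placeholders -- point these at your own clusters.
PROD="postgresql://prod.example.com/appdb"
DEV="postgresql://localhost/appdb"

# 1. From production: export the schema and the optimizer statistics.
#    No table data leaves the cluster.
echo "pg_dump --schema-only -d $PROD -f schema.sql"
echo "pg_dump --statistics-only -d $PROD -f stats.sql"

# 2. Into the test cluster: stats.sql is plain SQL that calls
#    pg_restore_relation_stats() / pg_restore_attribute_stats().
echo "psql -d $DEV -f schema.sql"
echo "psql -d $DEV -f stats.sql"

# 3. EXPLAIN in the test cluster now reasons with production row counts.
```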
<h2 id="the-problem">
The problem
</h2>
<p>
Your CI database has 1,000 rows. Production has 50 million. The planner makes completely different decisions for each. Running <code>EXPLAIN</code> in CI tells you nothing about the production plan. This is the core premise behind <a href="https://boringsql.com/products/regresql">RegreSQL</a>. Catching query plan regressions in CI is far more reliable when the planner sees production-scale statistics.
</p>
<p>
Same applies to <strong>debugging</strong>. A query is slow in production and you want to reproduce the plan locally, but your database has different statistics, so the planner chooses a different path. Porting production statistics gives you a snapshot of the reasoning the planner does in production, without actually going to production.
</p>
<h2 id="pg-restore-relation-stats">
pg_restore_relation_stats
</h2>
<p>
The first of functi</p>[...]Sun, 08 Mar 2026 21:15:56 +0000https://postgr.es/p/7uw