Latest Blog Posts

Improving PostgreSQL Performance with Partitioning
Posted by Umair Shahid in Stormatics on 2025-05-09 at 11:30

My recommended methodology for improving PostgreSQL performance starts with query optimization. The second step is architectural improvements, one of which is partitioning large tables.

Partitioning in PostgreSQL is one of those advanced features that can be a powerful performance booster. If your PostgreSQL tables are becoming very large and sluggish, partitioning might be the cure.

The Big Table Problem

Large tables tend to grow uncontrollably, especially in OLTP or time-series workloads. As millions or billions of rows accumulate, you begin to notice:

  • Slow queries due to full table scans or massive indexes.
  • Heavy I/O usage, especially when indexes cannot fit in memory.
  • Bloated memory usage during operations like sorting or joining.
  • Increased maintenance cost, with longer VACUUM, ANALYZE, and REINDEX times.
  • Hard-to-manage retention policies, as purging old rows becomes expensive.

These problems are amplified in cloud-hosted databases, where every IOPS, GB, or CPU upgrade increases cost.
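
For context, declarative partitioning splits one logical table into smaller physical pieces, which is the approach the post goes on to discuss. A minimal sketch of the typical range-partitioning pattern for time-series data, with hypothetical table and column names:

CREATE TABLE measurements (
    id          bigint NOT NULL,
    recorded_at timestamptz NOT NULL,
    payload     jsonb
) PARTITION BY RANGE (recorded_at);

CREATE TABLE measurements_2025_05 PARTITION OF measurements
    FOR VALUES FROM ('2025-05-01') TO ('2025-06-01');

CREATE TABLE measurements_2025_06 PARTITION OF measurements
    FOR VALUES FROM ('2025-06-01') TO ('2025-07-01');

-- Retention becomes cheap: an old month is removed by detaching or
-- dropping its partition instead of running an expensive bulk DELETE, e.g.
-- DROP TABLE measurements_2025_05;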

The post Improving PostgreSQL Performance with Partitioning appeared first on Stormatics.

LinuxFest Northwest 2025 PostgreSQL Booth Report
Posted by gabrielle roth on 2025-05-09 at 03:35
Jenn and I headed to Bellingham April 26 & 27th for LinuxFest Northwest, a 100% volunteer-run, free event. This is always a fun conference and there were lots of familiar faces this year! Some stats:
  • Over 100 booth visitors
  • Just wanted to say how much they love Postgres: 18
  • Asked about migrating from MySQL to Postgres: 1
  • “We love […]

Understanding PostgreSQL Write-Ahead Logging (WAL)
Posted by vignesh C in Fujitsu on 2025-05-09 at 01:05

Earlier this year, I had the incredible opportunity to present at PGConf India, where I delved into the intricacies of Write-Ahead Logging (WAL) in PostgreSQL. My presentation aimed to demystify this crucial database feature that ensures data integrity and enhances performance. 

pgroll 0.12.0 update
Posted by Noémi Ványi in Xata on 2025-05-09 at 00:00
pgroll v0.12 includes usability improvements like verbose mode and idempotent migrate command.

Extending PostgreSQL with Java: Overcoming Integration Challenges
Posted by cary huang in Hornetlabs Technology on 2025-05-08 at 20:56

Why Bridge Java with C in the First Place?

Bridging Java and C combines the strengths of both languages. A C application may rely on Java for modern libraries, cloud APIs, or UI and web capabilities, while a Java app might need C for low-level system access or performance-critical tasks. Sometimes, there’s simply no alternative—certain features only exist in one language. While modern languages like C++ and Go offer both high- and low-level control, many systems aren’t written in them. For existing C or Java codebases, bridging is often the most practical way to extend functionality without a full rewrite.

In my case, the goal was to build a C-based PostgreSQL extension called SynchDB that integrates with the Java-based Debezium Embedded library to enable heterogeneous database replication into PostgreSQL. Debezium already provides mature connectors for databases like MySQL, SQL Server, and Oracle, so rather than reinventing the wheel in C, I chose to bridge the two runtimes using JNI. This approach allows PostgreSQL to consume change data from other systems in real time. However, maintaining both C and Java components within the PostgreSQL extension framework introduces unique challenges—such as cross-language memory management, threading compatibility, signal handling, and debugging across runtime boundaries. Let’s explore some of those next.

SynchDB – A Heterogeneous Database Replication Tool for PostgreSQL

This project serves as a practical case study for the complexities of bridging Java and C, highlighting the technical challenges and design decisions involved in maintaining two runtimes under PostgreSQL’s extension framework. The basic architecture diagram is shown below, where the yellow box represents the Java space (the Java Virtual Machine, or JVM), the blue box represents the PostgreSQL extension (C space), and the orange box represents the PostgreSQL core.

The working principle of the SynchDB extension starts by instantiating a JVM and running a p

[...]

Waiting for Postgres 18: Accelerating Disk Reads with Asynchronous I/O
Posted by Lukas Fittl on 2025-05-07 at 07:45
With the Postgres 18 Beta 1 release this week, a multi-year effort and significant architectural shift in Postgres is taking shape: Asynchronous I/O (AIO). These capabilities are still under active development, but they represent a fundamental change in how Postgres handles I/O, offering the potential for significant performance gains, particularly in cloud environments where latency is often the bottleneck. Why asynchronous I/O matters · How Postgres 17’s read streams paved the way · New io_method…
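
The teaser mentions a new io_method setting; if you are running a Postgres 18 beta, the AIO-related settings can be inspected from SQL (a sketch; setting names and defaults are per the beta and may still change before the final release):

-- Show the AIO-related configuration on a Postgres 18 beta instance
SELECT name, setting, short_desc
FROM   pg_settings
WHERE  name IN ('io_method', 'io_workers');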

Unleashing the Power of PostgreSQL with pgEdge Distributed Multi-Master Replication and Postgres Platform - Part 1
Posted by Ahsan Hadi in pgEdge on 2025-05-07 at 04:47

Before we delve into the main subject of this blog, it is essential to understand the benefits of PostgreSQL replication, and the difference between single-master replication (SMR) and multi-master replication (MMR). In every modern business application, the database is becoming a critical part of the architecture and the demand for making the database performant and highly available is growing tremendously.

Planning Ahead for Better Performance

Our goal when designing a system for high performance is to make the database more efficient when handling an application request - this ensures that the database does not become a business bottleneck. If your database resides on a single host, the resources of the system that is hosting the database can easily be exhausted; you need a system that supports scaling the database so it can respond more effectively to the application's heavy load.

With pgEdge Distributed Postgres and the power of PostgreSQL, you can perform both horizontal and vertical scaling. The technique of replicating data across multiple PostgreSQL databases that are running on multiple servers can also be considered horizontal scaling. The data is not distributed, but database changes are replicated to each cluster node so the application load can be divided across multiple machines to achieve better performance.

Reliability and high availability are also crucial for a powerful and responsive system:
  • Reliability means that the database is able to respond to user/application requests at all times with consistency and without any server interruption.
  • High-availability is also a critical consideration that ensures that database operations are not interrupted and the database downtime is minimized.
Statistically, downtime per year reflects the ability of your database and application to handle failures and outages without user downtime. Often, downtime per year is negotiated into a service level agreement (SLA) for applications that require high-availability; this clause specifies the cumula[...]

Upgrade PostgreSQL from 16 to 17 on Ubuntu 25.04
Posted by Paolo Melchiorre in ITPUG on 2025-05-06 at 22:00

Howto guide for upgrading PostgreSQL from version 16 to 17 on Ubuntu, after its upgrade from version 24.10 to 25.04 (Plucky Puffin).

CREATE INDEX: Data types matter
Posted by Hans-Juergen Schoenig in Cybertec on 2025-05-06 at 04:00

In the PostgreSQL world, as well as in other database systems, data types play a crucial role in ensuring optimal performance, efficiency, and correct semantics. Moreover, some data types are inherently easier and faster to index than others. Many people are not aware of the fact that the data type indeed makes a difference, so let us take a look and see how long it takes to index the very same data using different types.

Creating some sample data

To show what difference a data type makes, we first have to create a sample table. In this case, it contains 5 columns of the data types that we want to inspect to understand how they behave:

blog=# CREATE TABLE t_demo (
        v1      int, 
        v2      int8, 
        v3      float, 
        v4      numeric, 
        v5      text
);
CREATE TABLE

Finally, we use our good old friend (the generate_series function), which has served me very well over the years:

blog=# INSERT INTO t_demo
        SELECT x, x, x, x, x FROM (
                SELECT  (random()*10000000)::int4 AS x 
                FROM    generate_series(1, 50000000)
        ) AS y;
INSERT 0 50000000

What this does is generate 50 million random rows and put them into my table. Note that all column entries are identical to ensure fairness for our index creation.

The data might look as follows:

blog=# SELECT * FROM t_demo LIMIT 10;
   v1    |   v2    |   v3    |   v4    |   v5    
---------+---------+---------+---------+---------
 9443332 | 9443332 | 9443332 | 9443332 | 9443332
 2220480 | 2220480 | 2220480 | 2220480 | 2220480
 1328189 | 1328189 | 1328189 | 1328189 | 1328189
 4506728 | 4506728 | 4506728 | 4506728 | 4506728
 8148249 | 8148249 | 8148249 | 8148249 | 8148249
   74086 |   74086 |   74086 |   74086 | 74086
 4160715 | 4160715 | 4160715 | 4160715 | 4160715
 9193039 | 9193039 | 9193039 | 9193039 | 9193039
 4062983 | 4062983 | 4062983 | 4062983 | 4062983
 7609357 | 7609357 | 7609357 | 7609357 | 7609357
(10 rows)

Note that PostgreSQL does not necessarily write data to disk immediat

[...]
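
The excerpt ends before the actual benchmark, but the comparison it sets up presumably looks something like this (index names are mine; the timings will of course vary by hardware):

\timing on
CREATE INDEX idx_v1 ON t_demo (v1);   -- int
CREATE INDEX idx_v2 ON t_demo (v2);   -- int8
CREATE INDEX idx_v3 ON t_demo (v3);   -- float
CREATE INDEX idx_v4 ON t_demo (v4);   -- numeric
CREATE INDEX idx_v5 ON t_demo (v5);   -- text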

Mini Summit 5: Extension Management in CNPG
Posted by David Wheeler in Tembo on 2025-05-05 at 21:51
Orange card with large black text reading “Extension Management in CNPG”. Smaller text below reads “Gabriele Bartolini (EDB)” and the date, “05.07.2025”.

The last Extension Ecosystem Mini-Summit is upon us. How did that happen?

Join us for a virtual conference session featuring Gabriele Bartolini, who will be discussing Extension Management in CNPG. I’m psyched for this one, as the PostgreSQL community has contributed quite a lot to improving extension management in CloudNativePG in the past year, some of which we covered previously. If you miss it, the video, slides, and transcript will appear here soon.

Though it may take a week or two to get the transcripts done, considering that PGConf.dev is next week and features the Extension Ecosystem Summit on Tuesday, 13 May in Montréal, Canada. Hope to see you there; be sure to say “hi!”

PgPedia Week, 2025-05-04
Posted by Ian Barwick on 2025-05-05 at 20:22
PostgreSQL 18 changes this week

The PostgreSQL 18 release is slowly taking shape, with the first draft of the release notes now available. Corrections and additions can be sent via the pgsql-hackers thread "PG 18 release notes draft committed".

PostgreSQL 18 articles
  • Postgres 18 Release Notes (2025-05-02) - Bruce Momjian
  • Waiting for PostgreSQL 18 – Allow NOT NULL constraints to be added as NOT VALID (2025-05-01) - Hubert 'depesz' Lubaczewski
  • PostgreSQL 18: part 4 or CommitFest 2025-01 (2025-04-29) - Pavel Luzanov / PostgresPro
  • Update Your Control Files (2025-04-28) - David E. Wheeler

more...

AI in the DBeaver Query Editor
Posted by Grant Fritchey on 2025-05-05 at 14:15

You know I had to do it as soon as I found it was possible. Yes, I installed and enabled AI in the DBeaver Query Editor so I can use AI with my PostgreSQL database work. Let’s face it. It was inevitable. However, the setup isn’t intuitive. Setting Up in DBeaver I’m going to assume […]

The post AI in the DBeaver Query Editor appeared first on Grant Fritchey.

Waiting for PostgreSQL 18 – Add function to get memory context stats for processes
Posted by Hubert 'depesz' Lubaczewski on 2025-05-05 at 13:14
On 8th of April 2025, Daniel Gustafsson committed patch: Add function to get memory context stats for processes   This adds a function for retrieving memory context statistics and information from backends as well as auxiliary processes. The intended usecase is cluster debugging when under memory pressure or unanticipated memory usage characteristics.   When calling … Continue reading "Waiting for PostgreSQL 18 – Add function to get memory context stats for processes"

Checklist for Meetup Attendees
Posted by Andreas Scherbaum on 2025-05-04 at 22:00
This blog posting is part of a series about organizing Meetups. While mostly centered around PostgreSQL Meetups, large parts can be adapted by other Meetups as well. All postings in this series: Checklist for Meetup Attendees (this posting) Checklist for Meetup Speakers Checklist for Meetup Organizers Overview posting Attending an in-person Meetup You found an interesting Meetup and would like to sign up. Before you sign up, please read the entire Meetup description.

Checklist for Meetup Organizers
Posted by Andreas Scherbaum on 2025-05-04 at 22:00
This blog posting is part of a series about organizing Meetups. While mostly centered around PostgreSQL Meetups, large parts can be adapted by other Meetups as well. All postings in this series: Checklist for Meetup Attendees Checklist for Meetup Speakers Checklist for Meetup Organizers (this posting) Overview posting You are organizing a Meetup. Congratulations, this is big! Here are some recommendations: First things first: find a date. Pay attention to school holidays and public holidays, especially for in-person Meetups.

Checklist for Meetup Speakers
Posted by Andreas Scherbaum on 2025-05-04 at 22:00
This blog posting is part of a series about organizing Meetups. While mostly centered around PostgreSQL Meetups, large parts can be adapted by other Meetups as well. All postings in this series: Checklist for Meetup Attendees Checklist for Meetup Speakers (this posting) Checklist for Meetup Organizers Overview posting You are speaking at a Meetup. Congratulations! General advice: check your slides for good contrast and sufficiently large fonts.

Another look into PostgreSQL CTE materialization and non-idempotent subqueries
Posted by Shayon Mukherjee on 2025-05-04 at 12:00
A few days ago, I wrote about a surprising planner behavior with CTEs, DELETE, and LIMIT in PostgreSQL, a piece I hastily put together on a bus ride. That post clearly only scratched the surface of a deeper issue that I’ve since spent way too many hours exploring. So here are some more formed thoughts and findings. The core issue revisited Let’s quickly recap: when using a query like the following, the planner might execute your query in ways you don’t expect.

How to handle files?
Posted by Sergey Solovev on 2025-05-03 at 15:21

Greetings!

Fault tolerance is a very important aspect of every non-startup application.
It can be described with the following definition:

Fault tolerance is the ability of a system to maintain proper operation despite failures or faults in one or more of its components.

But this gives only a slight overview - fault tolerance concerns many areas, especially when we are talking about software engineering:

  • Network failures (i.e. connection halt due to power outage on intermediate router)
  • Dependent service unavailability (i.e. another microservice)
  • Hardware bugs (i.e. Pentium FDIV bug)
  • Storage layer corruptions (i.e. bit rot)

As a database developer I'm interested in the latter - in the end, all data is stored on disks.

But you should know that not only disks can lead to such faults. There are other pieces that can misbehave, or it may be the developer's mistake of not working with those parts correctly.

I'm going to explain how this 'file write stack' (named by analogy with the 'network stack') works.
Of course, the main concern will be fault tolerance.

Application

Everything starts in the application's code. Usually, there is a separate interface to work with files.

Each PL (programming language) has its own interface. Some examples:

  • fwrite - C
  • std::fstream.write - C++
  • FileStream.Write - C#
  • FileOutputStream.Write - Java
  • open().write - Python
  • os.WriteFile - Go

These are all facilities provided by the PL itself: read, write, etc.
Their main advantage is platform independence: the runtime (e.g. C#) or the compiler (e.g. C) implements them.
But this has drawbacks. One of them is buffering - all calls to this virtual "write" function store the data to be written in a special buffer, so that it can later be written all at once (with a single write syscall).

According to their documentation, all of the PLs above support buffering:

[...]

Postgres 18 Release Notes
Posted by Bruce Momjian in EDB on 2025-05-03 at 00:15

I have just completed the first draft of the Postgres 18 release notes. It includes a little developer community feedback but still needs more XML markup and links.

The release note feature count is 206. There is a strong list of optimizer, monitoring, and constraint improvements. Postgres 18 Beta 1 should be released soon. The final Postgres 18 release is planned for September/October of this year.

Anatomy of a Database Operation
Posted by Karen Jex in Crunchy Data on 2025-05-02 at 16:49

Slides and transcript from my talk, "Anatomy of a Database Operation", at DjangoCon Europe in Dublin on 25 April 2025.

I'll share the recording as soon as it's available.

Introduction

This talk seemed like a great idea back in December 2024 when I asked the Django community for suggestions of database topics people wanted to hear about:

I'd love to be a DjangoCon Europe speaker again, because it's one of my favourite events. To that end, I'm looking for help to put together an irresistible talk submission! If you have suggestions for database/Postgres related topics that you'd really like to hear about, or burning database questions that you want to know the answer to, please let me know. I'm counting on you!

One of the suggestions was:

"how databases work and store data under the hood [so I can] build a complete mental model of what happens once you call SELECT … or INSERT."

Great idea, I thought. And then I realised just how much there was behind that one simple request!

So here goes...


I even learnt Django for the purposes of this talk!

For demonstration purposes, I created this highly sophisticated web site that displays beer information that’s stored in a Postgres database. You can list beers, types of beer and breweries, and you can search for your favourite beer or add a new one.

But what’s happening behind the scenes when you do those things?

  • How is the request passed to the database?
  • How are the results retrieved and returned to you?
  • How’s the data stored, retrieved and modified?
  • And why should you even care?

One reason you might care is that users probably aren't happy when they see this ("LOADING, please wait...").


Or this ("DATABASE ERROR!").


This is me!
As you can see from the diagram representing my career so far (and as you already know if you've read my posts or watched my talks before), I have a database background.

I was a DBA for 20 years before I moved into database consultan

[...]

Mini Summit 4 Transcript: The User POV
Posted by David Wheeler in Tembo on 2025-05-01 at 21:02
Orange card with large black text reading “The User POV”. Smaller text above reads “04.23.2025” and below reads “Celeste Horgan (Aiven), Sonia Valeja (Percona), & Alexey Palazhchenko (FerretDB)”

On April 23, we hosted the fourth of five (5) virtual Mini-Summits that lead up to the big one at the Postgres Development Conference (PGConf.dev), taking place May 13-16 in Montréal, Canada. Celeste Horgan, Developer Educator at Aiven, Sonia Valeja, PostgreSQL DBA at Percona, and Alexey Palazhchenko, CTO of FerretDB, joined for a panel discussion moderated by Floor Drees.

And now, the transcript of “The User POV” panel, by Floor Drees

Introduction

My name is Floor, I’m one of the organizers of these Extension Ecosystem Mini-Summits. Other organizers are also here:

The stream and the closed captions available for the recording are supported by PGConf.Dev and their gold level sponsors, Google, AWS, Huawei, Microsoft, and EDB.

Next, and last in this series, on May 7 we’re gonna have Gabriele Bartolini talk to us about Extension Management in CloudNativePG. Definitely make sure you head over to the Meetup page, if you haven’t already, and RSVP for that one!

The User POV

Floor: For the penultimate edition of this series, we’re inviting a couple of Postgres extension and tooling users to talk about how they pick and choose projects that they want to use, how they do their due diligence, and their experience with running extensions.

But I just wanted to set the context for the meeting today. We thought that being in the depth of it all, if you’re an extension developer, you kind of lose the perspective of what it’s like to use extensions and other auxiliary tooling. You lose that user’s point of view. But users, maybe they’re

[...]

Waiting for PostgreSQL 18 – Allow NOT NULL constraints to be added as NOT VALID
Posted by Hubert 'depesz' Lubaczewski on 2025-05-01 at 10:55
On 7th of April 2025, Álvaro Herrera committed patch: Allow NOT NULL constraints to be added as NOT VALID   This allows them to be added without scanning the table, and validating them afterwards without holding access exclusive lock on the table after any violating rows have been deleted or fixed.   Doing ALTER TABLE … Continue reading "Waiting for PostgreSQL 18 – Allow NOT NULL constraints to be added as NOT VALID"
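
Based on the commit description, usage presumably looks roughly like the following (hypothetical table and constraint names; check the PostgreSQL 18 documentation for the final syntax):

-- Add the constraint without scanning the table:
ALTER TABLE orders
    ADD CONSTRAINT orders_customer_id_not_null NOT NULL customer_id NOT VALID;

-- Later, after fixing or deleting any violating rows, validate it
-- without holding an access exclusive lock for the duration of the scan:
ALTER TABLE orders VALIDATE CONSTRAINT orders_customer_id_not_null;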

Queries on Vacuum
Posted by Dave Stokes on 2025-04-30 at 15:22

 I am (slowly) adding handy PostgreSQL queries to my GitHub, and Vacuum is the newest category.  The end goal is to have a compilation of queries for those of us who need to keep an instance healthy.

Over the years, I have collected hundreds of various queries and hate hunting them down in my code snippet library. Finally, they will be in one place and easy to search. 

Please contribute if you have similar or better queries (hint, hint!).
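
For illustration, the sort of query such a collection gathers might look like this (a sketch against the standard statistics views; the ordering and limit are arbitrary):

-- Tables with the most dead tuples and their last (auto)vacuum times
SELECT relname,
       n_dead_tup,
       last_vacuum,
       last_autovacuum
FROM   pg_stat_user_tables
ORDER  BY n_dead_tup DESC
LIMIT  10;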

PostgreSQL Trusted Extensions for Beginners
Posted by Pavlo Golub in Cybertec on 2025-04-30 at 08:09

Introduction

Recently, we had a long discussion in our internal chat about the concept of Trusted Extensions in PostgreSQL. It became clear that while the feature is very useful, it’s often misunderstood — especially by beginners. Let's fix that!

This post explains what trusted extensions are, why they exist, how they work, and provides some important hints and warnings for everyday use.

What Are PostgreSQL Extensions?

An extension is a package of SQL scripts, types, functions, and sometimes even compiled C code that extends PostgreSQL's capabilities.
Extensions are installed on the server and then enabled inside a database using:

CREATE EXTENSION my_extension;

Normally, installing or enabling an extension requires superuser privileges, because extensions can modify how the database server behaves.

What Does "Trusted" Mean?

Trusted extensions allow non-superusers (regular database users) to enable certain extensions themselves using CREATE EXTENSION, without needing superuser rights.

In simple words:

  • If an extension is trusted, a user with CREATE privilege on a database can activate it.
  • If an extension is not trusted, only a superuser can activate it.

Important:
"Trusted" does not mean:

  • The extension is bug-free.
  • The code has been officially audited by PostgreSQL core developers.
  • It is completely safe in every possible scenario.

It simply means:

"We believe that enabling this extension should not allow users to bypass security or harm the database server."

How Does PostgreSQL Know if an Extension is Trusted?

Each extension has a control file (.control) where its properties are described.
Inside that file, the line:

trusted = true

tells PostgreSQL that the extension can be enabled by non-superusers.

If the line is missing, the extension is considered untrusted by default.

Example of a simple control file:

comment = 'Extension for text search'
default_version = '1.0'
relocatable = false
superuser = fal
[...]

A PostgreSQL planner gotcha with CTEs DELETE and LIMIT
Posted by Shayon Mukherjee on 2025-04-29 at 12:00
I recently discovered an unexpected behavior in PostgreSQL involving a pattern of using a Common Table Expression (CTE) with DELETE ... RETURNING and LIMIT to process a batch of items from a queue-like table. What seemed straightforward turned out to have a surprising interaction with the query planner. The scenario Let’s say you have a task_queue table and want to pull exactly one task for a specific queue_group_id. A common approach is using a CTE:
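
The excerpt cuts off before the query itself; the pattern being described is presumably along these lines (column names are hypothetical, just illustrating the shape of the CTE with DELETE ... RETURNING and LIMIT):

WITH deleted AS (
    DELETE FROM task_queue
    WHERE id IN (
        SELECT id
        FROM   task_queue
        WHERE  queue_group_id = 42
        LIMIT  1
    )
    RETURNING *
)
SELECT * FROM deleted;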

The Tux and the Pachyderm
Posted by Federico Campoli on 2025-04-29 at 07:00

In the previous blog post we have seen how plenty of RAM doesn’t necessarily result in a faster instance.

This time we’ll quickly see how PostgreSQL and Linux relate to each other, in particular if we want to run our RDBMS at scale.

Then we’ll have a look at virtual memory in Linux and how to make it more friendly to PostgreSQL.

Let’s dig in.

PG Day Chicago 2025
Posted by Henrietta Dombrovskaya on 2025-04-29 at 06:33

We did it!

Thank you so much to everyone who made this event a success! Starting with the great talks selection (thank you, CfP committee!), to our amazing sponsors (thank you, Pat Wright!), to volunteers, to attendees!

My highlights were having multiple new speakers, new topics, and seeing a lot of new attendees for many of whom this was their first Postgres conference! I hope you all enjoyed it, and I hope to see you again!

And please don’t forget to leave us feedback here!

Orphaned files after a PostgreSQL crash
Posted by Laurenz Albe in Cybertec on 2025-04-29 at 05:00

A computer screen where "ls --orphaned" ran on a data directory and returned files with all kinds of funny orphan names
© Laurenz Albe 2025

PostgreSQL is famous for its stability, and rightly so. But that does not mean that it can never crash. And while PostgreSQL usually cleans up after itself, it has no good way to do so after a crash (after all, it has lost its memory). As a consequence, you can end up with orphaned files in your data directory. If these files are small, they probably won't worry you. But sometimes you can end up with a lot of junk in your data directory. It is notoriously difficult to deal with that problem, so I decided to write about it.

Why would PostgreSQL crash?

In case the title of this article gives you bad feelings about PostgreSQL, I'll try to dispel them. PostgreSQL is not in the habit of crashing. But sometimes a crash is not PostgreSQL's fault. Let me enumerate a few possible causes for a crash:

In fact, the most frequent cause of a crash that leaves large orphaned files in its wake is people running VACUUM (FULL). This command creates a copy of the table and all its indexes. If you run out of disk space and PostgreSQL cannot create a new WAL segment, the database server will crash. Such a crash will leave the partly created copy of the table behind. To avoid that kind of problem, I recommend that you keep pg_wal and the data directory o

[...]

Source code line numbers for database queries in Ruby on Rails with Marginalia and Query Logs
Posted by Andrew Atkinson on 2025-04-29 at 00:00

Back in 2022, we covered how to log database query generation information from a web app using pg_stat_statements for Postgres. https://andyatkinson.com/blog/2022/10/07/pgsqlphriday-2-truths-lie

The application context annotations can look like this. They’ve been re-formatted for printing:

application=Rideshare
controller=trip_requests
source_location=app/services/trip_creator.rb:26:in `best_available_driver'
action=create

I use pg_stat_statements to identify costly queries generated in the web application, often ORM queries (the ORM is Active Record in Ruby on Rails), with the goal of working on efficiency and performance improvements.

The annotations above are included in the query field and formatted as SQL-compatible comments.

Application context usually includes the app name and app concepts like MVC controller names, action names, or even more precise info which we’ll cover next.
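
For instance, annotated queries can then be pulled straight out of pg_stat_statements (a rough sketch using the extension's standard columns; the filter and limit are arbitrary):

-- Slowest annotated queries, identified via the source_location comment
SELECT query, calls, mean_exec_time
FROM   pg_stat_statements
WHERE  query LIKE '%source_location%'
ORDER  BY mean_exec_time DESC
LIMIT  10;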

How can we make these even more useful?

What’s the mechanism to generate these annotations?

For Ruby on Rails, we’ve used the Marginalia Ruby gem to create these annotations.

Besides the context above, a super useful option is the :line option which captures the source code file and line number.

Given how dynamic Ruby code can be, including changes that can happen at runtime, the :line level logging takes these annotations from “nice to have” to “critical” when looking for opportunities for improvements.

What’s more, besides Marginalia we now have a second option that’s built into Ruby on Rails.

What’s been added since then?

In Rails 7.1, Ruby on Rails gained similar functionality to Marginalia directly in the framework.

While nice to have directly in the framework, the initial version didn’t have the source code line-level capability.

That changed in the last year! Starting with PR 50969 to Rails (linked below), the source_location option was added to Active Record Query Logs in Rails 7.2.0 and 8.0.2, equivalent to the :line option in Marginalia.

PR: Support :sou

[...]

PostgreSQL 18: part 4 or CommitFest 2025-01
Posted by Pavel Luzanov in Postgres Professional on 2025-04-29 at 00:00

We continue to follow the news about PostgreSQL 18. The January CommitFest brings in some notable improvements to monitoring, as well as other new features.

You can find previous reviews of PostgreSQL 18 CommitFests here: 2024-07, 2024-09, 2024-11.

  • EXPLAIN (analyze): buffers on by default
  • pg_stat_io: input/output statistics in bytes instead of pages
  • pg_stat_io: WAL statistics
  • pg_stat_get_backend_io: I/O statistics for a specific process
  • VACUUM(verbose): visibility map information
  • Total vacuum and analysis time per table
  • autovacuum: change the number of workers without restarting the server
  • psql: connection service information
  • psql: expanded display for \d* commands
  • psql: leakproof flag in \df* output
  • jsonb: null conversion to other types
  • MD5 encryption algorithm: end-of-support preparations
  • New function uuidv7
  • postgres_fdw: SCRAM authentication without storing the password
  • passwordcheck: minimum password length
  • New function casefold and pg_unicode_fast collation
  • DML commands: RETURNING with OLD and NEW
  • to_number: convert a string of Roman numerals to numeric

...
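
Two of the listed features are easy to sketch from their names alone (treat the syntax as illustrative until the PostgreSQL 18 documentation is final; the table and column names are hypothetical):

-- New function uuidv7
SELECT uuidv7();

-- DML commands: RETURNING with OLD and NEW
UPDATE products
SET    price = price * 1.10
WHERE  id = 1
RETURNING old.price AS old_price, new.price AS new_price;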

Feeds

Planet

  • Policy for being listed on Planet PostgreSQL.
  • Add your blog to Planet PostgreSQL.
  • List of all subscribed blogs.
  • Manage your registration.

Contact

Get in touch with the Planet PostgreSQL administrators at planet at postgresql.org.