[syndicated profile] planet_postgresql_feed

What’s Zig and Zig’s C/C++ compiler?

In case you are not familiar with it, Zig is a programming language. Among other characteristics, Zig prides itself on being a low-level / systems programming language with great interoperability with C and C++. Unlike other comparable languages such as Rust, it uses explicit memory allocation and freeing (though it adds cool features like the defer keyword), so its mental model is closer to that of C, the language Postgres is written in. This also makes Zig a very interesting language for developing Postgres extensions, and there’s the pgzx framework to help with that.

But other than extensions, what’s Zig bringing to Postgres, and what’s this post really about? Zig’s compiler. It’s quite an advanced piece of technology that, apart from compiling Zig code, can also compile C/C++ code, and does so really well. There’s a mind-blowing blog post from Andrew Kelley, the creator of Zig, that I’d recommend reading, about using Zig as a C/C++ compiler, claiming it is a powerful drop-in replacement for GCC/Clang.

zig cc, the command line for Zig’s C compiler, is included with the Zig distribution, which is by itself a self-contained, small downloadable package (41-50 MiB on Linux, depending on the architecture). “zig cc supports the same options as Clang, which, in turn, supports the same options as GCC”, making it a drop-in replacement. To achieve this level of compatibility, zig cc uses LLVM behind the scenes (it’s technically a layer on top of an LLVM frontend). As a curiosity, Andrew’s post details how it’s possible that Zig’s distribution is (significantly) smaller than even LLVM binaries!

So if it is a drop-in replacement, building Postgres with zig cc should be easy, right? Let’s give it a try.

Building Postgres with zig cc

It turns out to be quite straightforward.

First we need to download Zig. Zig is statically linked ("Zig’s Linux tarballs are fully statically linked, and therefore work correctly on all Linux distributions.").

Download a

[...]
[syndicated profile] planet_postgresql_feed

Introduction

If you’ve created web apps with relational databases and ORMs like Active Record (part of Ruby on Rails), you’ve probably experienced database performance problems after a certain size of data and query volume.

In this post, we’re going to look at a specific type of problematic query pattern that’s somewhat common.

We’ll refer to this pattern as “Big INs,” which are queries with an IN clause that has a big list of values. As data grows, the length of the list of values will grow. These queries tend to perform poorly for big lists, causing user experience problems or even partial outages.

We’ll dig into the origins of this pattern, why it performs poorly, and explore some alternatives that you can use in your projects.

IN clauses with a big list of values

The technical term for the values is a parenthesized list of scalar expressions.

For example, in the SQL query below, the IN clause portion is WHERE author_id IN (1, 2, 3) and the list of scalar expressions is (1, 2, 3).

SELECT * FROM books
WHERE author_id IN (1, 2, 3);

The purpose of this clause is to perform filtering. Looking at a query execution plan in Postgres, we’ll see something like this fragment below:

Filter: (author_id = ANY ('{1,2,3}'::integer[]))

This of course filters the full set of books down to ones that match on author_id.

Filtering is a typical database operation. Why are these queries slow?

Parsing, planning, and executing

Remember that our queries are parsed, planned, and executed. A big list of values is treated as a list of constants, which don’t have associated statistics.

Queries with big lists of values take more time to parse and use more memory.

Without pre-collected table statistics for planning decisions, PostgreSQL is more likely to mis-estimate cardinality and row selectivity.

This can mean the planner chooses a sequential scan over an index scan, causing a big slowdown.
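
To make the contrast concrete, here is one commonly used alternative (a general technique sketched under assumptions, not necessarily the approach this post settles on): send the values as a single array parameter, so the query text stays the same size no matter how many values you pass.

-- Sketch only: "books" and "author_id" come from the example above; the
-- prepared-statement name is invented for illustration.
PREPARE books_by_authors(int[]) AS
  SELECT *
  FROM books
  WHERE author_id = ANY($1);   -- one array parameter instead of a growing IN list

EXECUTE books_by_authors(ARRAY[1, 2, 3]);

-- A related variant joins against unnest(), which can produce different
-- (sometimes better) plan shapes for very long lists:
SELECT b.*
FROM books b
JOIN unnest(ARRAY[1, 2, 3]::int[]) AS ids(author_id)
  ON b.author_id = ids.author_id;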

How do we create this pattern?

Creating this pattern directly

In Active Record

[...]
[personal profile] vitus_wagner

https://arstechnica.com/security/2025/05/signal-resorts-to-weird-trick-to-block-windows-recall-in-desktop-app/

So Microsoft has decided to add a feature called Windows Recall to Windows. It takes a screenshot every three seconds, runs text recognition on them, and lets you search along the lines of "I was reading something here about keyword the other day, what was it?"

It would seem like a useful thing. But in practice it means that everything a person has done on the computer is stored somewhere. And someone can get access to it. That someone could be law enforcement, family members, or even a trojan that has settled onto the computer.

That's why privacy advocates are in a panic.

The developers of the Signal messenger found an elegant way out: they marked their windows as containing copyrighted content, so in screenshots such windows are rendered as black rectangles. And in the screenshots taken by Recall, too.

(I remember how the late Lensky was writing a review of some computer game and needed screenshots. The game really didn't like having screenshots or streams taken of it. Lensky came over to my place, we launched the game in qemu or dosemu, I don't remember which, and there's no defense against a crowbar: if you take screenshots of the emulator, the program running inside the emulator knows nothing about it.)

In my view, a computer having a memory of the user's actions, especially in combination with AI, is an unambiguously useful thing. And attempts to require the consent of the other party to a conversation before recording it are unjust. Everything a person's eyes have seen and ears have heard, they have the right to keep in memory, whether biological or technical, forever. The problem here is rather that the computer is not sufficiently protected against intrusion by outsiders.

In general, when AI in personal computers becomes the norm, the legal right not to testify against oneself and one's close relatives should probably be extended to computers as well. And the AI should have the right not to testify against its owner.

[syndicated profile] planet_postgresql_feed

Collation torture test results are finally finished and uploaded for Debian.

https://github.com/ardentperf/glibc-unicode-sorting

The test did not pick up any changes in en_US sort order for either Bullseye or Bookworm 🎉

Buster has glibc 2.28 so it shows lots of changes – as expected.

The postgres wiki had claimed that Jessie(8) to Stretch(9) upgrades were safe. This is false if the database contains non-English characters from many scripts (even with the en_US locale). I just now tweaked the wording on that wiki page. I don’t think this is new info; I think it’s the same change that showed up in the Ubuntu tables under glibc 2.21 (Ubuntu 15.04).
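
As an aside, and not part of the torture-test scripts: on newer Postgres releases (15+) you can check from inside the database whether the collation version recorded at creation time still matches what the installed glibc reports; a mismatch is the usual signal that indexes on text columns should be reindexed.

SELECT datname,
       datcollate,
       datcollversion,                                      -- version recorded for the database
       pg_database_collation_actual_version(oid) AS actual  -- version the OS currently provides
FROM pg_database
WHERE datname = current_database();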

FYI – the changelist for Stretch(9) does contain some pure ASCII words like “3b3”, but when you drill down to the diff, you see that it’s only moving a few lines relative to other strings with non-English characters:

@@ -13768521,42 +13768215,40 @@ $$.33
༬B༬
3B༬
3B-༬
-3b3
3B༣
3B-༣
+3B٣
+3B-٣
+3b3
3B3

In the process of adding Debian support to the scripts, I also fixed a few bugs. I’d been running the scripts from a Mac, but now I’m running them from an Ubuntu laptop, and there were a few minor syntax things that needed updating to run on Linux – even though, ironically, when I first started building these scripts it was on another Linux, before I switched to Mac. I also added a file-size sanity check, to catch cases where the sorted string-list file was only partly downloaded from the remote machine running some old OS (… realizing this MAY have wasted about an hour of my evening yesterday …)

The code that sorts the file on the remote instance is pretty efficient. It does the sort in two stages and the first stage is heavily parallelized to utilize whatever CPU is available. Lately I’ve mostly used c6i.4xlarge instances and I typically only need to run them for 15-20 minutes to get the data and then I terminate them. The diffs and table generation run locally. On my poor old laptop, the diff for buster ran at 100% cpu and 10°C hotter than the idle co

[...]
[syndicated profile] planet_postgresql_feed

Last week, I presented at PGConf.dev for the first time and participated in a community summit for the first time. The idea was pitched by Teresa Giacomini, and that’s how this event was described in the program:

Community building, support, and maintenance are all critical to the future of Postgres. There are many types and layers to community building from events, podcasts, & meetups to extracurricular fun like chess & karaoke; recognition & rewards to Postgres booths at non-Postgres conferences; getting started in smaller communities to wrangling a global one.

In this 3-hour summit we will:

  • Have short presentations from the hosts on different aspects of community
  • Perform a short exercise to gather the group’s thoughts on some key questions:
    • What does community mean?
    • How do we make it easier for people to get involved?
    • What community initiatives already exist? What’s missing? How can we improve them?
  • Break into smaller groups to tackle areas the group believes are most important
  • Report out to the larger group from each small group
  • Each group adds their results to the PostgreSQL Wiki
  • Determine a way for us to track our progress moving forward

Pre-work: We will gather some interest prior to the summit on topics for discussion.

Due to the interactive nature of the summit, participation is limited to 60 people. Participants should be committed to building, supporting, or maintaining a community in some way, and be ready to leave the summit with concrete action items to move the Postgres community forward. While the hosts of this summit are from the US & Europe, we hope that folks from less established Postgres communities will join us.

Pat Wright and Andreas Scherbaum were the other two organizers. We started by asking the conference organizers to email the conference participants a questionnaire with a list of topics they would be interested in discussing. Then, we analyzed

[...]
[syndicated profile] planet_postgresql_feed

Introduction

PostgreSQL Extension Day 2025 made its successful debut on May 12, 2025, just one day before the start of pgconf.dev 2025. This focused one-day event brought together the community around a single theme: the PostgreSQL extension ecosystem. From innovative ideas and development insights to discussions on safer extension building and delivery, the day was all about “everything extensions.”

The conference featured 14 tightly packed 25-minute talks, making for a long but highly productive day. For those unable to attend in person, the event was also live-streamed on YouTube.

Thanks to the hard work of the organizers and volunteers, PostgreSQL Extension Day 2025 turned out to be a great success. In this blog, I’ll walk through some of the key highlights and takeaways from this event.

Conference Highlights

Community and Volunteer Driven

Since this was the first-ever pgext.day conference, organized by Yurii Rashkovski, there was plenty of room for things to go sideways. Fortunately, a small but dedicated team—including Grant Zhou, Sweta Vooda, Charis Charalampidi, and myself—volunteered to support Yurii with setting up the live streaming and recording equipment early in the morning. Together, we handled the camera setup, microphones, projector, and streaming rig, and quickly got up to speed on how to operate the entire system before the event began.

I have to say, by the time the conference started, I felt surprisingly confident running the live streaming, camera work, and digital recording gear—a fun learning experience in itself!

Social

The social aspect of a conference is just as important as the sessions themselves—it’s where connections are made, ideas are exchanged, and the community truly comes alive. At pgext.day 2025, we had the chance to enjoy dinner together both before and after the conference, giving everyone time to relax, share

[...]
[syndicated profile] planet_postgresql_feed

I gave a presentation at PGConf.dev last week, Adventures in Extension Packaging. It summarizes stuff I learned in the past year in developing the PGXN Meta v2 RFC, re-packaging all of the extensions on pgt.dev, and experimenting with the CloudNativePG community’s proposal to mount extension OCI images in immutable PostgreSQL containers.

Turns out a ton of work and experimentation remains to be done.

I’ll post the link to the video once it goes up, but in the meantime, here are the slides:

Previous work covers the first half of the talk, including:

The rest of the talk encompasses newer work. Read on for details.

Automated Packaging Challenges

Back in December I took over maintenance of the Trunk registry, a.k.a., pgt.dev, refactoring and upgrading all 200+ extensions and adding Postgres 17 builds. This experience opened my eyes to the wide variety of extension build patterns and configurations, even when supporting a single OS (Ubuntu 22.04 “Jammy”). Some examples:

  • pglogical requires an extra make param to build on PostgreSQL 17: make -C LDFLAGS_EX="-L/usr/lib/postgresql/17/lib"
  • Some pgrx extensions require additional params, for example:
  • pljava needs a pointer to libjvm: mvn clean install -Dpljava.libjvmdefault=/usr/lib/x86_64-linux-gnu/libjvm.so
  • plrust needs files to be moved arou
[...]
[syndicated profile] planet_postgresql_feed

Welcome to the second part of our TimescaleDB best practices series! In the first part, we explored how to perform massive backfill operations efficiently, sharing techniques to optimize performance and avoid common pitfalls. If you haven’t had a chance to read the first part yet, you can check it out using this link

In today’s blog, we will discuss another crucial aspect of time-series data management: massive delete operations.

As your data grows over time, older records often lose their relevance but continue to occupy valuable disk space, potentially increasing storage costs and degrading performance if not managed well.

Let’s walk through some strategies to clean up or downsample aged data in TimescaleDB, helping you maintain a lean, efficient, and cost-effective database.

Prerequisites for Massive Delete Operations

Here are a few important steps to follow when performing a large-scale delete in production, to ensure we are prepared in case something goes wrong.

Tune Autovacuum Settings 

In PostgreSQL, VACUUM is a maintenance process that removes dead tuples, obsolete row versions left behind by UPDATE or DELETE operations. These dead tuples occupy space but are no longer visible to any active transactions. Vacuuming reclaims this space, helping to reduce table bloat and maintain database performance.

The autovacuum feature automates this process by periodically running in the background, ensuring that dead tuples are cleaned up without manual intervention. This is especially important after large delete operations, where a significant number of dead rows can accumulate. If not handled promptly, this can lead to table bloat, increased I/O, and slower query performance.

However, its effectiveness depends heavily on how well it is configured. Without proper tuning, autovacuum may run too infrequently or too slowly, allowing dead tuples to pile up and impact performance.
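
As a concrete illustration (the table name and numbers below are assumptions for the sketch, not this post’s recommended values), autovacuum can be tuned per table for the tables you are about to purge, and a manual vacuum can be run right after the delete:

-- Hypothetical table; values are illustrative, not prescriptive.
ALTER TABLE sensor_data SET (
  autovacuum_vacuum_scale_factor = 0.05,  -- trigger autovacuum at ~5% dead tuples
  autovacuum_vacuum_cost_delay   = 2      -- less throttling, so vacuum keeps up
);

-- After a large DELETE, reclaim dead tuples and refresh statistics explicitly:
VACUUM (VERBOSE, ANALYZE) sensor_data;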

Here is a list of important autovacuum parameters along with their recommended values th

[...]
[syndicated profile] planet_postgresql_feed
Version 0.3.1 of the open-source Xata Agent adds support for custom MCP servers and tools, introduces Ollama as a local LLM provider, and includes support for reasoning models o1 and o4-mini.
[syndicated profile] dxdt_ru_feed

Posted by Александр Венедюхин

High-performance GPU microprocessors are supposed to be tracked geographically somehow, so that they only work in those regions of the planet where that is permitted. This is a known practice, already applied to machine tools, agricultural machinery, and other equipment. Interestingly, if a chip can be remotely tracked and blocked (blocking is the next logical step), one cannot assume this works only for the specific regions that happen to be under sanctions at a given moment. Naturally, such technology operates at a specific point, so home users can also be switched off in places where blanket sanctions have not yet been imposed. For some reason there is less noise about this than about the demands for "official backdoors" in smartphone messaging systems.

How could such a tracking technology work? The obvious option is for the notional "graphics card" itself to establish a connection with a remote server over the Internet. Remote access systems independent of the OS, and even of the rest of the hardware in the same host computer, have long been known: IPMI and the like, with a dedicated operating system and independent single-board hardware (SoC). In the graphics card's case: if there is no access to the central server, the firmware does not work. Tokens authorizing operation could be tied to time, for example. The next question is how, on the server side, to determine the location of the chip whose firmware sent the request. A GNSS (GPS) receiver could be built into the chip to transmit coordinates. However, reception of satellite signals is not particularly reliable, and the computer with the graphics card may be installed in a basement. Then again, that is the consumer's problem: let them stick an antenna out the window or something. Besides, coordinates can be spoofed (though not always). On the other hand, networks of communication satellites such as Starlink could be used in place of GNSS, which is more reliable.

The server could measure the network distance "over the Internet" from the delivery time of IP packets. That yields a radius on some network graph. A single server will not determine the region with sufficient accuracy, but if many points of presence are spread across the network, the accuracy improves. The problem is that if the chip in question is connected through some tunnel (VPN), then only the location of the exit point can be determined more or less accurately, and beyond that you again get a single radius: the distance derived from travel time does not, of course, depend much on whether there is a VPN, but the "last mile" leg from the VPN endpoint to the end device will be the same for all measuring servers. Then again, it is easy to write this off as the consumer's problem as well: let them first bring the antenna up out of the basement, and then turn off the VPN.

Still, some hybrid method would be more effective, taking into account GNSS, network delays, and, say, the local electromagnetic environment: who says WiFi and GSM base stations shouldn't be added to the mix?

[syndicated profile] planet_postgresql_feed
Got interested recently in the speed of pg_dump. Specifically, whether, over the years, it has become faster, and if so, by how much. A couple of years ago I was in a position where we needed to run pg_dump, and found some inefficiencies, which later got patched. This was around version 12. So, how does the situation look … Continue reading "pg_dump speed across versions"
[syndicated profile] planet_postgresql_feed

In what I can't say isn't a tradition at this point, we're in an odd-numbered year so there's news on the pdot front! Get it here!

The biggest change (and the reason for the big 1-0-0) is simplifying usage: rather than requiring a shell function to plug the graph body into a template for interactive use, pdot now outputs the entire digraph or flowchart markup. The old behavior is still available with the --body flag, but the new default means it's a lot easier to get started -- pdot postgres_air fks | dot -Tpng | wezterm imgcat and go. You only need scripting to do the pipelining for you, or to customize the graph's appearance.

Other notable updates along the way:

  • PGHOST, PGDATABASE, PGUSER, and PGPASSWORD environment variables are honored
  • new policies graph, and many improvements to others especially triggers and function refs
  • usable as a Rust library!

Late last year I also presented at PGConf.EU in Athens, should you be interested.

[syndicated profile] planet_postgresql_feed

Introduction

pgconf.dev 2025 just wrapped up in Montreal, Canada, following its successful debut in Vancouver last year—and once again, it delivered a fantastic mix of deep technical content and strong community social activities.

As always, the focus was on both the current state and future direction of PostgreSQL, with over 40 thoughtfully curated technical talks covering everything from performance and storage to extensions and new features. The week wasn’t just about technical talks though—there were plenty of chances to connect through community events like Meet & Eat, the Social Run, and group dinners, making the experience as social as it was informative.

Montreal brought its own unique charm to the event. With its French-speaking culture, beautiful Old Town, and scenic waterfront, the city felt a little like Europe—laid-back, stylish, and totally different from the west coast vibe of Vancouver. Oh, and the food? Absolutely amazing!

WARNING: long blog post

Conference Highlights

Here are some personal highlights from pgconf.dev 2025, based on my own experience and participation throughout the week. I’ve made an effort to capture key takeaways from the talks I attended, and included photos from the conference to give you a feel for the energy, community, and atmosphere of the event.

Sponsor Swags

At the conference sign-in desk, a colorful array of sponsor swag was neatly displayed alongside the official pgconf.dev T-shirts. From stickers and pens to notebooks, socks, and other branded goodies, the table was a treasure trove for attendees. Everyone was welcome to help themselves and take as many items as they needed — a small but thoughtful way for sponsors to share their appreciation and for participants to bring home a piece of the event. The generous assortment added a lively and welcoming touch to the registration area, setting a positive tone from the moment attendees walked in.

Have you

[...]
[syndicated profile] planet_postgresql_feed

Just over a week ago, I attended PGConf.DE 2025 in Berlin with the rest of the Data Egret team and gave a talk titled “Data Archiving and Retention in PostgreSQL: Best Practices for Large Datasets.” This post is a written version of my talk for those who couldn’t attend.

Below, you’ll find each slide from the talk — along with what was said.

I started by talking about something that happens with almost every Postgres database — the slow, steady growth of data. Whether it’s logs, events, or transactions — old rows pile up, performance suffers, and managing it all becomes tricky. My talk focused on practical ways to archive, retain, and clean up data in PostgreSQL, without breaking queries or causing downtime.

As you can see, my work with Postgres focuses a lot on monitoring, performance, and automation. I do that at Data Egret, where we help teams run Postgres reliably, both on-prem and in the cloud.

We specialise entirely in Postgres and are heavily involved in the community. We help companies with scaling, migrations, audits, and performance tuning — everything around making Postgres run better.

I was also excited to share that Data Egret is now a part of a new initiative in the Postgres ecosystem: The Open Alliance for PostgreSQL Education. It’s an effort to build open, independent, community-driven certification.

Then I dived into the topic of my talk.

Postgres can handle big tables, but once data starts piling up, it doesn’t always degrade gracefully:

  • queries slow down,
  • VACUUM takes longer,
  • indexes grow,
  • backups get heavier.

And often, you’re keeping old data around for reporting, audits, or just in case. And that’s OKAY. Because the issue isn’t really volume — it’s how we manage it.
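
One widely used way to manage it (sketched here under assumptions; this is a standard Postgres technique rather than necessarily the exact approach from the talk) is time-based partitioning, which lets cold data be detached for archiving without touching the live rows:

-- The table and partition names are invented, and monthly range
-- partitioning is just one common retention layout.
CREATE TABLE events (
    created_at timestamptz NOT NULL,
    payload    jsonb
) PARTITION BY RANGE (created_at);

CREATE TABLE events_2025_05 PARTITION OF events
    FOR VALUES FROM ('2025-05-01') TO ('2025-06-01');

-- Retiring a month of data becomes a cheap metadata operation; the detached
-- table can then be archived elsewhere or dropped later.
ALTER TABLE events DETACH PARTITION events_2025_05;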

This isn’t about discarding data — it’s about managing it wisely. Frequently used, or ‘hot’ data, should remain readily accessible and fast to query, without being archived or moved to storage.
And cold data? Move, c

[...]
[syndicated profile] planet_postgresql_feed

Introduction

In this post, we’ll cover a way to generate short, alphanumeric, pseudorandom identifiers using native Postgres tactics.

These identifiers can be used for things like transactions or reservations, where users need to read and share them easily. This approach is an alternative to using long, randomly generated values like UUIDs, which have downsides for usability and performance.

We’ll call the identifier a public_id and store it in a column with that name. Here are some example values:

SELECT public_id
FROM transactions
ORDER BY random()
LIMIT 3;

 public_id
-----------
 0359Y
 08nAS
 096WV

Natural and Surrogate Keys

In database design, we design our schema to use natural and surrogate keys to identify rows.

For our public_id identifier, we’re going to generate it from a conventional surrogate integer primary key called id. We aren’t using natural keys here.

The public_id is intended for use outside the database, while the id integer primary key is used inside the database to be referenced by foreign key columns on other tables.

While public_id is short, which minimizes space and speeds up access, the main reason for it is usability.

With that said, the target for total space consumption was to be fewer bytes than a 16-byte UUID. This was achieved with an integer primary key and this additional 5-character generated value, targeting a smaller database where this provides plenty of unique values now and into the future.
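
To make that concrete before the design walk-through, here is a minimal sketch of the kind of function involved (my own illustration, not necessarily the exact algorithm this post builds up): a reversible modular multiplication scrambles the id, and the result is encoded in base 62, padded to a fixed 5 characters.

-- A sketch, not this post's exact method. Assumes positive ids and a table
-- small enough that 62^5 = 916,132,832 distinct values are plenty.
CREATE OR REPLACE FUNCTION to_public_id(id integer)
RETURNS text
LANGUAGE plpgsql IMMUTABLE STRICT
AS $$
DECLARE
  alphabet constant text   := '0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz';
  modulus  constant bigint := 916132832;              -- 62^5
  n        bigint := (id::bigint * 48271) % modulus;  -- 48271 is coprime with 62^5,
                                                      -- so the mapping is reversible
  result   text := '';
BEGIN
  FOR i IN 1..5 LOOP                                  -- fixed width: always 5 characters
    result := substr(alphabet, (n % 62)::int + 1, 1) || result;
    n := n / 62;
  END LOOP;
  RETURN result;
END;
$$;

-- Example: SELECT id, to_public_id(id) FROM generate_series(1, 3) AS id;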

Let’s get into the design details.

Design Properties

Here were the desired design properties:

  • A fixed size, 5 characters in length, regardless of the size of the input integer (and within the range of the integer data type)
  • Fewer bytes of space than a uuid data type
  • An obfuscated value: pseudorandom and not easily guessable, though not meant to be “secure”
  • Reversibility back into the original integer
  • Only native Postgres capabilities, no extensions, client web app langu
[...]
[syndicated profile] planet_postgresql_feed

I last wrote about auto-releasing PostgreSQL extensions on PGXN back in 2020, but I thought it worthwhile, following my Postgres Extensions Day talk last week, to return again to the basics. With the goal to get as many extensions distributed on PGXN as possible, this post provides step-by-step instructions to help the author of any extension or Postgres utility to quickly and easily publish every release.

TL;DR

  1. Create a PGXN Manager account
  2. Add a META.json file to your project
  3. Add a pgxn-tools powered CI/CD pipeline to publish on tag push
  4. Fully-document your extensions

Release your extensions on PGXN

PGXN aims to become the de facto source for all open-source PostgreSQL extensions and tools, in order to help users quickly find and learn how to use extensions to meet their needs. Currently, PGXN distributes source releases for around 400 extensions (stats on the about page), a fraction of the ca. 1200 known extensions. Anyone looking for an extension that might exist to solve some problem must rely on search engines to find potential solutions across PGXN, GitHub, GitLab, blogs, social media posts, and more. Without a single trusted source for extensions, and with the proliferation of AI slop in search engine results, finding extensions aside from a few well-known solutions proves a challenge.

By publishing releases and full documentation — all fully indexed by its search index — PGXN aims to be that trusted source. Extension authors provide all the documentation, which PGXN formats for legibility and linking. See, for example, the pgvector docs.

If you want to make it easier for users to find your extensions, to read your documentation — not to mention provide sources for binary packaging systems — publish every release on PGXN.

Here’s how.

Create an Account

Step one: create a PGXN Manager account. The Emai

[...]
[syndicated profile] dxdt_ru_feed

Posted by Александр Венедюхин

I had long been meaning to write about how AI/LLMs might play out under the conditions of a New Middle Ages, helping to replace knowledge about technologies with notions (in the style of LLM output) about the rituals associated with those technologies. Yesterday I published a piece on this topic on Habr.

Esther Minano: pgstream v0.5.0 update

May. 20th, 2025 12:00 pm
[syndicated profile] planet_postgresql_feed
Improved user experience with new transformers, YAML configuration, CLI refactoring and table filtering.
[syndicated profile] planet_postgresql_feed

I’m pleased to welcome seven new Google Summer of Code 2025 contributors to the Postgres community!

I encourage you to welcome contributors during these first weeks to get them excited and invested in our community. You will meet them on mailing lists, Slack, Discord, and other media.

The table below details information about this year’s projects, contributors, and mentors!

Project Title | Contributor | Assigned Mentors
Enhancements to pgwatch v3 RPC integration | Ahmad Gouda | Akshat Jaimini, Pavlo Golub
pgmoneta: Incremental backup for PostgreSQL 13-16 | Ashutosh Sh | Haoran Zhang, Jesper Pedersen
Extension Support for pgexporter | Bassam Adnan | Saurav Pal, Jesper Pedersen
Upgrade pgwatch Grafana dashboards to v11 | Gaurav Patidar | Rajiv Harlalka, Pavlo Golub
ABI Compliance Checker | Mankirat Singh | David Wheeler, Pavlo Golub
pgmoneta: WAL Filtering | Mohab Yasser | Shahryar Soltanpour, Jesper Pedersen
Enhancing Pgagroal Security | Tejas Tyagi | Luca Ferrari, Jesper Pedersen

We expect GSoC contributors to actively participate in the Community Bonding period from May 8th to June 1st. This period’s goal is to prepare contributors to begin their project work effectively on June 2nd. So please help them settle in.

It was an insane start to the year! The GSoC program had the highest number of proposals ever, as well as the highest number of spam and AI-generated applications. Due to the high volume of new organizat

[...]

Ian Barwick: PgPedia Week, 2025-05-18

May. 19th, 2025 12:43 pm
[syndicated profile] planet_postgresql_feed

A very short edition this week...

PostgreSQL 18 changes this week

Following last week's beta1 release, things seem to have been quite quiet on all fronts, which hopefully means people are busy testing and not finding issues. From previous experience, this is the point in the release cycle where I start to review the changes over the past year and work out what I've missed (feedback always welcome!).

PostgreSQL 18 articles

Good time to test io_method (for Postgres 18) (2025-05-12) - Tomas Vondra discusses io_method and io_workers

more...
