Making Postgres Faster: New Features for 7x Faster Queries and 500x Faster Updates

As we close out a brat summer, we’ve been pushing Postgres beyond its limits to help developers more easily manage time-series workloads and real-time analytics. This August, we introduced updates built on our core technology, hypercore—designed to help you ship code faster, optimize performance, and confidently scale.

Before diving into the specifics, let’s go back to where it all started. Want to know what's coming up? Stay tuned for our daily releases.

Hypercore: Making Postgres Powerful for Real-Time Analytics  

Years ago, developers faced a critical challenge: traditional databases struggled with time-series data and real-time analytics. Enter hypercore, which enables Postgres to handle these demanding workloads seamlessly.

Hypercore’s hybrid storage approach—recent data in rows for fast ingest and lookup, older data in a columnar format for efficient querying—makes it ideal for applications like sensor data analysis, stock trades, or real-time user interactions. This architecture delivers up to 350x faster queries using 98% less storage than AWS RDS and lays the groundwork for the advanced performance and efficiency improvements we’ll explore next.
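
To make that concrete, here’s a minimal sketch of the hybrid setup using standard TimescaleDB SQL; the `conditions` table and its columns are hypothetical:

```sql
-- Hypothetical sensor-data table; names are illustrative.
CREATE TABLE conditions (
  time        TIMESTAMPTZ NOT NULL,
  device_id   TEXT        NOT NULL,
  temperature DOUBLE PRECISION
);

-- Turn it into a hypertable: recent chunks stay in fast row storage.
SELECT create_hypertable('conditions', 'time');

-- Enable columnar compression for older chunks.
ALTER TABLE conditions SET (
  timescaledb.compress,
  timescaledb.compress_segmentby = 'device_id',
  timescaledb.compress_orderby   = 'time DESC'
);

-- Automatically convert chunks older than 7 days to the columnar format.
SELECT add_compression_policy('conditions', INTERVAL '7 days');
```

With this in place, inserts land in row storage while analytical queries over history hit the compressed columnar chunks.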

New Features and Optimizations

Chunk-skipping: 7x faster, 87% less storage

Ever run a query that’s frustratingly slow because it scans data you don’t even need? With chunk-skipping indexes, TimescaleDB intelligently skips irrelevant chunks (that's what we call data partitions), so you query just the data you care about. The result: queries up to 7x faster and 87% less storage. For example, if you have a hypertable partitioned by start_time but need to filter on a secondary column like end_time, chunk-skipping indexes will dynamically prune chunks that don’t contain relevant data, significantly speeding up queries that would otherwise scan every partition.
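
If you want to try this, the sketch below assumes TimescaleDB 2.16+, where chunk skipping can be enabled per column with `enable_chunk_skipping`; the `sessions` table is hypothetical:

```sql
-- Depending on your version, chunk skipping may need to be switched on first.
SET timescaledb.enable_chunk_skipping = 'on';

-- Hypothetical hypertable partitioned by start_time.
CREATE TABLE sessions (
  start_time TIMESTAMPTZ NOT NULL,
  end_time   TIMESTAMPTZ NOT NULL,
  user_id    BIGINT      NOT NULL
);
SELECT create_hypertable('sessions', 'start_time');

-- Track per-chunk min/max ranges for the secondary column, so the
-- planner can prune chunks whose end_time range cannot match a filter.
SELECT enable_chunk_skipping('sessions', 'end_time');

-- This filter on the secondary column can now skip whole chunks
-- instead of scanning every partition.
SELECT * FROM sessions
WHERE end_time > now() - INTERVAL '1 day';
```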

500x faster updates and deletes with compressed tuple filtering

Before compressed tuple filtering, updating or deleting compressed data was slow and inefficient, especially in environments with limited resources. Entire batches of up to 1,000 rows had to be decompressed and written to disk, even if only a small portion needed modification. 

With compressed tuple filtering, TimescaleDB uses min/max metadata filtering in the decompression pipeline to skip irrelevant batches, targeting only the necessary data. This optimization makes DML operations (inserts, deletes, and updates) up to 500x faster, drastically reducing the overhead of decompressing unnecessary data by avoiding the need to materialize irrelevant rows.
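
Tuple filtering kicks in automatically; what you control is the metadata it can draw on, via the compress_segmentby and compress_orderby settings shown earlier. Here’s a sketch of a targeted delete and update against the hypothetical `conditions` table:

```sql
-- Because these predicates match the segmentby (device_id) and
-- orderby (time) columns, TimescaleDB only decompresses the batches
-- whose min/max metadata overlaps; the rest are skipped rather than
-- materialized.
DELETE FROM conditions
WHERE device_id = 'sensor-42'
  AND time < TIMESTAMPTZ '2024-01-01';

-- Targeted updates benefit the same way.
UPDATE conditions
SET temperature = temperature + 0.5
WHERE device_id = 'sensor-42'
  AND time >= TIMESTAMPTZ '2024-06-01';
```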

360x faster upserts with index scans 

One of our customers, Ndustrial, found that upserts weren’t performing well on their high-cardinality dataset. Investigation revealed that a sequential scan was slowing down the lookup of the compressed batches each query needed. When we replaced the sequential scan with an index scan (using a pre-existing index), upserts sped up by 360x. This approach will keep Ndustrial running smoothly, even with massive data growth.
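
The speedup lives in TimescaleDB’s internal DML path, so existing upsert SQL benefits without changes. For context, a typical upsert against the hypothetical `conditions` table looks like this:

```sql
-- The unique constraint gives the engine an index it can use to
-- locate existing rows, including rows in compressed chunks.
-- (Unique indexes on a hypertable must include the time column.)
CREATE UNIQUE INDEX IF NOT EXISTS conditions_device_time_idx
  ON conditions (device_id, time);

INSERT INTO conditions (time, device_id, temperature)
VALUES (now(), 'sensor-42', 21.5)
ON CONFLICT (device_id, time)
DO UPDATE SET temperature = EXCLUDED.temperature;
```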

400x faster queries with optimizations on tiered storage

Recent improvements to our tiered storage architecture address the performance challenges of querying large datasets stored in slower, cost-effective storage like S3. Chunk exclusion prunes irrelevant chunks outside the query’s time range or conditions, cutting down unnecessary scans and speeding up query execution. Row group exclusion further optimizes performance by skipping entire Parquet row groups that don’t match the query criteria, while column exclusion reduces I/O by ensuring only the relevant columns are read. These optimizations work together to deliver up to 400x faster queries when accessing tiered data in S3, allowing you to manage massive datasets at lower costs without sacrificing performance.
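
These exclusions apply transparently at query time. As a sketch, assuming Timescale Cloud’s tiering API (`add_tiering_policy` and the `timescaledb.enable_tiered_reads` setting) and the hypothetical `conditions` table from earlier:

```sql
-- Move chunks older than 3 months to low-cost, S3-backed object storage.
SELECT add_tiering_policy('conditions', INTERVAL '3 months');

-- Allow queries to transparently read tiered data; chunk, row-group,
-- and column exclusion keep the S3 reads to a minimum.
SET timescaledb.enable_tiered_reads = true;

-- Only the chunks, row groups, and columns relevant to this query
-- are fetched from object storage.
SELECT device_id, avg(temperature)
FROM conditions
WHERE time BETWEEN '2023-01-01' AND '2023-06-30'
GROUP BY device_id;
```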

Try Timescale’s New Performance Boosts Today

With these new features and the power of hypercore, you're ready to handle even more demanding workloads—from gigabytes to petabytes of data. In the coming days, we’ll dive deeper into how we built these optimizations and explore real-world use cases. Stay tuned for more technical deep dives (including an update on our own Insights product, which processes an insane amount of data on a single Postgres node). 

Why wait? Sign up today and experience faster queries and more efficient storage firsthand. Your performance gains are just a query away.

Plus, don’t forget to share your insights by participating in the 2024 State of PostgreSQL Survey. Your feedback helps shape the future of PostgreSQL and the tools you rely on.
