In this demo, we run a large-scale HTAP workload on Azure Database for PostgreSQL with the built-in Hyperscale (Citus) deployment option. Hyperscale (Citus) uses the open source Citus extension to Postgres to turn a cluster of PostgreSQL servers into a single distributed database that can shard or replicate Postgres tables across the cluster. Citus can scale transaction throughput by routing each transaction to the right server, while simultaneously scaling analytical queries and data transformations by parallelizing them across all of the servers in the cluster. Combined with powerful Postgres features such as its many index types and other PostgreSQL extensions, this enables Hyperscale (Citus) to run high-performance HTAP workloads at scale.

We show a side-by-side comparison of Hyperscale (Citus) and a single PostgreSQL server running a transactional workload generated by HammerDB while simultaneously running analytical queries, and we show how you can get further speedups by pre-aggregating the data in parallel (using rollups) on the same Postgres database; a simplified sketch of that approach follows the video bookmarks.

Video bookmarks:
► 0:17 What Citus is
► 0:37 Overview of the anatomy of the demo
► 1:47 Demo begins
► 9:55 Summary of performance results
► 10:58 Marco’s interpretation of the demo
► 14:14 How the Windows telemetry team uses Postgres & Citus

This demo was originally shared at SIGMOD and is excerpted from an interview Claire Giordano did with Marco Slot at the Microsoft European Virtual Open Source Summit. It explores the promise of HTAP, namely that there is finally a database that can do transactions and analytics at scale, and shows how you can use Postgres with Hyperscale (Citus) on Azure to serve both the transactional and analytical needs of HTAP applications.
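The article does not include the demo's code, but the following SQL is a minimal sketch of the two Citus techniques it relies on: distributing a table across the worker nodes with create_distributed_table, and pre-aggregating it into a rollup table with a parallel INSERT INTO ... SELECT. The table and column names (orders, orders_rollup_1min, customer_id, and so on) are illustrative assumptions, not the schema used in the demo.

```sql
-- Hypothetical transactional table; this schema is an assumption for illustration.
CREATE TABLE orders (
    order_id     bigserial,
    customer_id  bigint        NOT NULL,
    order_time   timestamptz   NOT NULL DEFAULT now(),
    amount       numeric(12,2) NOT NULL,
    PRIMARY KEY (customer_id, order_id)
);

-- Shard the table across all worker nodes by customer_id: transactions that
-- touch a single customer route to one worker, while analytical scans
-- parallelize across the whole cluster.
SELECT create_distributed_table('orders', 'customer_id');

-- Rollup table holding per-minute pre-aggregates, co-located with orders.
CREATE TABLE orders_rollup_1min (
    customer_id  bigint        NOT NULL,
    minute       timestamptz   NOT NULL,
    order_count  bigint        NOT NULL,
    total_amount numeric(18,2) NOT NULL,
    PRIMARY KEY (customer_id, minute)
);
SELECT create_distributed_table('orders_rollup_1min', 'customer_id');

-- Aggregate recent orders into the rollup. Because both tables are
-- distributed by customer_id, Citus pushes this INSERT .. SELECT down to the
-- workers, so the aggregation runs in parallel across the cluster.
-- Note: a production rollup would track which rows have already been
-- aggregated (for example with a watermark) to avoid double counting.
INSERT INTO orders_rollup_1min (customer_id, minute, order_count, total_amount)
SELECT customer_id,
       date_trunc('minute', order_time) AS minute,
       count(*),
       sum(amount)
FROM orders
WHERE order_time >= now() - interval '5 minutes'
GROUP BY customer_id, date_trunc('minute', order_time)
ON CONFLICT (customer_id, minute) DO UPDATE
SET order_count  = orders_rollup_1min.order_count  + EXCLUDED.order_count,
    total_amount = orders_rollup_1min.total_amount + EXCLUDED.total_amount;
```

Dashboards and other analytical queries can then read from orders_rollup_1min instead of scanning the raw orders table, which is the kind of pre-aggregation speedup the demo describes.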