This page provides you with instructions on how to extract data from PostgreSQL and load it into Panoply. (If this manual process sounds onerous, check out Stitch, which can do all the heavy lifting for you in just a few clicks.)
What is PostgreSQL?
PostgreSQL, also called Postgres, is an open source object-relational database management system that runs on all major operating systems. It's known for its stability and its ability to handle high volumes of transactions.
What is Panoply?
Panoply provides a managed data warehouse platform that lets users quickly set up a new Amazon Redshift instance. It uses machine learning algorithms to handle complex tasks like schema building, data mining, modeling, scaling, performance tuning, security, and backup. Panoply can import data with no schema, no modeling, and no configuration, and you can use the analysis, SQL, and visualization tools you already know on data in Panoply just as you would on a Redshift data warehouse you had built manually.
Getting data out of PostgreSQL
Most people retrieve data from relational databases by writing SQL queries. If you're just looking to export data in bulk, however, you can use the command-line tool pg_dump to export a database or individual tables as a script that you can run to restore them on any PostgreSQL server, or use the COPY command (or psql's \copy) to export a table's contents as a CSV file.
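For example, assuming a source table named orders in a database called mydb (both names are placeholders), the export might look like this:

    # Dump one table as a SQL script that recreates its schema and data
    pg_dump --host localhost --username admin --table orders mydb > orders_backup.sql

    # Or export the table's rows to CSV, with a header line, using psql's \copy
    psql -d mydb -c "\copy orders TO 'orders.csv' WITH (FORMAT csv, HEADER)"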
Loading data into Panoply
Once you know all of the columns you want to insert, use the CREATE TABLE statement in Panoply's Redshift data warehouse to set up a table to receive all the data.
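As a sketch, suppose you're replicating the hypothetical orders table from the export step above; the corresponding Redshift DDL might look like this:

    -- Redshift does not enforce primary keys, but declaring one
    -- still gives the query planner useful information
    CREATE TABLE orders (
        id          BIGINT NOT NULL,
        customer_id BIGINT,
        amount      DECIMAL(12,2),
        created_at  TIMESTAMP,
        updated_at  TIMESTAMP,
        PRIMARY KEY (id)
    );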
Next, migrate your data. It may seem like the easiest course would be to build INSERT statements to add data to your Redshift table row by row. That would be a mistake; Redshift isn't optimized for inserting data one row at a time. If you have a high volume of data to be inserted, a better approach is to copy the data into Amazon S3 and then use the COPY command to load it into Redshift.
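For instance, once the CSV from the earlier export has been uploaded to S3, a COPY statement along these lines would load it in a single bulk pass (the bucket name and IAM role ARN below are placeholders):

    -- Bulk-load the staged CSV into the target table
    COPY orders
    FROM 's3://my-bucket/exports/orders.csv'
    IAM_ROLE 'arn:aws:iam::123456789012:role/my-redshift-load-role'
    CSV
    IGNOREHEADER 1;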
Keeping PostgreSQL data up to date
The script you have now should satisfy all your data needs for PostgreSQL – right? Not yet. How do you load new or updated data? It's not a good idea to replicate all of your data each time you have updated records. That process would be painfully slow; if latency is important to you, it's not a viable option.
Instead, you can identify one or more key fields that your script can use to bookmark its progression through the data, and pick up where it left off as it looks for updated records. Fields whose values only ever grow, such as an updated_at or created_at timestamp or an auto-incrementing primary key, work best for this. When you've built in this functionality, you can set up your script as a cron job or continuous loop to get new data as it appears in PostgreSQL.
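A minimal sketch of such an incremental extraction query, again against the hypothetical orders table and assuming your script saved the highest updated_at value it saw on its previous run:

    -- Fetch only the rows added or changed since the saved bookmark,
    -- in order, so the bookmark can be advanced as rows are processed
    SELECT *
    FROM orders
    WHERE updated_at > '2023-01-01 00:00:00'  -- last saved bookmark value
    ORDER BY updated_at;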
Other data warehouse options
Panoply is great, but sometimes you need to optimize for different things when you're choosing a data warehouse. Some folks choose to go with Amazon Redshift, Google BigQuery, PostgreSQL, Snowflake, or Microsoft Azure Synapse Analytics, which are RDBMSes that use similar SQL syntax. Others choose a data lake, like Amazon S3 or Delta Lake on Databricks. If you're interested in seeing the relevant steps for loading data into one of these platforms, check out To Redshift, To BigQuery, To Postgres, To Snowflake, To Azure SQL Data Warehouse, To S3, and To Delta Lake.
Easier and faster alternatives
If all this sounds a bit overwhelming, don’t be alarmed. If you have all the skills necessary to go through this process, chances are building and maintaining a script like this isn’t a very high-leverage use of your time.
Thankfully, products like Stitch were built to move data from PostgreSQL to Panoply automatically. With just a few clicks, Stitch starts extracting your PostgreSQL data, structuring it in a way that's optimized for analysis, and inserting that data into your Panoply data warehouse.