This page provides you with instructions on how to extract data from Db2 and load it into PostgreSQL. (If this manual process sounds onerous, check out Stitch, which can do all the heavy lifting for you in just a few clicks.)
What is Db2?
Db2 is IBM's relational DBMS. IBM provides versions of Db2 that run on-premises, hosted by IBM, or in the cloud. The on-premises version runs on System z mainframes, System i minicomputers, and Linux, Unix, and Windows workstations.
What is PostgreSQL?
PostgreSQL, also known as Postgres, calls itself "the world's most advanced open source database." The popular object-relational database management system (ORDBMS) offers enterprise-grade features with a strong emphasis on extensibility and standards compliance.
PostgreSQL runs on all major operating systems, including Linux, Unix, and Windows. It's open source, fully ACID-compliant, and has full support for foreign keys, joins, views, triggers, and stored procedures in multiple languages. PostgreSQL is often the best back-end database for web systems and software tools, and it's available in cloud-based deployments from most major cloud vendors. And because its syntax forms the basis for querying Amazon Redshift, migration between the two systems is relatively painless, making Postgres a good stepping-stone for developers who may later use Redshift's data warehouse platform.
Getting data out of Db2
The most common way to get data out of any relational database is to write SELECT queries. You can specify filters and ordering, and limit results. You can also use Db2's EXPORT command to dump the data from a whole table to a file.
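For example, assuming a hypothetical employees table (the table and column names here are placeholders), you could pull a filtered subset of rows with a SELECT, or dump the whole table to a delimited file with EXPORT, run from the Db2 command line processor:

    -- Pull a filtered, ordered, limited subset of rows
    SELECT emp_id, name, hire_date
    FROM employees
    WHERE hire_date >= '2023-01-01'
    ORDER BY hire_date
    FETCH FIRST 1000 ROWS ONLY;

    -- Dump the entire table to a comma-delimited (DEL) file
    EXPORT TO /tmp/employees.del OF DEL
    SELECT * FROM employees;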
Loading data into Postgres
Once you have identified all of the columns you want to insert, you can use the CREATE TABLE statement in Postgres to create a table that can receive all of this data (a sketch appears below). Postgres then offers a number of methods for loading in data, and the best method varies depending on the quantity of data you have and the regularity with which you plan to load it.
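As a minimal sketch, assuming the hypothetical employees data from the export example above (adjust the column names and types to match your actual Db2 schema):

    CREATE TABLE employees (
        emp_id     integer PRIMARY KEY,
        name       text NOT NULL,
        hire_date  date,
        updated_at timestamp
    );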
For simple, day-to-day data insertion, running INSERT queries against the database directly is the standard SQL method for getting data added. The Postgres documentation covers INSERT and related statements in detail.
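A basic example against the hypothetical employees table might look like this; note that Postgres lets you insert several rows in a single statement:

    -- Insert a single row
    INSERT INTO employees (emp_id, name, hire_date)
    VALUES (1001, 'Ada Lopez', '2023-03-15');

    -- Or several rows in one statement
    INSERT INTO employees (emp_id, name, hire_date)
    VALUES (1002, 'Raj Patel', '2023-04-02'),
           (1003, 'Mei Chen', '2023-04-20');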
For bulk insertions of data, which you will likely want to conduct if you have a high volume of data to load, other tools exist as well. This is where the COPY command becomes quite useful, as it allows you to load large sets of data into Postgres without needing to run a series of INSERT statements. COPY is documented in detail in the Postgres manual.
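As a sketch, assuming the delimited file produced by the Db2 EXPORT example above is CSV-compatible (Db2's DEL format is comma-delimited, but check quoting and date formats against your actual data):

    -- Server-side load: the file path is read by the Postgres server process
    COPY employees
    FROM '/tmp/employees.del'
    WITH (FORMAT csv);

If you're loading from a client machine with psql, the \copy meta-command does the same thing but reads the file from the client side instead.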
The Postgres documentation also provides a helpful overall guide for conducting fast data inserts, populating your database, and avoiding common pitfalls in the process.
Keeping Db2 data up to date
So you've written a script to export data from Db2 and load it into your data warehouse. That should satisfy all your data needs for Db2 – right? Not yet. How do you load new or updated data? It's not a good idea to replicate all of your data each time you have updated records. That process would be painfully slow; if latency is important to you, it's not a viable option.
Instead, you can identify some key fields that your script can use to bookmark its progression through the data, and pick up where it left off as it looks for updated data. Timestamp fields such as updated_at or created_at, or an auto-incrementing primary key, work best for this. When you've built in this functionality, you can set up your script as a cron job or continuous loop to get new data as it appears in Db2.
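A sketch of such a bookmarked query, assuming the hypothetical employees table carries an updated_at column and that :last_run is a placeholder your script fills in with the bookmark it saved on its previous pass:

    -- Fetch only rows changed since the last successful run
    SELECT emp_id, name, hire_date, updated_at
    FROM employees
    WHERE updated_at > :last_run
    ORDER BY updated_at;

After each run, the script stores the largest updated_at value it saw and supplies it as :last_run the next time around.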
Other data warehouse options
PostgreSQL is great, but sometimes you need to optimize for different things when you're choosing a data warehouse. Some folks choose to go with Amazon Redshift, Google BigQuery, Snowflake, or Microsoft Azure SQL Data Warehouse, which are RDBMSes that use similar SQL syntax, or Panoply, which works with Redshift instances. Others choose a data lake, like Amazon S3. If you're interested in seeing the relevant steps for loading data into one of these platforms, check out To Redshift, To BigQuery, To Snowflake, To Panoply, To Azure SQL Data Warehouse, and To S3.
Easier and faster alternatives
If all this sounds a bit overwhelming, don't be alarmed. If you have all the skills necessary to go through this process, chances are building and maintaining a script like this isn't a very high-leverage use of your time.
Thankfully, products like Stitch were built to move data from Db2 to PostgreSQL automatically. With just a few clicks, Stitch starts extracting your Db2 data via the API, structuring it in a way that's optimized for analysis, and inserting that data into your PostgreSQL data warehouse.