
I am trying to copy an entire table from one database to another in Postgres. Any suggestions?

asked Jul 7, 2010 at 13:27 by nix
  • If you're okay with installing DBeaver, it has a really simple way of transferring between two databases you're connected to. Just right-click the source table, select Export Data, target "Database table(s)", and set the target as the destination database. (Commented Mar 22, 2020 at 7:55)
  • @rovyko I'm trying to do the same thing in DBeaver but using dynamic SQL. Please let me know if you know how to do it. (Commented Aug 2, 2022 at 8:30)

30 Answers


Extract the table and pipe it directly to the target database:

pg_dump -t table_to_copy source_db | psql target_db

Note: If the other database already has the table set up, you should use the -a flag to import data only, else you may see weird errors like "Out of memory":

pg_dump -a -t table_to_copy source_db | psql target_db
answered May 23, 2013 at 8:05 by thomax

13 Comments

How will this work for remote-db links? E.g., I need to dump from a different location.
@curlyreggie haven't tried this, but I see no reason why it wouldn't work. Try adding user and server specifics to the command, like so: pg_dump -U remote_user -h remote_server -t table_to_copy source_db | psql target_db
@thomax This worked despite an error thrown: ERROR: role "remote_user" does not exist.
You can try this: "pg_dump -U remote_user -h remote_server -t table_to_copy source_db | psql target_db -U remote_user -h remote_server"
Note that if the other database already has the table set up, you should use the -a flag for data only, i.e. pg_dump -a -t my_table my_db | psql target_db. While I'm here: if your database is on a server, I find it easier to just dump the database to a file, scp that file to the server, then send the contents of the file to psql. E.g. pg_dump -a -t my_table my_db > my_file.sql, and after putting that on your server: psql my_other_db < my_file.sql
@EamonnKenny to dump a case-sensitive table, do: pg_dump -t '"tableToCopy"' source_db | psql target_db. Note that single AND double quotes surround the table name.

You can also use the backup functionality in pgAdmin III. Just follow these steps:

  • In pgAdmin, right-click the table you want to move and select "Backup"
  • Pick the directory for the output file and set Format to "plain"
  • Click the "Dump Options #1" tab and check "Only data" or "Only schema" (depending on what you are doing)
  • Under the Queries section, check "Use Column Inserts" and "Use Insert Commands"
  • Click the "Backup" button. This outputs to a .backup file
  • Open this new file using Notepad. You will see the insert scripts needed for the table/data. Copy and paste these into the new database's SQL page in pgAdmin. Run as pgScript: Query -> Execute as pgScript (F6)

Works well and can do multiple tables at a time.
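For reference, the "plain" format with column inserts corresponds roughly to pg_dump's insert flags; a rough command-line equivalent of the GUI steps above (table and database names are placeholders):

```shell
# Roughly equivalent to the pgAdmin "plain" backup with column inserts:
# --column-inserts emits one INSERT per row with explicit column names.
pg_dump --column-inserts --data-only -t my_table source_db > inserts.sql
# Replay the generated INSERTs into the target database.
psql target_db < inserts.sql
```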

answered Sep 17, 2012 at 19:15 by a2ron44

7 Comments

This is a good GUI-based solution for moving data between databases. Thanks!
You can select multiple tables under the Objects section. On OSX, click the SQL button or get the SQL Editor via the Tools menu to paste in the SQL copied from the backup file.
Works, thanks. Very slow though on big tables... is there a better way to speed it up? (like ignoring foreign keys or something?)
@Timothy Here's the Postgres documentation page on how to speed up backing up and restoring
Old answer but still relevant; works great. Just don't forget to set "Disable triggers" when exporting the whole database.

Using dblink would be more convenient!

truncate table tableA;

insert into tableA
  select *
  from dblink('hostaddr=xxx.xxx.xxx.xxx dbname=mydb user=postgres',
              'select a,b from tableA')
       as t1(a text, b text);
answered Jul 8, 2010 at 1:45 by tinychen

5 Comments

Why two dbnames, in two places? Which one is the source and which is the target?
The tableA that we are inserting into is the destination, and the tableA in the dblink is the source.
What if I want to use dblink but I don't know the structure of the source table?
@Ossarotte hey, did you find the answer to your question?
@IbrahimNoor You can also use CREATE TABLE x AS (SELECT * FROM ...); Note that it makes more sense to use CREATE SERVER, nowadays.

Using psql, on a Linux host that has connectivity to both servers:

( export PGPASSWORD=password1
  psql -U user1 -h host1 database1 \
    -c "copy (select field1,field2 from table1) to stdout with csv" ) \
| ( export PGPASSWORD=password2
    psql -U user2 -h host2 database2 \
      -c "copy table2 (field1, field2) from stdin csv" )
answered Oct 31, 2013 at 1:54 by Alexey Sviridov

4 Comments

No need for export: PGPASSWORD=password1 psql -U ... Then you don't even need explicit subshells! Ordinarily, you'll want to do a couple of things to set up first, so subshells may be necessary anyway. Also, the passwords won't be exported into subsequent processes. Thanks!
@LimitedAtonement Actually you're right, export and subshells aren't necessary. It's just part of a more complicated script, and I didn't even try it without export and subshells, so I provide it as-is only to be honest and provide a solution that worked.
The table must exist in the destination DB. To create it, try pg_dump -t '<table_name>' --schema-only
Put passwords in ~/.pgpass.

First, install dblink.

Then, you would do something like:

INSERT INTO t2
SELECT * FROM dblink('host=1.2.3.4 user=***** password=****** dbname=D1',
                     'select * from t1')
  AS tt(
       id int,
       col_1 character varying,
       col_2 character varying,
       col_3 int,
       col_4 varchar
  );
answered Apr 15, 2015 at 9:41

2 Comments

This answer is great because it allows one to filter copied rows (add a WHERE clause in the dblink second argument). However, one needs to be explicit about column names (Postgres 9.4) with something like: INSERT INTO l_tbl (l_col1, l_col2, l_col3) SELECT * FROM dblink('dbname=r_db hostaddr=r_ip password=r_pass user=r_usr', 'select r_col1, r_col2, r_col3 from r_tbl where r_col1 between ''2015-10-29'' AND ''2015-10-30'' ') AS t1(col1 MACADDR, col2 TIMESTAMP, col3 NUMERIC(7,1)); (l means local, r is remote. Escape single quotes. Provide column types.)
Just noting that you should use CREATE SERVER, nowadays.

If both servers are remote, you can do this:

pg_dump -U Username -h DatabaseEndPoint -a -t TableToCopy SourceDatabase | psql -h DatabaseEndPoint -p portNumber -U Username -W TargetDatabase

It will copy the named table from the source database into a table of the same name in the target database, assuming the schema already exists there.

answered Aug 18, 2016 at 7:17 by Piyush S. Wanare

Comments


Use pg_dump to dump table data, and then restore it with psql.
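For anyone who wants the commands spelled out, a minimal two-step sketch (table and database names are placeholders):

```shell
# Dump one table (schema + data) to a file...
pg_dump -t my_table source_db > my_table.sql
# ...then replay it into the target database with psql.
psql target_db < my_table.sql
```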

answered Jul 7, 2010 at 13:29 by Pablo Santa Cruz

6 Comments

Then connect using another database role, one that has enough permissions. postgresql.org/docs/8.4/static/app-pgdump.html
What am I doing wrong? pg_dump -t "tablename" dbName --role "postgres" > db.sql. "postgres" would be the user I'm trying to set the role to. It still gives me "Access is denied".
Do you have permissions to write the db.sql file?
How do I check what permissions I have?
not really a helpful answer, given that the other answers show you how to use pg_dump
|

Here is what worked for me. First, dump to a file:

pg_dump -h localhost -U myuser -C -t my_table -d first_db > /tmp/table_dump

then load the dumped file:

psql -U myuser -d second_db < /tmp/table_dump
answered Nov 4, 2016 at 22:27 by max

2 Comments

For the load you also need "-h localhost"
I got an error «ERROR: syntax error at or near "ÿ_" --LINE 1: ÿ_-» during the load. The solution was to use cmd rather than PowerShell.

You could do the following:

pg_dump -h <host ip address> -U <host db user name> -t <host table> <host database> | psql -h localhost -d <local database> -U <local db user>

answered Mar 24, 2018 at 19:21 by g0x11

2 Comments

Would you like to say something about it?
That's legit 😂 you owe me

To move a table from database A to database B at your local setup, use the following command:

pg_dump -h localhost -U owner-name -p 5432 -C -t table-name database1 | psql -U owner-name -h localhost -p 5432 database2
answered Nov 9, 2015 at 12:56 by RKT

2 Comments

I tried it. This does not work because you can only give it the first password.
@max you can do export PGPASSWORD=<passw> before running the command

Combining this answer and this answer, which is more convenient as you don't need to specify the columns:

TRUNCATE TABLE tableA;

INSERT INTO tableA
SELECT (rec).*
FROM dblink('hostaddr=xxx.xxx.xxx.xxx dbname=mydb user=postgres',
            'SELECT myalias FROM tableA myalias')
     AS t1(rec tableA);
answered May 5, 2023 at 4:36 by pkExec

2 Comments

👆 Note this answer. It's buried but actually does everything in one go! Thanks @pkExec!
You may have to create the dblink extension: CREATE EXTENSION IF NOT EXISTS dblink;

Same as the answers by user5542464 and Piyush S. Wanare, but split in two steps:

pg_dump -U Username -h DatabaseEndPoint -a -t TableToCopy SourceDatabase > dump
cat dump | psql -h DatabaseEndPoint -p portNumber -U Username -W TargetDatabase

Otherwise the pipe asks for the two passwords at the same time.

answered Sep 8, 2016 at 7:57 by Adobe

1 Comment

Is there a possibility that I can mention the target database's table name?

I was using DataGrip (by IntelliJ IDEA), and it was very easy to copy data from one table to another (in a different database).

First, make sure you are connected with both DataSources in Data Grip.

Select Source Table and press F5 or (Right-click -> Select Copy Table to.)

This will show you a list of all tables (you can also search using a table name in the popup window). Just select your target and press OK.

DataGrip will handle everything else for you.

answered Feb 6, 2020 at 13:48 by Hammad Tariq

2 Comments

Please note, DataGrip is not free!
This functionality is also part of IntelliJ Ultimate (also not free), but something that many people may already have.

pg_dump does not always work.

Given that you have the same table DDL in both DBs, you can hack it from stdout and stdin as follows:

# grab the list of cols straight from bash
psql -d "$src_db" -t -c \
  "SELECT column_name
   FROM information_schema.columns
   WHERE 1=1
   AND table_name='"$table_to_copy"'"
# ^^^ filter autogenerated cols if needed

psql -d "$src_db" -c \
  "copy (SELECT col_1, col2 FROM table_to_copy) TO STDOUT" |\
psql -d "$tgt_db" -c "\copy table_to_copy (col_1, col2) FROM STDIN"
answered Jul 28, 2017 at 16:02 by Yordan Georgiev

Comments


I tried some of the solutions here and they were really helpful. In my experience the best solution is to use the psql command line, but sometimes I don't feel like using it. So here is another solution for pgAdmin III:

create table table1 as (
  select t1.*
  from dblink(
    'dbname=dbSource user=user1 password=passwordUser1',
    'select * from table1'
  ) as t1(
    fieldName1 bigserial,
    fieldName2 text,
    fieldName3 double precision
  )
);

The problem with this method is that the names of the fields and their types in the table you want to copy must be written out.

answered Nov 12, 2015 at 10:12 by Eloy A

Comments


Check this Python script:

python db_copy_table.py "host=192.168.1.1 port=5432 user=admin password=admin dbname=mydb" "host=localhost port=5432 user=admin password=admin dbname=mydb" alarmrules -w "WHERE id=19" -v
Source number of rows = 2
INSERT INTO alarmrules (id,login,notifybyemail,notifybysms) VALUES (19,'mister1',true,false);
INSERT INTO alarmrules (id,login,notifybyemail,notifybysms) VALUES (19,'mister2',true,false);
answered Feb 27, 2019 at 10:42 by themadmax

Comments


As an alternative, you could also expose your remote tables as local tables using the foreign data wrapper extension. You can then insert into your tables by selecting from the tables in the remote database. The only downside is that it isn't very fast.
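A minimal sketch of that approach using the postgres_fdw extension (all hostnames, credentials, and table names here are placeholders):

```sql
-- On the destination database: expose the remote table locally.
CREATE EXTENSION IF NOT EXISTS postgres_fdw;

CREATE SERVER src_server FOREIGN DATA WRAPPER postgres_fdw
  OPTIONS (host '10.0.0.1', port '5432', dbname 'source_db');

CREATE USER MAPPING FOR CURRENT_USER SERVER src_server
  OPTIONS (user 'remote_user', password 'secret');

-- Pull in the foreign table definition, then copy its rows.
IMPORT FOREIGN SCHEMA public LIMIT TO (my_table)
  FROM SERVER src_server INTO public;

INSERT INTO local_table SELECT * FROM my_table;
```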

answered Jun 10, 2019 at 16:33 by ThatDataGuy

Comments


For DBeaver tool users, you can "Export data" to a table in another database.


The only error I kept facing was because of the wrong Postgres driver:

SQL Error [34000]: ERROR: portal "c_2" does not exist
ERROR: Invalid protocol sequence 'P' while in PortalSuspended state.

Here is the official wiki on how to export data: https://github.com/dbeaver/dbeaver/wiki/Data-transfer

answered May 10, 2021 at 21:33 by prayagupadhyay

1 Comment

can upsert work here?

If both DBs (from & to) are password protected, the terminal won't ask for both passwords; the password prompt will appear only once. To fix this, pass the passwords along with the commands:

PGPASSWORD=<password> pg_dump -h <hostIpAddress> -U <hostDbUserName> -t <hostTable> <hostDatabase> | PGPASSWORD=<pwd> psql -h <toHostIpAddress> -d <toDatabase> -U <toDbUser>
answered Apr 18, 2019 at 7:33 by Dante

Comments


You can do it in two simple steps:

# dump the database in custom-format archive
pg_dump -Fc mydb > db.dump

# restore the database
pg_restore -d newdb db.dump

In case of remote databases:

# dump the table in custom-format archive
pg_dump -U mydb_user -h mydb_host -t table_name -Fc mydb > db.dump

# restore the table
pg_restore -U newdb_user -h newdb_host -d newdb db.dump
answered Jan 22, 2022 at 0:11 by OM Bharatiya

1 Comment

+1 I had to do it this way because with regular pg_dump (i.e. pg_dump > table.sql) I was getting this error due to wrongly-encoded NULL columns: "pg_dump restore error: invalid command \N". Using -Fc aka custom format solves it by compressing the dump file.

Having done this wrong several times, I'll contribute a solution to SAFELY and RELIABLY copy a table from one remote DB to another. A lot can go wrong between the dump and restore. For clarity, some additional criteria for this solution:

  • Copy only one table
  • Does not delete anything in either source/dest database
  • Makes sure the id sequence resumes in the to_table, instead of resetting to 1
  • Avoids drop table or --clean mistakes from hasty copy-paste
  • Separates dump and restore into two different steps
  • Allows flexibility in customizing the to_table (different indexes, etc)
  • Both databases are remote
  • Each database has a different hostname, port, username, pass

Prerequisites: get pg_dump, pg_restore, psql matching the remote DB version

# Figure out which database version is running
#   to use the pg_dump, pg_restore with the version.
# Run the query:
#   select version()  -- PostgreSQL 14.10
# Then install the matching version
brew tap homebrew/versions
brew search postgresql@
brew install postgresql@14
# Later we can switch back
brew install postgresql@16

Export a table from the remote DB, including all large objects in the table:

# Dump from 10.0.1.123:1234
#
# -Fc uses "format custom", optimized for pg_restore
# -b include all large objects, i.e. blobs, bytea, etc
# -U username
# -h hostname
# -p port
# -a only include table data and large objects
# -t table name
# PGPASSWORD is the supported env var to pass in a password
PGPASSWORD="FROM-DB-PASSWORD" pg_dump -Fc -b -U FROM-DB-USERNAME -h 10.0.1.123 -p 1234 -a -t from_table from_db_name > from_table.dump

# Get the last id sequence for restore later
psql -h 10.0.1.123 -p 1234 -d from_db_name -U FROM-DB-USERNAME -W -c "select * from from_table_name_id_seq;"
# last_value == 9999

Import the table into the other remote DB:

# NO CLEAN, NO DROP/DELETE
#
# Safely create a table with a different name for now.
# This helps avoid copy-paste errors accidentally
#   importing back to or deleting things in from_db.
psql -h 10.0.1.456 -p 4567 -d to_db_name -U TO-DB-USERNAME -W -c "create table to_table (id bigserial not null primary key, . . . );"

# Restore to 10.0.1.456:4567
#
# -U username
# -h hostname
# -p port
# -a only include table data and large objects
# -t table name
# -d database name
PGPASSWORD="TO-DB_PASSWORD" pg_restore -h 10.0.1.456 -p 4567 -d to_db_name -U TO-DB-USERNAME -a -t to_table_name from_table.dump

# Restore the id sequence we got from the last export step above.
psql -h 10.0.1.456 -p 4567 -d to_db_name -U TO-DB-USERNAME -W -c "alter sequence to_table_name_id_seq restart with 9999;"

# Rename the table to match the from_table_name
psql -h 10.0.1.456 -p 4567 -d to_db_name -U TO-DB-USERNAME -W -c "alter table to_table_name rename to name_matching_from_table_name;"

# Cleanup
rm from_table.dump
answered Feb 26, 2024 at 14:26 by Josh Hibschman

Comments


You can use dblink to copy one table's data into a table in a different database. You have to install and configure the dblink extension to execute cross-database queries.

I have already created a detailed post on this topic. Please visit this link.
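A minimal dblink sketch (connection details and column names/types are placeholders; the column list in t(...) must match the remote query):

```sql
CREATE EXTENSION IF NOT EXISTS dblink;

-- Copy rows from source_table on the remote server into local_table.
INSERT INTO local_table (id, name)
SELECT id, name
FROM dblink('host=10.0.0.1 dbname=source_db user=postgres password=secret',
            'SELECT id, name FROM source_table')
     AS t(id integer, name text);
```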

answered Aug 22, 2015 at 11:42 by Anvesh

Comments


On my Mac, using a pipe (|) asked for two passwords at the same time, which didn't work. Here is what I did:

pg_dump -h {host} -U {user} -t {table} {db} | psql postgresql://{user}:{password}@{host}:{port}/{db}
answered Dec 5, 2023 at 4:32 by Daniel Olson

Comments


It can be done in a fairly simple manner. Just use the following command:

pg_dump -U <user_name> -t <table_name> <source_database> | psql -U <user_name> <targeted_database>

Replace the values in <> with your specific parameters, and remove the <> as well.

answered Jul 20, 2023 at 12:59 by hamza._.ghouri

Comments


If you want to copy data from a database on one server to a database on another server, you have to create a dblink connection between the two databases. Alternatively, you can export the table data to CSV and import it into the other database's table; the table fields should be the same as in the source table.
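The CSV route can be sketched with psql's \copy, which runs client-side so no server-side file access is needed (hosts and names are placeholders):

```shell
# Export the table to a local CSV file...
psql -h host1 -d source_db -c "\copy my_table TO 'my_table.csv' WITH (FORMAT csv, HEADER)"
# ...then load it into the identically-structured target table.
psql -h host2 -d target_db -c "\copy my_table FROM 'my_table.csv' WITH (FORMAT csv, HEADER)"
```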

answered Sep 13, 2022 at 12:34 by Alok Kumar Maurya

Comments


Without any piping, on Windows, you can use:

Dump (one line):

"C:\Program Files\PostgreSQL\14\bin\pg_dump.exe" --host="host-postgres01" --port="1234" --username="user01" -t "schema01.table01" --format=c -f "C:\Users\user\Downloads\table01_format_c.sql" "DB-01"

Restore (one line):

"C:\Program Files\PostgreSQL\14\bin\pg_restore.exe" --host="host-postgres02" --port="5678" --username="user02" -1 --dbname="DB-02" "C:\Users\user\Downloads\table01_format_c.sql"

You will be prompted for user passwords.

This solution will put the new table in a schema with the same name (schema01).

answered Jan 3, 2023 at 17:29 by scrollout

2 Comments

What is the -1 for?

For Postgres version >= 8.4.0, the below worked for me:

pg_dump -U user -h host --column-inserts --data-only --table=table_name database_name | psql -h host -p port -U user -W database_name
answered Dec 22, 2023 at 11:15 by JMXCODE

1 Comment

Thank you for your interest in contributing to the Stack Overflow community. This question already has quite a few answers, including one that has been extensively validated by the community. Are you certain your approach hasn't been given previously? If so, it would be useful to explain how your approach is different, under what circumstances your approach might be preferred, and/or why you think the previous answers aren't sufficient. Can you kindly edit your answer to offer an explanation?
bash -c "psql [postgres connection string B] -c 'TRUNCATE \"TABLE NAME\";' && pg_dump -a -t '\"TABLE NAME\"' -d [postgres connection string A] | psql [postgres connection string B]"

I'm on fish shell, sharing this if anyone is still struggling.

This does 3 things

  1. Truncate the destination table
  2. Dump the table from source DB
  3. Pipe to the destination DB via psql

This is in-memory data replication.

answered May 19, 2024 at 1:38 by Pencilcheck

Comments


If you run pgAdmin (Backup: pg_dump, Restore: pg_restore) from Windows, it will try to output the file by default to c:\Windows\System32, and that's why you will get a Permission/Access denied error, not because the user postgres is not elevated enough. Run pgAdmin as Administrator, or just choose an output location other than the Windows system folders.

answered Sep 8, 2018 at 12:55 by Imre

Comments


Just use CREATE TABLE:

CREATE TABLE new_table AS TABLE existing_table;
answered Apr 3, 2024 at 15:47 by Elmer Ortega

1 Comment

how does this command span two different databases, as posed in the question?
