
I have to load a large amount of data from a file into a PostgreSQL table. I know PostgreSQL does not support 'IGNORE' or 'REPLACE' as MySQL does. Almost all posts on the web regarding this suggest the same thing: dump the data into a temp table and then do an 'INSERT ... SELECT ... WHERE NOT EXISTS ...'.

This will not help in one case: when the file data itself contains duplicate primary keys. Does anybody have an idea on how to handle this in PostgreSQL?

P.S. I am doing this from a Java program, if it helps.

Kam

4 Answers


Use the same approach you described, but DELETE (or group, or otherwise modify) the duplicate PKs in the temp table before loading it into the main table.

Something like:

CREATE TEMP TABLE tmp_table 
ON COMMIT DROP
AS
SELECT * 
FROM main_table
WITH NO DATA;

COPY tmp_table FROM 'full/file/name/here';

INSERT INTO main_table
SELECT DISTINCT ON (PK_field) *
FROM tmp_table
ORDER BY PK_field, some_fields;

Details: CREATE TABLE AS, COPY, DISTINCT ON
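The DELETE variant mentioned above can be sketched using PostgreSQL's system column ctid, which identifies each physical row; PK_field stands for whatever your key column is:

-- Remove all but one row per key from the temp table,
-- keeping the row with the highest ctid for each PK_field.
DELETE FROM tmp_table a
USING tmp_table b
WHERE a.PK_field = b.PK_field
  AND a.ctid < b.ctid;

Note that ctid gives no meaningful ordering of the file's rows, so this keeps an arbitrary duplicate; use DISTINCT ON with an ORDER BY if you need to pick a specific one.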

Jayen
Ihor Romanchenko
  • When I try the copy command I get ERROR: relation "tmp_table" does not exist – Nate Mar 13 '14 at 20:07
  • @Nate Have you created `tmp_table` before you executed the `COPY` command? – Ihor Romanchenko Mar 13 '14 at 20:48
  • @Nate When using psql or another client with auto-commit, just omit `ON COMMIT DROP` from the first statement and do a `DROP TABLE tmp_table;` at the end. – Amir Ali Akbari Jan 06 '15 at 16:40
  • Tweak: use upsert on more modern PostgreSQL, e.g. `ON CONFLICT DO NOTHING` for the INSERT statement. – Joseph Lust Apr 11 '17 at 22:29
  • Joe has a great point; the original solution protects against duplicates in the source file, but not against duplicates between the source file and the target table. An INSERT with ON CONFLICT DO NOTHING protects against both. It is probably wise to do a quick `SELECT COUNT(*) FROM tmp_table;` to compare with the result of the INSERT and see how many rows were skipped. – Matthew Mark Miller Aug 15 '17 at 15:59
  • Wrap the answer's code in a transaction (`begin; ...; commit;`) so that `ON COMMIT DROP` drops the temp table at the end of the transaction. Otherwise the table is dropped immediately, before the `COPY` and `INSERT INTO` get a chance to run; hence "tmp_table does not exist". – danneu Apr 21 '18 at 00:00
  • Will this ignore all the records that violate the PK constraint, instead of including one of the duplicate records? `SELECT DISTINCT ON (PK_field)` – raw-bin hood Jul 26 '20 at 22:07
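As danneu's comment points out, the ON COMMIT DROP variant only behaves as intended inside an explicit transaction; a minimal sketch of the full sequence (the file path is a placeholder):

BEGIN;

-- Temp table with main_table's structure; dropped at COMMIT.
CREATE TEMP TABLE tmp_table
ON COMMIT DROP
AS
SELECT *
FROM main_table
WITH NO DATA;

COPY tmp_table FROM 'full/file/name/here';

-- Keep one row per key; ORDER BY decides which one wins.
INSERT INTO main_table
SELECT DISTINCT ON (PK_field) *
FROM tmp_table
ORDER BY PK_field, some_fields;

COMMIT;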

PostgreSQL 9.5 now has upsert functionality. You can follow Ihor's instructions, except that the final INSERT includes the clause ON CONFLICT DO NOTHING.

INSERT INTO main_table
SELECT *
FROM tmp_table
ON CONFLICT DO NOTHING;
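Note that DO NOTHING also skips a row that conflicts with a row inserted earlier in the same statement, so this handles duplicates inside the file as well as duplicates against the existing table. With an explicit conflict target (assuming the key column is named PK_field):

-- Only conflicts on PK_field's unique constraint are ignored;
-- any other error is still raised.
INSERT INTO main_table
SELECT *
FROM tmp_table
ON CONFLICT (PK_field) DO NOTHING;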
jva
Barrel Roll

Ihor's answer helped me a lot, but I also ran into the problem Nate mentioned in his comment. In addition, perhaps going beyond the original question, my new data not only contained duplicates internally but also duplicated rows already in the table. What worked for me was the following.

CREATE TEMP TABLE tmp_table AS SELECT * FROM newsletter_subscribers;
COPY tmp_table (name, email) FROM stdin DELIMITER ' ' CSV;
SELECT count(*) FROM tmp_table;  -- Just to be sure
TRUNCATE newsletter_subscribers;
INSERT INTO newsletter_subscribers
    SELECT DISTINCT ON (email) * FROM tmp_table
    ORDER BY email, subscription_status;
SELECT count(*) FROM newsletter_subscribers;  -- Paranoid again

In tmp_table, internal and external duplicates sit side by side, and the DISTINCT ON (email) part then removes them. The ORDER BY makes sure that the desired row comes first in the result set; DISTINCT ON keeps that first row and discards all further rows with the same email.

Denis Drescher

Insert into a temp table grouped by the key, so that the duplicates are collapsed, and then insert into the main table only where the key does not already exist.
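This two-step idea can be sketched as follows; the key column pk_field and payload column payload are hypothetical names, and min() is just one way to pick a value among duplicates:

-- Step 1: collapse duplicates from the loaded data by grouping on the key.
CREATE TEMP TABLE tmp_dedup AS
SELECT pk_field, min(payload) AS payload
FROM tmp_table
GROUP BY pk_field;

-- Step 2: insert only keys not already present in the main table.
INSERT INTO main_table (pk_field, payload)
SELECT d.pk_field, d.payload
FROM tmp_dedup d
WHERE NOT EXISTS (
    SELECT 1
    FROM main_table m
    WHERE m.pk_field = d.pk_field
);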

Jester