Wednesday, June 01, 2011

pg_dump and slowness

I have been trying to make a backup of a very big PostgreSQL database (around 165 GB) for two days now, and at last I found out my mistake: never take a dump to the same disk. it eats a lot of IO and kills every service that depends on the dumped database.

On the first try I dumped the database to the same disk, and after around 6 hours the web server started giving timeouts and lovely SiteScope mails :) and I had to kill that process.

Then I read a lot and started dumping the database to another machine instead. it was smooth, took around 4 hours, and the web server did not give a single timeout.

example commands

pg_dump -Fc dbname > db.backup
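
and since in my case the dump went to another machine, something like this is the idea, a rough sketch only: dbhost and dbuser are placeholders for your own connection details, run it from the backup machine so the write IO lands on its disk

pg_dump -h dbhost -U dbuser -Fc dbname > db.backup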

and I started the restore like this

pg_restore -d dbname db.backup

Before that I needed to recreate the db from psql, and the restore has not finished yet. I must tell that pg_restore has a -j parameter which runs several parallel jobs while restoring the dump, and you can give the number of CPU cores to -j so it works faster.
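
to make that concrete, a small sketch of what I mean, not exactly what I ran: createdb is one way to recreate the empty database, and the job count here is just an example you should match to your CPU cores

createdb -T template0 dbname
pg_restore -j 4 -d dbname db.backup

the parallel -j restore works because the dump was taken in custom format (-Fc); a plain SQL dump cannot be restored in parallel like this.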

