select substring(query,0,90),state,query_start,pid from pg_stat_activity where datname='DBNAME' order by query_start;
SELECT blocked_locks.pid AS blocked_pid,
blocking_locks.pid AS blocking_pid,
blocking_activity.state AS blocking_state,
blocking_activity.query_start AS blocking_query_start,
substring(blocked_activity.query,0,60) AS blocked_statement,
substring(blocking_activity.query,0,60) AS current_statement_in_blocking_process,
blocked_activity.datname AS db
FROM pg_catalog.pg_locks blocked_locks
JOIN pg_catalog.pg_stat_activity blocked_activity ON blocked_activity.pid = blocked_locks.pid
JOIN pg_catalog.pg_locks blocking_locks
ON blocking_locks.locktype = blocked_locks.locktype
AND blocking_locks.DATABASE IS NOT DISTINCT FROM blocked_locks.DATABASE
AND blocking_locks.relation IS NOT DISTINCT FROM blocked_locks.relation
AND blocking_locks.page IS NOT DISTINCT FROM blocked_locks.page
AND blocking_locks.tuple IS NOT DISTINCT FROM blocked_locks.tuple
AND blocking_locks.virtualxid IS NOT DISTINCT FROM blocked_locks.virtualxid
AND blocking_locks.transactionid IS NOT DISTINCT FROM blocked_locks.transactionid
AND blocking_locks.classid IS NOT DISTINCT FROM blocked_locks.classid
AND blocking_locks.objid IS NOT DISTINCT FROM blocked_locks.objid
AND blocking_locks.objsubid IS NOT DISTINCT FROM blocked_locks.objsubid
AND blocking_locks.pid != blocked_locks.pid
JOIN pg_catalog.pg_stat_activity blocking_activity ON blocking_activity.pid = blocking_locks.pid
WHERE NOT blocked_locks.GRANTED ORDER BY blocking_activity.query_start;
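Once a blocking pid has been identified by the query above, it can be dealt with directly. A hedged example (12345 is a placeholder pid):

```sql
-- Ask the backend to cancel its current query (the gentler option)
SELECT pg_cancel_backend(12345);
-- If that is not enough, terminate the backend's connection entirely
SELECT pg_terminate_backend(12345);
```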
Permissions and pg_hba
Permissions are controlled by the pg_hba.conf file.
To find out where this is, run:
show hba_file;
Normally somewhere like /var/lib/pgsql/10/data/pg_hba.conf
I have seen this set up to allow all local users to get access:
# IPv4 local connections:
host all all 127.0.0.1/32 trust
# Default is host all all 127.0.0.1/32 ident
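For password-based access instead of trust, something like the following line can be used (the address and method here are illustrative; newer Postgres versions prefer scram-sha-256 over md5):

```
# TYPE  DATABASE  USER  ADDRESS       METHOD
host    all       all   127.0.0.1/32  md5
```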
Data
Postgres stores its data in files in the data folder.
The default is something like this:
/var/lib/postgresql/9.5/main
Also seen /var/lib/pgsql/9.6/data/; the compiled-in default is /usr/local/pgsql/data
Run this to find out the actual location:
SHOW data_directory;
New Setup
After installing
e.g. for postgres 10.
# Init DB (using default data folder)
sudo service postgresql-10 initdb
sudo service postgresql-10 start
exit
sudo su - postgres
cp /var/lib/pgsql/10/data/pg_hba.conf /var/lib/pgsql/10/data/pg_hba.conf.orig
# See below for allowing local users to login as postgres user
vi /var/lib/pgsql/10/data/pg_hba.conf
sudo service postgresql-10 reload
exit
sudo su -
psql -U postgres
> CREATE ROLE <rolename> NOSUPERUSER CREATEDB CREATEROLE INHERIT LOGIN;
> ALTER USER <rolename> WITH PASSWORD '<password>';
> \q
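A typical follow-up, assuming the new role should own its own database (mydb and myrole are placeholder names):

```sql
-- Create a database owned by the new role
CREATE DATABASE mydb OWNER myrole;
-- Grant the role full rights on that database
GRANT ALL PRIVILEGES ON DATABASE mydb TO myrole;
```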
Backup/Restore
SQL Dump
The idea behind this dump method is to generate a file with SQL commands that, when fed back to the server, will recreate the database in the same state as it was at the time of the dump. PostgreSQL provides the utility program pg_dump for this purpose. The basic usage of this command is:
pg_dump dbname > dumpfile
Can be more specific, e.g. -n to back up an individual schema, -t for an individual table:
pg_dump -Fc %DATABASE% -f %DUMP_FILE_PATH%
As you see, pg_dump writes its result to the standard output. We will see below how this can be useful. While the above command creates a text file, pg_dump can create files in other formats that allow for parallelism and more fine-grained control of object restoration.
Restore
Non-text file dumps are restored using the pg_restore utility (e.g. pg_restore -d dbname dumpfile). Text files can be fed to psql:
psql dbname < dumpfile
where dumpfile is the file output by the pg_dump command. The database dbname will not be created by this command, so you must create it yourself from template0 before executing psql (e.g., with createdb -T template0 dbname)
By default, the psql script will continue to execute after an SQL error is encountered. You might wish to run psql with the ON_ERROR_STOP variable set to alter that behavior and have psql exit with an exit status of 3 if an SQL error occurs:
psql --set ON_ERROR_STOP=on dbname < dumpfile
Either way (continuing past errors or stopping at the first one), you will only have a partially restored database.
pg_dump dumps only a single database at a time, and it does not dump information about roles or tablespaces (because those are cluster-wide rather than per-database). To support convenient dumping of the entire contents of a database cluster, the pg_dumpall program is provided.
pg_dumpall backs up each database in a given cluster, and also preserves cluster-wide data such as role and tablespace definitions. The basic usage of this command is:
pg_dumpall > dumpfile
The resulting dump can be restored with psql:
psql -f dumpfile postgres
Barman
Barman will back up the databases configured on a DB server.
Options: streaming (preferred) vs rsync.
This is a simple script we used to keep a certain number of folders (backups). Note it will only delete one folder (the oldest) at a time, so if you have many more folders than the limit, you may need to delete some manually first.
#!/bin/bash
dir="<barmanDir>/Local/base/"
min_dirs=3 # If there are at least this many dirs we will delete the oldest
if [[ $(find "$dir" -mindepth 1 -maxdepth 1 -type d | wc -l) -ge $min_dirs ]]; then
    # Emit "<mtime> <path>" NUL-terminated, sort oldest first, read the first record
    IFS= read -r -d $'\0' line < <(find "$dir" -mindepth 1 -maxdepth 1 -type d -printf '%T@ %p\0' 2>/dev/null | sort -z -n)
    file="${line#* }"
    ls -lLd "$file"
    rm -rf "$file"
fi
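The find/sort trick in the script can be tried safely on throwaway directories first. A self-contained sketch (all paths are temporary, created just for the demo):

```shell
#!/bin/bash
# Create throwaway dirs with staggered mtimes, then pick the oldest.
dir=$(mktemp -d)
mkdir "$dir/backup_old" "$dir/backup_mid" "$dir/backup_new"
touch -d '3 days ago' "$dir/backup_old"
touch -d '1 day ago'  "$dir/backup_mid"
# Emit "<mtime> <path>" NUL-terminated, sort numerically (oldest first),
# and read only the first record.
IFS= read -r -d '' line < <(find "$dir" -mindepth 1 -maxdepth 1 -type d -printf '%T@ %p\0' | sort -z -n)
oldest="${line#* }"
echo "$oldest"
rm -rf "$dir"
```

Here `echo` prints the path ending in backup_old, since that directory has the earliest mtime.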
Starting/stopping
service postgresql-9.6 initdb
chkconfig postgresql-9.6 on
service postgresql-9.6 start