
A primitive shell for running PostgreSQL queries from the console. Make sure you set the right connection settings in the script.

Data-modifying statements (insert, update, delete, drop) are executed in autocommit mode.
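For reference, this is roughly what the autocommit setup in the script boils down to; the connection values are the same placeholders used below, so adjust them to your server:

import psycopg2
import psycopg2.extensions

con = psycopg2.connect(database='postgres', user='root',
                       host='127.0.0.1', password='root')
# the script uses set_isolation_level(); psycopg2 2.4.2 and later
# can use the equivalent con.autocommit = True instead
con.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)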

Example command line usage:

pg.py select * from mytable where oidh=1

The script recognizes a number of magic keywords:

pg.py show databases
pg.py show schemas
pg.py describe <table_name>
#!/usr/bin/env python
# -*- coding: utf-8 -*-

import re
import sys
import psycopg2
import psycopg2.extras
import psycopg2.extensions

DATABASE_HOST = '127.0.0.1'
DATABASE_NAME = "postgres"
SCHEMA = 'postgres'
USER_NAME = 'root'
PASSWORD = 'root'
SHOW_WHITE_SPACE = True # set to False if you need to trim column data

# show table names when called without args
sql = """SELECT table_name FROM information_schema.tables where table_schema = '%s'\n""" % SCHEMA
if len(sys.argv) > 1:
    sql = (" ".join(sys.argv[1:])).strip()

if sql == 'show databases':
    sql = 'select datname as database from pg_database'

if sql == 'show schemas':
    sql = 'select nspname as schema from pg_namespace'

if sql.startswith('describe'):
    table_name = sql[len('describe'):].strip()
    sql = """select ordinal_position as index, column_name as column,
                    is_nullable as allow_null, data_type as type
             from information_schema.columns
             where table_name = '%s'
             order by ordinal_position""" % table_name

sys.stderr.write('Connecting to %s...\n' % DATABASE_HOST)
con = psycopg2.connect(database=DATABASE_NAME, user=USER_NAME, host=DATABASE_HOST, password=PASSWORD)

# enable autocommit for data-modifying statements
if re.search(r'\b(update|insert|delete|drop)\b', sql.lower()):
    sys.stderr.write('Enabling autocommit...\n')
    con.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)

cur = con.cursor(cursor_factory=psycopg2.extras.DictCursor)
sys.stderr.write("Executing: '%s'\n" % sql)
cur.execute(sql)

rs = []

try:
    rs = cur.fetchall()
except psycopg2.ProgrammingError:
    # statements that return no rows (e.g. insert/update) raise ProgrammingError here
    sys.stderr.write('Nothing to fetch.\n')

if not rs:
    sys.stderr.write('No results.\n')
    sys.exit(0)

first_row = rs[0]

# recover column names in query order from the DictRow's internal name -> index map
column_names_with_indexes = [item for item in first_row._index.iteritems()]
sorted_column_names_pairs = sorted(column_names_with_indexes, key=lambda k: k[1])
sorted_column_names = [p[0] for p in sorted_column_names_pairs]

for row in rs:
    row_data = []
    for idx, col in enumerate(row):
        # quote values so that leading/trailing whitespace stays visible
        data = "'" + str(col) + "'" if SHOW_WHITE_SPACE else str(col).strip()
        row_data.append(sorted_column_names[idx].ljust(16) + ': ' + data)
    if len(row) > 1:
        sys.stdout.write('-' * 40 + '\n')
    sys.stdout.write(("\n".join(row_data)).encode('us-ascii', 'xmlcharrefreplace'))
    sys.stdout.write("\n")
    # alternative: print ("\n".join(row_data)).encode('unicode_escape')

cur.close()
con.close()
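Note: the column names above come from the DictRow's private _index attribute. If you prefer not to rely on psycopg2 internals, the standard DB-API cursor.description attribute yields the same names in query order; a minimal sketch using the same cursor as in the script:

cur.execute(sql)
# description is a sequence of column metadata tuples; item [0] is the column name
sorted_column_names = [desc[0] for desc in cur.description]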

2 comments

Kent Johnson 13 years, 5 months ago

Why not just use psql?

ccpizza (author) 13 years, 5 months ago

@Kent:

The server is in many cases on a different machine, and using psql requires an installed and properly configured PostgreSQL client on the connecting machine. Also, when using the standard client you need to enter credentials each time.

Another reason I prefer the Python script above is that I can easily format the output exactly the way I want.

For example, if there are lots of columns in a table, instead of this:

-------------------------------
| oidh | status | name | year |
| 0    |   A    | jiji | 2010 | 
| 1    |   I    | jicu | 2010 |
-------------------------------

I would rather have this:

------------------
| oidh    | 0    |
| status  | A    |
| name    | jiji |
| year    | 2010 |
------------------
| oidh    | 1    |
| status  | I    |
| name    | jicu |
| year    | 2010 |
------------------
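A minimal sketch of that one-column-per-line layout, using the sample values from the table above (the helper name and column widths are just illustrative):

def print_record(pairs, name_width=8, value_width=5):
    # pairs: list of (column_name, value) tuples for a single row
    line = '-' * (name_width + value_width + 7)
    print(line)
    for name, value in pairs:
        print('| %s | %s |' % (name.ljust(name_width), str(value).ljust(value_width)))
    print(line)

print_record([('oidh', 0), ('status', 'A'), ('name', 'jiji'), ('year', 2010)])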