This method works well if you only have a few rows to insert (typically fewer than 1000). If you have more rows to insert, I strongly advise using one of the bulk insert methods benchmarked here.
For a fully functioning example, please refer to my Jupyter notebook on GitHub.
Step 1: Specify the connection parameters
# Change the database, username & password to your own values
param_dic = {
    "host": "localhost",
    "database": "worldbankdata",
    "user": "myuser",
    "password": "Passw0rd"
}
Step 2: Connect to the database and insert your dataframe one row at a time
import sys
import psycopg2
import pandas as pd

# Change the database, username & password to your own values
param_dic = {
    "host": "localhost",
    "database": "worldbankdata",
    "user": "myuser",
    "password": "Passw0rd"
}
def connect(params_dic):
    """ Connect to the PostgreSQL database server """
    conn = None
    try:
        # Connect to the PostgreSQL server
        print('Connecting to the PostgreSQL database...')
        conn = psycopg2.connect(**params_dic)
    except (Exception, psycopg2.DatabaseError) as error:
        print(error)
        sys.exit(1)
    return conn
def single_insert(conn, insert_req):
    """ Execute a single INSERT request """
    cursor = conn.cursor()
    try:
        cursor.execute(insert_req)
        conn.commit()
    except (Exception, psycopg2.DatabaseError) as error:
        print("Error: %s" % error)
        conn.rollback()
        cursor.close()
        return 1
    cursor.close()
# Connect to the database
conn = connect(param_dic)

# Insert each row, one INSERT statement at a time
for i in dataframe.index:
    query = """
    INSERT INTO emissions(column1, column2, column3) VALUES ('%s', %s, %s);
    """ % (dataframe['column1'][i], dataframe['column2'][i], dataframe['column3'][i])
    single_insert(conn, query)

# Close the connection
conn.close()
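A safer variant of the loop above passes the values as query parameters instead of interpolating them into the SQL string, letting psycopg2 handle quoting and escaping. This is only a sketch: the `emissions` table, the column names, and the `dataframe_to_tuples` helper are illustrative, matching the hypothetical names used above.

```python
import pandas as pd

def dataframe_to_tuples(df, columns):
    """Turn the selected dataframe columns into a list of row tuples."""
    return list(df[columns].itertuples(index=False, name=None))

# Small example dataframe standing in for the real data
dataframe = pd.DataFrame({
    "column1": ["a", "b"],
    "column2": [1, 2],
    "column3": [3.0, 4.0],
})

rows = dataframe_to_tuples(dataframe, ["column1", "column2", "column3"])

# With a live connection, each row would be inserted with bound parameters:
# cursor = conn.cursor()
# for row in rows:
#     cursor.execute(
#         "INSERT INTO emissions(column1, column2, column3) VALUES (%s, %s, %s)",
#         row,
#     )
# conn.commit()
```

Parameter binding avoids both SQL injection and the quoting bugs that plain `%` string formatting invites (for example, a value containing a single quote would break the interpolated query).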
The full working code is available here.
Reminder
Single inserts scale very poorly as the number of rows increases.

If you have many (1000+) rows to insert, I strongly advise using one of the bulk insert methods benchmarked here.