I've been having trouble uploading a DataFrame with over a million rows that I build by manipulating several CSVs in Python. When the manipulation is finished, I want to upload the final product to a table that already exists under a specific schema of a SQL Server database. I've tried using pyodbc, but it takes forever to load. I was wondering if there is a faster way to upload DataFrames to a table. Here is the connection and insert portion of the code I've been using:
import pyodbc

server = 'testawsserver'
database = 'testDatabase'
username = 'testusername'
password = 'testpassword'

# Connect through the SQL Server ODBC driver
cnxn = pyodbc.connect('DRIVER={ODBC Driver 17 for SQL Server};SERVER=' + server +
                      ';DATABASE=' + database + ';UID=' + username + ';PWD=' + password)
cursor = cnxn.cursor()

# Insert the DataFrame one row at a time -- this is the slow part
for index, row in concat_tidy.iterrows():
    cursor.execute("INSERT INTO [testDatabase].[schema1].[testtable] "
                   "(column1,column2,column3,column4,column5,column6) "
                   "VALUES (?,?,?,?,?,?)",
                   row.column1, row.column2, row.column3,
                   row.column4, row.column5, row.column6)

cnxn.commit()
cursor.close()
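From what I've read, pyodbc's fast_executemany flag (available since pyodbc 4.0.19) batches the parameters client-side instead of making one round trip per row, so something like the sketch below might be the kind of speedup I'm after. This is just a minimal sketch assuming the same placeholder table and credentials, and that concat_tidy holds exactly those six columns; I haven't verified it against my data yet:

import pyodbc

# Same placeholder connection as above
cnxn = pyodbc.connect('DRIVER={ODBC Driver 17 for SQL Server};SERVER=' + server +
                      ';DATABASE=' + database + ';UID=' + username + ';PWD=' + password)
cursor = cnxn.cursor()
cursor.fast_executemany = True  # let pyodbc batch the parameter arrays

sql = ("INSERT INTO [testDatabase].[schema1].[testtable] "
       "(column1,column2,column3,column4,column5,column6) "
       "VALUES (?,?,?,?,?,?)")

# One executemany call with all rows instead of one execute per row;
# assumes the six columns are present and contain no NaNs that would
# need special handling
params = concat_tidy[['column1', 'column2', 'column3',
                      'column4', 'column5', 'column6']].values.tolist()
cursor.executemany(sql, params)
cnxn.commit()
cursor.close()

Is this the right direction, or is there a better bulk-load approach for this situation?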
Basically, I want to reduce the long load time into the SQL Server table. I'd appreciate any assistance with this issue.