I don't know how sophisticated the QA/QC operations you run on the input .csv files are (what if a user has added a row with a string in a column where you expect an integer?). If you target ArcGIS 10.4+, I would recommend using the pandas Python package to read the .csv file and cast the columns into the proper types, so you don't have to deal with the cast errors yourself. When you are done, you can export the produced data frame into an output .csv in a user temp folder using the tempfile module.
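A minimal sketch of that approach (the path and column names are assumptions based on the sample .csv below, and pd.to_numeric requires a reasonably recent pandas):

import os
import tempfile

import pandas as pd

# Read everything as strings first so nothing is silently coerced.
df = pd.read_csv(r'C:\GIS\Temp\data.csv', dtype=str)

# Cast the columns you expect to hold numbers/dates; invalid values become
# NaN/NaT instead of raising, so you can find and handle them yourself.
df['FieldInt'] = pd.to_numeric(df['FieldInt'], errors='coerce')
df['FieldDate'] = pd.to_datetime(df['FieldDate'], errors='coerce')

# Export the cleaned frame to a .csv in the user's temp folder.
out_csv = os.path.join(tempfile.gettempdir(), 'data_clean.csv')
df.to_csv(out_csv, index=False)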
If you are only interested in getting your columns right (without actually checking whether every row qualifies), I suggest converting the .csv file into an in_memory table first.
Say you have a .csv file with the rows:
ID  FieldInt  FieldStr  FieldDate
1   10        Value1    2018-02-12
2   20        Value2    2018-02-14
3a  20a       Value3    2018-02-16
You would like all the fields to be of string type. If you convert this .csv into a table using arcpy.TableToTable_conversion, you would get:
ID      FieldInt  FieldStr  FieldDate
1       10        Value1    2018-02-12
2       20        Value2    2018-02-14
<null>  <null>    Value3    2018-02-16
As you can see, ArcGIS decided to cast the ID and FieldInt fields to Integer, and the values that could not be cast are now just null.

You will not be able to restore the nulled values, but you can still move the data that is left into columns of the right type. Create a new table with the fields found in the .csv file, using the data types you need (a code sketch follows the list below):
- Create an empty geodatabase table.
- Add fields with the necessary types using arcpy.AddField_management.
- Convert the source .csv into a temp table, in_memory\data.
- Append the temp table into the production one with arcpy.Append_management(src, target).
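A minimal sketch of those four steps (the paths, table names, and field lengths are assumptions based on the sample above):

import arcpy

gdb = r'C:\GIS\Temp\sample.gdb'  # assumed existing file geodatabase

# 1. Create an empty geodatabase table.
target = arcpy.CreateTable_management(gdb, 'production').getOutput(0)

# 2. Add the fields with the types you actually want (all Text here).
for name in ('ID', 'FieldInt', 'FieldStr', 'FieldDate'):
    arcpy.AddField_management(target, name, 'TEXT', field_length=255)

# 3. Convert the source .csv into a temp table in the in_memory workspace.
src = arcpy.TableToTable_conversion(
    r'C:\GIS\Temp\data.csv', 'in_memory', 'data').getOutput(0)

# 4. Append the temp table into the production one; NO_TEST lets the
#    tool match fields by name even though the types differ.
arcpy.Append_management(src, target, schema_type='NO_TEST')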
Even with a field map in place, the situation described above makes it impossible to import all the data correctly. Try running the TableToTable tool in the ArcMap UI yourself:
arcpy.TableToTable_conversion(
    in_rows="C:/GIS/Temp/data.csv",
    out_path="C:/GIS/Temp/ArcGISHomeFolder/sample.gdb",
    out_name="trick1",
    where_clause="",
    field_mapping='ID "ID" true true false 4 Text 0 0 ,First,#,C:\GIS\Temp\data.csv,ID,-1,-1;'
                  'FieldInt "FieldInt" true true false 4 Text 0 0 ,First,#,C:\GIS\Temp\data.csv,FieldInt,-1,-1;'
                  'FieldStr "FieldStr" true true false 8000 Text 0 0 ,First,#,C:\GIS\Temp\data.csv,FieldStr,-1,-1;'
                  'FieldDate "FieldDate" true true false 20 Text 0 0 ,First,#,C:\GIS\Temp\data.csv,FieldDate,-1,-1',
    config_keyword="")
Even after you've specified all the fields to be of Text type, the last row is not loaded (only nulls are present).
PS. A dirty workaround I've seen in someone's code was to put a top row in the .csv file with values of the desired types and then delete that row after the data import was done. This can be done with Python's csv module, then with arcpy.da.UpdateCursor to delete the dummy row.
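A rough sketch of that workaround (Python 3 syntax assumed; the 'DUMMY' values and paths are made up for illustration and force every field to be read as Text):

import csv

import arcpy

src = r'C:\GIS\Temp\data.csv'          # original file
typed = r'C:\GIS\Temp\data_typed.csv'  # copy with a dummy typed row on top

with open(src, newline='') as f_in, open(typed, 'w', newline='') as f_out:
    reader = csv.reader(f_in)
    writer = csv.writer(f_out)
    writer.writerow(next(reader))                          # header
    writer.writerow(['DUMMY', 'DUMMY', 'DUMMY', 'DUMMY'])  # pins the types
    writer.writerows(reader)                               # the real data

table = arcpy.TableToTable_conversion(typed, 'in_memory', 'typed').getOutput(0)

# Remove the dummy row now that the field types are locked in.
with arcpy.da.UpdateCursor(table, ['ID'], "ID = 'DUMMY'") as cur:
    for _ in cur:
        cur.deleteRow()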