
I'm writing a CSV file using PySpark. By default, PySpark writes the output as multiple part files across nodes, so I'm using the `coalesce` function to write it down to a single file.

df.coalesce(1).write.format("csv").mode("overwrite").save(path+"/"+filename)

But I need to give the output file a different name and move it to the local file system. What is the best approach to accomplish this in PySpark?
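One thing worth noting: even with `coalesce(1)`, Spark treats the `save` path as a *directory* and writes a single `part-*.csv` file inside it. If the output lands on a locally mounted filesystem (not HDFS), a plain-Python sketch like the one below can locate that part file and move it to the desired name; the function name and paths are illustrative, not a Spark API. For HDFS output you would instead pull the file down first (e.g. `hdfs dfs -get`).

```python
import glob
import os
import shutil


def move_single_csv(spark_output_dir, dest_path):
    """Locate the single part-*.csv Spark wrote into spark_output_dir
    and move it to dest_path, giving it the desired filename."""
    part_files = glob.glob(os.path.join(spark_output_dir, "part-*.csv"))
    if len(part_files) != 1:
        raise RuntimeError(
            f"expected exactly one part file, found {len(part_files)}"
        )
    shutil.move(part_files[0], dest_path)
    return dest_path
```

Usage (assuming the write above targeted a local path): `move_single_csv(path + "/" + filename, "/tmp/final_name.csv")`.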

Raja
Does this answer your question? [Spark dataframe save in single file on hdfs location](https://stackoverflow.com/questions/40792434/spark-dataframe-save-in-single-file-on-hdfs-location) – Jim Todd Jan 11 '22 at 20:11

0 Answers