I'm facing an issue where I'm trying to save some data (strings and arrays) to a JSON file (or XML, I still haven't decided) as fast as possible.
To better understand, let's assume I'm running this deep learning code using OpenCV and YOLOv4:
https://gist.github.com/YashasSamaga/e2b19a6807a13046e399f4bc3cca3a49
this gives me the position np.array([x, y, w, h]) of different str(label)s, each associated with a certain float(weight)
I want to save, for each frame, the exact position, weight and label of everything present in it, without affecting performance, and in a way that still works for a video longer than one hour.
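To illustrate the shape of the data per frame (these names are my own stand-ins, not from the gist; the real positions come back as np.array([x, y, w, h])):

```python
from collections import namedtuple

# Hypothetical container mirroring what the detection loop yields per object
Detection = namedtuple("Detection", ["label", "pos_x", "pos_y", "w", "h", "weight"])

# One frame might produce something like this (values are made up)
detected_elements = [
    Detection("person", 120, 45, 60, 180, 0.91),
    Detection("car", 300, 200, 150, 90, 0.78),
]

for element in detected_elements:
    print(element.label, element.pos_x, element.pos_y, element.weight)
```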
I tried this:
import cv2
import xml.etree.ElementTree as ET
from lxml import etree

# create root for xml
root = ET.Element("root")
# create iteration counter
nbriteration = 0

while video_is_running:
    doc = ET.SubElement(root, 'frame', Frame=str(nbriteration))
    # Use the opencv yolov4 code here, which catches every detected element
    # (each one contains position, label and weight)
    for element in detected_elements:
        # "save" each element in the XML under the branch for this frame
        ET.SubElement(doc, element.label, x=str(element.pos_x),
                      y=str(element.pos_y), weight=str(element.weight))
    nbriteration += 1

# write xml
tree = ET.ElementTree(root)
tree.write(r"D:\output\filename.xml")

# pretty display for notepad++ (I don't mind the time taken here)
doc = etree.parse(r"D:\output\filename.xml")
doc.write(r"D:\output\filename.xml", pretty_print=True)
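For comparison, here is an append-only JSON Lines variant I sketched (the dict keys and the frame_record helper are my own, assuming the same hypothetical per-detection fields as above). Each frame costs one json.dumps plus a small file append, so nothing is rewritten:

```python
import json

def frame_record(frame_index, detected_elements):
    """Serialize one frame's detections as a single JSON line."""
    return json.dumps({"frame": frame_index, "detections": detected_elements}) + "\n"

# Hypothetical detections for one frame (values made up)
detections = [
    {"label": "person", "x": 120, "y": 45, "weight": 0.91},
    {"label": "car", "x": 300, "y": 200, "weight": 0.78},
]

line = frame_record(0, detections)
# In the main loop this would be f.write(line) on a file opened once,
# so each frame is one small append instead of a full-file rewrite.
```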
The problem is that, for a long video, this affects the performance of my program.
I took a look here and here, but in both cases I'm in the same situation: I need to write a lot of data as fast as possible for a real-time application.
Is there a better way to write all this data without affecting performance?