I currently have a file that contains animated blendshape information:
"brow_mid_down_left","brow_mid_down_right","cheek_squint_left","cheek_squint_right","cheek_balloon_left","cheek_balloon_right","cheek_up_left","cheek_up_right","mouth_corner_in_left","mouth_corner_in_right","mouth_corner_up_left","mouth_corner_up_right","mouth_wide_left","mouth_wide_right","lips_part","lips_upper_in",
0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,
0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,
0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,0.000000,
I need to load this data onto my characters in Blender, but what I have written takes nearly two minutes for only a few hundred frames and has become the slowest part of the entire process.
I am using a simple for loop over the file's lines: for each line I change the scene's frame using bpy.context.scene.frame_set(i) and then keyframe each property using bpy.context.object.keyframe_insert(data_path='["'+shapeName+'"]').
Here is a more complete example of the loop:
```python
import os
import bpy

with open(os.path.join(path, shapesfile), mode='r') as file:
    # Split the header values and use filter to remove any empty ones
    # https://stackoverflow.com/a/3845453/3961748
    header = list(filter(None, file.readline()[0:-1].replace('"', '').split(',')))
    for i, line in enumerate(file):  # for each line of data in the file
        bpy.context.scene.frame_set(i)  # THIS IS THE BOTTLENECK
        shapeVals = line.split(",")
        for n, shapeName in enumerate(header):  # for each blendshape found in the header
            shapeName = 'Mhf' + shapeName
            bpy.context.object[shapeName] = float(shapeVals[n])
            # Keyframes on custom properties use Python's getitem/setitem syntax
            # https://blender.stackexchange.com/a/3014/32655
            bpy.context.object.keyframe_insert(data_path='["' + shapeName + '"]')
```
From my testing I have found that the act of changing the frame within the loop is what causes the huge delay, not reading the file or writing the values: simply commenting out the bpy.context.scene.frame_set(i) line and trying again makes the process lightning fast.
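One thing worth noting: keyframe_insert accepts an explicit frame argument, so the frame_set call can be dropped from the loop entirely. A minimal sketch along those lines (the parsing is split into a pure helper; path, shapesfile, and the Mhf prefix are taken from the question):

```python
import os


def parse_shapes(lines):
    """Parse the header line plus per-frame value rows from the CSV-style lines."""
    header = list(filter(None, lines[0].rstrip('\n').replace('"', '').split(',')))
    frames = []
    for line in lines[1:]:
        frames.append([float(v) for v in filter(None, line.rstrip('\n').split(','))])
    return header, frames


def load_shapes(path, shapesfile):
    import bpy  # only available inside Blender
    obj = bpy.context.object
    with open(os.path.join(path, shapesfile)) as f:
        header, frames = parse_shapes(f.readlines())
    for i, vals in enumerate(frames):
        for name, val in zip(header, vals):
            name = 'Mhf' + name
            obj[name] = val
            # keyframe_insert takes an explicit frame, so no frame_set is needed
            obj.keyframe_insert(data_path='["' + name + '"]', frame=i)
```

This still inserts keyframes one at a time, but avoids the per-frame scene update that frame_set triggers.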
Is there anything else I can try?
Is it possible to construct an array for each property and just write directly to its animation data somehow without needing to change frames?
Thanks for any advice; I'm sure even a slight improvement to this part would significantly speed up my batch loading, which currently takes a long time.
Comments:

"…formatting suggestion!!" – Logic1 Oct 11 '18 at 08:21

"This can also be done by creating an fcurve and populating it with fcurve.keyframe_points.foreach_set("co", data), where data is a flat list of frames and values: data = [f1, v1, f2, v2, ...], which may be quicker if the dataset is huge." – batFINGER Oct 11 '18 at 08:30
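The foreach_set approach from batFINGER's comment could be sketched like this (hedged: the action name "ShapeAnim" and the columns-per-shape layout are my assumptions, not from the question; the flattening helper is pure Python so it can be checked outside Blender):

```python
def flatten_co(values, start_frame=0):
    """Flatten per-frame values into [f0, v0, f1, v1, ...] for foreach_set."""
    co = []
    for i, v in enumerate(values):
        co.extend((start_frame + i, v))
    return co


def bake_fcurves(obj, header, columns):
    """columns maps each shape name to its list of per-frame values.

    Creates one fcurve per custom property and writes all keyframes in bulk,
    so the scene frame never has to change.
    """
    import bpy  # only available inside Blender
    if obj.animation_data is None:
        obj.animation_data_create()
    if obj.animation_data.action is None:
        obj.animation_data.action = bpy.data.actions.new(name="ShapeAnim")
    action = obj.animation_data.action
    for name in header:
        fcu = action.fcurves.new(data_path='["' + name + '"]')
        vals = columns[name]
        fcu.keyframe_points.add(count=len(vals))
        fcu.keyframe_points.foreach_set("co", flatten_co(vals))
        fcu.update()  # recalculate handles after the bulk write
```

Because foreach_set writes the whole co array in one call per curve, the cost scales with the number of shapes rather than shapes times frames, which is likely why it helps on large datasets.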