ML Object detection / tracking - how to make a custom .onnx file?
Hi all!
I am wildly fascinated by the ML features and have already done some object detection experiments with the COCO dataset.
Now clients are asking for specific objects, like statues or a product packaging.
For the latter I prepared a dataset with these tools:
www.lobe.ai
www.teachablemachine.withgoogle.com
www.edgeimpulse.com
but the formats that i was able to export there didn't match with Lens studio.
With Lobe I can even export a .ONNX or a .PB file, but if I place it in Lens Studio, I get these error messages:
.onnx: Resource import for … failed: ONNX model Conv layer converter does not support padding mode NOTSET
.pb: Resource import for … failed: Unable to parse TensorFlow proto.
Did anyone manage to make a custom ML detection model yet? If so, how?
Thanks!
Answers
Lens Studio converts the model with its own converter, and the error messages are not very helpful.
Check https://docs.snap.com/lens-studio/references/guides/lens-features/machine-learning/compatibility/
The converter supports the CONSTANT, REFLECT, and SYMMETRIC padding types; it looks like your model's Conv layers use NOTSET.
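If you want to confirm which padding mode your exported model actually uses, one option is to inspect the Conv nodes with the onnx Python package. This is a minimal sketch, not an official fix: it assumes the package is installed and that your export is named model.onnx (adjust the path to your file).

```python
# Sketch: list the padding configuration of every Conv node in an ONNX file.
# Assumes `pip install onnx` and a file called "model.onnx" (hypothetical name).
import onnx

model = onnx.load("model.onnx")

for node in model.graph.node:
    if node.op_type == "Conv":
        # auto_pad defaults to "NOTSET" when the attribute is missing,
        # in which case the explicit "pads" attribute is used instead.
        auto_pad = "NOTSET"
        pads = None
        for attr in node.attribute:
            if attr.name == "auto_pad":
                auto_pad = attr.s.decode("utf-8")
            elif attr.name == "pads":
                pads = list(attr.ints)
        print(node.name, "auto_pad =", auto_pad, "pads =", pads)
```

If the Conv nodes report NOTSET, you would need to re-export or convert the model so it uses one of the supported padding modes before Lens Studio will import it.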