Arduino Nicla Voice with a locally trained model

I am trying to deploy a custom locally trained audio classification model to the Arduino Nicla Voice.

I have already exported models through PyTorch, ONNX, TensorFlow, and TFLite, and I have also tried Edge Impulse BYOM. However, in Edge Impulse I do not see any deployment target for Syntiant or the NDP120, only generic targets such as an Arduino library.

From what I have been able to find, it looks like Edge Impulse only enables the Nicla Voice / NDP120 deployment path when using their Audio (Syntiant) processing pipeline, not for arbitrary custom BYOM models.

Is there any supported way to run a locally trained model on the Nicla Voice without retraining everything inside the Edge Impulse cloud workflow? It would seem odd to me if that were the only way, but I have not been able to find an alternative.

I have a very large local audio dataset, so fully cloud-based training with a maximum of 1 hour of compute time does not make much sense for me.

Does anyone have experience with locally trained models on the Nicla Voice?

If that is not possible, could anyone share their experience with Edge Impulse cloud training? Are checkpoints saved if the 1-hour compute limit is exceeded? Can training be resumed from a checkpoint?

Thanks.