How to convert tiny-yolo to kmodel?


#1

I’m trying to convert a custom darknet tiny-yolov3 model to a kmodel using the model compiler, but it doesn’t seem to be working. Here is the command I’m using:

python __main__.py --model_loader model_loader/darknet --weights_path models/yolo-tiny-obj_final.weights --cfg_path models/yolo-tiny-obj.cfg

I got this error:

Traceback (most recent call last):
  File "__main__.py", line 150, in <module>
    main()
  File "__main__.py", line 123, in main
    k210_layers = model_loader_module.load_model(dataset_val, rfb, args)
  File "C:\Users\Nathan\kendryte-model-compiler\model_loader\darknet\__init__.py", line 87, in load_model
    decode_darknet(cfg_path, weights_path, build_dir)
  File "C:\Users\Nathan\kendryte-model-compiler\model_loader\darknet\__init__.py", line 28, in decode_darknet
    dtype='float32')
  File "C:\Users\Nathan\kendryte-model-compiler\model_loader\darknet\D2T_lib\darknet_tool.py", line 67, in __init__
    self.from_cfg_file(cfg_file)
  File "C:\Users\Nathan\kendryte-model-compiler\model_loader\darknet\D2T_lib\darknet_tool.py", line 75, in from_cfg_file
    self.net.layers_from_cfg(cfg_file)
  File "C:\Users\Nathan\kendryte-model-compiler\model_loader\darknet\D2T_lib\net.py", line 68, in layers_from_cfg
    self.parse_block(contents[block_st:line_id], copy.copy(block_id))
  File "C:\Users\Nathan\kendryte-model-compiler\model_loader\darknet\D2T_lib\net.py", line 118, in parse_block
    initializer = __parse_layers__[default_name]
KeyError: 'yolo'

#2

Hi, please use nncase instead; the model compiler hasn't been updated in many months, and its darknet loader doesn't recognize the [yolo] block, which is why the parser raises KeyError: 'yolo'.
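
As a rough sketch of the nncase workflow (assuming you first convert the darknet weights to a TFLite model with a separate tool, and assuming the nncase 0.2.x command-line syntax; model.tflite, model.kmodel and the images folder are placeholders):

ncc compile model.tflite model.kmodel -i tflite -t k210 --dataset images

The --dataset folder should contain a handful of sample images so the compiler can calibrate the quantization.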


#3

Ok, I was able to convert the model to a kmodel, but it’s about 7 MB, so it doesn’t fit in the memory of the K210. How can I modify the darknet config or do something else to make it fit? I thought tiny-yolov3 was supposed to be small enough to fit?
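
For reference, the K210 only has about 6 MB of general-purpose SRAM plus 2 MB of KPU memory, so a ~7 MB kmodel leaves no headroom. One option is to shrink the network in the darknet cfg and retrain; a sketch of the kind of edits involved (the values below are only illustrative, not a tested configuration):

[net]
# smaller input resolution (tiny-yolov3 defaults to 416x416)
width=320
height=240

[convolutional]
# halve the filter counts of the widest layers, e.g. 1024 -> 512
filters=512
# (other blocks left unchanged)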


#4

I modified the config by removing the last convolutional layer, and it seems to work fine on my PC. But when I convert it to a kmodel and load it with a modified version of the example code, it gets stuck on this line: while(!_kpu.isForwardOk());. From looking through the library code, it appears the variable g_ai_done_flag never gets set, which suggests the KPU never finishes processing the model. Why does this happen?
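
In case it helps others debug the same hang, here is a minimal sketch (assuming the Arduino-style KPU wrapper from the example, where _kpu.isForwardOk() polls g_ai_done_flag) that replaces the blocking wait with a timeout so you get a diagnostic instead of a frozen loop:

// Sketch only: bail out instead of spinning forever if the KPU never
// signals completion (the 1 s timeout is arbitrary).
unsigned long start = millis();
while (!_kpu.isForwardOk()) {
  if (millis() - start > 1000) {
    Serial.println("KPU forward timed out - check kmodel / layer support");
    break;
  }
}

The timeout only makes the failure visible; the underlying cause is still that whatever is supposed to set g_ai_done_flag never runs.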