In the training script mbnet_keras.py you shrink the images to 128x128. Why do you use MobileNet at 224x224?
The function that shrinks to 128 is not used; I forgot to clean up the code.
Gotcha, those two image-loading functions are only for testing at the end; I'll remove them.
So I'm using your transfer learning to categorize 734 images into 4 categories. The images are found:
Found 734 images belonging to 4 classes.
but it's hitting an error:
Epoch 1/20
Traceback (most recent call last):
  File "c:/Users/laurent/Dropbox/AI/mbnet_keras.py", line 89, in <module>
    paralleled_model.fit_generator(generator=train_generator,steps_per_epoch=step_size_train,callbacks=callbacks_list,epochs=20)
  File "C:\Users\laurent\.conda\envs\tf_gpu\lib\site-packages\keras\legacy\interfaces.py", line 91, in wrapper
    return func(*args, **kwargs)
  File "C:\Users\laurent\.conda\envs\tf_gpu\lib\site-packages\keras\engine\training.py", line 1418, in fit_generator
    initial_epoch=initial_epoch)
  File "C:\Users\laurent\.conda\envs\tf_gpu\lib\site-packages\keras\engine\training_generator.py", line 181, in fit_generator
    generator_output = next(output_generator)
  File "C:\Users\laurent\.conda\envs\tf_gpu\lib\site-packages\keras\utils\data_utils.py", line 709, in get
    six.reraise(*sys.exc_info())
  File "C:\Users\laurent\.conda\envs\tf_gpu\lib\site-packages\six.py", line 693, in reraise
    raise value
  File "C:\Users\laurent\.conda\envs\tf_gpu\lib\site-packages\keras\utils\data_utils.py", line 685, in get
    inputs = self.queue.get(block=True).get()
  File "C:\Users\laurent\.conda\envs\tf_gpu\lib\multiprocessing\pool.py", line 657, in get
    raise self._value
  File "C:\Users\laurent\.conda\envs\tf_gpu\lib\multiprocessing\pool.py", line 121, in worker
    result = (True, func(*args, **kwds))
  File "C:\Users\laurent\.conda\envs\tf_gpu\lib\site-packages\keras\utils\data_utils.py", line 626, in next_sample
    return six.next(_SHARED_SEQUENCES[uid])
  File "C:\Users\laurent\.conda\envs\tf_gpu\lib\site-packages\keras_preprocessing\image\iterator.py", line 100, in __next__
    return self.next(*args, **kwargs)
  File "C:\Users\laurent\.conda\envs\tf_gpu\lib\site-packages\keras_preprocessing\image\iterator.py", line 112, in next
    return self._get_batches_of_transformed_samples(index_array)
  File "C:\Users\laurent\.conda\envs\tf_gpu\lib\site-packages\keras_preprocessing\image\iterator.py", line 226, in _get_batches_of_transformed_samples
    interpolation=self.interpolation)
  File "C:\Users\laurent\.conda\envs\tf_gpu\lib\site-packages\keras_preprocessing\image\utils.py", line 102, in load_img
    raise ImportError('Could not import PIL.Image. '
ImportError: Could not import PIL.Image. The use of `array_to_img` requires PIL.
conda install pillow
Zepan, you forgot to mention that in your post.
Oh, thank you. I have it installed but forgot to add it to the post.
When converting the model to a kmodel I'm getting this error:
Fatal: Layer PAD is not supported
What does it mean, and how do I solve it?
By the way, all of this can be done on Windows, if you're interested. (I think some people might be put off by the Linux requirement, and maybe you'll sell a few more boards this way.)
The model is now at 100% accuracy and 5e-6 loss after 1000 epochs with batch size 300. I think it's overtrained, with only 1000 photos.
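A common guard against that in Keras 2.2.x is a validation split plus early stopping; a sketch, where train_dir and the plain MobileNet stand in for the thread's own dataset path and Zepan's modified mobilenet.py:

from keras.applications.mobilenet import MobileNet
from keras.callbacks import EarlyStopping
from keras.preprocessing.image import ImageDataGenerator

train_dir = "images/"                       # placeholder for the dataset path
model = MobileNet(weights=None, classes=4)  # stand-in for the modified MobileNet
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])

# Hold out 20% of the photos so there is something honest to monitor.
datagen = ImageDataGenerator(rescale=1.0 / 255, validation_split=0.2)
train_gen = datagen.flow_from_directory(train_dir, target_size=(224, 224),
                                        subset='training')
val_gen = datagen.flow_from_directory(train_dir, target_size=(224, 224),
                                      subset='validation')

# Stop once validation loss stalls instead of running all 1000 epochs.
early_stop = EarlyStopping(monitor='val_loss', patience=10,
                           restore_best_weights=True)
model.fit_generator(train_gen,
                    steps_per_epoch=train_gen.samples // train_gen.batch_size,
                    validation_data=val_gen,
                    validation_steps=val_gen.samples // val_gen.batch_size,
                    epochs=1000,
                    callbacks=[early_stop])

With restore_best_weights=True the model keeps the weights from its best validation epoch rather than the overtrained final ones.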
I'll do a nice write-up once the Maix Bit can recognize obstacles from open space, too low and too high. I shot video in the forest and on our land, then batch-converted it to JPG sequences for classification.
Can you post the .pb so I can check?
Please do not use the PAD method; use SpaceToBatchND instead. Here is the idea:
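In Keras terms, one way to follow that advice (a sketch assuming Keras 2.2.x, not the original snippet): drop the explicit ZeroPadding2D layer and let the convolution pad itself via padding='same', so the exported graph carries the padding inside the conv instead of a standalone PAD op:

from keras.layers import Input, DepthwiseConv2D
from keras.models import Model

inp = Input(shape=(224, 224, 3))

# The pattern that exports a standalone PAD op, which nncase rejects:
#   x = ZeroPadding2D(padding=((1, 1), (1, 1)))(inp)
#   x = DepthwiseConv2D((3, 3), padding='valid')(x)

# Letting the convolution pad itself keeps the padding fused into the
# conv op, so no separate PAD layer is emitted:
x = DepthwiseConv2D((3, 3), padding='same')(inp)
model = Model(inp, x)

For stride-1 blocks this is numerically identical to ZeroPadding2D((1, 1), (1, 1)) followed by padding='valid'; strided blocks need care, since TensorFlow's 'same' padding is asymmetric there.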
Since I'm using Keras, I don't see where the PAD method comes from, and I'm using your modified mobilenet.py, which has ZeroPadding2D((1,1),(1,1)).
Can you tell me how to replace PAD with SpaceToBatchND?
You may need to update Keras to the latest version.
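For example, with pip:

pip install --upgrade keras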
2.2.4 is already installed.
I reinstalled the entire conda environment to isolate the problem, and it seems that PAD comes from layers.ZeroPadding2D.
So how did you get it working, @Zepan? What version of … everything are you using? Is there another way to do zero-padding that is K210-compatible?
Strange, did you follow the steps and use the tools in my GitHub repo?
In my workspace there is no problem.
Should I upload a Docker image?
Yes, I followed every step and used all your tools.
A Docker image could help, yes, but only as a temporary fix, as it's ultimately better to know the root cause. So I've attached the Keras .h5 file as well as the .py that does the training. Could you have a look? Maybe the reason lies somewhere in there.
Could you include the proper version of NCC with your Maix_Toolbox? For Linux and Windows.
The ncc.dll properties show version 1.0.0, but https://github.com/kendryte/nncase/releases/tag/v0.1.0-rc5 shows v0.1.0-rc5, and above in the thread you talk about a version 0.4 … which is confusing.
Wu corrected the ncc download to the latest version, and now it's converting to a kmodel no problem.
But the kmodel is about 4 MB, not 2.7 MB. Is there a way to know what is eating up so much space?
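One way to see where the bytes go is to sum the per-layer parameter counts of the Keras model before conversion; a sketch, where "mbnet.h5" stands in for whatever the training script actually saves:

from keras.models import load_model

# "mbnet.h5" is a placeholder; pass custom_objects if the modified
# mobilenet.py defines custom layers.
model = load_model("mbnet.h5")
for layer in model.layers:
    n = layer.count_params()
    if n:
        print("%-35s %9d params  ~%8.1f KB fp32" % (layer.name, n, n * 4 / 1024.0))
print("total: %d params" % model.count_params())

At 4 bytes per float32 weight this gives a lower bound on the converted file size.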
Also, loading never ends, and wrapping

task = kpu.load(0x200000)

in

try:
    task = kpu.load(0x200000)
except:
    e = sys.exc_info()
    lcd.draw_string(100, 112, "ERROR: %s" % str(e))
else:
    lcd.draw_string(100, 112, "Done")

doesn't show any error, so:
- how do we detect kpu.load progress?
- how do we catch errors? (see the sketch below)
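For the second question, a minimal MaixPy sketch (assuming the stock lcd and KPU modules; 0x200000 is the flash offset used above) that surfaces the concrete exception instead of discarding it:

import sys
import lcd
import KPU as kpu

lcd.init()
try:
    task = kpu.load(0x200000)             # kmodel at the flash offset above
except Exception as e:                    # bind the exception object itself
    lcd.draw_string(100, 112, "ERROR: %s" % str(e))
    sys.print_exception(e)                # full traceback on the serial console
else:
    lcd.draw_string(100, 112, "Done")

A hang inside kpu.load() never raises, though, so if loading truly never returns, nothing here will fire.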