Human Activity Recognition (HAR) model and demo (still has issues)

Hi guys,

I managed to create a HAR model using Keras (adapted from this repo ) for recognizing activities based on accelerometer data. The accuracy is not bad, and I was also able to convert it to .kmodel.

Since I’m going to do inference using MaixPy, and for now MaixPy only supports image data as input, I need to convert the accelerometer data from float to uint8.

The demo code is here:

The kmodel file is here.

Now, my issue is that the inference output barely changes regardless of the input data. I’m not sure whether the problem is in the code or in the model, but I can confirm that the model works properly when run with regular Python.

Please kindly advise. Thanks.

CC @Zepan

The issue is that the output is always similar to:
(0.05627171, 0.01401903, 0.3184637, 0.4632558, 0.1154114, 0.03257844)
regardless of the input. As you can see, index 3 (0.4632558) is always the biggest, which means the prediction is always the same.
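To be precise about what I mean by “always the same”: the predicted class is just the argmax of that tuple, which is always index 3 here. A plain-Python check:

```python
# The prediction is the index of the largest output value.
output = (0.05627171, 0.01401903, 0.3184637, 0.4632558, 0.1154114, 0.03257844)
predicted = max(range(len(output)), key=lambda i: output[i])
print(predicted)  # 3
```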

Can you provide a .pb or .tflite file?

For sure, here it is: (645.4 KB)

And here’s the .h5 Keras file (1.6 MB)

I just get the same result.
Did your model run correctly on the original Keras project?
If yes, try nncase’s “inference” method; it runs on a PC and simulates the K210’s output.
You can then check the Keras result, the nncase result, and the K210 result, and figure out which step introduces the error.
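One way to compare the three results step by step is a tiny helper like this (my own sketch, not part of nncase; the numbers below are placeholders, not real outputs):

```python
def max_abs_diff(a, b):
    """Largest element-wise absolute difference between two output vectors."""
    return max(abs(x - y) for x, y in zip(a, b))

# Placeholder outputs; substitute the real Keras / nncase / K210 vectors.
keras_out  = (0.056, 0.014, 0.318, 0.463, 0.115, 0.033)
nncase_out = (0.060, 0.014, 0.310, 0.470, 0.110, 0.033)
print(max_abs_diff(keras_out, nncase_out))
```

If Keras vs. nncase agree but the K210 differs, the problem is on the device side; if Keras vs. nncase already disagree, the problem is in the conversion.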

Also, you can create an image object like this:

import image  # MaixPy image module

# Prepare a dummy 'image' for input, since MaixPy expects image data
dummy = image.Image(size=(4, 80))
inputData = dummy.to_grayscale(1)
# Fill the dummy image with the accelerometer data (row-major, 4 pixels per row)
for row in range(80):
    for col in range(4):
        inputData[4 * row + col] = data[row][col]
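The flat index 4*row+col assumes a row-major buffer 4 pixels wide; a plain-Python sanity check of that mapping (no MaixPy needed):

```python
rows, cols = 80, 4
# Dummy data: value r*4 + c at position (r, c), so a correct row-major
# flatten should produce 0, 1, 2, ... in order.
data = [[r * cols + c for c in range(cols)] for r in range(rows)]
flat = [0] * (rows * cols)
for row in range(rows):
    for col in range(cols):
        flat[cols * row + col] = data[row][col]
print(flat[:6])  # [0, 1, 2, 3, 4, 5]
```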

And what about your dataset for nncase? It needs to cover the whole input data range.

And what is your float-to-uint8 method? Do you have the original data and the converted data?

Thanks for taking a look, I really appreciate it. Your code looks neat, too!

I evaluated the model and the results are quite good. Here’s the confusion matrix:

Attached are my training and evaluation files (5.7 KB)

The original dataset is here

Here’s the part specifically for converting float to uint8. It seems I cannot use simple linear scaling; the training results are not good. So, as suggested by a paper, I use this instead:

def myMap(x, in_min, in_max, out_min, out_max):
    return int((x - in_min) * (out_max - out_min) / (in_max - in_min) + out_min)

def quantize(sample):
    # Piecewise mapping of an acceleration sample into [-16, 16].
    # Boundary values are included via >= so every sample maps somewhere
    # (GRAVITY, in m/s^2, is defined elsewhere in the script).
    if sample >= 2 * GRAVITY:
        retval = 16
    elif sample >= GRAVITY:
        retval = myMap(sample, GRAVITY, 2 * GRAVITY, 11, 15)
    elif sample > 0:
        retval = myMap(sample, 0, GRAVITY, 1, 10)
    elif sample == 0:
        retval = 0
    elif sample >= -GRAVITY:
        retval = myMap(sample, -GRAVITY, 0, -10, -1)
    elif sample >= -2 * GRAVITY:
        retval = myMap(sample, -2 * GRAVITY, -GRAVITY, -15, -11)
    else:
        retval = -16

    # Shift the signed range [-16, 16] into [0, 32] for the image pixel.
    return myMap(retval, -16, 16, 0, 32)
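To make the last line concrete: the final myMap call just shifts the signed range [-16, 16] into the unsigned pixel range [0, 32]:

```python
def myMap(x, in_min, in_max, out_min, out_max):
    return int((x - in_min) * (out_max - out_min) / (in_max - in_min) + out_min)

print(myMap(-16, -16, 16, 0, 32))  # 0
print(myMap(0,   -16, 16, 0, 32))  # 16
print(myMap(16,  -16, 16, 0, 32))  # 32
```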

The original paper suggests using negative values, but I think we cannot use negative values for image data.

Sorry, I haven’t uploaded all the files to GitHub yet. Once it works, I’ll upload the whole project.

I’m honestly not sure what to prepare as the dataset for conversion with nncase. I just use a 4x80-pixel grayscale image. Is that right? How many should I prepare? For now, I only use one 🙂
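In case it helps anyone, here is a rough stdlib-only sketch of how one might dump several 4x80 grayscale calibration images covering the full 0..32 pixel range. PGM is used purely for illustration (check which image formats your nncase version actually accepts), and `samples` is a stand-in for real quantized accelerometer windows:

```python
import os
import random

os.makedirs("calib", exist_ok=True)

# Stand-in for real quantized accelerometer windows (values already in 0..32).
random.seed(0)
samples = [
    [[random.randint(0, 32) for _ in range(4)] for _ in range(80)]
    for _ in range(10)
]

def save_pgm(path, pixels, width=4, height=80):
    """Write an 8-bit binary PGM (grayscale) image using only the stdlib."""
    with open(path, "wb") as f:
        f.write(b"P5\n%d %d\n255\n" % (width, height))
        f.write(bytes(v for row in pixels for v in row))

for i, sample in enumerate(samples):
    save_pgm("calib/sample_%d.pgm" % i, sample)
```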

I’ve tried running with “inference”, but apparently it failed, saying “Quantized… layer not supported”.

Please advise.

Hi, here is my sample for Activity Recognition.

I used the M5StickV’s SH200Q accelerometer data, recorded with my own body, as samples (data.cvs), covering just Sitting/Standing/Walking/Running states.
I trained the CNN on Google Colab (accel.ipynb); the M5StickV source code is attached as well.

And the result is like the following.


Ouw wowww. Thanks! Will try it out.