emrp:ws2025:amt
| emrp:ws2025:amt [2026/02/26 06:09] – 36502_students.hsrw | emrp:ws2025:amt [2026/02/26 21:31] (current) – 37554_students.hsrw | ||
| Line 43: | Line 43: | ||
| |**//Figure 3//** Interfacing of the MCU and MPU6050| | |**//Figure 3//** Interfacing of the MCU and MPU6050| | ||
| - | To facilitate easier normalisation for IMU measurements, | + | To facilitate easier normalisation for IMU measurements, |
| === 2.2.3 Power Management === | === 2.2.3 Power Management === | ||
| Line 124: | Line 124: | ||
| |**//Figure 11// | |**//Figure 11// | ||
| - | === 2.4.2 Gateway System (RPi) === | + | === 2.4.2 Gesture Model === |
| The motion classification model has a three-class structure: | The motion classification model has a three-class structure: | ||
| * **Move:** The animal' | * **Move:** The animal' | ||
| Line 356: | Line 356: | ||
| main() | main() | ||
| - | if _name_ == "_main_": | + | if _name_ == "__main__": |
| main() | main() | ||
| </ | </ | ||
| Line 450: | Line 450: | ||
| </ | </ | ||
| - | The controller then collected a fixed-length IMU data window consisting of a total of 96 samples at a sampling frequency of 100 Hz. Each sample contained a total of six 16-bit raw measurements: | + | The controller then collected a fixed-length IMU data window consisting of a total of 96 samples at a sampling frequency of 100 Hz on the gateway side (a bit lower considering the transmission latency) |
| The collected window data was not sent in a single piece but was divided into fixed-length frames, taking into account the maximum packet size of the LoRa module. Each frame consisted of a header and a payload section. The header section contained address information, | The collected window data was not sent in a single piece but was divided into fixed-length frames, taking into account the maximum packet size of the LoRa module. Each frame consisted of a header and a payload section. The header section contained address information, | ||
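The frame segmentation described above can be sketched as follows. This is a minimal illustration only: the 4-byte header layout (address, window counter, frame index, frame count) and the 48-byte payload size are assumptions for the sketch, not the project's actual wire format.

```python
import struct

SAMPLES = 96          # samples per window
CHANNELS = 6          # AX, AY, AZ, GX, GY, GZ (one int16 each)
MAX_PAYLOAD = 48      # assumed usable payload bytes per LoRa frame

def split_window(window, node_addr=0x01, max_payload=MAX_PAYLOAD):
    """Split a flat list of SAMPLES*CHANNELS int16 values into frames.

    Each frame = 4-byte header (address, window counter, frame index,
    frame count; an illustrative layout) followed by a payload chunk.
    """
    raw = struct.pack(f"<{len(window)}h", *window)   # little-endian int16s
    chunks = [raw[i:i + max_payload] for i in range(0, len(raw), max_payload)]
    frames = []
    for idx, chunk in enumerate(chunks):
        header = struct.pack("<BBBB", node_addr, 0, idx, len(chunks))
        frames.append(header + chunk)
    return frames

frames = split_window([0] * (SAMPLES * CHANNELS))
# 96 samples x 6 channels x 2 bytes = 1152 bytes of payload per window
```

With these assumed sizes a window splits into 24 frames, each small enough for a typical LoRa packet limit; the receiver can use the frame index and frame count to detect missing frames before reassembly.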
| Line 618: | Line 618: | ||
| uint8_t who = 0; | uint8_t who = 0; | ||
| if (mpu_read(hi2c, | if (mpu_read(hi2c, | ||
- | if (who != 0x68) return -2; // expected WHO_AM_I | + | if (who != 0x68) return -2; | ||
| // Wake up (disable sleep bit) | // Wake up (disable sleep bit) | ||
| Line 625: | Line 625: | ||
| // Sample rate = Gyro output / (1 + SMPLRT_DIV). Gyro output default 8kHz (DLPF=0) or 1kHz (DLPF!=0) | // Sample rate = Gyro output / (1 + SMPLRT_DIV). Gyro output default 8kHz (DLPF=0) or 1kHz (DLPF!=0) | ||
| - | mpu_write(hi2c, | + | mpu_write(hi2c, |
| mpu_write(hi2c, | mpu_write(hi2c, | ||
| mpu_write(hi2c, | mpu_write(hi2c, | ||
| Line 1267: | Line 1267: | ||
| raise ValueError(f" | raise ValueError(f" | ||
| - | # Return the window | + | # Return the window |
| return out | return out | ||
| </ | </ | ||
| Line 1312: | Line 1312: | ||
| ==== 3.3 Gesture Model ==== | ==== 3.3 Gesture Model ==== | ||
| === 3.3.1 Dataset Creation === | === 3.3.1 Dataset Creation === | ||
- | The dataset was collected directly on the Raspberry Pi via the IMU (MPU6050) for training the motion classification model. During the data collection process, a total of 6 channels were read from the sensor: 3-axis acceleration (AX, AY, AZ) and 3-axis gyroscope (GX, GY, GZ). The collection process was designed based on windows, and each window consisted of A=96 consecutive samples. The sampling frequency Fs=100 Hz was selected, so each window represented a time interval of approximately 0.96 s. To increase temporal continuity and enhance data diversity, an inter-window stride was applied, and S=48 with 50% overlapping windows was implemented. This structure ensured better capture of short-term motion transitions and increased boundary examples between classes. | + | The dataset was collected directly on the Raspberry Pi via the IMU (MPU6050) for training the motion classification model. During the data collection process, a total of 6 channels were read from the sensor: 3-axis acceleration (AX, AY, AZ) and 3-axis gyroscope (GX, GY, GZ). The collection process was designed based on windows, and each window consisted of A=96 consecutive samples. The sampling frequency Fs=100 Hz was selected |
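The overlapping-window scheme (A=96, S=48, 50% overlap) can be sketched as below. This is an illustration of the segmentation logic only, assuming the sample stream is already an (N, 6) NumPy array; it is not the project's record.py.

```python
import numpy as np

A = 96    # window length in samples
S = 48    # stride -> 50% overlap between consecutive windows
FS = 100  # sampling frequency (Hz); each window spans A/FS = 0.96 s

def make_windows(stream):
    """Segment an (N, 6) array of IMU samples into overlapping windows.

    Returns an array of shape (num_windows, A, 6), where
    num_windows = (N - A) // S + 1.
    """
    n = (len(stream) - A) // S + 1
    return np.stack([stream[i * S : i * S + A] for i in range(n)])

stream = np.zeros((480, 6))   # 4.8 s of 6-channel samples
windows = make_windows(stream)
# (480 - 96) // 48 + 1 = 9 overlapping windows
```

The 50% overlap means every sample (except those at the very edges of the stream) appears in two windows, which is what produces the extra boundary examples between classes mentioned above.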
| <file python record.py> | <file python record.py> | ||
| Line 1755: | Line 1755: | ||
| print(f" | print(f" | ||
| - | if _name_ == "_main_": | + | if _name_ == "__main__": |
| main() | main() | ||
| </ | </ | ||
| Line 1882: | Line 1882: | ||
| **Model v2 Architecture** | **Model v2 Architecture** | ||
- | The v2 architecture is an updated 1D-CNN designed for three-class classification and TFLite deployment on Raspberry Pi. Figure 21 shows the full architecture. | + | The v2 architecture is an updated 1D-CNN designed for three-class classification and TFLite deployment on Raspberry Pi. One-dimensional CNNs have been shown to be effective for classifying temporal sensor signals with limited training data [R3]. Figure 21 shows the full architecture. |
| {{ : | {{ : | ||
| Line 1907: | Line 1907: | ||
| **Experiment 1 — Batch Normalization** | **Experiment 1 — Batch Normalization** | ||
| - | Batch Normalization (BN) layers were added immediately after each of the three Conv1D layers. | + | Batch Normalization (BN) layers were added immediately after each of the three Conv1D layers. |
| **Result:** EarlyStopping triggered at epoch ~37 compared to ~80 for the baseline — convergence speed was more than halved. Test accuracy remained unchanged at 95.7%. Validation loss showed slightly more stable behaviour. The significant reduction in training time with no accuracy cost made this an easy decision. | **Result:** EarlyStopping triggered at epoch ~37 compared to ~80 for the baseline — convergence speed was more than halved. Test accuracy remained unchanged at 95.7%. Validation loss showed slightly more stable behaviour. The significant reduction in training time with no accuracy cost made this an easy decision. | ||
| Line 1915: | Line 1915: | ||
| A learning rate callback was added: when validation loss does not improve for 10 consecutive epochs, the learning rate is multiplied by 0.5 (minimum: 1×10⁻⁶). This allows the model to take large gradient steps early in training and progressively finer steps as it approaches a local optimum. | A learning rate callback was added: when validation loss does not improve for 10 consecutive epochs, the learning rate is multiplied by 0.5 (minimum: 1×10⁻⁶). This allows the model to take large gradient steps early in training and progressively finer steps as it approaches a local optimum. | ||
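The schedule described above can be replayed in plain Python. This is a simplified sketch of the Keras ReduceLROnPlateau rule (no cooldown or improvement threshold, which the real callback also supports), intended only to make the halving behaviour concrete.

```python
def reduce_lr_on_plateau(val_losses, lr=1e-3, factor=0.5, patience=10, min_lr=1e-6):
    """Replay a ReduceLROnPlateau-style rule over a validation-loss history.

    After `patience` consecutive epochs without improvement, multiply the
    learning rate by `factor`, clamped below by `min_lr`.
    """
    best = float("inf")
    wait = 0
    for loss in val_losses:
        if loss < best:        # improvement: remember it, reset the counter
            best = loss
            wait = 0
        else:                  # no improvement this epoch
            wait += 1
            if wait >= patience:
                lr = max(lr * factor, min_lr)
                wait = 0
    return lr

# one improvement, then 20 flat epochs -> two halvings: 1e-3 -> 5e-4 -> 2.5e-4
final_lr = reduce_lr_on_plateau([1.0] + [1.0] * 20)
```

This is what lets the optimiser take large steps early and progressively finer steps near a local optimum, as noted above.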
- | **Result:** Test accuracy improved from 95.7% to 96.6% (113/117 correct). Shake recall improved from 41/46 to 42/46. Validation loss reached a lower minimum and remained more stable than in Experiment 1. EarlyStopping triggered at ~40 epochs. The improvement was measurable and consistent. | + | **Result:** Test accuracy improved from 95.7% to 96.6% (113/117 correct). Shake recall improved from 41/46 to 42/46. Validation loss reached a lower minimum and remained more stable than in Experiment 1. EarlyStopping triggered at ~40 epochs. The improvement was measurable and consistent. This configuration was further improved by augmentation in the final step. | ||
| **Decision: | **Decision: | ||
| Line 1921: | Line 1921: | ||
| To further increase the effective training set size, each training window was duplicated with a perturbed copy: Gaussian noise (σ=0.05 in normalised space) was added, and a random amplitude scale factor drawn from U(0.9, 1.1) was applied. The training set grew from 348 to 696 windows. The validation and test sets were not augmented. | To further increase the effective training set size, each training window was duplicated with a perturbed copy: Gaussian noise (σ=0.05 in normalised space) was added, and a random amplitude scale factor drawn from U(0.9, 1.1) was applied. The training set grew from 348 to 696 windows. The validation and test sets were not augmented. | ||
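The augmentation step can be sketched as below: each normalised window gets one perturbed copy with additive Gaussian noise (σ=0.05) and a random amplitude scale from U(0.9, 1.1), doubling 348 windows to 696. The zero-filled placeholder data is an assumption for the sketch; only the training split is augmented.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(window, sigma=0.05, scale_lo=0.9, scale_hi=1.1):
    """Return a perturbed copy of one normalised (A, 6) window."""
    noise = rng.normal(0.0, sigma, size=window.shape)   # additive Gaussian noise
    scale = rng.uniform(scale_lo, scale_hi)             # per-window amplitude scale
    return scale * window + noise

X_train = np.zeros((348, 96, 6))   # placeholder for the normalised training set
X_aug = np.concatenate([X_train,
                        np.stack([augment(w) for w in X_train])])
# 348 original + 348 perturbed copies = 696 training windows
```

The corresponding label array would simply be tiled in the same order; the validation and test sets are left untouched so the reported metrics stay comparable.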
| - | **Result: | + | **Result: |
| - | **Decision: | + | **Decision: |
| Figure 23 summarises all three experiments across the three key metrics: test accuracy, epochs to convergence, | Figure 23 summarises all three experiments across the three key metrics: test accuracy, epochs to convergence, | ||
| {{ : | {{ : | ||
| - | |**//Figure 23// | + | | **//Figure 23// |
| **Final Model Selection** | **Final Model Selection** | ||
| - | The final model is the v2 + BatchNorm + ReduceLROnPlateau configuration. It achieves | + | The final model is the v2 + BatchNorm + ReduceLROnPlateau |
| {{ : | {{ : | ||
| Line 2089: | Line 2089: | ||
| y_train = y_train[shuffle_idx] | y_train = y_train[shuffle_idx] | ||
| - | print(f" | + | print(f" |
| # ───────────────────────────────────────────────────────────────────────────── | # ───────────────────────────────────────────────────────────────────────────── | ||
| # 4) Model definition | # 4) Model definition | ||
| Line 2100: | Line 2100: | ||
| x = layers.Conv1D(16, | x = layers.Conv1D(16, | ||
| - | x = layers.BatchNormalization()(x) | + | x = layers.BatchNormalization()(x) |
| x = layers.Conv1D(16, | x = layers.Conv1D(16, | ||
| - | x = layers.BatchNormalization()(x) | + | x = layers.BatchNormalization()(x) |
| x = layers.MaxPooling1D(2)(x) | x = layers.MaxPooling1D(2)(x) | ||
| x = layers.Conv1D(24, | x = layers.Conv1D(24, | ||
| - | x = layers.BatchNormalization()(x) | + | x = layers.BatchNormalization()(x) |
| x = layers.GlobalAveragePooling1D()(x) | x = layers.GlobalAveragePooling1D()(x) | ||
| x = layers.Dense(24, | x = layers.Dense(24, | ||
| Line 2134: | Line 2134: | ||
| ) | ) | ||
- | # NEW: | + | # ReduceLROnPlateau | ||
| reduce_lr = tf.keras.callbacks.ReduceLROnPlateau( | reduce_lr = tf.keras.callbacks.ReduceLROnPlateau( | ||
| monitor=" | monitor=" | ||
| Line 2148: | Line 2148: | ||
| epochs=200, | epochs=200, | ||
| batch_size=8, | batch_size=8, | ||
| - | callbacks=[early, | + | callbacks=[early, |
| verbose=2 | verbose=2 | ||
| ) | ) | ||
| Line 2278: | Line 2278: | ||
| <file python server.py> | <file python server.py> | ||
| # | # | ||
| + | import sys | ||
| + | import struct | ||
| import time | import time | ||
| import json | import json | ||
| Line 2618: | Line 2620: | ||
| #Initialize LoRa control ports (M0-M1, AUX) | #Initialize LoRa control ports (M0-M1, AUX) | ||
| GPIO.setmode(GPIO.BOARD) | GPIO.setmode(GPIO.BOARD) | ||
| - | GPIO.setwarnings(False) | + | |
| - | GPIO.setup(PIN_LORA_M0, | + | GPIO.setup(PIN_LORA_M0, |
| - | GPIO.setup(PIN_LORA_M1, | + | GPIO.setup(PIN_LORA_M1, |
| - | GPIO.setup(PIN_LORA_AUX, | + | GPIO.setup(PIN_LORA_AUX, |
| - | #Wait until AUX is high (Module is ready) | + | |
| - | if not wait_until_pin(PIN_LORA_AUX, | + | if not wait_until_pin(PIN_LORA_AUX, |
| - | raise_error(" | + | raise_error(" |
| - | else: | + | else: |
| - | print(" | + | print(" |
| - | #Set module mode (11) sleep | + | |
| - | GPIO.output(PIN_LORA_M0, | + | GPIO.output(PIN_LORA_M0, |
| - | GPIO.output(PIN_LORA_M1, | + | GPIO.output(PIN_LORA_M1, |
| - | if not wait_until_pin(PIN_LORA_AUX, | + | if not wait_until_pin(PIN_LORA_AUX, |
| - | raise_error(" | + | raise_error(" |
| - | else: | + | else: |
| - | print(" | + | print(" |
| - | #Send lora configuration parameters | + | |
| - | uart.reset_input_buffer() | + | uart.reset_input_buffer() |
| - | print(" | + | print(" |
| - | uart.write(lora_params) | + | uart.write(lora_params) |
| - | uart.flush() | + | uart.flush() |
| - | time.sleep(0.1) | + | time.sleep(0.1) |
| - | #Validate lora configuration parameters | + | |
| - | uart.reset_input_buffer() | + | uart.reset_input_buffer() |
| - | print(" | + | print(" |
| - | uart.write(cmd_params) | + | uart.write(cmd_params) |
| - | uart.flush() | + | uart.flush() |
| - | rx = uart.read(6) | + | rx = uart.read(6) |
| - | if rx != lora_params: | + | if rx != lora_params: |
| - | raise_error(" | + | raise_error(" |
| - | else: | + | else: |
| - | print(" | + | print(" |
| - | time.sleep(0.1) | + | time.sleep(0.1) |
| - | #Set module mode (00) transceiver | + | |
| - | if not wait_until_pin(PIN_LORA_AUX, | + | if not wait_until_pin(PIN_LORA_AUX, |
| - | raise_error(" | + | raise_error(" |
| - | else: | + | else: |
| - | print(" | + | print(" |
| - | GPIO.output(PIN_LORA_M0, | + | GPIO.output(PIN_LORA_M0, |
| - | GPIO.output(PIN_LORA_M1, | + | GPIO.output(PIN_LORA_M1, |
| - | time.sleep(0.01) | + | time.sleep(0.01) |
| #INIT FIREBASE SERVER | #INIT FIREBASE SERVER | ||
| Line 2705: | Line 2707: | ||
| fs = args.fs | fs = args.fs | ||
| period = 1.0 / fs | period = 1.0 / fs | ||
| + | | ||
| + | window_id = 0 | ||
| print(" | print(" | ||
| Line 2766: | Line 2770: | ||
| ts = time.time() | ts = time.time() | ||
| - | print(f" | + | print(f" |
| print(f" | print(f" | ||
| print(f" | print(f" | ||
| Line 2804: | Line 2808: | ||
| {{ : | {{ : | ||
| - | |Gateway & Server Test Video| | + | |**//Video 5//** Gateway & Server Test Video| |
===== 4. Discussion ===== | ===== 4. Discussion ===== | ||
| Line 2836: | Line 2840: | ||
“Development and validation of a novel pedometer algorithm to quantify extended characteristics of the locomotor behavior of dairy cows”. Journal of Dairy Science. Volume 98. Issue 9. | “Development and validation of a novel pedometer algorithm to quantify extended characteristics of the locomotor behavior of dairy cows”. Journal of Dairy Science. Volume 98. Issue 9. | ||
| pp. 6236-6242. ISSN 0022-0302. DOI: https:// | pp. 6236-6242. ISSN 0022-0302. DOI: https:// | ||
| + | |||
| + | Kiranyaz, S., Avci, O., Abdeljaber, O., Ince, T., Gabbouj, M., & Inman, D. J. (2021). 1D convolutional neural networks and applications: | ||
| + | |||
| + | Ioffe, S., & Szegedy, C. (2015). Batch normalization: | ||