emrp:ws2025:amt [2026/02/26 17:06] – [Table] 37554_students.hsrw
emrp:ws2025:amt [2026/02/26 21:31] (current) 37554_students.hsrw
Line 43:
 |**//Figure 3//** Interfacing of the MCU and MPU6050|
 
-To facilitate easier normalisation for IMU measurements, the accelerometer configuration was selected as +/-2g, and the gyroscope configuration as +/-250dps. Each measurement was provided using windows containing a total of 96 samples, where each sample contains 6 values (ax,ay,az,gx,gy,gz). To facilitate communication synchronisation, the IMU sampling frequency was set to 1kHz and the SMPLRT_DIV value to 7 so that each window would be collected within a 1-second time interval, resulting in a sampling frequency of 125Hz. This yielded an IMU window period of approximately 0.8 seconds.
+To simplify normalisation of the IMU measurements, the accelerometer range was configured as ±2g and the gyroscope range as ±250dps. Each measurement was provided as a window of 96 samples, where each sample contains 6 values (ax, ay, az, gx, gy, gz). For communication synchronisation, the IMU output rate was set to 1kHz and the SMPLRT_DIV value to 7, giving an effective sampling frequency of 125Hz on the controller side, so that each window is collected within a 1-second interval; the resulting window period is approximately 0.77 s (96 samples at 125Hz).
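The timing arithmetic above can be checked with a short sketch, using only the values stated in the text (1 kHz gyro output with the DLPF enabled, SMPLRT_DIV = 7, 96-sample windows):

```python
# Derived IMU timing, using the register values from the text:
# gyro output 1 kHz (DLPF enabled), SMPLRT_DIV = 7, 96-sample windows.
GYRO_OUTPUT_HZ = 1000
SMPLRT_DIV = 7
WINDOW_SAMPLES = 96

sample_rate_hz = GYRO_OUTPUT_HZ / (1 + SMPLRT_DIV)   # 1000 / 8 = 125.0 Hz
window_period_s = WINDOW_SAMPLES / sample_rate_hz    # 96 / 125 = 0.768 s

print(sample_rate_hz, window_period_s)
```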
  
 === 2.2.3 Power Management ===
Line 124:
 |**//Figure 11//** Schematic Indicating the Connections of All Modules in the Gateway Unit|
 
-=== 2.4.2 Gateway System (RPi) ===
+=== 2.4.2 Gesture Model ===
 The motion classification model has a three-class structure:
   * **Move:** The animal's motion along the x, y, z axes with a specific acceleration
Line 356:
 main()
 
-if _name_ == "_main_":
+if __name__ == "__main__":
     main()
 </file>
Line 450:
 </file>
 
-The controller then collected a fixed-length IMU data window consisting of a total of 96 samples at a sampling frequency of 100 HzEach sample contained a total of six 16-bit raw measurements: three-axis acceleration and three-axis gyroscope data. Thus, a single window produced 96 × 6 × 2 bytes of raw data.
+The controller then collected a fixed-length IMU data window consisting of 96 samples at an effective sampling frequency of 100 Hz on the gateway side (slightly lower than the nominal rate because of transmission latency). Each sample contained six 16-bit raw measurements: three-axis acceleration and three-axis gyroscope data. Thus, a single window produced 96 × 6 × 2 = 1152 bytes of raw data.
 
 The collected window data was not sent in a single piece but was divided into fixed-length frames, taking into account the maximum packet size of the LoRa module. Each frame consisted of a header and a payload section. The header section contained address information, channel information, the system-defined "MAGIC" constant value, and a frame counter field. The window data was divided into multiple frames, and each frame was transmitted sequentially with the frame counter field. The full code for this flow, covering the IMU measurement and windowing logic, is given below.
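As a rough illustration of the framing scheme described above, the sketch below splits one 1152-byte window into header-plus-payload frames. The one-byte field widths, the MAGIC value, the address/channel bytes, and the 48-byte payload budget are assumptions chosen for illustration, not the project's actual constants:

```python
import struct

MAGIC = 0xA5                 # assumed placeholder; the real MAGIC constant is not given here
ADDR, CHANNEL = 0x01, 0x12   # hypothetical address / channel bytes
PAYLOAD_MAX = 48             # assumed payload budget per LoRa frame

def make_frames(window_bytes: bytes) -> list:
    """Split one raw IMU window into [header | payload] frames with a frame counter."""
    frames = []
    for counter, off in enumerate(range(0, len(window_bytes), PAYLOAD_MAX)):
        header = struct.pack("BBBB", ADDR, CHANNEL, MAGIC, counter & 0xFF)
        frames.append(header + window_bytes[off:off + PAYLOAD_MAX])
    return frames

window = bytes(96 * 6 * 2)   # one window: 96 samples x 6 channels x 2 bytes = 1152 bytes
frames = make_frames(window)
print(len(frames), "frames of", len(frames[0]), "bytes")
```

With these assumed sizes, 1152 bytes split into 24 frames; the receiver can use the counter byte to detect missing frames and reassemble the window in order.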
Line 618:
     uint8_t who = 0;
     if (mpu_read(hi2c, REG_WHO_AM_I, &who, 1) != HAL_OK) return -1;
-    if (who != 0x68) return -2;  // WHO_AM_I beklenen
+    if (who != 0x68) return -2;  // unexpected WHO_AM_I value
 
     // Wake up (disable sleep bit)
Line 625:
 
     // Sample rate = Gyro output / (1 + SMPLRT_DIV). Gyro output default 8kHz (DLPF=0) or 1kHz (DLPF!=0)
-    mpu_write(hi2c, REG_SMPLRT_DIV, 0x07);   // örnek: 1kHz/(1+7)=125Hz (DLPF ON is assumed)
+    mpu_write(hi2c, REG_SMPLRT_DIV, 0x07);   // 1kHz/(1+7)=125Hz (DLPF ON is assumed)
     mpu_write(hi2c, REG_CONFIG, 0x03);       // DLPF ~44Hz (typical)
     mpu_write(hi2c, REG_GYRO_CONFIG, 0x00);  // ±250 dps
Line 1267:
         raise ValueError(f"Incomplete window: got {samples_written} samples, expected {A}")
 
-    # Return the window safter the transfer is successful
+    # Return the window after the transfer is successful
     return out
 </file>
Line 1312:
 ==== 3.3 Gesture Model ====
 === 3.3.1 Dataset Creation ===
-The dataset was collected directly on the Raspberry Pi via the IMU (MPU6050) for training the motion classification model. During the data collection process, a total of 6 channels were read from the sensor: 3-axis acceleration (AX, AY, AZ) and 3-axis gyroscope (GX, GY, GZ). The collection process was designed based on windows, and each window consisted of A=96 consecutive samples. The sampling frequency Fs=100 Hz was selected, so each window represented a time interval of approximately 0.96 s. To increase temporal continuity and enhance data diversity, inter-window stride was applied, and S=48 with 50% overlapping windows was implemented. This structure ensured better capture of short-term motion transitions and increased boundary examples between classes.
+The dataset was collected directly on the Raspberry Pi via the IMU (MPU6050) for training the motion classification model. During data collection, a total of 6 channels were read from the sensor: 3-axis acceleration (AX, AY, AZ) and 3-axis gyroscope (GX, GY, GZ). Collection was window-based, with each window consisting of A=96 consecutive samples. A sampling frequency of Fs=100 Hz was selected for the gateway, so each window represented a time interval of approximately 0.96 s. To increase temporal continuity and enhance data diversity, an inter-window stride of S=48 was applied, producing 50% overlapping windows. This structure ensured better capture of short-term motion transitions and increased the number of boundary examples between classes.
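The windowing parameters above (A=96, stride S=48, i.e. 50% overlap) can be sketched as follows; the function name and the synthetic input stream are illustrative only, not the project's recording code:

```python
import numpy as np

A, S = 96, 48   # window length and stride from the text (50% overlap)

def make_windows(samples: np.ndarray) -> np.ndarray:
    """Segment an (N, 6) IMU stream into overlapping (A, 6) windows with stride S."""
    n_windows = 1 + (len(samples) - A) // S
    return np.stack([samples[i * S : i * S + A] for i in range(n_windows)])

stream = np.arange(480 * 6, dtype=np.float32).reshape(480, 6)  # ~4.8 s at 100 Hz
wins = make_windows(stream)
print(wins.shape)   # (9, 96, 6)
```

Because the stride is half the window length, the second half of each window reappears as the first half of the next one, which is what produces the extra boundary examples between classes.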
  
 <file python record.py>
Line 1755:
     print(f"Shake / Move: {ratio('Shake','Move'):.2f}x")
 
-if _name_ == "_main_":
+if __name__ == "__main__":
     main()
 </file>
Line 1882:
 
 **Model v2 Architecture**
-The v2 architecture is an updated 1D-CNN designed for three-class classification and TFLite deployment on Raspberry Pi. Figure 21 shows the full architecture.
+The v2 architecture is an updated 1D-CNN designed for three-class classification and TFLite deployment on the Raspberry Pi. One-dimensional CNNs have been shown to be effective for classifying temporal sensor signals with limited training data [R3]. Figure 21 shows the full architecture.
 
 {{ :emrp:ws2025:model_v2_pipeline.png?direct&600 |}}
Line 1907:
 
 **Experiment 1 — Batch Normalization**
-Batch Normalization (BN) layers were added immediately after each of the three Conv1D layers. BN normalises the activations of each mini-batch during training, which stabilises gradient flow and reduces sensitivity to the initial learning rate. The change was three additional lines in the model definition.
+Batch Normalization (BN) layers were added immediately after each of the three Conv1D layers. Batch normalization stabilizes the distribution of layer activations during training and can reduce the need for Dropout regularization [R4]. In our experiments, adding BatchNorm cut convergence time by more than half, from ~80 to ~37 epochs.
 
 **Result:** EarlyStopping triggered at epoch ~37 compared to ~80 for the baseline, so training converged in fewer than half the epochs. Test accuracy remained unchanged at 95.7%. Validation loss showed slightly more stable behaviour. The significant reduction in training time with no accuracy cost made this an easy decision.
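For intuition, the following is a minimal NumPy sketch of what a BN layer computes at training time: normalise the mini-batch, then apply a learned scale (gamma) and shift (beta). It is illustrative only, not the Keras layer used in the project (which also tracks running statistics for inference):

```python
import numpy as np

def batchnorm_forward(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Training-time BN: normalise over the batch axis, then scale and shift."""
    mu = x.mean(axis=0)                      # per-feature mini-batch mean
    var = x.var(axis=0)                      # per-feature mini-batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)    # zero mean, unit variance
    return gamma * x_hat + beta

rng = np.random.default_rng(0)
x = rng.normal(3.0, 5.0, size=(32, 96, 8))  # a batch of Conv1D-style activations
y = batchnorm_forward(x)
print(y.mean(), y.std())                    # close to 0 and 1
```

Keeping activations near zero mean and unit variance at every layer is what stabilises gradient flow and allows the faster convergence observed in the experiment.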
Line 2278:
 <file python server.py>
 #!/usr/bin/env python3
+import sys
+import struct
 import time
 import json
Line 2618 → Line 2620:
     #Initialize LoRa control ports (M0-M1, AUX)
     GPIO.setmode(GPIO.BOARD)
- GPIO.setwarnings(False)
- GPIO.setup(PIN_LORA_M0, GPIO.OUT)
- GPIO.setup(PIN_LORA_M1, GPIO.OUT)
- GPIO.setup(PIN_LORA_AUX, GPIO.IN, pull_up_down = GPIO.PUD_UP)
+    GPIO.setwarnings(False)
+    GPIO.setup(PIN_LORA_M0, GPIO.OUT)
+    GPIO.setup(PIN_LORA_M1, GPIO.OUT)
+    GPIO.setup(PIN_LORA_AUX, GPIO.IN, pull_up_down=GPIO.PUD_UP)
 
- #Wait until AUX is high (Module is ready)
- if not wait_until_pin(PIN_LORA_AUX, GPIO.HIGH, 1.0):
- raise_error("LoRa module is busy!", -1, cleanup())
- else:
- print("LoRa module is available")
+    #Wait until AUX is high (module is ready)
+    if not wait_until_pin(PIN_LORA_AUX, GPIO.HIGH, 1.0):
+        raise_error("LoRa module is busy!", -1, cleanup())
+    else:
+        print("LoRa module is available")
 
- #Set module mode (11) sleep
- GPIO.output(PIN_LORA_M0, GPIO.HIGH)
- GPIO.output(PIN_LORA_M1, GPIO.HIGH)
- if not wait_until_pin(PIN_LORA_AUX, GPIO.HIGH, 1.0):
- raise_error("Cannot set parameters", -1, cleanup())
- else:
- print("Parameters are set")
+    #Set module mode (11) sleep
+    GPIO.output(PIN_LORA_M0, GPIO.HIGH)
+    GPIO.output(PIN_LORA_M1, GPIO.HIGH)
+    if not wait_until_pin(PIN_LORA_AUX, GPIO.HIGH, 1.0):
+        raise_error("Cannot set parameters", -1, cleanup())
+    else:
+        print("Parameters are set")
 
- #Send lora configuration parameters
- uart.reset_input_buffer()
- print("TX:", lora_params.hex(" "))
- uart.write(lora_params)
- uart.flush()
- time.sleep(0.1)
+    #Send LoRa configuration parameters
+    uart.reset_input_buffer()
+    print("TX:", lora_params.hex(" "))
+    uart.write(lora_params)
+    uart.flush()
+    time.sleep(0.1)
 
- #Validate lora configuration parameters
- uart.reset_input_buffer()
- print("TX:", cmd_params.hex(" "))
- uart.write(cmd_params)
- uart.flush()
- rx = uart.read(6)
- if rx != lora_params:
- raise_error("Unable to validate configuration parameters", -1, cleanup())
- else:
- print("Config parameters:", rx.hex(" "))
- time.sleep(0.1)
+    #Validate LoRa configuration parameters
+    uart.reset_input_buffer()
+    print("TX:", cmd_params.hex(" "))
+    uart.write(cmd_params)
+    uart.flush()
+    rx = uart.read(6)
+    if rx != lora_params:
+        raise_error("Unable to validate configuration parameters", -1, cleanup())
+    else:
+        print("Config parameters:", rx.hex(" "))
+    time.sleep(0.1)
 
- #Set module mode (00) transceiver
- if not wait_until_pin(PIN_LORA_AUX, GPIO.HIGH, 1.0):
- raise_error("LoRa module is busy!", -1, cleanup())
- else:
- print("LoRa module is available to transmit"
- GPIO.output(PIN_LORA_M0, GPIO.LOW)
- GPIO.output(PIN_LORA_M1, GPIO.LOW)
- time.sleep(0.01)
+    #Set module mode (00) transceiver
+    if not wait_until_pin(PIN_LORA_AUX, GPIO.HIGH, 1.0):
+        raise_error("LoRa module is busy!", -1, cleanup())
+    else:
+        print("LoRa module is available to transmit")
+        GPIO.output(PIN_LORA_M0, GPIO.LOW)
+        GPIO.output(PIN_LORA_M1, GPIO.LOW)
+        time.sleep(0.01)
 
     #INIT FIREBASE SERVER
Line 2705 → Line 2707:
     fs = args.fs
     period = 1.0 / fs
+
+    window_id = 0
 
     print("Live TFLite inference started.")
Line 2766 → Line 2770:
             ts = time.time()
 
-            print(f"[Window {window_id}] {window_summary(win_i16)}")
+            print(f"[Window {window_id}] {window_summary(win)}")
             print(f"  move_prob={move_prob:.4f}  rest_prob={rest_prob:.4f}  shake_prob={shake_prob:.4f}")
             print(f"  predicted={pred} ({conf*100:.1f}%)")
Line 2836 → Line 2840:
 "Development and validation of a novel pedometer algorithm to quantify extended characteristics of the locomotor behavior of dairy cows". Journal of Dairy Science, Volume 98, Issue 9,
 pp. 6236-6242. ISSN 0022-0302. DOI: https://doi.org/10.3168/jds.2015-9657.
+
+[R3] Kiranyaz, S., Avci, O., Abdeljaber, O., Ince, T., Gabbouj, M., & Inman, D. J. (2021). 1D convolutional neural networks and applications: A survey. Mechanical Systems and Signal Processing, 151, 107398. https://doi.org/10.1016/j.ymssp.2020.107398
+
+[R4] Ioffe, S., & Szegedy, C. (2015). Batch normalization: Accelerating deep network training by reducing internal covariate shift. Proceedings of the 32nd International Conference on Machine Learning (ICML), 448–456. https://proceedings.mlr.press/v37/ioffe15.html
  
  