  
 **Model v2 Architecture**
The v2 architecture is an updated 1D-CNN designed for three-class classification and TFLite deployment on Raspberry Pi. One-dimensional CNNs have been shown to be effective for classifying temporal sensor signals with limited training data [R3]. Figure 21 shows the full architecture.
  
 {{ :emrp:ws2025:model_v2_pipeline.png?direct&600 |}}
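The core operation of the network above is a 1D convolution sliding along the sensor window. As a minimal numpy illustration of what each Conv1D layer computes (the window length of 128 samples, 3 input channels, kernel size 5, and 8 filters below are placeholder values, not the actual Figure 21 sizes):

```python
import numpy as np

def conv1d_valid(x, kernels):
    """Valid 1D convolution: x is (T, C_in), kernels is (K, C_in, C_out).
    Returns a (T-K+1, C_out) feature map, i.e. what a Conv1D layer
    computes before bias and activation."""
    T, C_in = x.shape
    K, _, C_out = kernels.shape
    out = np.zeros((T - K + 1, C_out))
    for t in range(T - K + 1):
        window = x[t:t + K]  # (K, C_in) slice of the signal
        # contract over time and channel axes to get one value per filter
        out[t] = np.tensordot(window, kernels, axes=([0, 1], [0, 1]))
    return out

# Hypothetical sizes: 128-sample window, 3 sensor axes, 8 filters of width 5.
rng = np.random.default_rng(0)
x = rng.standard_normal((128, 3))
w = rng.standard_normal((5, 3, 8))
fmap = conv1d_valid(x, w)
print(fmap.shape)  # (124, 8)
```

Each output column is one learned filter's response over time; stacking three such layers (with pooling in between) is what gives the v2 model its temporal feature hierarchy.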
Line 1907: Line 1907:
  
 **Experiment 1 — Batch Normalization**
Batch Normalization (BN) layers were added immediately after each of the three Conv1D layers. Batch normalization stabilizes the distribution of layer activations during training and can reduce the need for Dropout regularization [R4]. The change was three additional lines in the model definition.
  
 **Result:** EarlyStopping triggered at epoch ~37 compared to ~80 for the baseline — convergence speed was more than halved. Test accuracy remained unchanged at 95.7%. Validation loss showed slightly more stable behaviour. The significant reduction in training time with no accuracy cost made this an easy decision.
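At training time, BN normalizes each feature over the mini-batch to zero mean and unit variance, then applies a learned scale and shift. A minimal numpy sketch of that computation (the batch size of 32 and 16 features are illustrative values only):

```python
import numpy as np

def batch_norm(a, gamma=1.0, beta=0.0, eps=1e-5):
    """Training-time batch normalization (Ioffe & Szegedy, 2015):
    normalize each feature over the batch axis, then scale and shift."""
    mu = a.mean(axis=0)            # per-feature batch mean
    var = a.var(axis=0)            # per-feature batch variance
    a_hat = (a - mu) / np.sqrt(var + eps)
    return gamma * a_hat + beta

# Mini-batch of 32 activations with 16 features, deliberately off-center.
rng = np.random.default_rng(1)
a = rng.standard_normal((32, 16)) * 5.0 + 3.0
y = batch_norm(a)
# After BN each feature has ~zero mean and ~unit variance,
# regardless of the scale/offset of the incoming activations.
```

Because the normalized activations no longer drift with the scale of earlier layers, gradients stay well-conditioned, which is consistent with the faster EarlyStopping trigger observed above.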
Line 2834: Line 2834:
 Planned future work includes improving model performance with larger datasets, evaluating different communication technologies, and transforming the system into a fully portable edge-cloud architecture. Furthermore, hardware optimization and detailed power-profile analysis could further improve battery life. In conclusion, the developed system offers a scalable and modular solution for IoT applications requiring low power consumption and real-time motion detection.
  
===== 6. References & Sources =====
 European Telecommunications Standards Institute (ETSI). 2018. “Short Range Devices (SRD) operating in the frequency range 25 MHz to 1 000 MHz; Part 2: Harmonised Standard for access to radio spectrum for non specific radio equipment”. Visited: 09.02.2026. Available at: https://www.etsi.org/deliver/etsi_en/300200_300299/30022002/03.02.01_60/en_30022002v030201p.pdf
  
  
 Ioffe, S., & Szegedy, C. (2015). Batch normalization: Accelerating deep network training by reducing internal covariate shift. Proceedings of the 32nd International Conference on Machine Learning (ICML), 448–456. https://proceedings.mlr.press/v37/ioffe15.html
 
 Project Repository: https://github.com/bytarikesen81/EMRP
  
  
emrp/ws2025/amt.1772137255.txt.gz · Last modified: 2026/02/26 21:20 by 37554_students.hsrw