
Activity detection systems using infrared array sensors with deep learning (full text)

Krishnan Arumugasamy, Muthukumar (Keio University)

2022.09.05

Abstract

In assistive care technologies, activity detection is one of the vital tasks to assist people by preventing, or at least detecting, any accident that might occur. Activity detection has conventionally relied on two leading families of devices: wearable and non-wearable ones. As their name suggests, wearable devices require the person being monitored to wear them, or at least carry them anywhere (s)he goes; examples include smartphones, smartwatches, accelerometers, and kinetic sensors. Carrying such a device is a burden on the person. Non-wearable devices, on the other hand, do not have such limiting constraints. A device (typically a sensor) is placed at a specific location in the area under monitoring, and the monitored person does not need to worry about its functioning. In recent years, many non-contact activity detection techniques have been proposed using Wireless Fidelity (Wi-Fi), Light Detection and Ranging (LiDAR), radar, etc. These approaches have limitations such as coverage issues and deployment issues related to computational resources.

The recent introduction of the wide-angle low-resolution infrared (IR) array sensor has helped develop device-free monitoring systems that solve most of these issues, and many IR-based activity detection systems have been proposed in recent years. The limitations of existing works include, but are not limited to, the difficulty of detecting activities, a lack of robustness to the environment, and computational resource constraints in deployment.

To address the aforementioned issues, this thesis proposes activity detection systems using a hybrid deep learning model, which can classify the blurry and noisy images produced by two wide-angle IR array sensors: one placed on the wall and one placed on the ceiling. The activity detection technique involves two stages. First, we classify the individual frames collected by the wall sensor and the ceiling sensor separately using a Convolutional Neural Network (CNN). In the second stage, the output of the CNN is passed through a Long Short-Term Memory (LSTM) network with a window size equal to 5 frames to classify the sequence of activities. Afterwards, we combine the ceiling data and wall data and classify each pair of frames using the hybrid deep learning model. Furthermore, we propose an activity detection system using a single IR array sensor on the ceiling, allowing for performance comparable to that obtained when using dual sensors. By applying advanced deep-learning-based computer vision techniques, we remove the noise and blurriness in the data, which improves the IR image quality. The IR images/image sequences are then classified using a hybrid DL model that combines a CNN and an LSTM. We use data augmentation, performed by a Conditional Generative Adversarial Network (CGAN), to incorporate a wider variety of samples into the training of the neural networks and make the model robust to the environment. By enhancing the images with Super-Resolution (SR), removing the noise, and augmenting the training data with more samples, the classification accuracy of activity detection can be improved. Finally, we use quantization to optimize the neural network so that it can run on low-powered devices.
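To make the two-stage pipeline concrete, below is a minimal sketch of a hybrid CNN+LSTM classifier in Keras. It follows the structure described above (a per-frame CNN followed by an LSTM over a 5-frame window), but the layer sizes, the 8x8 frame resolution, and the number of activity classes are illustrative assumptions, not the architecture reported in the thesis.

import tensorflow as tf
from tensorflow.keras import layers, models

FRAMES = 5            # 5-frame window, as described above
H, W, C = 8, 8, 1     # assumed low-resolution IR frame shape (illustrative)
NUM_CLASSES = 4       # hypothetical number of activity classes

model = models.Sequential([
    # Stage 1: a small CNN applied independently to each frame in the window.
    layers.TimeDistributed(
        layers.Conv2D(16, 3, padding="same", activation="relu"),
        input_shape=(FRAMES, H, W, C)),
    layers.TimeDistributed(layers.MaxPooling2D(2)),
    layers.TimeDistributed(layers.Conv2D(32, 3, padding="same", activation="relu")),
    layers.TimeDistributed(layers.Flatten()),
    # Stage 2: an LSTM aggregates the per-frame CNN features over the window.
    layers.LSTM(64),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

The same skeleton would extend to the dual-sensor case by running two such CNN branches (wall and ceiling) and concatenating their per-frame features before the LSTM.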

The contributions of the thesis are as follows:

• We propose a lightweight Deep Learning model for activity classification that is robust to environmental changes. Being lightweight, such a model can run on devices with very low computation capabilities, making it the basis of a cheap solution for activity detection (see the quantization sketch after this list).
• The blurriness and noise present in the captured IR frames, caused by the sensor's characteristics and imprecision, lead to a noticeable drop in performance for conventional methods. Our proposed neural network architecture addresses this issue by exploiting the temporal changes across frames to identify the activities accurately.
• We identify the activity using a time window of less than 1 second. Despite this smaller time window, we markedly improve the classification accuracy compared to conventional works, which require a larger time window.
• Low Resolution (LR) sensors are always preferred over High Resolution (HR) ones if they provide similar performance: they better preserve the privacy of the monitored person and cost much less. We demonstrate that, by using deep learning techniques such as super-resolution, denoising, and CGAN-based augmentation, it is possible to achieve classification performance on the LR data that is nearly identical to that on the HR data (24×32).
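As a sketch of the deployment step mentioned above, the snippet below applies post-training quantization with the TensorFlow Lite converter, one common way to shrink a Keras model for low-powered devices. The thesis states that quantization was used, but the exact toolchain and quantization scheme are not given here, so this is an assumed setup (it reuses the `model` object from the earlier sketch).

import tensorflow as tf

# Post-training quantization: convert the trained Keras model and let the
# converter quantize the weights (dynamic-range quantization by default).
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

# The resulting flat buffer can then be deployed to a low-powered device.
with open("activity_model.tflite", "wb") as f:
    f.write(tflite_model)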

