Suspicious Human Activity Recognition from Surveillance Videos Using Deep Learning

Suspicious human activity recognition (SHAR) is central to optimizing security and surveillance systems, helping to identify and mitigate potential hazards in many contexts. This work addresses the challenge of accurately detecting potentially suspicious human behavior using a novel method built on state-of-the-art deep learning techniques. Although SHAR has been studied extensively, existing techniques offer limited efficiency and accuracy. This study aims to overcome these limitations by presenting a complete pipeline for detecting and recognizing suspicious human activity, tackling inaccurate and inefficient activity detection in surveillance systems through careful data collection, preprocessing, and model training. Using Convolutional Neural Networks (CNNs) and deep learning architectures, namely the proposed time-distributed CNN model and a Conv3D model, we achieve accuracy rates of 90.14% and 88.23%, respectively, improving on existing research approaches. The effectiveness of our method is further demonstrated through prediction tests on previously unseen test data and on YouTube videos. Evaluating the trained models on held-out test data establishes their accuracy and their ability to generalize learned knowledge to novel scenarios, and applying the models to predict suspicious human behavior in a YouTube video demonstrates their practical value in real-world security operations. The findings of this research have significant implications for enhancing security and surveillance systems, enabling better identification and mitigation of potential hazards in many contexts. Our approach improves the accuracy and efficiency of SHAR, advancing the development of more dependable and resilient surveillance systems and ultimately increasing public safety and security.
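To illustrate the two architecture families named above, the following is a minimal sketch in TensorFlow/Keras, not the authors' implementation: the frame count, frame resolution, layer sizes, and class count are illustrative placeholders rather than the paper's settings. The first model applies a per-frame CNN via TimeDistributed layers and aggregates temporal information with an LSTM; the second learns spatio-temporal features directly with 3D convolutions.

```python
# Minimal sketch of a time-distributed CNN and a Conv3D video classifier.
# Hyperparameters (FRAMES, HEIGHT, WIDTH, NUM_CLASSES) are assumed values
# for illustration only, not the configuration reported in the paper.
import tensorflow as tf
from tensorflow.keras import layers, models

FRAMES, HEIGHT, WIDTH, CHANNELS, NUM_CLASSES = 16, 64, 64, 3, 2

def build_time_distributed_cnn():
    """Per-frame CNN features aggregated over time with an LSTM."""
    return models.Sequential([
        layers.Input(shape=(FRAMES, HEIGHT, WIDTH, CHANNELS)),
        layers.TimeDistributed(layers.Conv2D(32, 3, activation="relu")),
        layers.TimeDistributed(layers.MaxPooling2D()),
        layers.TimeDistributed(layers.Conv2D(64, 3, activation="relu")),
        layers.TimeDistributed(layers.MaxPooling2D()),
        layers.TimeDistributed(layers.Flatten()),
        layers.LSTM(64),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

def build_conv3d():
    """Spatio-temporal features learned directly with 3D convolutions."""
    return models.Sequential([
        layers.Input(shape=(FRAMES, HEIGHT, WIDTH, CHANNELS)),
        layers.Conv3D(32, kernel_size=3, activation="relu"),
        layers.MaxPooling3D(pool_size=2),
        layers.Conv3D(64, kernel_size=3, activation="relu"),
        layers.MaxPooling3D(pool_size=2),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

if __name__ == "__main__":
    # Both models consume clips shaped (frames, height, width, channels)
    # and output a probability over activity classes.
    for model in (build_time_distributed_cnn(), build_conv3d()):
        model.compile(optimizer="adam",
                      loss="categorical_crossentropy",
                      metrics=["accuracy"])
        model.summary()
```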
