Human Activity Recognition using Machine Learning Projects
Dynamic Video-based Human Activity Recognition (HAR) is used in this research to identify and categorize human actions and activities from video. It is now widely employed in many fields and domains. Here we provide the details related to this proposed strategy.
- Define Dynamic Video-based Human Activity Recognition
At the beginning of the research we start with the definition of the proposed technique. Dynamic video-based HAR is the process of automatically detecting and classifying human behaviors and actions from video sequences using computer vision and machine learning techniques.
- What is Dynamic Video-based Human Activity Recognition?
After the definition, we give a brief description of the proposed technique. Dynamic video-based HAR is a field of machine learning and computer vision in which methods are developed to automatically detect and classify human behaviors and actions in video sequences. This involves extracting relevant features from the video data and applying pattern recognition methods to recognize activities such as running, sitting, walking and more. A minimal sketch of this extract-then-classify pipeline is given below.
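The following is only an illustrative sketch of the pipeline, assuming OpenCV and scikit-learn are installed; the video file names, the histogram feature, and the nearest-neighbour classifier are hypothetical placeholders, not the proposed model.

```python
# Minimal sketch of a video-based HAR pipeline: frame reading, simple
# feature extraction, and classification. File names are placeholders.
import cv2
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def video_features(path, n_bins=32):
    """Simple per-video feature: mean grayscale histogram over all frames."""
    cap = cv2.VideoCapture(path)
    hists = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        hist = cv2.calcHist([gray], [0], None, [n_bins], [0, 256]).ravel()
        hists.append(hist / (hist.sum() + 1e-8))  # normalise each frame histogram
    cap.release()
    return np.mean(hists, axis=0) if hists else np.zeros(n_bins)

# Hypothetical usage: train on labelled clips, then predict a new clip's activity.
train_paths = ["walk_01.mp4", "run_01.mp4"]   # placeholder labelled clips
train_labels = ["walking", "running"]
X = np.stack([video_features(p) for p in train_paths])
clf = KNeighborsClassifier(n_neighbors=1).fit(X, train_labels)
print(clf.predict([video_features("activity.mp4")]))  # placeholder test clip
```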
- Where is Dynamic Video-based Human Activity Recognition used?
Following the brief description, we describe where the proposed technique is utilized. Dynamic video-based HAR finds applications in different fields such as security systems and monitoring, human-computer interaction, sports analysis, healthcare monitoring, and assistive technologies for the elderly and people with disabilities.
- Why is Dynamic Video-based Human Activity Recognition technology proposed? (previous technology issues)
In this research we propose video-based HAR, which extends the modeling of spatial interactions and temporal dependencies for more robust and accurate activity detection by utilizing temporal graph associations and contrastive learning. Some of the existing technology issues are difficulties in feature extraction, problems in classification, and obstacles in pre-processing.
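As a rough illustration of the contrastive learning component mentioned above, here is a minimal NumPy sketch of an InfoNCE-style loss over paired clip embeddings; the embeddings, batch size, and temperature are assumptions for illustration, not the exact loss used in the proposed model.

```python
# Minimal NumPy sketch of an InfoNCE-style contrastive loss over paired
# clip embeddings; the arrays and temperature below are hypothetical.
import numpy as np

def info_nce_loss(z1, z2, temperature=0.1):
    """z1, z2: (N, D) embeddings of two views of the same N clips."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)   # L2-normalise view A
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)   # L2-normalise view B
    logits = z1 @ z2.T / temperature                       # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)            # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Positive pairs lie on the diagonal; minimise their negative log-likelihood.
    return -np.mean(np.diag(log_prob))

rng = np.random.default_rng(0)
z_a = rng.normal(size=(8, 16))                 # e.g. embeddings of 8 clips (view A)
z_b = z_a + 0.05 * rng.normal(size=(8, 16))    # slightly perturbed view B
print(info_nce_loss(z_a, z_b))
```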
- Algorithms / protocols
For this research we employ the following methods to overcome the issues in the previous technologies: Median Filter, Multilayer Feedforward Perceptron networks, LSTM (Long Short-Term Memory), Histogram Equalization (HE), Whale Optimization Algorithm (WOA), Graph Neural Networks, and Homomorphic Filter.
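A minimal sketch of two of the listed pre-processing steps, median filtering and histogram equalization, applied to a single video frame with OpenCV; the input and output file names are hypothetical placeholders.

```python
# Minimal sketch of frame pre-processing with a Median Filter and
# Histogram Equalization (HE) using OpenCV; "frame.png" is a placeholder.
import cv2

frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)  # load one video frame as 8-bit grayscale
denoised = cv2.medianBlur(frame, 5)                    # median filter removes salt-and-pepper noise
equalized = cv2.equalizeHist(denoised)                 # HE improves global contrast
cv2.imwrite("frame_preprocessed.png", equalized)
```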
- Comparative study / Analysis
We propose this video-based HAR to overcome the issues in the existing technologies. We compare several methods against existing technologies to tackle these issues.
- Unsupervised HAR techniques applied to motion capture (mocap) data frequently utilize only spatial details and discard the activity-specific information contained in the temporal sequences.
- Removing several frames at a time decreases the hardware cost, computational load, processing time, and memory required by intelligent video-based systems (see the frame-sampling sketch after this list).
- One existing approach utilizes a video-length adaptive input data generator (stateless), whereas the other exploits the stateful capability of general recurrent neural networks, which is useful in the specific case of HAR.
- The data consist of body movements, high-quality videos, series of poses, and activities such as pointing, jogging, phone talking, stretching, walking, and many more.
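A minimal sketch of the uniform frame sampling referenced in the list above, which keeps every k-th frame to reduce the computational load; the stride value and clip shape are assumptions for illustration.

```python
# Minimal sketch of uniform temporal frame sampling: keep every k-th frame
# to cut computational load, processing time, and memory; stride=4 is illustrative.
import numpy as np

def sample_frames(frames, stride=4):
    """frames: (T, H, W, C) array of video frames; returns the subsampled clip."""
    return frames[::stride]

clip = np.zeros((120, 64, 64, 3), dtype=np.uint8)  # dummy 120-frame clip
print(sample_frames(clip).shape)                   # (30, 64, 64, 3)
```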
- Simulation results / Parameters
We utilize several parameters and performance metrics to evaluate the findings of this proposed research. The metrics employed are computation time, precision, accuracy, recall, and F-score. A minimal sketch of how these metrics can be computed follows.
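The sketch below uses scikit-learn with made-up ground-truth and predicted labels purely to show how each metric could be computed; the labels and averaging choice are illustrative assumptions.

```python
# Minimal sketch of the evaluation metrics (accuracy, precision, recall,
# F-score, computation time) on dummy labels using scikit-learn.
import time
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

y_true = ["walking", "running", "sitting", "walking", "running"]  # dummy ground truth
y_pred = ["walking", "running", "walking", "walking", "running"]  # dummy predictions

start = time.perf_counter()
print("Accuracy :", accuracy_score(y_true, y_pred))
print("Precision:", precision_score(y_true, y_pred, average="macro", zero_division=0))
print("Recall   :", recall_score(y_true, y_pred, average="macro", zero_division=0))
print("F-score  :", f1_score(y_true, y_pred, average="macro", zero_division=0))
print("Computation time (s):", time.perf_counter() - start)
```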
- Dataset LINKS / Important URL
The following links cover concepts and ideas that are relevant to this video-based HAR. These resources are useful for understanding and overcoming the issues in the existing technologies.
- https://www.mdpi.com/2076-3417/12/4/1830
- https://www.sciencedirect.com/science/article/pii/S0031320324000529
- https://link.springer.com/article/10.1007/s42979-023-02031-5
- https://link.springer.com/article/10.1007/s11042-022-14075-5
- https://ieeexplore.ieee.org/abstract/document/10193771/
- Dynamic Video-based Human Activity Recognition Applications
Let’s look at the applications of the proposed dynamic video-based HAR. Some of the applications are sports analysis, surveillance and security systems, assistive technologies, healthcare monitoring, and human-computer interaction.
- Topology for Dynamic Video-based Human Activity Recognition
The topologies to be employed for this proposed technique are Feature Extraction, Feature Representation, and Classification. A minimal sketch of this three-stage topology is given below.
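The PyTorch sketch below illustrates the three stages in one model: a per-frame feature extraction layer, an LSTM for temporal feature representation, and a classification head. All layer sizes, the number of classes, and the frame-feature dimension are illustrative assumptions, not the proposed architecture.

```python
# Minimal PyTorch sketch of the HAR topology: per-frame Feature Extraction,
# temporal Feature Representation (LSTM), and Classification; sizes are illustrative.
import torch
import torch.nn as nn

class HARNet(nn.Module):
    def __init__(self, frame_dim=2048, hidden=128, n_classes=6):
        super().__init__()
        self.extract = nn.Linear(frame_dim, 256)                  # per-frame feature extraction
        self.represent = nn.LSTM(256, hidden, batch_first=True)   # temporal feature representation
        self.classify = nn.Linear(hidden, n_classes)              # activity classification head

    def forward(self, frames):                 # frames: (batch, time, frame_dim)
        x = torch.relu(self.extract(frames))
        _, (h_n, _) = self.represent(x)        # last hidden state summarises the clip
        return self.classify(h_n[-1])          # (batch, n_classes) activity logits

model = HARNet()
dummy_clip = torch.randn(2, 16, 2048)          # 2 clips, 16 frames, 2048-d frame features
print(model(dummy_clip).shape)                 # torch.Size([2, 6])
```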
- Environment for Dynamic Video-based Human Activity Recognition
The environment to be employed for this proposed research includes training data, data acquisition, the deployment environment, software frameworks, and computational resources.
- Simulation tools
For this dynamic video-based HAR research, the software requirements are listed below. The development tool used to implement the research is Python 3.11.4 or above. The work is executed on the Windows 10 (64-bit) operating system.
- Results
Dynamic video-based HAR is proposed in this research and overcomes several existing technology issues. It is compared with different methods and techniques and evaluated with various performance metrics to attain accurate findings. This technique can be implemented using tools such as Python 3.11.4 or above.
Human Activity Recognition using Machine Learning Project topics & Ideas
The following are research topics based on Human Activity Recognition. These topics are helpful when we review or explore concepts and information related to our proposed research.
- Audio- and Video-Based Human Activity Recognition Systems in Healthcare
- Video-based Pose-Estimation Data as Source for Transfer Learning in Human Activity Recognition
- Source-Free Domain Adaptation for Millimeter Wave Radar Based Human Activity Recognition
- MARS: A Multiview Contrastive Approach to Human Activity Recognition From Accelerometer Sensor
- Human Activity Recognition System Using Angle Inclination Method and Keypoints Descriptor Network
- A Deep Learning Based Lightweight Human Activity Recognition System Using Reconstructed WiFi CSI
- A Simulation-Based Framework for the Design of Human Activity Recognition Systems Using Radar Sensors
- Dynamic Human Activity Recognition with Vision-Based Pose Estimation and Machine Learning for Various Age Groups
- Body RFID Skeleton-Based Human Activity Recognition Using Graph Convolution Neural Network
- Design of a Low-Cost and Device-Free Human Activity Recognition Model for Smart LED Lighting Control
- High-Accuracy and Fine-Granularity Human Activity Recognition Method Based on Body RFID Skeleton
- A Robust Model of Human Activity Recognition using Independent Component Analysis and XGBoost
- CSI-GLSTN: A Location-Independent CSI Human Activity Recognition Method Based on Spatio-Temporal and Channel Feature Fusion
- SecureSense: Defending Adversarial Attack for Secure Device-Free Human Activity Recognition
- Multi-Framework Evidential Association Rule Fusion for Wearable Human Activity Recognition
- GrapHAR: A Lightweight Human Activity Recognition Model by Exploring the Sub-Carrier Correlations
- Smart-Wearable Sensors and CNN-BiGRU Model: A Powerful Combination for Human Activity Recognition
- Multi-STMT: Multi-Level Network for Human Activity Recognition Based on Wearable Sensors
- Cross-Attention Enhanced Pyramid Multi-Scale Networks for Sensor-Based Human Activity Recognition
- DWOSC: Dynamic Weight Optimization and Smoothness Constraint for Sensor-Based Human Activity Recognition
- Revisiting Large-Kernel CNN Design via Structural Re-Parameterization for Sensor-Based Human Activity Recognition
- A Location-Independent Human Activity Recognition Method Based on CSI: System, Architecture, Implementation
- Radar-Based Human Activity Recognition Using Multidomain Multilevel Fused Patch-Based Learning
- Advancing IR-UWB Radar Human Activity Recognition With Swin Transformers and Supervised Contrastive Learning
- Unsupervised Human Activity Recognition Via Large Language Models and Iterative Evolution
- Adaptive Hierarchical Classification for Human Activity Recognition Using Inertial Measurement Unit (IMU) Time-Series Data
- WiLDAR: WiFi Signal-Based Lightweight Deep Learning Model for Human Activity Recognition
- An eXplainable Self-Attention-Based Spatial–Temporal Analysis for Human Activity Recognition
- EdgeActNet: Edge Intelligence-Enabled Human Activity Recognition Using Radar Point Cloud
- An Improved Deep Convolutional LSTM for Human Activity Recognition Using Wearable Sensors
- MaskCAE: Masked Convolutional AutoEncoder via Sensor Data Reconstruction for Self-Supervised Human Activity Recognition
- CapsLSTM-Based Human Activity Recognition for Smart Healthcare With Scarce Labeled Data
- Robust Human Activity Recognition via Wearable Sensors Using Dynamic Gaussian Kernel Learning
- Radar-Based Human Activity Recognition Using Dual-Stream Spatial and Temporal Feature Fusion Network
- A Systematic Review of Human Activity Recognition Based On Mobile Devices: Overview, Progress and Trends
- SDIGRU: Spatial and Deep Features Integration Using Multilayer Gated Recurrent Unit for Human Activity Recognition
- Dilated Causal Convolution Based Human Activity Recognition Using Voxelized Point Cloud Radar Data
- Learn From Others and Be Yourself in Federated Human Activity Recognition via Attention-Based Pairwise Collaborations
- ActiveSelfHAR: Incorporating Self-Training Into Active Learning to Improve Cross-Subject Human Activity Recognition
- A Self-Supervised Human Activity Recognition Approach via Body Sensor Networks in Smart City
- MLCNNwav: Multilevel Convolutional Neural Network With Wavelet Transformations for Sensor-Based Human Activity Recognition
- Too Good To Be True: accuracy overestimation in (re)current practices for Human Activity Recognition
- Data Augmentation for Human Activity Recognition With Generative Adversarial Networks
- Device Free Wireless Sensing based Human Activity Recognition Using Commercial Off-the-Shelf IoT Single-Board Computers
- STFNet: Enhanced and Lightweight Spatiotemporal Fusion Network for Wearable Human Activity Recognition
- An Efficient Human Activity Recognition In-Memory Computing Architecture Development for Healthcare Monitoring
- Wi-Fi-Based Human Activity Recognition for Continuous, Whole-Room Monitoring of Motor Functions in Parkinson’s Disease
- A Human Activity Recognition Scheme Using Mobile Smartphones Based on Varying Orientations and Positions
- Toward Lightweight End-to-End Semantic Learning of Real-Time Human Activity Recognition for Enabling Ambient Intelligence
- Fresnel Zone-Based Voting With Capsule Networks for Human Activity Recognition From Channel State Information