The project provides an action recognition model that can recognize the actions of multiple people, such as idle, flip, jump, selfie, and walk, in photos and videos. The model is built on the skeletons detected by OpenPose, the first real-time multi-person system to jointly detect human body, hand, facial, and foot keypoints (135 keypoints in total) on single images.
Many interactive media today let players communicate with NPCs through a keyboard, mouse, or taps on a mobile device. Beyond these input methods, human-computer interaction is becoming a new trend in video games: by recognizing player behaviors such as jumping and walking, interactive media can free players from devices like keyboards and offer a richer interaction experience. In this project we therefore introduce a new action recognition model for game developers. The model consists of two stages: first, OpenPose detects the skeletons of multiple people; then, the model classifies the action performed by each skeleton. It can recognize five actions: idle, flip, jump, selfie, and walk.
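The two-stage pipeline can be sketched as follows. This is a minimal illustration, not the project's actual implementation: the BODY_25 keypoint layout (index 1 = neck, index 8 = mid-hip), the normalization scheme, and the `classify` placeholder for the trained model are all assumptions made for the example.

```python
import numpy as np

# The five actions the model recognizes (from the project description).
ACTIONS = ["idle", "flip", "jump", "selfie", "walk"]

def normalize_skeleton(kp):
    """Translation- and scale-normalize one skeleton.

    kp: (25, 3) array of (x, y, confidence) rows in OpenPose BODY_25
    order, where index 1 is the neck and index 8 is the mid-hip
    (assumed layout for this sketch).
    """
    xy = kp[:, :2].astype(float)
    neck, midhip = xy[1], xy[8]
    torso = np.linalg.norm(neck - midhip)
    if torso < 1e-6:              # degenerate skeleton: return zeros
        return np.zeros_like(xy)
    return (xy - neck) / torso    # neck-centered, torso-scaled

def recognize(skeletons, classify):
    """Classify the action of every detected person.

    `skeletons` is the list of per-person keypoint arrays that a
    skeleton detector such as OpenPose would produce; `classify` is a
    stand-in for the trained model, mapping a flattened normalized
    skeleton to an index into ACTIONS.
    """
    return [ACTIONS[classify(normalize_skeleton(s).ravel())]
            for s in skeletons]
```

Normalizing each skeleton relative to its own neck position and torso length makes the classifier insensitive to where a person stands in the frame and how close they are to the camera, which is why some per-skeleton preprocessing of this kind is typical before classification.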
“I get excited when I think about how what I'm doing can really help other people. What I'm doing is not a course assignment but a technology product that can be applied in the real world. At the same time, in the process of completing this project, CMKL provided me with rich resources, so I didn't need to worry about external factors causing the project to fail. Finally, I would like to say that when I encountered problems that were difficult to solve, I communicated with the faculty, and they gave me a lot of inspiration.” — Siyu Zhou