A framework for understanding and detecting harassment in SocialVR
SocialVR, as experienced in immersive audiovisual environments, is a symmetrical communication medium that allows both verbal interaction and limited physical interaction through first-person avatars. Through a qualitative analysis of discourse among SocialVR users, this research identifies examples of harassment and evidence of patterns within it, advancing our understanding of the current problem. In response, methods of recognizing user and environmental trends toward harassment are discussed. Informed by the qualitative data and by the literature on harassment in social media, natural language processing is used to classify speech as harassment according to its lexical and structural elements. When implemented by SocialVR platforms, this initial step can be extended and adapted, making it an effective tool for preventing abuse among users. This research also provides a method for using convolutional neural networks to classify the three-dimensional, vulgar imagery produced in SocialVR, narrowly targeting its most common forms. The resulting CNN classification model removes unwanted images with 78% accuracy at test time. The framework concludes with recommendations on how data should be collected and used going forward, and on the design considerations that should be made for harassing and non-harassing users alike.
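To illustrate the kind of lexical classification the abstract describes, the sketch below shows a minimal, pure-Python harassment scorer. It is not the study's model: the `LEXICON` terms, their weights, and the decision `threshold` are all hypothetical placeholders standing in for features learned from real SocialVR discourse data.

```python
# Illustrative sketch only: a minimal lexical scorer standing in for the
# paper's NLP harassment classifier. All terms, weights, and the threshold
# below are hypothetical placeholders, not values from the study.

# Hypothetical lexicon mapping terms to severity weights.
LEXICON = {"idiot": 1.0, "loser": 1.0, "hate you": 2.0, "get out": 1.5}

def score_utterance(text: str) -> float:
    """Sum the severity weights of every lexicon term found in the text."""
    lowered = text.lower()
    return sum(weight for term, weight in LEXICON.items() if term in lowered)

def is_harassment(text: str, threshold: float = 1.5) -> bool:
    """Flag an utterance whose total lexical score meets the threshold."""
    return score_utterance(text) >= threshold

print(is_harassment("I hate you, get out"))   # True  (score 2.0 + 1.5)
print(is_harassment("nice build, friend"))    # False (score 0.0)
```

A production classifier would replace this fixed lexicon with learned features and add the structural elements the paper mentions (e.g., who is addressed, repetition, conversational context), but the interface, text in, harassment flag out, stays the same.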